
Commit

[8.x] [Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (#199303) (#204014) (#204308)

# Backport

This will backport the following commits from `main` to `8.x`:
- [[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (#199303) (#204014)](#204014)

<!--- Backport version: 9.4.3 -->

### Questions?
Please refer to the [Backport tool documentation](https://github.com/sqren/backport)

## Summary

This PR fixes [this bug](https://github.com/elastic/kibana/issues/199303).

The issue happens with some locally set up LLMs (like [Ollama](https://github.com/ollama/ollama)), which require the correct `model` to be passed as part of the [chat completions API](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-chat-completion).

We had a bug in our code where, on new conversation creation, we did not pass the full connector configuration: only `connectorId` and `actionTypeId` were passed. Here is the old implementation:

```
const newConversation = await createConversation({
  title: NEW_CHAT,
  ...(currentConversation?.apiConfig != null &&
  currentConversation?.apiConfig?.actionTypeId != null
    ? {
        apiConfig: {
          connectorId: currentConversation.apiConfig.connectorId,
          actionTypeId: currentConversation.apiConfig.actionTypeId,
          ...(newSystemPrompt?.id != null ? { defaultSystemPromptId: newSystemPrompt.id } : {}),
        },
      }
    : {}),
});
```

As a result, the new conversation did not have the complete connector configuration, so the default model (`gpt-4o`) was used as the model passed to the LLM.

I also updated the default body used on the Test connector page, to make sure we send a `model` parameter to the LLM for `OpenAI > Other (OpenAI Compatible Service)` connectors.

### Testing notes

Steps to reproduce:
1. Install [Ollama](https://github.com/ollama/ollama?tab=readme-ov-file#ollama) locally
2. Set up an OpenAI connector using the Other (OpenAI Compatible Service) provider
3. Open the AI Assistant and select the created Ollama connector to chat
4. Create a "New Chat"
5. The Ollama connector should be selected
6. Send a message to the LLM (for example, "hello world")

Expected: there should be no error saying `ActionsClientChatOpenAI: an error occurred while running the action - Unexpected API Error: - 404 model "gpt-4o" not found, try pulling it first`
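The fix replaces the hand-picked two-field copy with a rest spread, so every connector setting (including the model needed by OpenAI-compatible services) survives new-chat creation. Below is a simplified, standalone TypeScript sketch of the two behaviours; the `ApiConfig` shape and field names are illustrative stand-ins, not the actual Kibana interfaces:

```typescript
interface ApiConfig {
  connectorId: string;
  actionTypeId: string;
  defaultSystemPromptId?: string;
  // Stand-in for any extra connector setting, e.g. the model Ollama requires.
  model?: string;
}

// Old behaviour: only two fields are copied, so `model` is silently dropped.
function buggyNewChatConfig(current: ApiConfig, newPromptId?: string): ApiConfig {
  return {
    connectorId: current.connectorId,
    actionTypeId: current.actionTypeId,
    ...(newPromptId != null ? { defaultSystemPromptId: newPromptId } : {}),
  };
}

// Fixed behaviour: rest-spread everything except the old prompt id.
function fixedNewChatConfig(current: ApiConfig, newPromptId?: string): ApiConfig {
  const { defaultSystemPromptId: _, ...restApiConfig } = current;
  return {
    ...restApiConfig,
    ...(newPromptId != null ? { defaultSystemPromptId: newPromptId } : {}),
  };
}

const current: ApiConfig = {
  connectorId: 'my-ollama',
  actionTypeId: '.gen-ai',
  defaultSystemPromptId: 'old-prompt',
  model: 'llama3',
};

console.log(buggyNewChatConfig(current, 'new-prompt').model); // undefined: model lost
console.log(fixedNewChatConfig(current, 'new-prompt').model); // 'llama3': model kept
```

With the rest spread, future additions to the connector configuration are carried over automatically instead of requiring the copy to be updated field by field.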

Co-authored-by: Ievgen Sorokopud <ievgen.sorokopud@elastic.co>
kibanamachine and e40pud authored Dec 14, 2024
1 parent a8b702b commit 3e3c937
Showing 5 changed files with 53 additions and 26 deletions.
```diff
@@ -265,18 +265,24 @@ export const useCurrentConversation = ({
     }
     const newSystemPrompt = getDefaultNewSystemPrompt(allSystemPrompts);
 
+    let conversation: Partial<Conversation> = {};
+    if (currentConversation?.apiConfig) {
+      const { defaultSystemPromptId: _, ...restApiConfig } = currentConversation?.apiConfig;
+      conversation =
+        restApiConfig.actionTypeId != null
+          ? {
+              apiConfig: {
+                ...restApiConfig,
+                ...(newSystemPrompt?.id != null
+                  ? { defaultSystemPromptId: newSystemPrompt.id }
+                  : {}),
+              },
+            }
+          : {};
+    }
     const newConversation = await createConversation({
       title: NEW_CHAT,
-      ...(currentConversation?.apiConfig != null &&
-      currentConversation?.apiConfig?.actionTypeId != null
-        ? {
-            apiConfig: {
-              connectorId: currentConversation.apiConfig.connectorId,
-              actionTypeId: currentConversation.apiConfig.actionTypeId,
-              ...(newSystemPrompt?.id != null ? { defaultSystemPromptId: newSystemPrompt.id } : {}),
-            },
-          }
-        : {}),
+      ...conversation,
     });
 
     if (newConversation) {
```
```diff
@@ -11,23 +11,48 @@ import { FormattedMessage } from '@kbn/i18n-react';
 import { EuiLink } from '@elastic/eui';
 import { DEFAULT_OPENAI_MODEL, OpenAiProviderType } from '../../../common/openai/constants';
 import * as i18n from './translations';
+import { Config } from './types';
 
 export const DEFAULT_URL = 'https://api.openai.com/v1/chat/completions' as const;
 export const DEFAULT_URL_AZURE =
   'https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/chat/completions?api-version={api-version}' as const;
 
-export const DEFAULT_BODY = `{
+const DEFAULT_BODY = `{
   "messages": [{
     "role":"user",
     "content":"Hello world"
   }]
 }`;
-export const DEFAULT_BODY_AZURE = `{
+const DEFAULT_BODY_AZURE = `{
   "messages": [{
     "role":"user",
     "content":"Hello world"
   }]
 }`;
+const DEFAULT_BODY_OTHER = (defaultModel: string) => `{
+  "model": "${defaultModel}",
+  "messages": [{
+    "role":"user",
+    "content":"Hello world"
+  }]
+}`;
+
+export const getDefaultBody = (config?: Config) => {
+  if (!config) {
+    // default to OpenAiProviderType.OpenAi sample data
+    return DEFAULT_BODY;
+  }
+  if (config?.apiProvider === OpenAiProviderType.Other) {
+    // update sample data if Other (OpenAI Compatible Service)
+    return config.defaultModel ? DEFAULT_BODY_OTHER(config.defaultModel) : DEFAULT_BODY;
+  }
+  if (config?.apiProvider === OpenAiProviderType.AzureAi) {
+    // update sample data if AzureAi
+    return DEFAULT_BODY_AZURE;
+  }
+  // default to OpenAiProviderType.OpenAi sample data
+  return DEFAULT_BODY;
+};
 
 export const openAiConfig: ConfigFieldSchema[] = [
   {
```
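The `getDefaultBody` helper can be exercised in isolation. The sketch below re-implements it in miniature — the enum values, `Config` shape, and compact JSON bodies are simplified stand-ins for the real Kibana types — to show which sample body each provider gets:

```typescript
enum OpenAiProviderType {
  OpenAi = 'OpenAI',
  AzureAi = 'Azure OpenAI',
  Other = 'Other',
}

interface Config {
  apiProvider: OpenAiProviderType;
  defaultModel?: string;
}

const DEFAULT_BODY = `{"messages":[{"role":"user","content":"Hello world"}]}`;
// Azure infers the model from the deployment URL, so its body is the same.
const DEFAULT_BODY_AZURE = DEFAULT_BODY;
const DEFAULT_BODY_OTHER = (model: string) =>
  `{"model":"${model}","messages":[{"role":"user","content":"Hello world"}]}`;

function getDefaultBody(config?: Config): string {
  if (config?.apiProvider === OpenAiProviderType.Other) {
    // Only OpenAI-compatible services need the model pinned in the body itself.
    return config.defaultModel ? DEFAULT_BODY_OTHER(config.defaultModel) : DEFAULT_BODY;
  }
  if (config?.apiProvider === OpenAiProviderType.AzureAi) {
    return DEFAULT_BODY_AZURE;
  }
  return DEFAULT_BODY;
}

console.log(getDefaultBody({ apiProvider: OpenAiProviderType.Other, defaultModel: 'llama3' }));
```

Centralizing the provider dispatch in one helper also lets both the params editor and its tests derive the expected body from the same source, instead of duplicating the per-provider constants.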
```diff
@@ -9,7 +9,7 @@ import React from 'react';
 import { fireEvent, render } from '@testing-library/react';
 import ParamsFields from './params';
 import { OpenAiProviderType, SUB_ACTION } from '../../../common/openai/constants';
-import { DEFAULT_BODY, DEFAULT_BODY_AZURE, DEFAULT_URL } from './constants';
+import { DEFAULT_URL, getDefaultBody } from './constants';
 
 const messageVariables = [
   {
@@ -73,14 +73,15 @@ describe('Gen AI Params Fields renders', () => {
       );
       expect(editAction).toHaveBeenCalledTimes(2);
       expect(editAction).toHaveBeenCalledWith('subAction', SUB_ACTION.RUN, 0);
+      const body = getDefaultBody(actionConnector.config);
       if (apiProvider === OpenAiProviderType.OpenAi) {
-        expect(editAction).toHaveBeenCalledWith('subActionParams', { body: DEFAULT_BODY }, 0);
+        expect(editAction).toHaveBeenCalledWith('subActionParams', { body }, 0);
       }
       if (apiProvider === OpenAiProviderType.AzureAi) {
-        expect(editAction).toHaveBeenCalledWith('subActionParams', { body: DEFAULT_BODY_AZURE }, 0);
+        expect(editAction).toHaveBeenCalledWith('subActionParams', { body }, 0);
       }
       if (apiProvider === OpenAiProviderType.Other) {
-        expect(editAction).toHaveBeenCalledWith('subActionParams', { body: DEFAULT_BODY }, 0);
+        expect(editAction).toHaveBeenCalledWith('subActionParams', { body }, 0);
      }
     }
   );
```
```diff
@@ -12,8 +12,8 @@ import {
   ActionConnectorMode,
   JsonEditorWithMessageVariables,
 } from '@kbn/triggers-actions-ui-plugin/public';
-import { OpenAiProviderType, SUB_ACTION } from '../../../common/openai/constants';
-import { DEFAULT_BODY, DEFAULT_BODY_AZURE } from './constants';
+import { SUB_ACTION } from '../../../common/openai/constants';
+import { getDefaultBody } from './constants';
 import { OpenAIActionConnector, ActionParams } from './types';
 
 const ParamsFields: React.FunctionComponent<ActionParamsProps<ActionParams>> = ({
@@ -41,16 +41,10 @@ const ParamsFields: React.FunctionComponent<ActionParamsProps<ActionParams>> = (
 
   useEffect(() => {
     if (!subActionParams) {
-      // default to OpenAiProviderType.OpenAi sample data
-      let sampleBody = DEFAULT_BODY;
-
-      if (typedActionConnector?.config?.apiProvider === OpenAiProviderType.AzureAi) {
-        // update sample data if AzureAi
-        sampleBody = DEFAULT_BODY_AZURE;
-      }
+      const sampleBody = getDefaultBody(typedActionConnector?.config);
       editAction('subActionParams', { body: sampleBody }, index);
     }
-  }, [typedActionConnector?.config?.apiProvider, editAction, index, subActionParams]);
+  }, [typedActionConnector?.config, editAction, index, subActionParams]);
 
   const editSubActionParams = useCallback(
     (params: ActionParams['subActionParams']) => {
```
```diff
@@ -18,6 +18,7 @@ export interface ActionParams {
 export interface Config {
   apiProvider: OpenAiProviderType;
   apiUrl: string;
+  defaultModel?: string;
 }
 
 export interface Secrets {
```
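For context, the 404 in the testing notes comes from Ollama rejecting a model it has not pulled, which is why the request body must name a model that actually exists locally. A minimal sketch of the payload the connector ultimately needs to send — the endpoint is Ollama's default local chat endpoint, and `llama3` is just an example model name:

```typescript
// Build a chat-completion payload. Ollama answers 404 ("model not found,
// try pulling it first") when `model` names something it has not pulled.
function buildChatRequest(model: string, content: string) {
  return {
    url: 'http://localhost:11434/api/chat', // default local Ollama endpoint
    body: JSON.stringify({
      model, // must match a locally pulled model, e.g. after `ollama pull llama3`
      messages: [{ role: 'user', content }],
    }),
  };
}

const req = buildChatRequest('llama3', 'hello world');
console.log(req.body);
```

Before this fix, the hard-coded fallback effectively produced `"model": "gpt-4o"` in this payload regardless of the connector's configured default model, triggering exactly the error quoted above.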
