Merge pull request #67 from tak-bro/feature/fix-dashscope
Feature/fix dashscope
tak-bro authored Aug 16, 2024
2 parents ffc6d75 + 2832a79 commit f788538
Showing 4 changed files with 81 additions and 74 deletions.
79 changes: 46 additions & 33 deletions README.md
@@ -62,7 +62,7 @@ npm install -g aicommit2

```sh
aicommit2 config set OPENAI.key=<your key>
-aicommit2 config set OLLAMA.model=<your local model>
+aicommit2 config set ANTHROPIC.key=<your key>
# ... (similar commands for other providers)
```

@@ -381,13 +381,13 @@ aicommit2 config set ignoreBody="true"
### OpenAI

-| Setting | Description | Default |
-|--------------------|---------------------------------------------------------------------|------------------------|
-| `key` | API key | - |
-| `model` | Model to use | `gpt-3.5-turbo` |
-| `url` | API endpoint URL | https://api.openai.com |
-| `path` | API path | /v1/chat/completions |
-| `proxy` | Proxy settings | - |
+| Setting | Description | Default |
+|--------------------|--------------------|------------------------|
+| `key` | API key | - |
+| `model` | Model to use | `gpt-3.5-turbo` |
+| `url` | API endpoint URL | https://api.openai.com |
+| `path` | API path | /v1/chat/completions |
+| `proxy` | Proxy settings | - |

##### OPENAI.key

@@ -425,13 +425,26 @@ Default: `/v1/chat/completions`

The OpenAI Path.

+##### OPENAI.topP
+
+Default: `1`
+
+The `top_p` parameter selects tokens whose combined probability meets a threshold. Please see the [details](https://platform.openai.com/docs/api-reference/chat/create#chat-create-top_p).
+
+```sh
+aicommit2 config set OPENAI.topP=0
+```
+
+> NOTE: If `topP` is less than 0, the `top_p` parameter is not sent with the request.
+> - You can use this when you don't need a `top_p` parameter on other compatible platforms.

### Ollama

-| Setting | Description | Default |
-|--------------------|------------------------------------------------------------------------------------------------------------------|------------------------|
-| `model` | Model(s) to use (comma-separated list) | - |
-| `host` | Ollama host URL | http://localhost:11434 |
-| `timeout` | Request timeout (milliseconds) | 100_000 (100sec) |
+| Setting | Description | Default |
+|--------------------|----------------------------------------------|------------------------|
+| `model` | Model(s) to use (comma-separated list) | - |
+| `host` | Ollama host URL | http://localhost:11434 |
+| `timeout` | Request timeout (milliseconds) | 100_000 (100sec) |

##### OLLAMA.model

@@ -474,10 +487,10 @@ Ollama does not support the following options in General Settings.

### HuggingFace

-| Setting | Description | Default |
-|--------------------|------------------------------------------------------------------------------------------------------------------|----------------------------------------|
-| `cookie` | Authentication cookie | - |
-| `model` | Model to use | `CohereForAI/c4ai-command-r-plus` |
+| Setting | Description | Default |
+|--------------------|----------------------------|----------------------------------------|
+| `cookie` | Authentication cookie | - |
+| `model` | Model to use | `CohereForAI/c4ai-command-r-plus` |

##### HUGGINGFACE.cookie

@@ -516,10 +529,10 @@ Huggingface does not support the following options in General Settings.

### Gemini

-| Setting | Description | Default |
-|--------------------|------------------------------------------------------------------------------------------------------------------|-------------------|
-| `key` | API key | - |
-| `model` | Model to use | `gemini-1.5-pro` |
+| Setting | Description | Default |
+|--------------------|------------------------|-------------------|
+| `key` | API key | - |
+| `model` | Model to use | `gemini-1.5-pro` |

##### GEMINI.key

@@ -581,10 +594,10 @@ Anthropic does not support the following options in General Settings.

### Mistral

-| Setting | Description | Default |
-|--------------------|------------------------------------------------------------------------------------------------------------------|----------------|
-| `key` | API key | - |
-| `model` | Model to use | `mistral-tiny` |
+| Setting | Description | Default |
+|--------------------|--------------|----------------|
+| `key` | API key | - |
+| `model` | Model to use | `mistral-tiny` |

##### MISTRAL.key

@@ -612,10 +625,10 @@ Supported:

### Codestral

-| Setting | Description | Default |
-|--------------------|------------------------------------------------------------------------------------------------------------------|--------------------|
-| `key` | API key | - |
-| `model` | Model to use | `codestral-latest` |
+| Setting | Description | Default |
+|--------------------|-----------------|--------------------|
+| `key` | API key | - |
+| `model` | Model to use | `codestral-latest` |

##### CODESTRAL.key

@@ -635,10 +648,10 @@ aicommit2 config set CODESTRAL.model="codestral-2405"

#### Cohere

-| Setting | Description | Default |
-|--------------------|------------------------------------------------------------------------------------------------------------------|-------------|
-| `key` | API key | - |
-| `model` | Model to use | `command` |
+| Setting | Description | Default |
+|--------------------|--------------|-------------|
+| `key` | API key | - |
+| `model` | Model to use | `command` |

##### COHERE.key

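For background on the setting documented above: `top_p` enables nucleus sampling, where the model samples only from the smallest set of tokens whose cumulative probability reaches the threshold. The sketch below is purely illustrative; it is not part of aicommit2, and OpenAI applies this filtering server-side.

```ts
// Illustrative nucleus (top-p) filtering. OpenAI applies this server-side;
// this sketch only demonstrates what the threshold means.
function nucleusFilter(probs: Map<string, number>, topP: number): Map<string, number> {
    // Sort tokens by probability, descending.
    const sorted = [...probs.entries()].sort((a, b) => b[1] - a[1]);
    const kept = new Map<string, number>();
    let cumulative = 0;
    for (const [token, p] of sorted) {
        kept.set(token, p);
        cumulative += p;
        if (cumulative >= topP) {
            break; // threshold reached: drop the low-probability tail
        }
    }
    return kept;
}

// Example: with top_p = 0.9, the unlikely tail token is excluded.
const probs = new Map([['fix', 0.5], ['feat', 0.3], ['docs', 0.15], ['chore', 0.05]]);
console.log(nucleusFilter(probs, 0.9)); // Map { 'fix' => 0.5, 'feat' => 0.3, 'docs' => 0.15 }
```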
4 changes: 1 addition & 3 deletions src/services/ai/openai.service.ts
@@ -81,13 +81,11 @@ export class OpenAIService extends AIService {
this.params.config.path,
this.params.config.key,
this.params.config.model,
-                locale,
diff,
-                generate,
-                type,
timeout,
maxTokens,
temperature,
+                this.params.config.topP,
generatedSystemPrompt,
logging,
proxy
19 changes: 13 additions & 6 deletions src/utils/config.ts
@@ -177,6 +177,15 @@ const modelConfigParsers: Record<ModelName, Record<string, (value: any) => any>>
},
path: (path?: string) => path || '/v1/chat/completions',
proxy: (proxy?: string) => proxy || '',
+        topP: (topP?: string) => {
+            if (!topP) {
+                return 1;
+            }
+
+            const parsedTopP = Number(topP);
+            parseAssert('OPENAI.topP', parsedTopP <= 1.0, 'Must be less than or equal to 1');
+            return parsedTopP;
+        },
systemPrompt: generalConfigParsers.systemPrompt,
systemPromptPath: generalConfigParsers.systemPromptPath,
timeout: generalConfigParsers.timeout,
@@ -246,12 +255,10 @@ const modelConfigParsers: Record<ModelName, Record<string, (value: any) => any>>
return 'claude-3-haiku-20240307';
}
const supportModels = [
-            'claude-2.1',
-            'claude-2.0',
-            'claude-instant-1.2',
-            'claude-3-haiku-20240307',
-            'claude-3-sonnet-20240229',
-            'claude-3-opus-20240229',
+            `claude-3-haiku-20240307`,
+            `claude-3-sonnet-20240229`,
+            `claude-3-opus-20240229`,
+            `claude-3-5-sonnet-20240620`,
];
parseAssert('ANTHROPIC.model', supportModels.includes(model), 'Invalid model type of Anthropic');
return model;
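The new `topP` parser above defaults to `1` when the value is unset and rejects anything greater than `1`, while negative values pass through (they later cause `top_p` to be omitted from the request, as seen in `src/utils/openai.ts` below). A self-contained sketch of that behavior; `parseAssert` here is a hypothetical stand-in for the project's helper, and only the parser body mirrors the diff:

```ts
// Hypothetical stand-in for the project's parseAssert helper (illustration only).
function parseAssert(name: string, condition: boolean, message: string): void {
    if (!condition) {
        throw new Error(`Invalid config property ${name}: ${message}`);
    }
}

// Mirrors the parser added in this commit.
const parseTopP = (topP?: string): number => {
    if (!topP) {
        return 1; // unset: fall back to OpenAI's default of 1
    }
    const parsedTopP = Number(topP);
    parseAssert('OPENAI.topP', parsedTopP <= 1.0, 'Must be less than or equal to 1');
    return parsedTopP;
};

console.log(parseTopP());      // 1   (default)
console.log(parseTopP('0.9')); // 0.9 (sent as top_p)
console.log(parseTopP('-1'));  // -1  (top_p later omitted from the request)
// parseTopP('1.5') would throw: Must be less than or equal to 1
```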
53 changes: 21 additions & 32 deletions src/utils/openai.ts
@@ -7,7 +7,6 @@ import createHttpsProxyAgent from 'https-proxy-agent';
import { KnownError } from './error.js';
import { createLogResponse } from './log.js';

-import type { CommitType } from './config.js';
import type { ClientRequest, IncomingMessage } from 'http';
import type { CreateChatCompletionRequest, CreateChatCompletionResponse } from 'openai';

@@ -150,51 +149,41 @@ export const generateCommitMessage = async (
path: string,
apiKey: string,
model: TiktokenModel,
-    locale: string,
diff: string,
-    generate: number,
-    type: CommitType,
timeout: number,
maxTokens: number,
temperature: number,
+    topP: number,
systemPrompt: string,
logging: boolean,
proxy?: string
) => {
try {
-        const completion = await createChatCompletion(
-            url,
-            path,
-            apiKey,
-            {
-                model,
-                messages: [
-                    {
-                        role: 'system',
-                        content: systemPrompt,
-                    },
-                    {
-                        role: 'user',
-                        content: `Here are diff: ${diff}`,
-                    },
-                ],
-                temperature,
-                top_p: 1,
-                frequency_penalty: 0,
-                presence_penalty: 0,
-                max_tokens: maxTokens,
-                stream: false,
-                n: 1,
-            },
-            timeout,
-            proxy
-        );
+        const request: CreateChatCompletionRequest = {
+            model,
+            messages: [
+                { role: 'system', content: systemPrompt },
+                { role: 'user', content: `Here are diff: ${diff}` },
+            ],
+            temperature,
+            max_tokens: maxTokens,
+            stream: false,
+            n: 1,
+            top_p: topP,
+            frequency_penalty: 0,
+            presence_penalty: 0,
+        };
+        // NOTE: remove top_p. please see https://github.com/tak-bro/aicommit2/issues/66
+        if (topP <= 0) {
+            delete request.top_p;
+        }

+        const completion = await createChatCompletion(url, path, apiKey, request, timeout, proxy);
const fullText = completion.choices
.filter(choice => choice.message?.content)
.map(choice => sanitizeMessage(choice.message!.content as string))
.join();
-        logging && createLogResponse('OPEN AI', diff, systemPrompt, fullText);
+        logging && createLogResponse('OPENAI', diff, systemPrompt, fullText);

return completion.choices
.filter(choice => choice.message?.content)
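The key pattern in the rewritten request construction above: `top_p` is first set from the parsed `topP`, then deleted when `topP` is non-positive, so OpenAI-compatible backends that reject the parameter (see issue #66) never receive it. Note the code treats `0` as "omit" too, not just negative values. A minimal self-contained sketch with a simplified request type; the real code types the request as `CreateChatCompletionRequest` from the `openai` package:

```ts
// Simplified request shape for illustration; the real code uses
// CreateChatCompletionRequest from the openai package.
interface ChatRequest {
    model: string;
    temperature: number;
    top_p?: number;
}

function buildRequest(model: string, temperature: number, topP: number): ChatRequest {
    const request: ChatRequest = { model, temperature, top_p: topP };
    // Mirrors the commit: a non-positive topP means "do not send top_p at all"
    // rather than sending top_p: 0, which some compatible backends reject.
    if (topP <= 0) {
        delete request.top_p;
    }
    return request;
}

console.log(buildRequest('gpt-3.5-turbo', 0.7, 0.9)); // { model, temperature, top_p: 0.9 }
console.log(buildRequest('gpt-3.5-turbo', 0.7, -1));  // { model, temperature } with no top_p key
```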
