Merge pull request #68 from tak-bro/feature/add-top-p
feat: add topP configuration for Mistral, Codestral, and Perplexity A…
tak-bro authored Aug 16, 2024
2 parents ebfe210 + e2e873a commit 7e5bab7
Showing 5 changed files with 88 additions and 25 deletions.
77 changes: 55 additions & 22 deletions README.md
@@ -381,13 +381,14 @@ aicommit2 config set ignoreBody="true"
### OpenAI

-| Setting | Description | Default |
-|--------------------|--------------------|------------------------|
-| `key` | API key | - |
-| `model` | Model to use | `gpt-3.5-turbo` |
-| `url` | API endpoint URL | https://api.openai.com |
-| `path` | API path | /v1/chat/completions |
-| `proxy` | Proxy settings | - |
+| Setting | Description | Default |
+|---------|--------------------|------------------------|
+| `key` | API key | - |
+| `model` | Model to use | `gpt-3.5-turbo` |
+| `url` | API endpoint URL | https://api.openai.com |
+| `path` | API path | /v1/chat/completions |
+| `proxy` | Proxy settings | - |
+| `topP` | Nucleus sampling | 1 |

##### OPENAI.key

@@ -429,14 +430,13 @@ The OpenAI Path.

Default: `1`

-The `top_p` parameter selects tokens whose combined probability meets a threshold. Please see [detail](https://platform.openai.com/docs/api-reference/chat/create#chat-create-top_p)
+The `top_p` parameter selects tokens whose combined probability meets a threshold. See the [API reference](https://platform.openai.com/docs/api-reference/chat/create#chat-create-top_p) for details.

```sh
-aicommit2 config set OPENAI.topP=0
+aicommit2 config set OPENAI.topP=0.2
```

> NOTE: If `topP` is less than 0, it does not deliver the `top_p` parameter to the request.
-> - You can use it when you don't need a `top_p` parameter on other compatible platform.
### Ollama

@@ -594,10 +594,11 @@ Anthropic does not support the following options in General Settings.

### Mistral

-| Setting | Description | Default |
-|--------------------|--------------|----------------|
-| `key` | API key | - |
-| `model` | Model to use | `mistral-tiny` |
+| Setting | Description | Default |
+|----------|------------------|----------------|
+| `key` | API key | - |
+| `model` | Model to use | `mistral-tiny` |
+| `topP` | Nucleus sampling | 1 |

##### MISTRAL.key

@@ -623,12 +624,23 @@ Supported:
- `mistral-large-2402`
- `mistral-embed`

+##### MISTRAL.topP
+
+Default: `1`
+
+Nucleus sampling, where the model considers only the tokens comprising the top `top_p` probability mass.
+
+```sh
+aicommit2 config set MISTRAL.topP=0.2
+```

### Codestral

-| Setting | Description | Default |
-|--------------------|-----------------|--------------------|
-| `key` | API key | - |
-| `model` | Model to use | `codestral-latest` |
+| Setting | Description | Default |
+|---------|------------------|--------------------|
+| `key` | API key | - |
+| `model` | Model to use | `codestral-latest` |
+| `topP` | Nucleus sampling | 1 |

##### CODESTRAL.key

@@ -646,6 +658,16 @@ Supported:
aicommit2 config set CODESTRAL.model="codestral-2405"
```

+##### CODESTRAL.topP
+
+Default: `1`
+
+Nucleus sampling, where the model considers only the tokens comprising the top `top_p` probability mass.
+
+```sh
+aicommit2 config set CODESTRAL.topP=0.1
+```

### Cohere

| Setting | Description | Default |
@@ -708,10 +730,11 @@ aicommit2 config set GROQ.model="llama3-8b-8192"

### Perplexity

-| Setting | Description | Default |
-|--------------------|------------------|-----------------------------------|
-| `key` | API key | - |
-| `model` | Model to use | `llama-3.1-sonar-small-128k-chat` |
+| Setting | Description | Default |
+|----------|------------------|-----------------------------------|
+| `key` | API key | - |
+| `model` | Model to use | `llama-3.1-sonar-small-128k-chat` |
+| `topP` | Nucleus sampling | 1 |

##### PERPLEXITY.key

@@ -737,6 +760,16 @@ Supported:
aicommit2 config set PERPLEXITY.model="llama-3.1-70b"
```

+##### PERPLEXITY.topP
+
+Default: `1`
+
+Nucleus sampling, where the model considers only the tokens comprising the top `top_p` probability mass.
+
+```sh
+aicommit2 config set PERPLEXITY.topP=0.3
+```

#### Usage

1. Stage your files and commit:
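For quick reference, the new `topP` option uses the same `config set` syntax as the rest of the README. The values below are the illustrative ones from the sections above; the final command relies on the OpenAI-specific note that a `topP` below 0 omits `top_p` from the request:

```sh
# Nucleus sampling per provider (illustrative values from the docs above)
aicommit2 config set OPENAI.topP=0.2
aicommit2 config set MISTRAL.topP=0.2
aicommit2 config set CODESTRAL.topP=0.1
aicommit2 config set PERPLEXITY.topP=0.3

# OpenAI only: any value below 0 (e.g. -1) drops top_p from the request
aicommit2 config set OPENAI.topP=-1
```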
2 changes: 1 addition & 1 deletion src/services/ai/codestral.service.ts
@@ -110,7 +110,7 @@ export class CodestralService extends AIService {
},
],
temperature: this.params.config.temperature,
-                top_p: 1,
+                top_p: this.params.config.topP,
max_tokens: this.params.config.maxTokens,
stream: false,
safe_prompt: false,
2 changes: 1 addition & 1 deletion src/services/ai/mistral.service.ts
@@ -156,7 +156,7 @@ export class MistralService extends AIService {
},
],
temperature: this.params.config.temperature,
-                top_p: 1,
+                top_p: this.params.config.topP,
max_tokens: this.params.config.maxTokens,
stream: false,
safe_prompt: false,
2 changes: 1 addition & 1 deletion src/services/ai/perplexity.service.ts
@@ -136,7 +136,7 @@ export class PerplexityService extends AIService {
},
],
temperature: this.params.config.temperature,
-                top_p: 1,
+                top_p: this.params.config.topP,
max_tokens: this.params.config.maxTokens,
stream: false,
})
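All three service changes are identical: the hard-coded `top_p: 1` becomes the value parsed from configuration. Because the parsers added in `src/utils/config.ts` below return `1` when `topP` is unset, the request payload is unchanged for existing setups. A minimal sketch, assuming the CLI treats an empty value as unset so the parser's default applies:

```sh
# Assumption: an empty value clears the key, so the parser falls back to 1,
# matching the previous hard-coded top_p
aicommit2 config set MISTRAL.topP=
```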
30 changes: 30 additions & 0 deletions src/utils/config.ts
@@ -300,6 +300,16 @@ const modelConfigParsers: Record<ModelName, Record<string, (value: any) => any>>
parseAssert('MISTRAL.model', supportModels.includes(model), 'Invalid model type of Mistral AI');
return model;
},
+        topP: (topP?: string) => {
+            if (!topP) {
+                return 1;
+            }
+            parseAssert('MISTRAL.topP', /^(1|\d)(\.\d{1,2})?$/.test(topP), 'Must be decimal between 0 and 1');
+            const parsed = Number(topP);
+            parseAssert('MISTRAL.topP', parsed > 0.0, 'Must be greater than 0');
+            parseAssert('MISTRAL.topP', parsed <= 1.0, 'Must be less than or equal to 1');
+            return parsed;
+        },
systemPrompt: generalConfigParsers.systemPrompt,
systemPromptPath: generalConfigParsers.systemPromptPath,
timeout: generalConfigParsers.timeout,
@@ -323,6 +333,16 @@ const modelConfigParsers: Record<ModelName, Record<string, (value: any) => any>>
parseAssert('CODESTRAL.model', supportModels.includes(model), 'Invalid model type of Codestral');
return model;
},
+        topP: (topP?: string) => {
+            if (!topP) {
+                return 1;
+            }
+            parseAssert('CODESTRAL.topP', /^(1|\d)(\.\d{1,2})?$/.test(topP), 'Must be decimal between 0 and 1');
+            const parsed = Number(topP);
+            parseAssert('CODESTRAL.topP', parsed > 0.0, 'Must be greater than 0');
+            parseAssert('CODESTRAL.topP', parsed <= 1.0, 'Must be less than or equal to 1');
+            return parsed;
+        },
systemPrompt: generalConfigParsers.systemPrompt,
systemPromptPath: generalConfigParsers.systemPromptPath,
timeout: generalConfigParsers.timeout,
@@ -443,6 +463,16 @@ const modelConfigParsers: Record<ModelName, Record<string, (value: any) => any>>
parseAssert('PERPLEXITY.model', supportModels.includes(model), 'Invalid model type of Perplexity');
return model;
},
+        topP: (topP?: string) => {
+            if (!topP) {
+                return 1;
+            }
+            parseAssert('PERPLEXITY.topP', /^(1|\d)(\.\d{1,2})?$/.test(topP), 'Must be decimal between 0 and 1');
+            const parsed = Number(topP);
+            parseAssert('PERPLEXITY.topP', parsed > 0.0, 'Must be greater than 0');
+            parseAssert('PERPLEXITY.topP', parsed <= 1.0, 'Must be less than or equal to 1');
+            return parsed;
+        },
systemPrompt: generalConfigParsers.systemPrompt,
systemPromptPath: generalConfigParsers.systemPromptPath,
timeout: generalConfigParsers.timeout,
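The three `topP` parsers are identical: an unset value falls back to `1`, the string must look like a number with at most two decimal places, and the parsed value must land in (0, 1]. A few illustrative invocations, assuming `parseAssert` failures surface as CLI errors:

```sh
# Accepted: parses to the number 0.25
aicommit2 config set MISTRAL.topP=0.25

# Rejected: 'Must be less than or equal to 1'
aicommit2 config set CODESTRAL.topP=1.5

# Rejected: 'Must be greater than 0'
aicommit2 config set PERPLEXITY.topP=0
```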
