
Feature Request: Prefix assistant answer #11536

Open
4 tasks done
99991 opened this issue Jan 31, 2025 · 6 comments
Labels
enhancement New feature or request

Comments

@99991

99991 commented Jan 31, 2025

Prerequisites

  • I am running the latest code. Mention the version if possible as well.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new and useful enhancement to share.

Feature Description

Mistral's API allows prefixing the assistant's answer with a specified string. Excerpt from the documentation:

    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": question},
        {"role": "assistant", "content": prefix, "prefix": True}, # <------- this line here is new
    ],

This makes it so that the next answer by the assistant starts with the given prefix.

Motivation

The option to prefix the assistant's answer gives a great deal of control over the model's generation while being much simpler to use than the alternatives.

For example, to force the model to answer directly with Java code using a specific function signature, the prefix could be "```java\nint add(int x, int y){". This technique is used to generate code for benchmarks such as HumanEval to prevent the models from going off the rails.
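For concreteness, a message list built this way might look like the sketch below (the user prompt is made up for illustration; the "prefix" flag follows the Mistral convention shown above):

# Illustration only: force an answer that starts with a fixed Java function signature,
# using the Mistral-style "prefix" flag from the excerpt above.
prefix = "```java\nint add(int x, int y){"

messages = [
    {"role": "system", "content": "Only provide code. Do not write explanations."},
    {"role": "user", "content": "Write a function that adds two integers."},
    {"role": "assistant", "content": prefix, "prefix": True},
]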

Possible Implementation

A full usage example could look something like this:

# Example to generate a function named "quacksort".
# Currently, llama-server ignores the prefix and generates "quicksort" instead.
import requests

def does_not_work_yet():
    url = "http://localhost:8080/v1/chat/completions"

    prefix = "```go\nfunc quacksort"

    data = {
        "messages": [
            {"role": "system", "content": "Only provide code. Do not write explanations."},
            {"role": "user", "content": "Implement quicksort."},
            {"role": "assistant", "content": prefix, "prefix": True}, # <----- this line here is new
        ],
        "seed": 0,
    }

    with requests.post(url, json=data) as response:
        content = response.json()["choices"][0]["message"]["content"]

    print(content)

if __name__ == "__main__":
    does_not_work_yet()

(I used the qwen2.5-coder-7b-instruct-q3_k_m model: llama-server --model qwen2.5-coder-7b-instruct-q3_k_m.gguf --host 127.0.0.1 --port 8080)

The expected result can be obtained with the raw completion API, but this is not portable from model to model, since it requires knowledge of the prompt format. It is also more complicated and generally error-prone: a single misplaced whitespace character or line break can have a significant impact on generation quality.

import requests

def works_but_ugly():
    url = "http://localhost:8080/completion"

    prefix = "```go\nfunc quacksort"

    prompt = f"""<|im_start|>system
Only provide code. Do not write explanations.<|im_end|>
<|im_start|>user
Implement quicksort.<|im_end|>
<|im_start|>assistant
{prefix}"""

    data = {
        "prompt": prompt,
        "seed": 0,
    }

    with requests.post(url, json=data) as response:
        content = prefix + response.json()["content"]

    print(content)

if __name__ == "__main__":
    works_but_ugly()
99991 added the enhancement (New feature or request) label on Jan 31, 2025
@matteoserva
Contributor

Right now the workaround is to use the new /apply-template endpoint in llama-server, added in a recent commit.
It's explained here: https://github.com/ggerganov/llama.cpp/tree/master/examples/server#post-apply-template-apply-chat-template-to-a-conversation
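A minimal sketch of calling it (assuming llama-server is running on localhost:8080, as in the examples above):

import requests

# Sketch: apply the model's chat template server-side and inspect the resulting raw prompt.
# Assumes llama-server is running on localhost:8080, as in the examples above.
data = {
    "messages": [
        {"role": "system", "content": "Only provide code. Do not write explanations."},
        {"role": "user", "content": "Implement quicksort."},
    ],
}

with requests.post("http://localhost:8080/apply-template", json=data) as response:
    print(response.json()["prompt"])  # the templated prompt, ready to extend and send to /completion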

@99991
Author

99991 commented Jan 31, 2025

Right now the workaround is to use the new /apply-template endpoint in llama-server, added in a recent commit. It's explained here: https://github.com/ggerganov/llama.cpp/tree/master/examples/server#post-apply-template-apply-chat-template-to-a-conversation

Great! With this new /apply-template endpoint, we are already halfway there.

Is there an equivalent /parse-template endpoint to convert the raw chat template string back to JSON?

import requests

def apply_template():
    url = "http://localhost:8080/apply-template"

    prefix = "```go\nfunc quacksort"

    data = {
        "messages": [
            {"role": "system", "content": "Only provide code. Do not write explanations."},
            {"role": "user", "content": "Implement quicksort."},
        ],
    }

    with requests.post(url, json=data) as response:
        prompt = response.json()["prompt"]

    data = {
        "prompt": prompt + prefix,
        "seed": 0,
    }

    url = "http://localhost:8080/completion"

    with requests.post(url, json=data) as response:
        content = prefix + response.json()["content"]

    print(content)

if __name__ == "__main__":
    apply_template()

@matteoserva
Contributor

The templating system used by the models doesn't support parsing. It's not llama.cpp's fault.
Anyway, you can put the answer back into your messages array:

import requests

def perform_inference(messages, prefix):
    url = "http://localhost:8080/apply-template"

    data = {
        "messages": messages
    }

    with requests.post(url, json=data) as response:
        prompt = response.json()["prompt"]

    data = {
        "prompt": prompt + prefix,
        "seed": 0,
    }

    url = "http://localhost:8080/completion"

    with requests.post(url, json=data) as response:
        content = prefix + response.json()["content"]

    messages = messages + [{"role": "assistant", "content": content}]
    return messages

if __name__ == "__main__":
    messages = [
            {"role": "system", "content": "Only provide code. Do not write explanations."},
            {"role": "user", "content": "Implement quicksort."},
        ]
    prefix = "```go\nfunc quacksort"
    updated_messages = perform_inference(messages, prefix)
    print(updated_messages)
    
    

@Dango233

Dango233 commented Feb 6, 2025

+1 for this - for me, not supporting a prefix in /v1/chat/completions is the largest gap between llama.cpp and common API providers & lmstudio...

@hdu-hh

hdu-hh commented Feb 7, 2025

The feature already exists in the form of custom GBNF grammars!
You can pass the custom GBNF grammar via the grammar parameter in a server completion request, or via the --grammar or --grammar-file command line option.
An example grammar file is:
root ::= "```go\nfunc quacksort" .*
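As a sketch, the grammar can also be passed inline via the grammar field of a raw /completion request (assuming the local server from the earlier examples):

import requests

# Sketch: pass the GBNF prefix grammar inline with a raw completion request.
# Assumes llama-server is running on localhost:8080, as in the earlier examples.
prefix = "```go\nfunc quacksort"

data = {
    "prompt": "Implement quicksort in Go.",
    "grammar": f'root ::= "{prefix}" .*',
    "seed": 0,
}

with requests.post("http://localhost:8080/completion", json=data) as response:
    print(response.json()["content"])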

@99991
Author

99991 commented Feb 7, 2025

The feature already exists in the form of custom GBNF grammars!

Great! It works!

import requests

url = "http://localhost:8080/v1/chat/completions"

def prefix_using_grammar():
    prefix = "```go\nfunc quacksort"

    data = {
        "messages": [
            {"role": "system", "content": "Only provide code. Do not write explanations."},
            {"role": "user", "content": "Implement quicksort."},
        ],
        "grammar": f'root ::= "{prefix}" .*', # <---------- this line here is new
        "seed": 0,
    }

    with requests.post(url, json=data) as response:
        content = response.json()["choices"][0]["message"]["content"]
    print(content)

if __name__ == "__main__":
    prefix_using_grammar()

All that is required is to add the grammar to the data object:

data = {
    ...
    "grammar": f'root ::= "{prefix}" .*',
}

For me, this is good enough, but I wonder whether "prefix": True should be implemented anyway to have API compatibility with Mistral.

EDIT: I tested this a bit and I think an optimization is missing: sequences of consecutive tokens that are uniquely determined by the grammar should be computed in a single batch. The performance suggests that they are currently evaluated one token at a time.
