[Question] Help in defining my use case #65

Closed
snassimr opened this issue Jun 20, 2024 · 5 comments

@snassimr commented Jun 20, 2024

Hi @Eladlev,

First of all, thank you for your work. I am eager to evaluate your method for my use case.

I want to use an open-source model for a task and tune a prompt for this model. I would like to generate some texts and annotate them as "Good" or "Bad". I also want to use GPT-4 to learn what makes a text "Good" or "Bad", and use that to tune the prompt above.

Which of your examples here matches my case: https://github.com/Eladlev/AutoPrompt/blob/main/docs/examples.md
And which model is the llm and which is the predictor llm for my case?

@Eladlev (Owner) commented Jun 21, 2024

Hi,
So if I understand correctly, you want to optimize a prompt for an open-source model according to GPT-4 annotations.
This can easily be done by following these instructions:
https://github.com/Eladlev/AutoPrompt/blob/main/docs/installation.md#configure-llm-annotator

And using HuggingFacePipeline as the predictor (with some open-source model, llama-3 for example).
This can be done by modifying the predictor LLM according to #40 (comment).
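
For concreteness, here is a minimal sketch of what that predictor swap could look like, assuming LangChain's HuggingFacePipeline wrapper is used (the model id and generation settings below are illustrative assumptions, not AutoPrompt defaults; the actual wiring follows the comment linked above):

# A minimal sketch of an open-source predictor LLM, assuming LangChain's
# HuggingFacePipeline wrapper; the model id and settings are illustrative.
from langchain_community.llms import HuggingFacePipeline

predictor_llm = HuggingFacePipeline.from_model_id(
    model_id="meta-llama/Meta-Llama-3-8B-Instruct",  # any HF text-generation model
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 256},
)

# Quick smoke test of the wrapped pipeline:
print(predictor_llm.invoke("Summarize: The cat sat on the mat. Keep key events."))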

@snassimr (Author) commented Jun 24, 2024

Hi @Eladlev,

Thanks for your tips. After some rethinking, I need the Argilla (human) annotator and the HuggingFacePipeline predictor.
I am still working on it, but I have a question: what is the standalone "llm" section in the config, and what is the role of this llm?

[screenshot: the standalone "llm" section of the configuration file]

@Eladlev (Owner) commented Jun 25, 2024

This is the optimizer LLM (probably we should have put it under meta_prompts).
This LLM will be used to generate the synthetic data, the new prompt suggestions and the error analysis.
I suggest using a strong LLM in this part of the configuration (even if you are using a HuggingFacePipeline LLM as the predictor).
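
To make the split concrete, the two sections might look roughly like this in the YAML config (the key and model names below are approximate assumptions; the exact schema is in the repo's config_default.yml):

# Illustrative sketch only; exact keys follow config_default.yml in the repo.
llm:                # optimizer LLM: synthetic data, prompt suggestions, error analysis
  type: 'OpenAI'
  name: 'gpt-4-1106-preview'    # a strong model is recommended here

predictor:          # predictor LLM: the open-source model whose prompt is tuned
  method: 'llm'
  config:
    llm:
      type: 'HuggingFacePipeline'
      name: 'meta-llama/Meta-Llama-3-8B-Instruct'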

@snassimr (Author) commented

Hi @Eladlev,
My prompt is a bit complex. It consists of two parts: one that should be optimized, and a second part that is input to this prompt
and shouldn't be tuned. Here is the example:
prompt = """
Summarize text. Keep key events.  # Only this part is subject to tuning
Text:
{text_str}
Summary:
"""
Let's assume we are talking about a given value of text_str.
I'd like to present several summaries to the user via Argilla, and he/she will annotate each as "Good" or "Bad".
Is this a generation case rather than a classification one? Should text_str appear in "task_description"?

Thanks

@Eladlev (Owner) commented Jun 28, 2024

Hi,
Your use case is very similar to this example:
https://github.com/Eladlev/AutoPrompt/blob/main/docs/examples.md#generating-movie-reviews-generation-task

There, too, the prompt is an instruction prompt that gets modified, while part of the user prompt is given (the movie description).

If you have any questions regarding the adjustment for your specific use case, we can also iterate on it in the AutoPrompt Discord channel.

snassimr closed this as completed Jul 3, 2024