Using custom prompts and postprocessing #245

Open
anil-gurbuz opened this issue Jun 14, 2024 · 1 comment

@anil-gurbuz
Contributor

Hi,

We want to make a submission to the leaderboard with our fine-tuned model, which is used at www.jdoodle.com.

It seems that for some of the HumanEval questions, our model gets the logic right, but we are having issues with the format of the output, which could potentially be fixed by post-processing and slightly different prompting (our fine-tuning process used a different prompt template than instruction prompting).

I was wondering: would it be a problem if we used custom post-processing steps and/or a custom prompt template for generating responses? Would we still be eligible for a place on the leaderboard in that case?

Thanks!

@loubnabnl
Collaborator

Hi, you can use HumanEvalSynthesize for Python and edit the prompt similarly to this PR: https://github.com/bigcode-project/bigcode-evaluation-harness/pull/219/files. I think the post-processing should work by default; what other changes did you want to introduce?
For the other languages we're using plain MultiPL-E prompts in the leaderboard, even for the chat models. You can add another stop token here if that's what's messing up the post-processing.
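(A minimal sketch of what such a prompt edit could look like, in the spirit of the PR linked above. The template string and function name below are illustrative assumptions for a fine-tuned model's own format, not the harness's actual API.)

```python
# Hypothetical sketch: substituting a fine-tuned model's own prompt template
# for the default instruction template. The "### Instruction / ### Response"
# markers here are an assumed example format, not the harness's built-in one.

PROMPT_TEMPLATE = "### Instruction:\n{instruction}\n\n### Response:\n{context}"

def build_prompt(instruction: str, context: str = "") -> str:
    """Fill the model's own template instead of the default instruction prompt."""
    return PROMPT_TEMPLATE.format(instruction=instruction, context=context)
```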
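(For illustration, a small sketch of the kind of stop-token truncation that MultiPL-E-style post-processing performs: the completion is cut at the earliest occurrence of any stop sequence. The stop sequences below are example values, not the harness's exact defaults.)

```python
# Hypothetical sketch: truncate a generated completion at the earliest
# stop sequence, so trailing text after the function body is discarded.

STOP_TOKENS = ["\ndef ", "\nclass ", "\nif __name__", "\nprint("]  # example values

def truncate_at_stop_tokens(completion: str, stop_tokens=STOP_TOKENS) -> str:
    """Return the completion cut at the first stop token found, if any."""
    cut = len(completion)
    for tok in stop_tokens:
        idx = completion.find(tok)
        if idx != -1:
            cut = min(cut, idx)
    return completion[:cut]
```

Adding another stop token is then just a matter of extending the list for the language whose output format is causing the mismatch.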
