forked from EleutherAI/lm-evaluation-harness
add results for line-corporation large models #81
Merged: mkshing merged 1 commit into Stability-AI:jp-stable from kumapo:eval-line-corp-large-models on Sep 29, 2023
Conversation
kumapo force-pushed the eval-line-corp-large-models branch from 9681af6 to 70dfab3 on August 21, 2023 13:06
kumapo force-pushed the eval-line-corp-large-models branch from 70dfab3 to 1164c11 on August 22, 2023 06:20
kumapo changed the title from "evaluate line-corp large models" to "evaluate line-corporation large models" on Aug 22, 2023
kumapo changed the title from "evaluate line-corporation large models" to "add results for line-corporation large models" on Aug 22, 2023
kumapo: Could you check this PR?
mkshing approved these changes on Sep 29, 2023:
LGTM! Thanks~
polm-stability pushed a commit to polm-stability/lm-evaluation-harness that referenced this pull request on Oct 11, 2023
polm-stability added a commit that referenced this pull request on Nov 6, 2023:
* Initial working refactor
  This just pulls the argparse stuff into a separate function.
* Do some rearrangement for the refactor
  Eval args are necessary, other params are optional. The print output is only needed when called from the CLI, plus it assumes that various keys are present (even if None), which is not the case when calling from Python.
* Move main script to scripts dir, add symlink
  Other scripts can't import the main script since it's in the top level. This moves it into the scripts dir and adds a symlink so it's still usable at the old location.
* Work on adding example Python harness script
* Add notify script
* Fix arg
* Task cleanup
* Add versions to tasks
* Fix typo
* Fix versions
* Read webhook url from env var
* evaluate line-corporation large models (#81)
* compare results between JSQuAD prompt with title and without title (#84)
  * re-evaluate models with jsquad prompt with title
  * update jsquad to include titles in the prompt
  * inherit JSQuAD v1.2 tasks from v1.1 for readability
  * won't need jsquad_v11
  * revert result.json and harness.sh in models
  * fix format
* Verbose output for more tasks (#92)
  * Add output to jaqket v2
  * Add details to jsquad
  * Add verbose output to xlsum
* Add gptq support (#87)
  * add EleutherAI PR519 autoGPTQ
  * small fixups: add comma, change type, change path
  * Undo README modifications
* Add Balanced Accuracy (#95)
  * First implementation of balanced accuracy
  * Make JNLI a balanced acc task
  * Add mcc and balanced f1 scores
* Remove 3.8 version spec from pre-commit config
  The version here makes it so that pre-commit can only run in an environment with python3.8 in the path, but there's no compelling reason for that. Removing the spec just uses system python.
* Fix Linter Related Issues (#96)
  * Change formatting to make the linter happy: newlines at end of files, removing blank lines at end of files, changing single to double quotes, black multi-line formatting rules, other whitespace edits
  * Remove codespell (has a lot of false positives)
  * boolean style issue
  * bare except: these seem harmless enough, so just telling the linter to ignore them
  * More linter suggestions
* Simplify neologdn version
  This was pointing to a commit, but the relevant PR has been merged and released for a while now, so a normal version spec can be used.
* Update xwinograd dataset
  The old dataset was deleted.
* won't need llama2/llama2-2.7b due to duplication (#99)
* add gekko (#98)
* add llama2 format (#100)
  * add llama2 format
  * add 0.6 in prompt_templates.md
  * make pre-commit pass
  * remove debugging line
* fix bug on `mgsm` for prompt version `0.3` (#101)
* Add JCoLA task (#93)
  * WIP: need JCoLA
  * Update harness.jcola.sh
  * update prompt (several iterations); Revert "update prompt" (reverts commit cd9a914)
  * evaluate on JCoLA
  * Add new metrics to cola: this modifies cola, since jcola just inherits this part. It's not a problem to modify the parent task because it just adds some output.
  * need JCoLAWithLlama2
  * JCoLA's prompt version should be 0.0 (https://github.com/Stability-AI/lm-evaluation-harness/blob/jp-stable/docs/prompt_templates.md)
  * documentation for jptasks.md and prompt_templates.md
  * won't need harness and result for JCoLA
  * Delete harness.jcola.sh
* Linter edits and fixes
* Remove example - script is used instead of function
* Cleanup / linter fixes
  There were some things related to the old shell script usage that weren't working, this should fix it.
* Add README section describing cluster usage

Co-authored-by: Paul O'Leary McCann <polm@dampfkraft.com>
Co-authored-by: kumapo <kumapo@users.noreply.github.com>
Co-authored-by: webbigdata-jp <87654083+webbigdata-jp@users.noreply.github.com>
Co-authored-by: webbigdata-jp <dahara1@webbigdata.jp>
Co-authored-by: mkshing <33302880+mkshing@users.noreply.github.com>
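Among the changes folded into that commit is the balanced-accuracy work from #95, which adds imbalance-aware metrics (balanced accuracy, MCC, balanced F1) to classification tasks such as JNLI. The sketch below only illustrates how metrics of that kind can be computed with scikit-learn; the function name and the label data are hypothetical and not taken from the harness itself.

```python
# Minimal sketch, assuming scikit-learn is available; NOT the harness's code.
from sklearn.metrics import balanced_accuracy_score, matthews_corrcoef, f1_score

def aggregate_classification_metrics(golds, preds):
    """Aggregate (gold, prediction) label pairs into accuracy plus the
    imbalance-aware metrics mentioned in the commit message."""
    return {
        "acc": sum(g == p for g, p in zip(golds, preds)) / len(golds),
        "balanced_acc": balanced_accuracy_score(golds, preds),
        "mcc": matthews_corrcoef(golds, preds),
        "macro_f1": f1_score(golds, preds, average="macro"),
    }

# Example: a skewed binary task where plain accuracy looks better than it is.
golds = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
preds = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
print(aggregate_classification_metrics(golds, preds))
```

On the skewed example above, plain accuracy stays at 0.9 while balanced accuracy drops to 0.75 and MCC to about 0.67, which is why imbalance-aware metrics are worth reporting on tasks like JNLI.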
https://engineering.linecorp.com/ja/blog/3.6-billion-parameter-japanese-language-model
TODO
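The linked blog post describes LINE's 3.6-billion-parameter Japanese language model, for which this PR adds evaluation results. As a rough sketch only (not this PR's actual harness.sh), a run through the base harness's Python entry point might look like the following; the model id, task names, prompt-version suffixes, and few-shot setting are assumptions, so check the repo's models/ directory and docs/prompt_templates.md for the real settings.

```python
# Hedged sketch: evaluating a LINE Japanese model via the harness's Python API.
# Model id, task list, and num_fewshot below are illustrative assumptions,
# not values taken from this PR.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=line-corporation/japanese-large-lm-3.6b",
    tasks=["jsquad-1.1-0.3", "jcommonsenseqa-1.1-0.3", "jnli-1.1-0.3"],
    num_fewshot=2,
    batch_size=1,
)
print(results["results"])
```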