Spearman as metric #1919
Comments
You may need to make your individual examples contain entire batches of sub-examples :D
@okhat any clue on how to code this?
`trainset = [dspy.Example(examples=[...])]`
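Following the suggestion above, each trainset item would bundle an entire batch of sub-examples. A minimal sketch, using a plain object as a stand-in for `dspy.Example` (the field names `texts` and `gold_scores` are assumptions for illustration, not DSPy API):

```python
from types import SimpleNamespace

# Stand-in for dspy.Example: a single "example" that carries a whole
# batch of sub-examples, so the metric can score all of them at once.
# Field names (texts, gold_scores) are hypothetical.
batch = SimpleNamespace(
    texts=["query A", "query B", "query C"],
    gold_scores=[0.2, 0.9, 0.5],
)
trainset = [batch]  # one trainset entry = one batch
```

In actual DSPy code, the same fields would be passed as keyword arguments to `dspy.Example(...)`, with the input fields marked via `.with_inputs(...)`.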
@okhat thank you so much!!!!
@okhat When using the MIPROv2 optimizer, I need to provide a metric as a parameter, but I'm uncertain about the metric's format or structure. Any pointers, given that the target is to optimize Spearman?
You need a batch metric! One that takes a whole batch of sub-examples and returns a single score.
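One way to sketch such a batch metric, assuming each batched example exposes a `gold_scores` field and the program's prediction exposes a parallel `scores` list (both names hypothetical). Spearman's rho is computed in pure Python here to keep the sketch self-contained; `scipy.stats.spearmanr` would serve equally well:

```python
def _ranks(values):
    # Rank values 1..n (ties not handled, for brevity).
    order = sorted(range(len(values)), key=values.__getitem__)
    ranks = [0.0] * len(values)
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = float(rank)
    return ranks

def spearman(xs, ys):
    # Spearman's rho via the rank-difference formula (assumes no ties).
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(_ranks(xs), _ranks(ys)))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

def batch_spearman_metric(example, pred, trace=None):
    # One call scores the whole batch, so nothing has to be accumulated
    # across examples: the optimizer sees one number per batched example.
    return spearman(example.gold_scores, pred.scores)
```

Because the entire batch lives inside one example, the metric keeps the usual `(example, pred, trace)` signature that DSPy metrics use, while still correlating all predictions against all golds at once.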
@okhat So essentially the Spearman score is what gets reported. The MIPROv2 optimizer, running in the background, recognizes that a higher score indicates a better program, and uses these scores to identify and report the optimal program, correct?
Yeah |
I'm confused about how to create a metric (function or program) that accumulates predictions and gold data to compute the Spearman correlation. My confusion stems from the concept of accuracy, which evaluates each individual prediction. I want to ensure that all predictions are considered collectively at the end of the evaluation process.
I need help!