feat: expose explore eval stats to python interface #4451
Conversation
ex_l_str = '{"_label_cost":-0.9,"_label_probability":0.5,"_label_Action":1,"_labelIndex":0,"o":[{"v":1.0,"EventId":"38cbf24f-70b2-4c76-aa0c-970d0c8d388e","ActionTaken":false}],"Timestamp":"2020-11-15T17:09:31.8350000Z","Version":"1","EventId":"38cbf24f-70b2-4c76-aa0c-970d0c8d388e","a":[1,2],"c":{ "GUser":{"id":"person5","major":"engineering","hobby":"hiking","favorite_character":"spock"}, "_multi": [ { "TAction":{"topic":"SkiConditions-VT"} }, { "TAction":{"topic":"HerbGarden"} } ] },"p":[0.5,0.5],"VWState":{"m":"N/A"}}\n'
ex_l = vw.parse(ex_l_str)
vw.learn(ex_l)
vw.finish_example(ex_l)
You have to call finish_example before fetching these metrics, right?
Maybe add a comment for someone checking this test to make sure that they do that, e.g. something like the sketch below.
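A rough sketch of what that comment could look like, assuming the stats end up readable through an accessor like get_learner_metrics() on the workspace (that name is an assumption for illustration, not taken from this PR):

```python
ex_l = vw.parse(ex_l_str)
vw.learn(ex_l)
# Per the discussion above: finish_example should be called before the
# explore eval stats are fetched, otherwise they may not be fully updated yet.
vw.finish_example(ex_l)
metrics = vw.get_learner_metrics()  # accessor name assumed for illustration
```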
Wondering if there should be an explicit arg asking for stats. The library can then avoid calculating stats ...
Explore eval doesn't really make sense without these stats. We discussed this yesterday and are going to move them to metrics so that they can be exposed to Python via that route.
Stats cannot be skipped as GD's implementation depends on them currently.
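To make the metrics route concrete, a minimal sketch of reading the stats from Python through VW's existing --extra_metrics JSON dump. The flag combination and the metric key names are illustrative assumptions, not taken from this PR:

```python
import json

import vowpalwabbit

# Illustrative setup: explore_eval on top of cb_explore_adf, with the metric
# sink dumped to a JSON file when the workspace is finished.
vw = vowpalwabbit.Workspace(
    "--explore_eval --cb_explore_adf --dsjson --extra_metrics metrics.json --quiet"
)

ex = vw.parse(ex_l_str)  # ex_l_str: the dsjson example string quoted above
vw.learn(ex)
vw.finish_example(ex)
vw.finish()  # metrics are written out when the workspace is finished

with open("metrics.json") as f:
    metrics = json.load(f)
print(metrics)  # the explore eval stats would show up here once moved to metrics
```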
Force-pushed from c585151 to 5b328ce
@@ -137,6 +137,12 @@ void predict_or_learn(metrics_data& data, T& base, E& ec)

void VW::reductions::additional_metrics(VW::workspace& all, VW::metric_sink& sink)
{
  sink.set_float("sum_loss", all.sd->sum_loss);
Todo: shared_data should have a function to calculate this, instead of the calculation being interleaved with the string output function.
@jackgerrits you ran into this with py bindings next
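For reference, sum_loss is already reachable from the Python bindings via the workspace wrapper, so this change mainly mirrors the same value into the metric sink. A small sketch comparing the two; the get_learner_metrics() accessor name is an assumption, and metrics collection is typically gated on --extra_metrics:

```python
import vowpalwabbit

# Metrics collection is enabled by --extra_metrics (assumption noted above).
vw = vowpalwabbit.Workspace("--quiet --extra_metrics metrics.json")
# ... learn on some examples ...

running_sum_loss = vw.get_sum_loss()  # existing wrapper around sd->sum_loss

# With this diff, the same value should also appear in the metric sink under
# the "sum_loss" key; the accessor name below is assumed, not confirmed here.
metrics = vw.get_learner_metrics()
print(metrics.get("sum_loss"), running_sum_loss)  # expected to match
```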
Force-pushed from 3dfdc74 to cf41995
{
  "id": 456,
  "desc": "Spin off epsilon decay model to lower bitsize to cb_explore with explore eval",
  "vw_command": "--cb_explore_adf -d train-sets/automl_spin_off.txt --noconstant -b 18 --predict_only_model --epsilon_decay --model_count 2 --explore_eval",
Is this meant to be in this PR?
I think so
It's not related to metrics or explore eval though