-
I experimented with some basic explainable-AI methods like saliency maps, but they were ineffective.
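For context on what a saliency-map attempt typically looks like: the idea is to estimate how much each input feature (e.g. each tile slot in a hand encoding) influences the model's chosen score. A minimal sketch, using finite differences on a toy stand-in scorer (the function, weights, and hand encoding here are purely illustrative, not from this project):

```python
def score(hand):
    # Toy stand-in for a model's score of one candidate action:
    # a fixed weighted sum of input slots (a real policy net would go here).
    weights = [((i * 7) % 5) - 2 for i in range(len(hand))]
    return sum(w * x for w, x in zip(weights, hand))

def saliency(hand, eps=1e-3):
    # Approximate |d(score)/d(slot_i)| by perturbing each input slot.
    base = score(hand)
    grads = []
    for i in range(len(hand)):
        bumped = list(hand)
        bumped[i] += eps
        grads.append(abs(score(bumped) - base) / eps)
    return grads

hand = [1.0, 0.0, 2.0, 1.0]  # toy 4-slot input encoding
print(saliency(hand))
```

With a gradient-based framework you would replace the finite differences with a backward pass, but the output is the same kind of per-slot importance vector, which in my experience is hard to turn into the kind of reasoning a human reviewer would give.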
-
Very cool project. I know it's a hard problem, but I'm wondering whether there are related efforts to show not only the model's numeric scores but also human-understandable reasoning, similar to basic "what would you discard?" (何切) discussions or even ML pro game reviews (复盘).
Context: I'm interested in exploring this, so I'd like to know what has already been tried.
Thanks