Add SemanticSegmentationEvaluator #238
Force-pushed from 9d808de to f8d0014.
Since the content of this PR is not purely about …
Force-pushed from 9011c1e to 2ccc93f.
…yuyu2172/chainercv into semantic-segmentation-evaluator
I changed the directory structure of …
@Hakuyume
@yuyu2172 OK
following tuple :obj:`img, label`.
:obj:`img` is an image, :obj:`label` is pixel-wise label.
target (chainer.Link): A semantic segmentation link. This link should
have :meth:`predict` method which takes a list of images and
which -> that? #229 (comment)
returns :obj:`labels`.
label_names (iterable of strings): An iterable of names of classes.
If this value is specified, IoU and class accuracy for each class
is also reported with the keys
is -> are
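The `predict()` interface described in the docstring under review can be sketched as follows. This is a minimal mock, not chainercv code; the class name, shapes, and the always-zero prediction are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a "target" satisfying the documented interface:
# predict() takes a list of CHW images and returns a list of
# pixel-wise label maps, one per image.
class ConstantSegmenter:
    def predict(self, imgs):
        # Predict class 0 for every pixel of every image.
        # Each label map has the same spatial size (H, W) as its image.
        return [np.zeros(img.shape[1:], dtype=np.int32) for img in imgs]
```

Any link exposing this shape of `predict` could then be passed as `target` to the evaluator, per the docstring.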
docs/source/reference/extensions.rst (Outdated)
DetectionVisReport
~~~~~~~~~~~~~~~~~~
.. autofunction:: DetectionVisReport

DetectionVOCEvaluator
~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: DetectionVOCEvaluator
I think it is a class (not a function)
We've been using autofunction for many classes (e.g. Dataset class).
They should be fixed too.
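A sketch of what the suggested fix would look like, using Sphinx's ``autoclass`` directive in place of ``autofunction`` for entries that document classes:

```rst
DetectionVOCEvaluator
~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: DetectionVOCEvaluator
```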
np.testing.assert_equal(eval_['target/pixel_accuracy'], 1.)
np.testing.assert_equal(eval_['target/mean_class_accuracy'], 1.)
np.testing.assert_equal(eval_['target/iou/a'], 1.)
np.testing.assert_equal(eval_['target/iou/b'], 1.)
Could you make iou/a and iou/b different from each other?
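For reference, the metrics being asserted above (pixel accuracy, mean class accuracy, per-class IoU) can be sketched generically from a confusion matrix. This is an illustrative sketch, not the chainercv implementation.

```python
import numpy as np

def segmentation_scores(confusion):
    # confusion[i, j]: number of pixels with GT class i predicted as j.
    n_pixels = confusion.sum()
    gt_per_class = confusion.sum(axis=1)      # pixels of each GT class
    pred_per_class = confusion.sum(axis=0)    # pixels predicted per class
    diag = np.diag(confusion)                 # correctly labeled pixels

    pixel_accuracy = diag.sum() / n_pixels
    # Guard against empty classes with np.maximum (sketch-level handling).
    class_accuracy = diag / np.maximum(gt_per_class, 1)
    # IoU: intersection (diag) over union (GT + pred - intersection).
    iou = diag / np.maximum(gt_per_class + pred_per_class - diag, 1)
    return pixel_accuracy, class_accuracy.mean(), iou
```

A perfect confusion matrix (all mass on the diagonal) yields 1.0 for every score, which is exactly what the test above asserts.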
Thanks for reviewing.
LGTM. Merge after #217.
The current evaluate.py is wrong because n_positive accumulates pixels whose ground-truth label is "ignore". I made the following modification.
I set the batch size to one. This was critical to reproducing the results with SemanticSegmentationEvaluator.
With SemanticSegmentationEvaluator, the scores were as follows. They are identical to the scores above.
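The n_positive fix described above can be sketched as masking out ignored pixels before counting. The helper name and the use of -1 as the "ignore" value are assumptions for illustration; the point is that ignored pixels contribute to no class count.

```python
import numpy as np

def count_positives(gt_label, ignore_value=-1):
    # Count per-class positive pixels, excluding "ignore" pixels so
    # they never inflate n_positive.
    valid = gt_label != ignore_value
    classes, counts = np.unique(gt_label[valid], return_counts=True)
    return dict(zip(classes.tolist(), counts.tolist()))
```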