
Metrics example #610

Closed · nikky4D opened this issue Jan 20, 2021 · 8 comments
Labels: documentation, example request, good first issue, help wanted


nikky4D commented Jan 20, 2021

📓 New <Tutorial/Example>

Is this a request for a tutorial or for an example?
Example

What is the task?
I would like to write custom metrics for my task. Is there an example of how to define metrics and pass them into the training loop? In fastai, we can define our own metric and pass it in to training. Can this be done here as well?
Second question: how do I plot training and validation losses, or accuracy/metric plots?

Is this example for a specific model?
No

Is this example for a specific dataset?
No


Don't remove
Main issue for examples: #39

nikky4D added the documentation, example request, good first issue, and help wanted labels on Jan 20, 2021
lgvaz (Collaborator) commented Jan 20, 2021

Yes, you can. We don't currently have a tutorial for this, but I can help you through the steps; it should be straightforward. If everything works at the end, you can make a PR with a tutorial =)

Similarly to fastai, the only necessary step is to inherit from Metric and override the abstract methods:

from typing import Dict

from icevision.all import *  # provides the Metric base class


class MyMetric(Metric):
    def accumulate(self, records, preds) -> None:
        """Accumulate stats for a single batch"""

    def finalize(self) -> Dict[str, float]:
        """Called at the end of the validation loop"""

finalize returns a dict with {'<METRIC_NAME>': VALUE}
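
To pass the custom metric into the training loop, it goes in the metrics list of whichever training wrapper you use. A minimal sketch, assuming the fastai learner wrapper and the usual train_dl / valid_dl / model names (placeholders here, not from this thread):

metrics = [MyMetric()]

# The learner calls accumulate() over the validation set each epoch and
# logs whatever finalize() returns under '<METRIC_NAME>'.
learn = faster_rcnn.fastai.learner(
    dls=[train_dl, valid_dl], model=model, metrics=metrics
)
learn.fine_tune(10, 1e-4)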

nikky4D (Author) commented Jan 21, 2021

Thanks for the help. I'll get working on it, and look forward to the PR.

For my second question: how do I go about plotting losses? I tried to use faster_rcnn.interp.plot_top_losses(...) but I keep getting the following error, no matter which loss I sort by:


samples_plus_losses, preds, losses_stats = faster_rcnn.interp.plot_top_losses(model=model, dataset=valid_ds, sort_by="loss_rpn_box_reg", n_samples=6)

INFO     - Losses returned by model: ['loss_classifier', 'loss_box_reg', 'loss_objectness', 'loss_rpn_box_reg'] | icevision.models.interpretation:plot_top_losses:205
0% | 0/1799 [00:00<?, ?it/s]

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-27-87eb526af10d> in <module>
----> 1 samples_plus_losses, preds, losses_stats =  faster_rcnn.interp.plot_top_losses(model=model, dataset=valid_ds, sort_by="loss_rpn_box_reg", n_samples=6)

~\anaconda3\envs\open-mmlab38\lib\site-packages\icevision\models\interpretation.py in plot_top_losses(self, model, dataset, sort_by, n_samples, batch_size)
    208 
    209         dl = self.valid_dl(dataset, batch_size=1, num_workers=0, shuffle=False)
--> 210         samples, losses_stats = self.get_losses(model, dataset)
    211         samples = add_annotations(samples)
    212 

~\anaconda3\envs\open-mmlab38\lib\site-packages\icevision\models\interpretation.py in get_losses(self, model, dataset)
    159                 x, y = _move_to_device(x, y, device)
    160                 loss = model(x, y)
--> 161                 loss = {k: float(v.cpu().numpy()) for k, v in loss.items()}
    162                 loss = self._rename_losses(loss)
    163                 loss = self._sum_losses(loss)

AttributeError: 'list' object has no attribute 'items'

lgvaz (Collaborator) commented Jan 22, 2021

I believe your model might be in eval mode. Can you explicitly put it into training mode with model.train() and see if it happens again?
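
For context, torchvision detection models return a dict of losses in train mode but a list of per-image detections in eval mode, which is exactly why the loss-dict iteration in the traceback fails with 'list' object has no attribute 'items'. A minimal sketch, reusing the call from the traceback above:

model.train()  # make the model return the loss dict instead of a list of detections

samples_plus_losses, preds, losses_stats = faster_rcnn.interp.plot_top_losses(
    model=model, dataset=valid_ds, sort_by="loss_rpn_box_reg", n_samples=6
)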

nikky4D (Author) commented Jan 25, 2021

"""Called at the end of the validation loop"""

For the finalize function: I am calculating top-k accuracy using the classification labels. I want to create a function like this one:

def top_k_accuracy_nu(inp, targ, k=3, axis=-1):
    "Computes the Top-k accuracy (`targ` is in the top `k` predictions of `inp`)"
    inp = inp.topk(k=k, dim=axis)[1]
    targ = targ.unsqueeze(dim=axis).expand_as(inp)
    return (inp == targ).sum(dim=-1).float().mean()

In this case, is it correct to assume that I would need to accumulate the labels from each record in a batch, put these in a torch tensor, then just run the above?
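
As a quick sanity check (not from the thread), the function above can be exercised on a small tensor. Here the first sample's target is among its top 3 scores and the second sample's is not, so the expected result is 0.5:

import torch

logits = torch.tensor([[0.10, 0.50, 0.20, 0.90],
                       [0.80, 0.10, 0.05, 0.05]])
targets = torch.tensor([3, 3])

print(top_k_accuracy_nu(logits, targets, k=3))  # tensor(0.5000)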

lgvaz (Collaborator) commented Jan 26, 2021

In this case, is it correct to assume that I would need to accumulate the labels from each record in a batch, put these in a torch tensor, then just run the above?

That would work, but I think a better way would be to calculate the accuracy per batch and then average everything out in finalize.

nikky4D (Author) commented Jan 26, 2021

I believe your model might be in eval mode. Can you explicitly put it into training mode with model.train() and see if it happens again?

Thank you. Explicitly setting model.train() fixed the issue.

In this case, is it correct to assume that I would need to accumulate the labels from each record in a batch, put these in a torch tensor, then just run the above?

That would work, but I think a better way would be to calculate the accuracy per batch and then average everything out in finalize.

Would you have a sample I can borrow from?

lgvaz (Collaborator) commented Jan 27, 2021

Would you have a sample I can borrow from?

I was thinking of something like this:

from typing import Dict

import numpy as np


class MyMetric(Metric):
    def __init__(self):
        super().__init__()
        self._accs = []

    def accumulate(self, records, preds):
        accuracy = calculate_accuracy(records, preds)  # your per-batch accuracy
        self._accs.append(accuracy)

    def finalize(self) -> Dict[str, float]:
        final_accuracy = np.mean(self._accs)
        return {'accuracy': final_accuracy}
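
Putting the two pieces together, a top-k version could compute the per-batch accuracy in accumulate and average it in finalize. A sketch only: classification_scores_and_targets is a hypothetical helper standing in for whatever extracts an (N, C) score tensor and an (N,) label tensor from records/preds, since that layout depends on the model and icevision version:

from typing import Dict

import numpy as np


class TopKAccuracy(Metric):
    def __init__(self, k: int = 3):
        super().__init__()
        self.k = k
        self._accs = []

    def accumulate(self, records, preds) -> None:
        # classification_scores_and_targets is hypothetical: adapt it to however
        # class scores and target labels are stored in your records/preds.
        scores, targets = classification_scores_and_targets(records, preds)
        self._accs.append(top_k_accuracy_nu(scores, targets, k=self.k).item())

    def finalize(self) -> Dict[str, float]:
        final_accuracy = float(np.mean(self._accs))
        self._accs = []  # reset so the next validation run starts fresh
        return {f"top_{self.k}_accuracy": final_accuracy}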

nikky4D (Author) commented Feb 8, 2021

Yes, thank you. I'll get working on it.

nikky4D closed this as completed Feb 8, 2021