Feature: wrapper for callbacks #842
Conversation
from catalyst.core.runner import IRunner


class IgnoreMetricCallback(Callback):
just to be sure, could you please update the docs and changelog with these new callbacks :)
I know it's only a draft yet :)
speaking about architecture and implementation,
what do you think about a CallbackFilterCallback?
not only with name filtering, but also with some condition-based one?
callback_filter = CallbackFilterCallback(
    key={"train": "metric-1"},
    lambda_fn={"valid": lambda key, value: isinstance(value, MetricCallback)},
    # "lambda" itself is a reserved word in Python, so the kwarg needs another name
    # or, we could even also add something like
    value={"valid": MetricCallback},
)
long story short, with callback filtering, most of the time you have 3 options:
- filter by name (the key field)
- filter by class (the value field)
- filter by some very custom condition -> lambda (the lambda_fn field)
what do you think about such a proposal?
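(to make the proposal concrete, here is a minimal sketch of how such a CallbackFilterCallback could behave - the class and its key/value/lambda_fn arguments are hypothetical, nothing like this exists in catalyst yet)

from catalyst.core import Callback, CallbackOrder


class CallbackFilterCallback(Callback):
    # hypothetical sketch of the proposal above, not an existing catalyst API
    def __init__(self, key=None, value=None, lambda_fn=None):
        super().__init__(order=CallbackOrder.External)
        self.key = key or {}  # {loader_name: callback name to drop}
        self.value = value or {}  # {loader_name: callback class to drop}
        self.lambda_fn = lambda_fn or {}  # {loader_name: (name, callback) -> bool}

    def should_drop(self, loader: str, name: str, callback: Callback) -> bool:
        # drop the callback if any of the three filter styles matches
        if self.key.get(loader) == name:
            return True
        cls = self.value.get(loader)
        if cls is not None and isinstance(callback, cls):
            return True
        fn = self.lambda_fn.get(loader)
        return fn is not None and fn(name, callback)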
moreover, if we would like to think about filtering in a broader sense....
why do we filter only on loader name? during a stage, we have two for-loops - the epoch one and the loader one.
In this case, I think, we could filter not only on loader name, but also on epoch index.
nevertheless, still not sure about a user-friendly API for this :)
maybe, only maybe, something like

callback_filter = CallbackFilterCallback(
    condition=lambda epoch, loader: loader == "train",
    filter=lambda key, value: isinstance(value, MetricCallback),
)
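(and a rough sketch of how those two lambdas could be applied - on_loader_start is a real Callback hook, but runner.epoch, runner.loader_name and mutating runner.callbacks are assumptions here)

from catalyst.core import Callback, CallbackOrder


class CallbackFilterCallback(Callback):
    def __init__(self, condition, filter):
        super().__init__(order=CallbackOrder.External)
        self.condition = condition  # (epoch, loader_name) -> bool
        self.filter = filter  # (callback_name, callback) -> bool

    def on_loader_start(self, runner):
        # when the condition matches the current epoch/loader,
        # drop every callback the filter selects
        if self.condition(runner.epoch, runner.loader_name):
            runner.callbacks = {
                name: callback
                for name, callback in runner.callbacks.items()
                if not self.filter(name, callback)
            }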
@Scitator what do you think about FilterCallback or SkipCallback or FilterMetricCallback as a name for the class?
(I think that we can come up with a better name without the repeated "callback" word)
Also, what do you think about these use cases for the arguments:

callback = FilterCallback(
    loader0="to_filter",  # base case - ignore a callback for a loader
    loader1=["to_filter1", "to_filter2"],  # ignore multiple callbacks
    loader2={"to_filter1": [1, 22, 333], "to_filter2": [1, 2, 3]},  # turn off callbacks for specific epochs
    loader3=lambda epoch, loader_name, loader_obj: loader_obj,  # filter with a custom function
)
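(one way to keep that API surface small would be to normalize every accepted form into a single predicate; a hedged sketch, all names illustrative)

def _to_filter_fn(spec):
    # str -> match one callback name; list -> match several;
    # dict -> {callback_name: epochs to turn it off}; callable -> use as-is
    if isinstance(spec, str):
        return lambda epoch, name, callback: name == spec
    if isinstance(spec, (list, tuple)):
        return lambda epoch, name, callback: name in spec
    if isinstance(spec, dict):
        return lambda epoch, name, callback: epoch in spec.get(name, ())
    if callable(spec):
        return spec
    raise ValueError(f"unsupported filter spec: {spec!r}")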
@ditwoo I think FilterCallback would be a good choice :)
First of all, we should not concentrate on Metric-based callbacks only.
catalyst/dl/experiment/config.py (outdated)
@@ -430,6 +431,23 @@ def _get_callback(**params):
            return ConfigExperiment._get_callback(**wrapper_params)
        return callback

    @staticmethod
    def _is_same_types(first: Callback, second: Callback) -> bool:
let's move this function to catalyst/dl/utils/callbacks.py so it would be easier to use it from anywhere
and what do you think about splitting it into two separate functions, like

def get_original_callback(callback: Callback) -> Callback:
    # unwrap nested wrappers until we reach the original callback
    while isinstance(callback, WrapperCallback):
        callback = callback.callback
    return callback


def check_isinstance(first: Callback, second: Callback) -> bool:
    first = get_original_callback(first)
    second = get_original_callback(second)
    return isinstance(first, type(second))
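(and a tiny usage illustration, assuming the wrapper keeps the wrapped instance in a callback attribute as in this PR)

wrapped = WrapperCallback(base_callback=CriterionCallback(), loaders=["valid"])
original = get_original_callback(wrapped)  # -> the CriterionCallback instance
assert check_isinstance(wrapped, CriterionCallback())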
EXP_OUTPUT=./tests/output.txt

PYTHONPATH=./examples:./catalyst:${PYTHONPATH} \
python3 -c "
btw, could you please move your tests to .py files,
like https://github.com/catalyst-team/catalyst/blob/master/catalyst/contrib/dl/callbacks/tests/test_tracer_callback.py ?
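for example, a minimal .py version for this PR could look roughly like this (the file path and the WrapperCallback import are assumptions based on the draft):

# e.g. catalyst/contrib/dl/callbacks/tests/test_wrapper_callback.py (hypothetical path)
import torch
from torch.utils.data import DataLoader, TensorDataset

from catalyst.core.callbacks.wrapper import WrapperCallback  # draft import path
from catalyst.dl import CriterionCallback, SupervisedRunner


def test_wrapper_callback_runs():
    X, y = torch.rand(64, 10), torch.rand(64, 1)
    loader = DataLoader(TensorDataset(X, y), batch_size=16)
    model = torch.nn.Linear(10, 1)

    runner = SupervisedRunner()
    runner.train(
        model=model,
        criterion=torch.nn.MSELoss(),
        optimizer=torch.optim.Adam(model.parameters()),
        loaders={"train": loader, "valid": loader},
        callbacks=[
            WrapperCallback(base_callback=CriterionCallback(), loaders=["train"]),
        ],
        num_epochs=1,
        logdir="./logs/test_wrapper",
    )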
catalyst/core/callbacks/wrapper.py (outdated)
)
runner = SupervisedRunner()
runner.train(
    ...
🤔 what do you think about full-featured examples?
we have a really tiny one:
import torch
from torch.utils.data import DataLoader, TensorDataset
from catalyst.dl import SupervisedRunner

num_samples, num_features = int(1e4), int(1e1)
X, y = torch.rand(num_samples, num_features), torch.rand(num_samples)
loader = DataLoader(TensorDataset(X, y), batch_size=32, num_workers=1)
loaders = {"train": loader, "valid": loader}

model = torch.nn.Linear(num_features, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [3, 6])

runner = SupervisedRunner()
runner.train(
    model=model, criterion=criterion, optimizer=optimizer, scheduler=scheduler,
    loaders=loaders, logdir="./logdir", num_epochs=8, verbose=True,
)
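for this PR, the same example could then be extended with the new wrapper in one extra argument; a rough sketch (the WrapperCallback import path follows this PR's draft layout, and whether loaders lists loaders to run on or to skip is still discussed below):

from catalyst.core.callbacks.wrapper import WrapperCallback  # draft path
from catalyst.dl import CriterionCallback

runner.train(
    model=model, criterion=criterion, optimizer=optimizer, scheduler=scheduler,
    loaders=loaders, logdir="./logdir", num_epochs=8, verbose=True,
    callbacks=[
        WrapperCallback(base_callback=CriterionCallback(), loaders=["train"]),
    ],
)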
catalyst/dl/utils/callbacks.py (outdated)
    return callback


def check_isinstance(first: Callback, second: Callback) -> bool:
check_callback_isinstance?
catalyst/core/callbacks/wrapper.py (outdated)
callbacks=[
    WrapperCallback(
        base_callback=CriterionCallback(),
        loaders=["valid"]
🤔 based on the provided example, should we split its functionality into
loaders=[]  # loader names where the callback should be executed
ignore_loaders=[]  # loader names where we should ignore the callback
?
just wondering how it would be more user-friendly
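i.e., with hypothetical usage like:

# run the wrapped callback only on "train"
WrapperCallback(base_callback=CriterionCallback(), loaders=["train"])
# run it everywhere except "valid" (hypothetical ignore_loaders argument)
WrapperCallback(base_callback=CriterionCallback(), ignore_loaders=["valid"])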
btw, it also looks like this example should not work, because by default our main metric is "loss" on the "valid" loader, and it is not computed here :)
I think, we could easily use dl.AccuracyCallback instead
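something like this, assuming a classification setup where accuracy makes sense (AccuracyCallback is existing catalyst.dl API; the WrapperCallback arguments follow this PR's draft):

from catalyst.dl import AccuracyCallback

runner.train(
    model=model, criterion=criterion, optimizer=optimizer,
    loaders=loaders, logdir="./logdir", num_epochs=8,
    callbacks=[
        # wrap a metric callback instead of the criterion, so the default
        # "loss" metric is still computed on the "valid" loader
        WrapperCallback(base_callback=AccuracyCallback(), loaders=["valid"]),
    ],
)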
catalyst/core/callbacks/wrapper.py (outdated)
@@ -8,6 +8,113 @@
FILTER_FN = Callable[[str, int, str], bool]


class WrapperCallback(Callback):
maybe, we should move it to catalyst/core/callback?
sure
Before submitting
catalyst-make-codestyle && catalyst-check-codestyle
(pip install -U catalyst-codestyle
).make check-docs
?Description
Related Issue
Type of Change
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues, there's a high chance it will not be merged.