[Major] Speedup dataset get_item #1636
Conversation
Model Benchmark
Force-pushed from fdd8043 to 3eb0f52
@MaiBe-ctrl Looking great! 2x Speedup!!
This is looking great! I did a detailed review, noted some things you can make more consistent, and raised a few questions. Thank you!
neuralprophet/forecaster.py
Outdated
            # config_train=self.config_train, # no longer needed since JIT tabularization.
        )
        loader = DataLoader(dataset, batch_size=min(4096, len(df)), shuffle=False, drop_last=False)
        predicted = {}
        for name in self.config_seasonality.periods:
            predicted[name] = list()
-       for inputs, _, meta in loader:
+       for inputs_tensor, meta in loader:
            inputs = unpack_sliced_tensor(
Was this not needed for the other predict_xyz_components methods?
                    "; ".join(["{}: {}".format(inp, values.shape) for inp, values in inputs.items()])
                )
            )
        input, meta = dataset.__getitem__(0)
We should have one test that covers the unpacking of the tensor.
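For instance, something along these lines could serve as a minimal round-trip test. The exact signature of unpack_sliced_tensor and the inclusive (start, end) index layout are assumptions here, not the final API:

import torch

from neuralprophet.utils import unpack_sliced_tensor  # assumed location per this PR


def test_unpack_sliced_tensor_roundtrip():
    # Hypothetical layout: column 0 holds "time", column 1 holds "targets".
    feature_indices = {"time": (0, 0), "targets": (1, 1)}
    time = torch.arange(4, dtype=torch.float32).unsqueeze(1)  # (4, 1)
    targets = torch.ones(4, 1)                                # (4, 1)
    stacked = torch.cat([time, targets], dim=1)               # (4, 2)

    # Signature assumed; adjust to whatever the function actually takes.
    inputs = unpack_sliced_tensor(stacked, feature_indices, max_lags=0, n_forecasts=1)

    assert torch.equal(inputs["time"], time)
    assert torch.equal(inputs["targets"].squeeze(1), targets)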
All of the unpacking logic inside the forward call will be removed; unpacking will instead be done incrementally, only when needed.
neuralprophet/utils.py
Outdated
@@ -987,3 +987,153 @@ def configure_trainer(
    # config["replace_sampler_ddp"] = False

    return pl.Trainer(**config), checkpoint_callback


def unpack_sliced_tensor(
I think it would be good to have this function in the same file as stack_all_features.
Let's also add a docstring to this function and mention that the returned tensors may not be contiguous.
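A rough sketch of what that docstring could look like; the parameter names, shapes, and signature below are assumptions, and the non-contiguity note is the important part:

def unpack_sliced_tensor(sliced_tensor, feature_indices, max_lags, n_forecasts):  # signature assumed
    """Unpack a stacked feature tensor into a dict of per-component tensors.

    Parameters
    ----------
    sliced_tensor : torch.Tensor
        Stacked batch of features, e.g. shape (batch_size, n_stacked_features).
    feature_indices : dict
        Maps component names (e.g. "time", "targets") to inclusive (start_idx, end_idx) column ranges.
    max_lags : int
    n_forecasts : int

    Returns
    -------
    dict
        Component name -> tensor. The returned tensors are slices (views) of the
        stacked tensor and may therefore not be contiguous; call ``.contiguous()``
        before operations that require contiguous memory.
    """
    ...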
This function will be completely removed.
Actually, we need to make the target tensors contiguous anyway, because doing it like this didn't really solve the problem.
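For reference, a small self-contained demonstration of the issue and the fix (plain torch, not the actual NeuralProphet code):

import torch

batch = torch.randn(8, 5)                 # stand-in for the stacked batch tensor
targets = batch[:, 2:3].unsqueeze(1)      # a column slice, like inputs["targets"] above
print(targets.is_contiguous())            # False: the slice is a strided view with gaps
targets = targets.contiguous()            # copies into its own contiguous memory block
print(targets.is_contiguous())            # True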
@@ -774,7 +775,17 @@ def loss_func(self, inputs, predicted, targets):
        return loss, reg_loss

    def training_step(self, batch, batch_idx):
Might there be a "create_batch" function called before this, where we could place the unpack_sliced_tensor? Or should we just move the unpacking of each component into the components' respective forward calls?
Note: we can do this in a subsequent PR.
All of the unpacking will be done incrementally in the forward pass.
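To illustrate the direction, a minimal sketch of a component that slices out only its own columns inside forward. All names here are illustrative, not the actual NeuralProphet components:

import torch
import torch.nn as nn


class SeasonalityBlockSketch(nn.Module):
    """Toy component that unpacks only its own features from the stacked batch."""

    def __init__(self, feature_indices: dict, n_out: int = 1):
        super().__init__()
        # Hypothetical key; inclusive (start, end) column range as in feature_indices.
        self.start, self.end = feature_indices["seasonality"]
        self.linear = nn.Linear(self.end - self.start + 1, n_out)

    def forward(self, stacked_batch: torch.Tensor) -> torch.Tensor:
        # Slice out the seasonality columns only when they are needed,
        # instead of unpacking the whole batch up front.
        features = stacked_batch[:, self.start : self.end + 1]
        return self.linear(features)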
neuralprophet/time_net.py
Outdated
@@ -51,6 +52,7 @@ def __init__(
        config_regressors: Optional[configure.ConfigFutureRegressors] = None,
        config_events: Optional[configure.ConfigEvents] = None,
        config_holidays: Optional[configure.ConfigCountryHolidays] = None,
        config_model: Optional[configure.ConfigModel] = None,
Is this optional?
neuralprophet/utils.py
Outdated
    else:
        start_idx, end_idx = feature_indices["time"]
        inputs["time"] = sliced_tensor[:, start_idx : end_idx + 1]
Why include the end_idx here, unlike when max_lags>0?
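For context: Python/torch slices exclude the upper bound, so if feature_indices stores inclusive (start, end) pairs, the + 1 is what picks up the last column. A small demo with made-up indices:

import torch

t = torch.arange(12).reshape(3, 4)
start_idx, end_idx = 1, 2                     # inclusive column range, as assumed in feature_indices
print(t[:, start_idx:end_idx].shape)          # torch.Size([3, 1]) -- drops column end_idx
print(t[:, start_idx : end_idx + 1].shape)    # torch.Size([3, 2]) -- includes column end_idx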
neuralprophet/utils.py
Outdated
    if "targets" in feature_indices:
        targets_start_idx, targets_end_idx = feature_indices["targets"]
        inputs["targets"] = sliced_tensor[:, targets_start_idx : targets_end_idx + 1].unsqueeze(1)
Why include the end_idx here, unlike when max_lags>0?
neuralprophet/utils.py
Outdated
                lagged_regressor_start_idx,
            ]

        else:
Please add a comment explaining the dimensions and why most of these have an unsqueeze(1) added.
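As an example of the kind of shape comment that would help; the (batch, n_forecasts, dim) layout is an assumption based on the surrounding code:

import torch

batch = torch.randn(32, 7)       # (batch_size, n_stacked_features), assumed layout
events = batch[:, 2:5]           # (32, 3): one row per sample, three event columns
events = events.unsqueeze(1)     # (32, 1, 3): insert the forecast/time axis so the shape
                                 # matches the (batch, n_forecasts, dim) layout used when max_lags > 0
print(events.shape)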
neuralprophet/utils.py
Outdated
    # Unpack multiplicative event and holiday features
    if "multiplicative_events" in feature_indices:
        events_start_idx, events_end_idx = feature_indices["multiplicative_events"]
        if "events" not in inputs:
Add a check for additive as well.
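For illustration, the symmetric handling could look roughly like this, continuing with the variables from the diff above; the "additive_events" key and the inner "additive"/"multiplicative" dict keys are assumptions mirroring the multiplicative branch:

# Initialise the events dict once if either sign is present.
if "additive_events" in feature_indices or "multiplicative_events" in feature_indices:
    if "events" not in inputs:
        inputs["events"] = {}

if "additive_events" in feature_indices:
    start_idx, end_idx = feature_indices["additive_events"]
    inputs["events"]["additive"] = sliced_tensor[:, start_idx : end_idx + 1].unsqueeze(1)

if "multiplicative_events" in feature_indices:
    start_idx, end_idx = feature_indices["multiplicative_events"]
    inputs["events"]["multiplicative"] = sliced_tensor[:, start_idx : end_idx + 1].unsqueeze(1)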
neuralprophet/utils.py
Outdated
    # Unpack multiplicative regressor features
    if "multiplicative_regressors" in feature_indices:
        regressors_start_idx, regressors_end_idx = feature_indices["multiplicative_regressors"]
        if "regressors" not in inputs:
Add a check for additive as well.
neuralprophet/time_net.py
Outdated
        # Unpack and process seasonalities
        seasonalities_input = None
        if self.config_seasonality and self.config_seasonality.periods:
            print("++++seasonalities ++++")
Leftover print statement from debugging.
Still working on cleaning the code 🙈. I'll let you know once it's ready for review!
            max_lags=0,
            n_forecasts=1,
            config_seasonality=self.config_seasonality,
            lagged_regressor_config=self.config_lagged_regressors,
Should set self.config_lagged_regressors to None here.
@@ -142,11 +147,16 @@ def __init__(
        # General
        self.config_model = config_model
        self.n_forecasts = n_forecasts
        self.train_components_stacker = train_components_stacker
Store this as a dict.
@@ -310,6 +320,16 @@ def ar_weights(self) -> torch.Tensor:
        if isinstance(layer, nn.Linear):
            return layer.weight

    def set_components_stacker(self, components_stacker, mode):
Store and call this as a dict.
    def forward(
        self,
        input_tensor: torch.Tensor,
        components_stacker=ComponentStacker,
The components stacker should not be passed to forward, but rather inferred from a mode flag.
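A rough sketch of that direction: keep one stacker per mode in a dict (set via set_components_stacker) and look it up from a mode flag inside forward. Class and method names such as unstack are hypothetical, not the actual API:

import torch
import torch.nn as nn


class TimeNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # One components stacker per mode, e.g. {"train": ..., "val": ..., "predict": ...}
        self.components_stacker = {}

    def set_components_stacker(self, components_stacker, mode: str):
        self.components_stacker[mode] = components_stacker

    def forward(self, input_tensor: torch.Tensor, mode: str = "train"):
        # The stacker is looked up from the mode flag instead of being passed in explicitly.
        stacker = self.components_stacker[mode]
        components = stacker.unstack(input_tensor)  # hypothetical method
        return components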