max_encoder_length and log_prediction issue with TFT and TimeSeriesDataset #864
Comments
After digging a bit deeper into the code, it seems like in this line, the
I faced the same issue and strongly suspect the same cause. It does not cause any problems during training, but it fails during logging if you have a nonzero `log_interval`. It happens in the following lines:
As a temporary fix, you can turn off logging by supplying `log_interval=0`. Anyway, even if there is a different reason for this behavior, the presence of such inconsistent batches, with sequence lengths larger than `max_encoder_length`, also breaks `tft.predict(test_dataloader, mode="raw")`, just as in #449, which I also suspect has the cause discussed here.
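The temporary workaround described above can be sketched as follows. This is only a sketch assuming the usual pytorch-forecasting setup; `training` is assumed to be the `TimeSeriesDataSet` in question, and is not defined here.

```python
from pytorch_forecasting import TemporalFusionTransformer

# Sketch: construct the TFT with logging disabled so the histogram-based
# logging path is never executed. `training` is an existing TimeSeriesDataSet
# (assumed, not shown here).
tft = TemporalFusionTransformer.from_dataset(
    training,
    log_interval=0,  # temporary fix: turn off logging
)
```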
Thanks @NazyS, this is super helpful! Yeah, it seems like when
Thank you SO SO much for creating this issue - I wasted over a week completely stumped on why I was getting CUDA errors related to indexing. At least I can train my model now. Setting `log_interval=0` worked.
"Also you can fill missing timesteps by yourself and use `allow_missing_timesteps=False`" works for me. Thanks
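The fill-it-yourself approach from the comment above can be illustrated with a minimal, self-contained sketch. Plain Python dicts stand in for a dataframe here, and the series values and forward-fill strategy are illustrative assumptions, not taken from the original thread:

```python
# One series keyed by time_idx, with gaps at time_idx 2 and 3 (hypothetical data).
series = {0: 1.0, 1: 2.0, 4: 5.0}

# Reindex over the full time range and forward-fill the missing steps,
# so the dataset sees a gap-free series.
full_range = range(min(series), max(series) + 1)
filled = {}
last = None
for t in full_range:
    if t in series:
        last = series[t]
    filled[t] = last  # forward-fill the gap

print(filled)  # → {0: 1.0, 1: 2.0, 2: 2.0, 3: 2.0, 4: 5.0}
```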
Hi team, I found some strange issues with `TimeSeriesDataset`. I initialized it with `max_encoder_length` set to `24`. However, when I checked the encoder lengths, I got a tensor containing values greater than `24`: e.g., `tensor([24, 14, 24, ..., 27, 29, ..., 30, ..., 24])`.
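A quick sanity check for this can be sketched with plain Python lists standing in for the batch tensor; the sample values below echo the ones reported above and are otherwise hypothetical:

```python
# Configured maximum and a hypothetical batch of per-sample encoder lengths.
max_encoder_length = 24
encoder_lengths = [24, 14, 24, 27, 29, 30, 24]

# Collect any lengths that exceed the configured maximum.
violations = [length for length in encoder_lengths if length > max_encoder_length]
print(violations)  # → [27, 29, 30]
```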
This caused some runtime errors during training: since `encoder_lengths` contains values greater than 24, an out-of-bounds error occurred in `integer_histogram`. I am not sure how to fix this, since I didn't find any checks in `timeseries.py` that limit encoder lengths to at most `max_encoder_length`.
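To see why such lengths blow up in `integer_histogram`, here is a minimal pure-Python stand-in (not the library's actual implementation) for a fixed-range histogram: any value above the declared maximum indexes past the end of the bin array, analogous to the CUDA indexing error seen during logging.

```python
def fixed_range_histogram(values, min_val, max_val):
    """Count occurrences of integers in [min_val, max_val]."""
    bins = [0] * (max_val - min_val + 1)
    for v in values:
        bins[v - min_val] += 1  # IndexError if v > max_val
    return bins

print(fixed_range_histogram([1, 2, 2, 3], 0, 3))  # → [0, 1, 2, 1]

# An encoder length of 27 with max_encoder_length=24 falls outside the bins:
try:
    fixed_range_histogram([27], 0, 24)
except IndexError:
    print("out of bounds")  # analogous to the failure reported above
```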