Can we change predict_len if we use the TTM model? #102

Pretty good model! There are two models available (512-96 and 1024-96). The predict_len is set to 96; can we change it?

Comments
Hi @JieNi4, thanks for your interest. You can set ...
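As a sketch of the general approach (not the exact answer given in this reply, and assuming the tsfm_public TinyTimeMixer classes plus a hypothetical checkpoint id): the pretrained heads always emit 96 future steps, so a shorter horizon can simply be sliced from the output; some versions of the library also expose a prediction_filter_length config option for this during fine-tuning.

```python
# Minimal sketch; checkpoint id and output field name are assumptions.
import torch
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

H = 24  # desired forecast horizon, must be <= 96 for these checkpoints

model = TinyTimeMixerForPrediction.from_pretrained(
    "ibm-granite/granite-timeseries-ttm-r1"  # assumed checkpoint id
)
model.eval()

# dummy batch for the 512-96 variant: (batch, context_length, num_channels)
past_values = torch.randn(4, 512, 1)

with torch.no_grad():
    out = model(past_values=past_values)

# the model always predicts 96 steps; keep only the first H of them
# (the output field name may differ across library versions)
forecast = out.prediction_outputs[:, :H, :]
print(forecast.shape)  # torch.Size([4, 24, 1])
```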
Thanks! Can we use multivariate inputs in the zero-shot or fine-tuning (few-shot) process? I have seen it's possible in your paper: "Decoder Channel-Mixing can be enabled during fine-tuning for capturing strong channel-correlation patterns across time-series variates, a critical capability lacking in existing counterparts." Thank you very much for your reply.
Hi @JieNi4, yes -- if you fine-tune, you can freeze the backbone, then enable channel mixing in the decoder and tune. Some code snippets:
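The original snippets are not preserved above; the following is a sketch of the setup described, where the checkpoint id, config values, and attribute names are assumptions based on the tsfm_public TinyTimeMixer API rather than the exact code from the thread.

```python
# Sketch: enable channel mixing in the decoder and freeze the pretrained
# backbone so that only the decoder/head are tuned on the few-shot data.
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

model = TinyTimeMixerForPrediction.from_pretrained(
    "ibm-granite/granite-timeseries-ttm-r1",  # assumed checkpoint id
    num_input_channels=7,                     # number of variates in your data
    decoder_mode="mix_channel",               # enable decoder channel mixing
)

# Freeze the backbone; the attribute name may differ across library versions.
for param in model.backbone.parameters():
    param.requires_grad = False

# Then fine-tune as usual (e.g., with the Hugging Face Trainer) on your data.
```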
@wgifford Thanks a lot. I have another question. We use scaling in both zero-shot and few-shot, but it seems the final evaluation loss and the plotted figures are computed on the normalised data. Can we apply the inverse transform before computing the loss and plotting? Also, when the number of input channels is greater than 1, the loss seems to be the mean over all channels; can we print the loss for each channel separately?
Since the ...
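The reply above is cut off. As a generic illustration of what the question asks for (not the maintainer's actual answer), here is a runnable sketch that inverse-transforms scaled forecasts with a fitted StandardScaler and reports one MSE per channel; all array names and shapes are placeholders.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
num_channels, horizon = 3, 96

# stand-ins for your data: training targets and (scaled) predictions/actuals
train_targets = rng.normal(size=(1000, num_channels)) * [10.0, 0.1, 5.0]
scaler = StandardScaler().fit(train_targets)

preds_scaled = rng.normal(size=(32, horizon, num_channels))
actuals_scaled = rng.normal(size=(32, horizon, num_channels))

def unscale(x: np.ndarray) -> np.ndarray:
    """Apply the scaler's inverse transform channel-wise to (N, H, C) arrays."""
    n, h, c = x.shape
    return scaler.inverse_transform(x.reshape(-1, c)).reshape(n, h, c)

preds = unscale(preds_scaled)
actuals = unscale(actuals_scaled)

# one MSE per channel in the original units, rather than the mean over channels
mse_per_channel = ((preds - actuals) ** 2).mean(axis=(0, 1))
for i, mse in enumerate(mse_per_channel):
    print(f"channel {i}: MSE = {mse:.4f}")
```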