
Informer must expect pred_len > 1 #108

Open
Hadar933 opened this issue Dec 24, 2023 · 0 comments
Hadar933 commented Dec 24, 2023

When using the Informer model with pred_len == 1, we observe an index error:

IndexError: max(): Expected reduction dim 2 to have non-zero size

The reason, we believe, is this:

  • The forward method takes in x_dec, which has shape (b, 1, f), and is passed to the self.decoder module (a DecoderLayer).
  • Inside, self-attention is performed on x_dec (denoted x), which in turn calls the self.inner_attention module, i.e. the forward call of the ProbAttention class.
  • At this step, keys = x = x_dec, so the line _, L_K, _, _ = keys.shape sets L_K == 1, which in turn sets U_part to 0, since a log is taken over L_K in U_part = self.factor * np.ceil(np.log(L_K)).astype('int').item() # c*ln(L_k) and log(1) == 0.
  • The last call is the _prob_QK method of ProbAttention, which performs M = Q_K_sample.max(-1)[0] - torch.div(Q_K_sample.sum(-1), L_K), resulting in IndexError: max(): Expected reduction dim 2 to have non-zero size (Q_K_sample is an empty tensor).
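The U_part collapse described above can be checked with a quick sketch (factor=5 is assumed here, matching the usual default of the --factor argument in the Informer scripts):

```python
import math

def u_part(L_K, factor=5):
    # Mirrors U_part = self.factor * np.ceil(np.log(L_K)).astype('int').item()
    return factor * int(math.ceil(math.log(L_K)))

print(u_part(96))  # a typical decoder length gives a positive sample size
print(u_part(1))   # pred_len == 1, label_len == 0: log(1) == 0, so U_part == 0
```

With U_part == 0, zero keys are sampled and Q_K_sample ends up empty, which is exactly what the max() reduction then trips over.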

The command we used is:

python run_longExp.py --is_training 1 --root_path ./dataset/ --data_path ETTh1.csv --model_id ETTh1_96 --model Informer --data ETTh1 --features M --seq_len 128 --pred_len 1 --label_len 0 --enc_in 7 --des Exp --itr 1 --batch_size 32 --learning_rate 0.005
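A possible workaround (a sketch only, not an upstream fix; u_part_clamped is a hypothetical helper) would be to clamp the sample size inside ProbAttention so Q_K_sample is never empty:

```python
import math

def u_part_clamped(L_K, factor=5):
    """Sketch of a guard for the ProbSparse sample size: clamp it to at
    least 1 so Q_K_sample is never an empty tensor when L_K == 1."""
    u = factor * int(math.ceil(math.log(L_K)))  # c * ln(L_K); 0 when L_K == 1
    return max(1, min(u, L_K))                  # keep within [1, L_K]
```

Whether sampling a single key still gives a meaningful sparsity measure M for such a short decoder input is a separate question, but it at least avoids the crash.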