The LMU layer currently accepts return_sequences and passes that through to tf.keras.layers.RNN with the created LMUCell. However, there are a number of other RNN flags that are ignored (https://www.tensorflow.org/api_docs/python/tf/keras/layers/RNN):
return_state
go_backwards
stateful
unroll
time_major
In order to use any of these additional flags, the LMUCell must currently be instantiated directly and passed to tf.keras.layers.RNN alongside the flags (see the sketch below). Supporting the flags at the layer level would mirror the pattern of other recurrent layers such as tf.keras.layers.LSTM.
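For reference, a minimal sketch of that workaround might look like the following. This assumes the LMUCell constructor signature (memory_d, order, theta, hidden_cell); the specific argument values are illustrative placeholders, not recommendations.

```python
import tensorflow as tf
import keras_lmu

# Illustrative LMUCell configuration (argument values are placeholders).
cell = keras_lmu.LMUCell(
    memory_d=1,
    order=256,
    theta=784,
    hidden_cell=tf.keras.layers.SimpleRNNCell(units=212),
)

# The extra RNN flags are passed to tf.keras.layers.RNN directly, since the
# LMU layer does not currently forward them.
rnn = tf.keras.layers.RNN(
    cell,
    return_sequences=True,
    return_state=True,
    go_backwards=True,
)

inputs = tf.keras.Input(shape=(784, 1))
# With return_state=True this returns the output sequence plus the final state(s).
outputs = rnn(inputs)
model = tf.keras.Model(inputs, outputs)
```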
@arvoelke does the absence of support for go_backwards mean that using the LMU with the standard Keras Bidirectional wrapper is not going to work as expected, but only when using the LMU layer rather than LMUCell?
Is something like this (see the sketch below) going to work as expected?
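A sketch of that kind of construction, i.e. wrapping an explicit tf.keras.layers.RNN around the cell before applying Bidirectional (the LMUCell arguments are illustrative placeholders):

```python
import tensorflow as tf
import keras_lmu

# Wrap LMUCell in an explicit RNN layer, then make it bidirectional.
lmu_layer = tf.keras.layers.RNN(
    keras_lmu.LMUCell(
        memory_d=1,
        order=256,
        theta=784,
        hidden_cell=tf.keras.layers.SimpleRNNCell(units=212),
    ),
    return_sequences=True,
)

inputs = tf.keras.Input(shape=(784, 1))
outputs = tf.keras.layers.Bidirectional(lmu_layer)(inputs)
model = tf.keras.Model(inputs, outputs)
```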
That example should work as expected. If you were to try tf.keras.layers.Bidirectional(keras_lmu.LMU(...)) instead then you should see a KeyError: 'go_backwards' (at least I do on tensorflow==2.3.1).