
Support projection feature for LSTM on CPU (Only Inference) #17702

Merged
merged 2 commits into apache:master on Mar 2, 2020

Conversation

zixuanweeei
Contributor

Description

gluon.rnn.LSTM has an argument, projection_size, which enables the projection feature for the LSTM operator. Previously, this feature was not supported on the CPU backend; this PR adds it. Note that only the forward pass is ready in this PR. The backward pass and the needs_grads scenario have not been adapted to this feature, and hitting either of them throws an error.
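For reference, here is a sketch of the LSTMP recurrence that projection_size enables (following the standard LSTMP formulation; the weight symbols are generic notation, not identifiers from this PR):

i_t = \sigma(W_{xi} x_t + W_{ri} r_{t-1} + b_i)
f_t = \sigma(W_{xf} x_t + W_{rf} r_{t-1} + b_f)
o_t = \sigma(W_{xo} x_t + W_{ro} r_{t-1} + b_o)
c_t = f_t \circ c_{t-1} + i_t \circ \tanh(W_{xc} x_t + W_{rc} r_{t-1} + b_c)
h_t = o_t \circ \tanh(c_t)
r_t = W_{rh} h_t

Here h_t has size hidden_size (H), the projected state r_t has size projection_size (P), and r_t replaces h_t in the next step's recurrence, so the recurrent weights take shape (4H, P) and the projection weights take shape (P, H).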

@ciyongch @pengzhao-intel @TaoLv

@pengzhao-intel
Contributor

@eric-haibin-lin

@@ -385,7 +382,9 @@ The definition of GRU here is slightly different from paper but compatible with
})
Member

I don't think the projection support is clear in the documentation. Could you update the documentation to describe LSTMP support when projection_size is set? You can refer to https://github.com/apache/incubator-mxnet/blob/62a85f365b819829fedb60116f803e0c0a3c554c/python/mxnet/gluon/contrib/rnn/rnn_cell.py#L197

Thanks!

Contributor Author

Sure. Thanks for pointing that out.

Contributor Author

Done. Please take another look. Thanks.

@zixuanweeei
Contributor Author

CI passed last time. The latest commit just adds some documentation for the projection feature, so it should have no impact on functionality. Let's wait for the CI validation. Please take a look. Thanks. @ciyongch @pengzhao-intel

Contributor

@ciyongch ciyongch left a comment

LGTM

Member

@TaoLv TaoLv left a comment

Thank you for the contribution. Minor comments.

@@ -385,7 +399,9 @@ The definition of GRU here is slightly different from paper but compatible with
})
.set_attr<mxnet::FInferShape>("FInferShape", RNNShape)
.set_attr<nnvm::FInferType>("FInferType", RNNType)
#if MXNET_USE_MKLDNN == 1
Member

Let's merge this check with the one at L407.

Member

Be aware that, if MKL-DNN is not used, FInferStorageType will not be registered.

Contributor Author

Done. The default storage type inference function is used when FInferStorageType is not registered.
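For illustration, a rough sketch of what the merged guard looks like (the attribute and handler names below follow MXNet's usual operator registration pattern but are assumptions for illustration; the exact identifiers in rnn.cc may differ):

#if MXNET_USE_MKLDNN == 1
// A single guard covers the DNNL-specific attributes instead of two separate
// #if blocks. When MKL-DNN is disabled, FInferStorageType is simply not
// registered and MXNet falls back to the default storage type inference.
.set_attr<FInferStorageType>("FInferStorageType", RNNStorageType)
.set_attr<FStatefulComputeEx>("FStatefulComputeEx<cpu>", RNNStatefulComputeExCPU)
#endif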

@@ -427,7 +443,9 @@ NNVM_REGISTER_OP(_backward_RNN)
.set_attr_parser(ParamParser<RNNParam>)
.set_attr<bool>("TIsLayerOpBackward", true)
.set_attr<nnvm::TIsBackward>("TIsBackward", true)
#if MXNET_USE_MKLDNN == 1
Member

Same here, merge this check with the one at L450.

Contributor Author

Done. Thanks.

const Tensor<cpu, 2, DType> wh(w_ptr + I * H * 4, Shape2(H * 4, (P ? P : H)));
Tensor<cpu, 2, DType> whr(w_ptr, Shape2(1, 1));
if (P > 0)
  whr = Tensor<cpu, 2, DType>(wh.dptr_ + P * 4 * H, Shape2(P, H));
Member

Let's put this on the same line as if (P > 0), or add { ... } around it, like what you're doing at L236.

Contributor Author

Put them on the same line, and did the same at L236.
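For reference, the resolved form is a single line:

if (P > 0) whr = Tensor<cpu, 2, DType>(wh.dptr_ + P * 4 * H, Shape2(P, H));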

@TaoLv TaoLv mentioned this pull request Feb 29, 2020
* test solution for -Werror=maybe-uninitialized

* Check device type when create state

* Document the projection feature of LSTM for RNN operator

* Minor fix
@pengzhao-intel
Contributor

Thanks for all of your great work. I will merge the PR after CI passes.

@pengzhao-intel pengzhao-intel merged commit ac77974 into apache:master Mar 2, 2020
MoisesHer pushed a commit to MoisesHer/incubator-mxnet that referenced this pull request Apr 10, 2020
Support projection feature for LSTM on CPU (Only Inference) (apache#17702)

* Support projection feature for LSTM on CPU

* test solution for -Werror=maybe-uninitialized

* Check device type when create state

* Document the projection feature of LSTM for RNN operator

* Minor fix

* Re-run CI
zixuanweeei added a commit to zixuanweeei/mxnet that referenced this pull request Apr 13, 2020
Support projection feature for LSTM on CPU (Only Inference) (apache#17702)

* Support projection feature for LSTM on CPU

* test solution for -Werror=maybe-uninitialized

* Check device type when create state

* Document the projection feature of LSTM for RNN operator

* Minor fix

* Re-run CI
pengzhao-intel pushed a commit that referenced this pull request Apr 15, 2020
* Support projection feature for LSTM on CPU (Only Inference) (#17702)

* Support projection feature for LSTM on CPU

* test solution for -Werror=maybe-uninitialized

* Check device type when create state

* Document the projection feature of LSTM for RNN operator

* Minor fix

* Re-run CI

* Fix issue of zeros gradients w.r.t. RNN bias when num_layers > 1 (#17872)

* Fix issue of zeros gradients w.r.t. RNN bias when num_layers > 1

* Use nd.copy() to initialize parameters of new operator

* Add check for output states

* Initialize i2h/h2h_weights with zeros for rnn_relu/tanh, and reduce size

* Split fused rnn layer test into tests of individual mode

* Skip lstm and gru tests on CPU context without DNNL
stu1130 pushed a commit to stu1130/incubator-mxnet that referenced this pull request Apr 15, 2020
…18038)

* Support projection feature for LSTM on CPU (Only Inference) (apache#17702)

* Support projection feature for LSTM on CPU

* test solution for -Werror=maybe-uninitialized

* Check device type when create state

* Document the projection feature of LSTM for RNN operator

* Minor fix

* Re-run CI

* Fix issue of zeros gradients w.r.t. RNN bias when num_layers > 1 (apache#17872)

* Fix issue of zeros gradients w.r.t. RNN bias when num_layers > 1

* Use nd.copy() to initialize parameters of new operator

* Add check for output states

* Initialize i2h/h2h_weights with zeros for rnn_relu/tanh, and reduce size

* Split fused rnn layer test into tests of individual mode

* Skip lstm and gru tests on CPU context without DNNL
anirudh2290 pushed a commit to anirudh2290/mxnet that referenced this pull request May 29, 2020
Support projection feature for LSTM on CPU (Only Inference) (apache#17702)

* Support projection feature for LSTM on CPU

* test solution for -Werror=maybe-uninitialized

* Check device type when create state

* Document the projection feature of LSTM for RNN operator

* Minor fix

* Re-run CI