fix _convert_simple_rnn #15723
Conversation
Hello @haoyang9804! I've looked through your code, but I'm not so familiar with it. It is good that you added a test and that it works successfully, but I should warn you that numpy does not work with bfloat16, while TVM and other frameworks do.
Thanks for the reply, but I think my patch is not related to bfloat16.
python/tvm/relay/frontend/keras.py (outdated)
weightList0 = weightList[0].transpose([1, 0])
assert len(in_data.type_annotation.shape) == 3
for i in range(in_data.type_annotation.shape[1].value - 1):
    weightList0 = np.hstack((weightList0, weightList[0].transpose([1, 0])))
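As a side note on what this loop computes, here is a standalone sketch with made-up sizes (units=4, features=8, three time steps): the transposed kernel is simply tiled along its columns once per time step.

```python
# Standalone illustration with made-up sizes (not from the PR): the Keras
# kernel has shape (features, units); the loop tiles its transpose column-wise
# once per remaining time step.
import numpy as np

units, features, time_steps = 4, 8, 3
kernel = np.random.rand(features, units).astype("float32")

tiled = kernel.transpose([1, 0])
for _ in range(time_steps - 1):
    tiled = np.hstack((tiled, kernel.transpose([1, 0])))

print(tiled.shape)  # (4, 24) == (units, features * time_steps)
```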
I'm aware of this line
Could you please elaborate? I still cannot see any data type issue here.
Could you check your code with bfloat16 weights? numpy.hstack has a dtype argument, and I guess it may check it; if so, numpy fails when the dtype is bfloat16.
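For reference, a quick check of the NumPy half of this concern (a minimal sketch; it assumes no extension such as ml_dtypes has registered a bfloat16 dtype):

```python
# Stock NumPy does not define a bfloat16 dtype at all (assuming no extension
# such as ml_dtypes is installed), so constructing it fails outright.
import numpy as np

try:
    np.dtype("bfloat16")
except TypeError as err:
    print("bfloat16 is not a NumPy dtype:", err)
```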
Oh, I see. I will check it later. Thanks.
It's true that numpy.hstack does not support bfloat16. But weightList[0] and weightList0 can never be bfloat16, to the best of my understanding. These two variables come from weightList = keras_layer.get_weights(), and they are NumPy arrays. If numpy does not support bfloat16, the dtype of weightList[0] should never be bfloat16, so this worry seems unnecessary here.
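A minimal sketch of that argument, assuming TensorFlow/Keras is available (the layer sizes are arbitrary): get_weights() hands back plain NumPy arrays, so their dtype is whatever NumPy natively supports, typically float32.

```python
# Sketch (assumed sizes): SimpleRNN weights returned by get_weights() are
# plain NumPy arrays, so bfloat16 cannot appear here.
import numpy as np
import tensorflow as tf

layer = tf.keras.layers.SimpleRNN(units=4)
_ = layer(np.random.rand(1, 3, 8).astype("float32"))  # build the layer

weights = layer.get_weights()  # [kernel, recurrent_kernel, bias]
print(all(isinstance(w, np.ndarray) for w in weights))  # True
print([w.dtype for w in weights])  # e.g. [dtype('float32'), dtype('float32'), dtype('float32')]
```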
LGTM. Please fix the lint and other CI issues.
cc @echuraev
It seems that everything is good except for the gpu/docs CI tests. May I ask how to fix them? I have never met this issue before @vvchernov.
Hello @echuraev! Looks like it is a problem from Jenkins:
Do you know who can help us? |
Hello @haoyang9804! I think it is not your issue. I've rechecked PRs: #15714 and #15709 have the same issue, but the next PRs do not. Possibly the best way is to restart this CI.
@tvm-bot rerun
@haoyang9804, I have restarted CI. In case of further errors, please try to rebase your branch to the latest mainline. |
LGTM. Thanks
Fix this issue
Just as @echuraev guessed, _convert_simple_rnn has some logical errors. I'm not completely sure whether I fixed it correctly. All in all, after this fix, running the following bug-triggering script produces a good compilation result, InferType() can successfully infer all types/shapes in the model, and the inference is correct. The compilation result is:
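For illustration only, a minimal sketch (not the original reproduction script; the model shape, layer size, and input name are assumptions) of pushing a Keras SimpleRNN through the TVM Keras frontend and running InferType():

```python
# Hypothetical sketch: convert a tiny tf.keras SimpleRNN model with the Keras
# frontend and run type inference on the resulting Relay module.
import tensorflow as tf
from tvm import relay

model = tf.keras.Sequential([
    tf.keras.Input(shape=(3, 8), name="rnn_input"),  # (time_steps, features)
    tf.keras.layers.SimpleRNN(units=4),
])

shape_dict = {"rnn_input": (1, 3, 8)}  # (batch, time_steps, features)
mod, params = relay.frontend.from_keras(model, shape_dict)

mod = relay.transform.InferType()(mod)  # should succeed after this fix
print(mod)
```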