[Frontend] [Tensorflow2] Added test infrastructure for TF2 frozen models #8074
Conversation
One more question: does a TF2 model go through the same from_tensorflow entry point as TF1?
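For context, a minimal sketch of one way a TF2 `tf.function` model can be frozen into a GraphDef (the format the TF1-style `from_tensorflow` parser consumes). The `AddOne` module is a toy example, not code from this PR:

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)

class AddOne(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec(shape=(2,), dtype=tf.float32)])
    def func(self, x):
        return x + 1.0

m = AddOne()
concrete = m.func.get_concrete_function()
# Inline variables as constants, producing a frozen graph.
frozen = convert_variables_to_constants_v2(concrete)
graph_def = frozen.graph.as_graph_def()
# graph_def can then be handed to relay.frontend.from_tensorflow, the same
# entry point used for TF1 GraphDefs (for models without TF2-style control flow).
```

This only covers models that freeze cleanly; TF2-style control flow needs the separate frontend work discussed in this thread.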
from tensorflow.python.eager.def_function import Function

def vmobj_to_list(o):
There is another vmobj_to_list in tests/python/frontend/tensorflow/test_forward.py; it would be good to merge these same-name functions, as they look quite similar.
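For readers unfamiliar with the helper being discussed: its job is to recursively flatten TVM's nested runtime containers into a flat list of arrays. A dependency-free sketch of that recursion pattern, using plain Python lists/tuples as stand-ins for TVM's ADT/tuple objects:

```python
def flatten_nested(o):
    # Recursively flatten nested containers into a flat list of leaf values,
    # mirroring what vmobj_to_list does for TVM runtime objects
    # (NDArray leaves, ADT/tuple containers).
    if isinstance(o, (list, tuple)):
        out = []
        for field in o:
            out.extend(flatten_nested(field))
        return out
    return [o]
```

The real helper additionally converts NDArray leaves to numpy via `.numpy()`, but the container recursion is the part duplicated across test files.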
Thanks @yongwww for the comments. I addressed them in the new commits.
In this commit I do not introduce new frontend code for TF2 models; that will come in a separate PR designed to handle TF2-style control flow, which will also inherit some code from the TF1 frontend, mostly the ops. The structure of the PRs should follow the pattern discussed in this thread, especially the comment referenced above.
return vmobj_to_list(_out)

def run_graph(lib, input_, ctx=tvm.cpu(0)):
Personally, I prefer the name run_graph_runtime.
Parameters
----------
func: tf function. can be from saved model or not. different ways to pass input
I don't see func in the argument list of compare_tf_tvm; please update them so that the docstring and the definition match.
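Docstring/signature drift like this is easy to catch mechanically. A hypothetical helper (not part of this PR or of TVM) that flags NumPy-style docstring parameters missing from the actual signature:

```python
import inspect
import re

def docstring_param_names(fn):
    # Rough heuristic: collect names listed under a NumPy-style
    # 'Parameters' section ("name: description" lines).
    doc = inspect.getdoc(fn) or ""
    names, in_params = [], False
    for line in doc.splitlines():
        if line.strip() == "Parameters":
            in_params = True
            continue
        if in_params:
            if set(line.strip()) == {"-"}:  # the '----------' underline
                continue
            m = re.match(r"(\w+)\s*:", line.strip())
            if m:
                names.append(m.group(1))
            elif not line.startswith(" ") and line.strip():
                break  # reached the next section (e.g. 'Returns')
    return names

def stale_docstring_params(fn):
    # Names documented in the docstring but absent from the signature.
    sig = set(inspect.signature(fn).parameters)
    return [n for n in docstring_param_names(fn) if n not in sig]
```

Running this over the test helpers would have flagged the stray `func` entry immediately.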
return _out

def compare_tf_tvm(gdef, input_, output_, vm=True, output_sig=None):
How about changing vm to runtime_mode or runtime, since we have the VM, the graph runtime, and the interpreter?
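A minimal sketch of what that suggestion could look like: a string `runtime` argument dispatching over the three backends, with stub runners standing in for the real VM/graph/interpreter paths (names here are hypothetical, not from the PR):

```python
# Stand-ins for the real backend runners.
def run_vm(gdef, input_):
    return ("vm", input_)

def run_graph_executor(gdef, input_):
    return ("graph", input_)

def run_interpreter(gdef, input_):
    return ("interpreter", input_)

_RUNNERS = {
    "vm": run_vm,
    "graph": run_graph_executor,
    "interpreter": run_interpreter,
}

def compare_tf_tvm(gdef, input_, output_, runtime="vm"):
    # A single string argument scales to three backends, unlike a bool flag.
    try:
        runner = _RUNNERS[runtime]
    except KeyError:
        raise ValueError(
            f"unknown runtime {runtime!r}, expected one of {sorted(_RUNNERS)}"
        )
    return runner(gdef, input_)
```

The bool `vm=True` flag only distinguishes two modes, so a mode string (or enum) is the natural fix once a third backend enters the picture.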
Thanks @rohanmukh, I made some suggestions.
def compile_graph_runtime(
    mod, params, target="llvm", target_host="llvm", opt_level=3, output_sig=None
output_sig is unused here and in compile_vm(); is this intended? What is output_sig?
It is a mistake from refactoring; output_sig holds the outputs that should be passed only to the frontend parser code, not to the compiler.
    mod, params, target="llvm", target_host="llvm", opt_level=3, disabled_pass=None, output_sig=None
):
    with tvm.transform.PassContext(opt_level, disabled_pass=disabled_pass):
        mod = relay.transform.InferType()(mod)
I don't think InferType is required here.
return a

def compile_graph_runtime(
Graph Runtime is now Graph Executor; you may want to rename these functions.
from common import run_tf_code

class AddOne(tf.Module):
For all the tests in this file, I think it would be best to encapsulate the tf.Module class(es) inside the test function. For example:

```python
def test_expand_dims():
    class ExpandDims(tf.Module):
        def get_input(self):
            return np.ones((1, 30), dtype=np.float32)

        @tf.function(input_signature=[tf.TensorSpec(shape=(1, 30), dtype=tf.float32)])
        def func(self, x):
            return tf.expand_dims(x, axis=2)

    run_all(ExpandDims)
```
compare_tf_tvm(*model_graph(model_fn, input_shape), vm=True, output_sig=None)

def dense_model(input_shape, num_units=128):
Same here; I would put these model functions inside the test functions that use them.
Thanks @trevor-m, I addressed the comments in the new commits.
Co-authored-by: David Huang <davhuan@amazon.com>
Co-authored-by: Rohan Mukherjee <mukrohan@amazon.com>
Co-authored-by: Xiao <weix@amazon.com>
Thanks for implementing the changes! LGTM
LGTM
def run_tf_code(func, input_):
    print(type(func))
Remove this, or change it to a log statement.
@@ -73,6 +75,46 @@ def convert_to_list(x):
    return x

def vmobj_to_list(o):
Is this copied from somewhere? We could add it to testing and remove the copies in the other places.
This is copied from tests/python/frontend/tensorflow/test_forward.py. Do you mean I should remove it from there and import it from here (i.e. relay.testing.tf) in this PR? Or should that be done in another PR? Similar code exists in other places as well, such as test_vm.py, test_tensorrt.py, etc.
LGTM
Thanks @rohanmukh @yongwww @trevor-m
…els (apache#8074)

* added test infrastructure for frozen TF2 models
* linting with black
* removing some comments
* change in comment in sequential test
* addressed the comments
* refactored to place vmobj_to_list in a common file
* Added helper function in python/tvm/relay/testing/tf.py
* Refactor tf according to CI error
* Added docstring
* removing print

Co-authored-by: David Huang <davhuan@amazon.com>
Co-authored-by: Rohan Mukherjee <mukrohan@amazon.com>
Co-authored-by: Xiao <weix@amazon.com>
Test infrastructure for TF2 models that are frozen inside the script. Both ways of creating models in TF2 are tested:

* `tf.function` calls, as available in tests/frontend/tensorflow2/test_functional.py
* `tf.keras.Sequential` API, as available in tests/frontend/tensorflow2/test_sequential.py

Co-authored-by: David Huang <davhuan@amazon.com>
Co-authored-by: Rohan Mukherjee <mukrohan@amazon.com>
Co-authored-by: Wei Xiao <weix@amazon.com>