
[Frontend] [Tensorflow2] Added test infrastructure for TF2 frozen models #8074

Merged: 10 commits merged into apache:main on May 25, 2021

Conversation

@rohanmukh (Contributor) commented on May 19, 2021:

This PR adds test infrastructure for TF2 models that are frozen inside the test script. Both ways of creating models in TF2 are tested (a minimal sketch of the two styles follows the list below):

  1. Using tf.function calls as available in tests/frontend/tensorflow2/test_functional.py
  2. Using tf.keras.Sequential API as available in tests/frontend/tensorflow2/test_sequential.py
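As a rough illustration (not the exact test code), the two styles look like this; the layer and shape choices below are placeholders:

    import tensorflow as tf

    # Style 1: a tf.Module whose method is a tf.function with a fixed input
    # signature, as exercised in test_functional.py.
    class AddOne(tf.Module):
        @tf.function(input_signature=[tf.TensorSpec(shape=(1, 30), dtype=tf.float32)])
        def func(self, x):
            return x + 1

    # Style 2: a tf.keras.Sequential model, as exercised in test_sequential.py.
    def dense_model(input_shape=(1, 30), num_units=128):
        return tf.keras.Sequential(
            [
                tf.keras.layers.Flatten(input_shape=input_shape[1:]),
                tf.keras.layers.Dense(num_units, activation="relu"),
            ]
        )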

Co-authored-by: David Huang davhuan@amazon.com
Co-authored-by: Rohan Mukherjee mukrohan@amazon.com
Co-authored-by: Wei Xiao weix@amazon.com

@rohanmukh marked this pull request as ready for review on May 19, 2021 00:29.
@rohanmukh (Contributor, Author) commented: @trevor-m

@yongwww (Member) left a comment:

One more question: does the TF2 model go through the same from_tensorflow as TF1?
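For context, a rough sketch of that flow, assuming the model is frozen in-script to a GraphDef and fed to the existing TF1 importer (module and helper names are illustrative, not the PR's exact code):

    import tensorflow as tf
    from tensorflow.python.framework.convert_to_constants import (
        convert_variables_to_constants_v2,
    )
    from tvm import relay

    class AddOne(tf.Module):
        @tf.function(input_signature=[tf.TensorSpec(shape=(1, 30), dtype=tf.float32)])
        def func(self, x):
            return x + 1

    def freeze_and_import(module):
        # Trace the tf.function to a ConcreteFunction, then fold variables into constants.
        frozen = convert_variables_to_constants_v2(module.func.get_concrete_function())
        graph_def = frozen.graph.as_graph_def()
        # Reuse the TF1-style importer on the frozen GraphDef.
        return relay.frontend.from_tensorflow(graph_def)

    mod, params = freeze_and_import(AddOne())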

from tensorflow.python.eager.def_function import Function


def vmobj_to_list(o):
@yongwww (Member) commented:

There is another vmobj_to_list in tests/python/frontend/tensorflow/test_forward.py; it would be good to merge these same-named functions, since they look quite similar.
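For reference, the helper in test_forward.py is roughly as follows (a sketch rather than the exact code):

    import tvm
    from tvm.runtime.container import ADT

    def vmobj_to_list(o):
        # Flatten a VM result (an NDArray or a nested ADT/list) into a flat
        # list of numpy arrays.
        if isinstance(o, tvm.nd.NDArray):
            return [o.asnumpy()]
        elif isinstance(o, (list, ADT)):
            result = []
            for f in o:
                result.extend(vmobj_to_list(f))
            return result
        else:
            raise RuntimeError("Unknown object type: %s" % type(o))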

@rohanmukh (Contributor, Author) replied:

Thanks @yongwww for the comments. I addressed them in the new commits.

In this commit I do not introduce new frontend code for TF2 models. I will address that in a new PR designed to handle TF2-style control flow, which will also inherit some code from the TF1 frontend, mostly the ops. The structure of the PRs should follow the pattern discussed in this thread, especially in the comment below:

#4102 (comment)

return vmobj_to_list(_out)


def run_graph(lib, input_, ctx=tvm.cpu(0)):
@yongwww (Member) commented:

Personally, I like the name run_graph_runtime more.


Parameters
----------
func: tf function. can be from saved model or not. different ways to pass input
@yongwww (Member) commented:

I don't see func in the argument list of compare_tf_tvm; please update so the docstring and the definition match.

return _out


def compare_tf_tvm(gdef, input_, output_, vm=True, output_sig=None):
@yongwww (Member) commented:

How about changing vm to "runtime_mode" or runtime, since we have the VM, the graph runtime, and the interpreter?
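A rough sketch of that suggestion (names are illustrative only):

    def compare_tf_tvm(gdef, input_, output_, runtime_mode="vm", output_sig=None):
        # A string runtime_mode instead of a boolean vm flag leaves room for
        # the graph executor and the interpreter as well.
        if runtime_mode == "vm":
            ...  # compile with relay.vm.compile and run on the Relay VM
        elif runtime_mode == "graph":
            ...  # compile with relay.build and run on the graph executor
        elif runtime_mode == "debug":
            ...  # run on the interpreter / debug executor
        else:
            raise ValueError("unsupported runtime_mode: %s" % runtime_mode)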

@trevor-m (Contributor) left a comment:

Thanks @rohanmukh, I made some suggestions.



def compile_graph_runtime(
mod, params, target="llvm", target_host="llvm", opt_level=3, output_sig=None
@trevor-m (Contributor) commented:

output_sig is unused here and in compile_vm(); is this intended? What is output_sig?

@rohanmukh (Contributor, Author) replied:

It is a mistake from refactoring; output_sig lists the outputs that should only be passed to the frontend parser, not to the compiler.
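A minimal sketch of that split, assuming the outputs are forwarded to relay.frontend.from_tensorflow via its outputs argument (the helper name is illustrative):

    from tvm import relay

    def import_graph_def(graph_def, output_sig=None):
        # Output node names belong in the frontend import call, not in the
        # compile step.
        return relay.frontend.from_tensorflow(graph_def, outputs=output_sig)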

mod, params, target="llvm", target_host="llvm", opt_level=3, disabled_pass=None, output_sig=None
):
with tvm.transform.PassContext(opt_level, disabled_pass=disabled_pass):
mod = relay.transform.InferType()(mod)
@trevor-m (Contributor) commented:

I don't think InferType is required here
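A minimal sketch of the simplification, assuming relay.build's internal type inference is sufficient (the helper name is illustrative):

    import tvm
    from tvm import relay

    def compile_graph(mod, params, target="llvm", opt_level=3, disabled_pass=None):
        # relay.build type-infers the module itself, so no explicit InferType
        # pass is needed beforehand.
        with tvm.transform.PassContext(opt_level=opt_level, disabled_pass=disabled_pass):
            return relay.build(mod, target=target, params=params)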

return a


def compile_graph_runtime(
@trevor-m (Contributor) commented:

Graph Runtime is now Graph Executor; you may want to rename these functions.
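A rough sketch of the renamed helper using tvm.contrib.graph_executor (name and defaults are illustrative, not the PR's final code):

    import tvm
    from tvm.contrib import graph_executor

    def run_graph_executor(lib, input_, dev=tvm.cpu(0)):
        # lib is the module returned by relay.build; wrap it in a GraphModule,
        # feed the input, run, and collect all outputs as numpy arrays.
        mod = graph_executor.GraphModule(lib["default"](dev))
        mod.set_input(0, tvm.nd.array(input_))
        mod.run()
        return [mod.get_output(i).asnumpy() for i in range(mod.get_num_outputs())]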

from common import run_tf_code


class AddOne(tf.Module):
@trevor-m (Contributor) commented:

For all the tests in this file, I think it would be best to encapsulate the tf.Module class(es) inside the test function. For example:

def test_expand_dims():
    class ExpandDims(tf.Module):
        def get_input(self):
            return np.ones((1, 30), dtype=np.float32)

        @tf.function(input_signature=[tf.TensorSpec(shape=(1, 30), dtype=tf.float32)])
        def func(self, x):
            return tf.expand_dims(x, axis=2)

    run_all(ExpandDims)

compare_tf_tvm(*model_graph(model_fn, input_shape), vm=True, output_sig=None)


def dense_model(input_shape, num_units=128):
@trevor-m (Contributor) commented on May 19, 2021:

Same here, I would put these model functions inside the test functions in which they are used.

@rohanmukh (Contributor, Author) replied:

Thanks @trevor-m, I addressed the comments in the new commits.

rohanmukh and others added 5 commits May 20, 2021 02:33
Co-authored-by: David Huang <davhuan@amazon.com>
Co-authored-by: Rohan Mukherjee <mukrohan@amazon.com>
Co-authored-by: Xiao <weix@amazon.com>
@trevor-m (Contributor) left a comment:

Thanks for implementing the changes! LGTM

@yongwww (Member) left a comment:

LGTM



def run_tf_code(func, input_):
print(type(func))
@yongwww (Member) commented:

Remove this, or change it to a log statement.

@@ -73,6 +75,46 @@ def convert_to_list(x):
return x


def vmobj_to_list(o):
@yongwww (Member) commented:

Is this copied from somewhere? We could add it to testing so the other copies can be removed.

@rohanmukh (Contributor, Author) replied on May 24, 2021:

This is copied from tests/python/frontend/tensorflow/test_forward.py. Do you mean I should remove it from there and import it from here (i.e. relay.testing.tf) in this PR, or should that be done in another PR? Similar code exists in other places as well, like test_vm.py, test_tensorrt.py, etc.
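A sketch of what callers would do after such a refactor, assuming the shared helper lands in python/tvm/relay/testing/tf.py as the later commits suggest:

    # Import the single shared copy instead of duplicating the function per test file.
    from tvm.relay.testing.tf import vmobj_to_list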

@zhiics (Member) left a comment:

lgtm

@zhiics zhiics merged commit 65cd19f into apache:main May 25, 2021
@zhiics (Member) commented on May 25, 2021:

Thanks @rohanmukh @yongwww @trevor-m

trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Jun 17, 2021
…els (apache#8074)

* added test infrastructure for frozen TF2 models

* linting with black

* removing some comments

* change in comment in sequential test

* addressed the comments

* refactored to place vmobj_to_list in a common file

* Added helper function in python/tvm/relay/testing/tf.py

Co-authored-by: David Huang <davhuan@amazon.com>
Co-authored-by: Rohan Mukherjee <mukrohan@amazon.com>
Co-authored-by: Xiao <weix@amazon.com>

* Refactor tf according to CI error

Co-authored-by: David Huang <davhuan@amazon.com>
Co-authored-by: Rohan Mukherjee <mukrohan@amazon.com>
Co-authored-by: Xiao <weix@amazon.com>

* Added docstring

Co-authored-by: David Huang <davhuan@amazon.com>
Co-authored-by: Rohan Mukherjee <mukrohan@amazon.com>
Co-authored-by: Xiao <weix@amazon.com>

* removing print

Co-authored-by: David Huang <davhuan@amazon.com>
Co-authored-by: Xiao <weix@amazon.com>
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request Jun 17, 2021
…els (apache#8074)