[Examples] Add TensorFlow examples - ResNet50 and BERT models #2530
Conversation
Signed-off-by: Satya <satyanaraya.illa@intel.com>
Signed-off-by: Satya <satyanaraya.illa@intel.com>
Signed-off-by: Satya <satyanaraya.illa@intel.com>
Reviewed 7 of 7 files at r1.
Reviewable status: all files reviewed, 17 unresolved discussions, not enough approvals from maintainers (3 more required), not enough approvals from different teams (2 more required, approved so far: ) (waiting on @Satya1493)
a discussion (no related file):
There are currently two commits. From what I understand, the second commit is supposed to be a fixup to the first commit. So during rebase, these two commits should be merged into one.
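For reference, one way to do this squash is an interactive rebase that marks the second commit as a fixup of the first; a sketch, assuming the branch is rebased onto upstream `master` (the commit hashes below are placeholders):

```sh
# Replay the branch's commits on top of upstream master, opening the todo list
git rebase -i origin/master

# In the todo list, keep the first commit and fold the second one into it:
#   pick   1111111 [Examples] Add TensorFlow examples - ResNet50 and BERT models
#   fixup  2222222 fixup! [Examples] Add TensorFlow examples - ResNet50 and BERT models
# Save, exit, then update the PR branch:
git push --force-with-lease origin tensorflow
```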
Examples/tensorflow/README.md, line 1 at r1 (raw file):
## Run inference on TensorFlow BERT and ResNet50 models
Could you re-wrap this file to 100 characters per line? Hard to read.
Examples/tensorflow/README.md, line 2 at r1 (raw file):
## Run inference on TensorFlow BERT and ResNet50 models This directory contains steps and artifacts to run inference with TensorFlow BERT and ResNet50 sample workloads on Graphene. Specifically, both these examples use pre-trained models to run inference. We tested this on Ubuntu 18.04 and uses the package version of Python 3.6.
and uses the package version of Python 3.6
-> with Python 3.6
Examples/tensorflow/README.md, line 2 at r1 (raw file):
## Run inference on TensorFlow BERT and ResNet50 models This directory contains steps and artifacts to run inference with TensorFlow BERT and ResNet50 sample workloads on Graphene. Specifically, both these examples use pre-trained models to run inference. We tested this on Ubuntu 18.04 and uses the package version of Python 3.6.
Could you give a sentence each about BERT and ResNet50?
Examples/tensorflow/README.md, line 24 at r1 (raw file):
## Run inference on BERT model - To run int8 inference on graphene-sgx(SGX version)<br> ``KMP_BLOCKTIME=1 KMP_SETTINGS=1 OMP_NUM_THREADS=36 KMP_AFFINITY=granularity=fine,verbose,compact,1,0 taskset -c 0-35 graphene-sgx ./python models/models/language_modeling/tensorflow/bert_large/inference/run_squad.py --init_checkpoint=data/bert_large_checkpoints/model.ckpt-3649 --vocab_file=data/wwm_uncased_L-24_H-1024_A-16/vocab.txt --bert_config_file=data/wwm_uncased_L-24_H-1024_A-16/bert_config.json --predict_file=data/wwm_uncased_L-24_H-1024_A-16/dev-v1.1.json --precision=int8 --output_dir=output/bert-squad-output --predict_batch_size=32 --experimental_gelu=True --optimized_softmax=True --input_graph=data/asymmetric_per_channel_bert_int8.pb --do_predict=True --mode=benchmark --inter_op_parallelism_threads=1 --intra_op_parallelism_threads=36``
Could you split this huge line into multiple lines? This is unreadable.
Like this:
KMP_BLOCKTIME=1 KMP_SETTINGS=1 KMP_AFFINITY=granularity=fine,verbose,compact,1,0 \
OMP_NUM_THREADS=36 taskset -c 0-35 \
graphene-sgx ./python \
... and so on ...
Same for everything here.
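For concreteness, the full SGX command quoted above, re-wrapped in this style (same flags as in the original, only line breaks added):

```sh
KMP_BLOCKTIME=1 KMP_SETTINGS=1 KMP_AFFINITY=granularity=fine,verbose,compact,1,0 \
OMP_NUM_THREADS=36 taskset -c 0-35 \
graphene-sgx ./python \
    models/models/language_modeling/tensorflow/bert_large/inference/run_squad.py \
    --init_checkpoint=data/bert_large_checkpoints/model.ckpt-3649 \
    --vocab_file=data/wwm_uncased_L-24_H-1024_A-16/vocab.txt \
    --bert_config_file=data/wwm_uncased_L-24_H-1024_A-16/bert_config.json \
    --predict_file=data/wwm_uncased_L-24_H-1024_A-16/dev-v1.1.json \
    --precision=int8 \
    --output_dir=output/bert-squad-output \
    --predict_batch_size=32 \
    --experimental_gelu=True \
    --optimized_softmax=True \
    --input_graph=data/asymmetric_per_channel_bert_int8.pb \
    --do_predict=True \
    --mode=benchmark \
    --inter_op_parallelism_threads=1 \
    --intra_op_parallelism_threads=36
```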
Examples/tensorflow/README.md, line 24 at r1 (raw file):
## Run inference on BERT model - To run int8 inference on graphene-sgx(SGX version)<br> ``KMP_BLOCKTIME=1 KMP_SETTINGS=1 OMP_NUM_THREADS=36 KMP_AFFINITY=granularity=fine,verbose,compact,1,0 taskset -c 0-35 graphene-sgx ./python models/models/language_modeling/tensorflow/bert_large/inference/run_squad.py --init_checkpoint=data/bert_large_checkpoints/model.ckpt-3649 --vocab_file=data/wwm_uncased_L-24_H-1024_A-16/vocab.txt --bert_config_file=data/wwm_uncased_L-24_H-1024_A-16/bert_config.json --predict_file=data/wwm_uncased_L-24_H-1024_A-16/dev-v1.1.json --precision=int8 --output_dir=output/bert-squad-output --predict_batch_size=32 --experimental_gelu=True --optimized_softmax=True --input_graph=data/asymmetric_per_channel_bert_int8.pb --do_predict=True --mode=benchmark --inter_op_parallelism_threads=1 --intra_op_parallelism_threads=36``
What are all these magic options? Please describe them. Do we really need to specify them all?
Examples/tensorflow/README.md, line 30 at r1 (raw file):
``KMP_BLOCKTIME=1 KMP_SETTINGS=1 OMP_NUM_THREADS=36 KMP_AFFINITY=granularity=fine,verbose,compact,1,0 taskset -c 0-35 python3.6 models/models/language_modeling/tensorflow/bert_large/inference/run_squad.py --init_checkpoint=data/bert_large_checkpoints/model.ckpt-3649 --vocab_file=data/wwm_uncased_L-24_H-1024_A-16/vocab.txt --bert_config_file=data/wwm_uncased_L-24_H-1024_A-16/bert_config.json --predict_file=data/wwm_uncased_L-24_H-1024_A-16/dev-v1.1.json --precision=int8 --output_dir=output/bert-squad-output --predict_batch_size=32 --experimental_gelu=True --optimized_softmax=True --input_graph=data/asymmetric_per_channel_bert_int8.pb --do_predict=True --mode=benchmark --inter_op_parallelism_threads=1 --intra_op_parallelism_threads=36`` - Above commands are for a 36 core system. Please set the following options accordingly for optimal performance. - OMP_NUM_THREADS='Core(s) per socket'
Please replace all tabs with spaces.
Examples/tensorflow/README.md, line 36 at r1 (raw file):
## Run inference on ResNet50 model - To run inference on graphene-sgx(SGX version)<br>
What are these `<br>` HTML tags? They are not needed in Markdown.
Examples/tensorflow/README.md, line 48 at r1 (raw file):
>**NOTE** To get 'Core(s) per socket', do ``lscpu | grep 'Core(s) per socket'`` # GSC :
I don't think we should have this in our `Examples/` directory. This should belong to GSC examples/tests. I would suggest removing this GSC part completely from this PR and then submitting a separate PR for it later.
Examples/tensorflow/README.md, line 60 at r1 (raw file):
4. Build docker image : - ``cd test`` - ``docker build --rm -t ubuntu18.04-tensorflow-bert -f ubuntu18.04-tensorflow-bert.dockerfile ../../../Examples``
What is `ubuntu18.04-tensorflow-bert.dockerfile`?
Examples/tensorflow/BERT/python.manifest.template, line 15 at r1 (raw file):
loader.insecure__use_host_env = 1 # Disable address space layour randomization. Don't use this on production!
layout
Examples/tensorflow/BERT/python.manifest.template, line 66 at r1 (raw file):
sgx.trusted_files.usr_arch_libdir = "file:/usr/{{ arch_libdir }}/" sgx.trusted_files.libcpp = "file:/usr/lib/x86_64-linux-gnu/libstdc++.so.6" sgx.trusted_files.libgcc = "file:/lib/x86_64-linux-gnu/libgcc_s.so.1"
Are these last two files needed? I think they are already "covered" by `arch_libdir` and `usr_arch_libdir`.
Examples/tensorflow/BERT/python.manifest.template, line 70 at r1 (raw file):
sgx.allowed_files.tmp = "file:/tmp/" sgx.allowed_files.etc = "file:/etc/" sgx.allow_file_creation = "1"
This manifest option is not present in Graphene anymore. Please just remove it.
Examples/tensorflow/BERT/python.manifest.template, line 72 at r1 (raw file):
sgx.allow_file_creation = "1" sgx.allowed_files.output = "file:output/" sgx.allowed_files.scripts = "file:models/models/language_modeling/tensorflow/bert_large/inference/"
Why not just have `sgx.allowed_files.models = "file:models/"`?
Examples/tensorflow/BERT/python.manifest.template, line 77 at r1 (raw file):
sgx.allowed_files.pyhome = "file:{{ python.stdlib }}" sgx.allowed_files.pydisthome = "file:{{ python.distlib }}" sgx.allowed_files.pydistpath = "file:{{ pythondistpath }}"
Why not have these four as `sgx.trusted_files`? This is what we do in our Python example: https://github.com/oscarlab/graphene/blob/4cb98219d8e302055587a8952c1102415a72ba42/Examples/python/python.manifest.template#L89
Examples/tensorflow/BERT/root/.keras/keras.json, line 6 at r1 (raw file):
"backend": "tensorflow", "image_data_format": "channels_last" }
What is this file exactly? What does it do?
Examples/tensorflow/ResNet50/Makefile, line 1 at r1 (raw file):
# ResNet50 sample for Tensorflow
Same comments as for BERT.
Reviewable status: 3 of 7 files reviewed, 17 unresolved discussions, not enough approvals from maintainers (3 more required), not enough approvals from different teams (2 more required, approved so far: ), "fixup! " found in commit messages' one-liners (waiting on @dimakuv and @Satya1493)
Examples/tensorflow/README.md, line 1 at r1 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Could you re-wrap this file to 100 characters per line? Hard to read.
Done
Examples/tensorflow/README.md, line 2 at r1 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
and uses the package version of Python 3.6
-> with Python 3.6
done
Examples/tensorflow/README.md, line 2 at r1 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Could you give a sentence each about BERT and ResNet50?
done
Examples/tensorflow/README.md, line 24 at r1 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Could you split this huge line into multiple lines? This is unreadable.
Like this:
KMP_BLOCKTIME=1 KMP_SETTINGS=1 KMP_AFFINITY=granularity=fine,verbose,compact,1,0 \
OMP_NUM_THREADS=36 taskset -c 0-35 \
graphene-sgx ./python \
... and so on ...
Same for everything here.
done
Examples/tensorflow/README.md, line 24 at r1 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
What are all these magic options? Please describe them. Do we really need to specify them all?
Removed KMP_BLOCKTIME and KMP_SETTINGS now as they are default values anyway. Added a one-line description about `OMP_NUM_THREADS` and `KMP_AFFINITY`.
Examples/tensorflow/README.md, line 30 at r1 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please replace all tabs with spaces.
done
Examples/tensorflow/README.md, line 36 at r1 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
What are these `<br>` HTML tags? They are not needed in Markdown.
`<br>` also does line breaks. But yeah, it's ugly to read. Removed them now.
Examples/tensorflow/README.md, line 48 at r1 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
I don't think we should have this in our `Examples/` directory. This should belong to GSC examples/tests. I would suggest removing this GSC part completely from this PR and then submitting a separate PR for it later.
Yeah, removed GSC section. Will push another PR for GSC.
Examples/tensorflow/README.md, line 60 at r1 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
What is `ubuntu18.04-tensorflow-bert.dockerfile`?
This is a GSC-related file; will push another PR for GSC.
Examples/tensorflow/BERT/python.manifest.template, line 15 at r1 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
layout
done
Examples/tensorflow/BERT/python.manifest.template, line 66 at r1 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Are these last two files needed? I think they are already "covered" by `arch_libdir` and `usr_arch_libdir`.
Removed them now.
Examples/tensorflow/BERT/python.manifest.template, line 70 at r1 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
This manifest option is not present in Graphene anymore. Please just remove it.
done
Examples/tensorflow/BERT/python.manifest.template, line 72 at r1 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Why not just have `sgx.allowed_files.models = "file:models/"`?
done
Examples/tensorflow/BERT/python.manifest.template, line 77 at r1 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Why not have these four as `sgx.trusted_files`? This is what we do in our Python example: https://github.com/oscarlab/graphene/blob/4cb98219d8e302055587a8952c1102415a72ba42/Examples/python/python.manifest.template#L89
done
Examples/tensorflow/BERT/root/.keras/keras.json, line 6 at r1 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
What is this file exactly? What does it do?
Keras is a set of high-level neural network APIs for TF 2.0+ versions.
Using the same Keras code, one can change the backend to be used: TensorFlow, Theano, CNTK, etc.
`keras.json` holds this configuration.
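For context, the file in question presumably looks roughly like this; only the `backend` and `image_data_format` values are taken from the file under review, while the remaining keys are the usual Keras defaults and may differ:

```sh
$ cat root/.keras/keras.json
{
    "floatx": "float32",
    "epsilon": 1e-07,
    "backend": "tensorflow",
    "image_data_format": "channels_last"
}
```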
Examples/tensorflow/ResNet50/Makefile, line 1 at r1 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Same comments as for BERT.
done
Reviewable status: 3 of 7 files reviewed, 17 unresolved discussions, not enough approvals from maintainers (2 more required), not enough approvals from different teams (2 more required, approved so far: ), "fixup! " found in commit messages' one-liners (waiting on @dimakuv and @Satya1493)
Examples/tensorflow/README.md, line 1 at r1 (raw file):
Previously, Satya1493 wrote…
Done
Sorry, that's not what I meant. You don't need to add `\` at the end of the line, otherwise your whole file (when rendered) is wrapped at 100 characters: https://github.com/Satya1493/graphene/tree/tensorflow/Examples/tensorflow.
So by simply removing all these `\` characters, you'll get what we want:
- render in GitHub correctly,
- but be readable when opened in e.g. Vim (without line wraps, as is typically used by developers).
Signed-off-by: Satya <satyanaraya.illa@intel.com>
Signed-off-by: Satya <satyanaraya.illa@intel.com>
Reviewable status: 3 of 7 files reviewed, 17 unresolved discussions, not enough approvals from maintainers (2 more required), not enough approvals from different teams (2 more required, approved so far: ), "fixup! " found in commit messages' one-liners (waiting on @dimakuv and @Satya1493)
Examples/tensorflow/README.md, line 1 at r1 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Sorry, that's not what I meant. You don't need to add `\` at the end of the line, otherwise your whole file (when rendered) is wrapped at 100 characters: https://github.com/Satya1493/graphene/tree/tensorflow/Examples/tensorflow.
So by simply removing all these `\` characters, you'll get what we want:
- render in GitHub correctly,
- but be readable when opened in e.g. Vim (without line wraps, as is typically used by developers).
Got it. Done.
Reviewed 3 of 4 files at r2, 1 of 1 files at r3.
Reviewable status: all files reviewed, 21 unresolved discussions, not enough approvals from maintainers (2 more required), not enough approvals from different teams (2 more required, approved so far: ), "fixup! " found in commit messages' one-liners (waiting on @dimakuv and @Satya1493)
Examples/tensorflow/README.md, line 24 at r1 (raw file):
Previously, Satya1493 wrote…
done
Thanks, looks much better now!
Examples/tensorflow/README.md, line 48 at r1 (raw file):
Previously, Satya1493 wrote…
Yeah, removed GSC section. Will push another PR for GSC.
Thanks! This is much easier to review now.
Examples/tensorflow/README.md, line 4 at r3 (raw file):
This directory contains steps and artifacts to run inference with TensorFlow BERT and ResNet50 sample workloads on Graphene. Specifically, both these examples use pre-trained models to run inference. We tested this on Ubuntu 18.04 and uses the package version with Python 3.6.
Actually, could you remove this whole sentence `We tested this ...` because we agreed to remove it from all our examples (see #2566).
Examples/tensorflow/README.md, line 26 at r3 (raw file):
## Pre-requisites - Install python3.6.
Python is pre-installed on all known Linux distros, so just remove this line.
Examples/tensorflow/README.md, line 35 at r3 (raw file):
- To clean the sample, do ``make clean`` - To clean and remove downloaded models and datasets, do ``make distclean`` - To build the non-SGX version, do ``make PYTHONDISTPATH=path_to_python_dist_packages/``
Why do you need `PYTHONDISTPATH`? The `graphene-manifest` utility (the one that generates the manifest) already provides the convenience macro `{{ python.distlib }}`, so you don't need an additional `PYTHONDISTPATH`. Please remove it from everywhere.
Examples/tensorflow/README.md, line 39 at r3 (raw file):
- Typically, path_to_python_dist_packages is '/usr/local/lib/python3.6/dist-packages', but can change based on python's installation directory. >**WARNING:** Building BERT sample downloads about 5GB of data.
Please don't use `>` to mark warnings/notes. See also discussions in #2535.
Examples/tensorflow/README.md, line 42 at r3 (raw file):
## Run inference on BERT model - To run int8 inference on graphene-sgx(SGX version)
Please add a whitespace between `graphene-sgx` and `(SGX version)`. Same everywhere else.
Examples/tensorflow/README.md, line 98 at r3 (raw file):
- Above commands are for a 36 core system. Please set the following options accordingly for optimal
performance.
Instead of a dot, please use `:` at the end.
Examples/tensorflow/README.md, line 99 at r3 (raw file):
- Above commands are for a 36 core system. Please set the following options accordingly for optimal performance. - OMP_NUM_THREADS='Core(s) per socket', OMP_NUM_THREADS sets the maximum number of threads to
Please put `OMP_NUM_THREADS` in backticks.
Examples/tensorflow/README.md, line 102 at r3 (raw file):
use for OpenMP parallel regions. - taskset to 'Core(s) per socket' - intra_op_parallelism_threads='Core(s) per socket'
Instead of using `'Core(s) per socket'` in three sentences here, maybe something like this?
- Assuming that X is the number of cores per socket, set `OMP_NUM_THREADS=X`
and `intra_op_parallelism_threads=X`.
- Specify the whole range of cores available on one of the sockets in `taskset`.
- ... hyperthreading stuff ...
- Note that `OMP_NUM_THREADS` sets the maximum number of threads to
use for OpenMP parallel regions, and `KMP_AFFINITY` binds OpenMP threads
to physical processing units.
Examples/tensorflow/README.md, line 103 at r3 (raw file):
- taskset to 'Core(s) per socket' - intra_op_parallelism_threads='Core(s) per socket' - If hyperthreading is enabled : use ``KMP_AFFINITY=granularity=fine,verbose,compact,1,0``
Please remove a redundant whitespace in between enabled
and :
. Apply everywhere in this PR.
Examples/tensorflow/README.md, line 105 at r3 (raw file):
- If hyperthreading is enabled : use ``KMP_AFFINITY=granularity=fine,verbose,compact,1,0`` - If hyperthreading is disabled : use ``KMP_AFFINITY=granularity=fine,verbose,compact`` - KMP_AFFINITY binds OpenMP threads to physical processing units.
Please put `KMP_AFFINITY` in backticks.
Examples/tensorflow/README.md, line 106 at r3 (raw file):
- If hyperthreading is disabled : use ``KMP_AFFINITY=granularity=fine,verbose,compact`` - KMP_AFFINITY binds OpenMP threads to physical processing units. >**NOTE:** To get 'Core(s) per socket', do ``lscpu | grep 'Core(s) per socket'``
Remove the `>`.
Examples/tensorflow/README.md, line 151 at r3 (raw file):
- If hyperthreading is disabled : use ``KMP_AFFINITY=granularity=fine,verbose,compact`` - KMP_AFFINITY binds OpenMP threads to physical processing units. - The options batch-size, warmup-steps and steps can be varied.
Please add backticks around option names.
Examples/tensorflow/README.md, line 152 at r3 (raw file):
- KMP_AFFINITY binds OpenMP threads to physical processing units. - The options batch-size, warmup-steps and steps can be varied. >**NOTE:** To get 'Core(s) per socket', do ``lscpu | grep 'Core(s) per socket'``
Remove the `>`.
Examples/tensorflow/README.md, line 175 at r3 (raw file):
loader.env.LD_PRELOAD = "/usr/local/lib/mimalloc-1.7/libmimalloc.so.1.7" sgx.trusted_files.libmimalloc = "file:/usr/local/lib/mimalloc-1.7/libmimalloc.so.1.7"
This seems to be exactly the same text as in OpenVINO example, could you check how it is formatted there: #2535 and apply the same changes.
Examples/tensorflow/BERT/Makefile, line 41 at r3 (raw file):
-Darch_libdir=$(ARCH_LIBDIR) \ -Dentrypoint=$(realpath $(shell sh -c "command -v python3")) \ -Dpythondistpath=$(PYTHONDISTPATH) \
`PYTHONDISTPATH` and `pythondistpath` are not needed because `graphene-manifest` already provides the `{{ python.distlib }}` convenience macro.
Examples/tensorflow/BERT/python.manifest.template, line 1 at r3 (raw file):
# This manifest was tested on Ubuntu 18.04 with python3.6.
We decided to remove all non-manifest-specific comments from all our examples. Please also do the same here. Check #2566 and #2535
Examples/tensorflow/BERT/python.manifest.template, line 10 at r3 (raw file):
# Read application arguments directly from the command line. Don't use this on production! loader.insecure__use_cmdline_argv = 1
All these `1` should be replaced with `true`.
Examples/tensorflow/BERT/python.manifest.template, line 71 at r3 (raw file):
sgx.trusted_files.pyhome = "file:{{ python.stdlib }}" sgx.trusted_files.pydisthome = "file:{{ python.distlib }}" sgx.trusted_files.pydistpath = "file:{{ pythondistpath }}"
This can be removed.
Examples/tensorflow/BERT/root/.keras/keras.json, line 6 at r1 (raw file):
Previously, Satya1493 wrote…
Keras is a set of high-level neural network APIs for TF 2.0+ versions.
Using the same Keras code, one can change the backend to be used: TensorFlow, Theano, CNTK, etc.
`keras.json` holds this configuration.
Could you add this quick description in the README file? Otherwise it's unclear why these files are here.
…RT models Signed-off-by: Satya <satyanaraya.illa@intel.com>
Reviewable status: 4 of 7 files reviewed, 20 unresolved discussions, not enough approvals from maintainers (2 more required), not enough approvals from different teams (2 more required, approved so far: ), "fixup! " found in commit messages' one-liners (waiting on @dimakuv and @Satya1493)
Examples/tensorflow/README.md, line 4 at r3 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Actually, could you remove this whole sentence `We tested this ...` because we agreed to remove it from all our examples (see #2566).
done
Examples/tensorflow/README.md, line 26 at r3 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Python is pre-installed on all known Linux distros, so just remove this line.
done
Examples/tensorflow/README.md, line 35 at r3 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Why do you need `PYTHONDISTPATH`? The `graphene-manifest` utility (the one that generates the manifest) already provides the convenience macro `{{ python.distlib }}`, so you don't need an additional `PYTHONDISTPATH`. Please remove it from everywhere.
The pip version that comes with the distro is pip 9.0.1.
TensorFlow 2.4 can only be installed with the latest pip version (21.2.1).
Once pip is upgraded, it installs all packages into /usr/local/lib/python3.6/dist-packages.
{{ python.distlib }} gives /usr/lib/python3/dist-packages.
In addition to this path, the TensorFlow samples also need /usr/local/lib/python3.6/dist-packages,
or whatever custom path TensorFlow is installed into.
Hence, I've added this as a configuration option during the build.
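For reference, one quick way to check which dist-packages directory the pip-installed TensorFlow actually landed in, and how that path would then be passed to the build (a sketch, not part of the PR):

```sh
# Prints a "Location:" line, e.g. /usr/local/lib/python3.6/dist-packages on Ubuntu 18.04
pip3 show intel-tensorflow-avx512 | grep Location

# That directory is what gets passed as PYTHONDISTPATH when building the sample
make PYTHONDISTPATH=/usr/local/lib/python3.6/dist-packages/ SGX=1
```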
Examples/tensorflow/README.md, line 39 at r3 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please don't use `>` to mark warnings/notes. See also discussions in #2535.
done
Examples/tensorflow/README.md, line 42 at r3 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please add a whitespace between `graphene-sgx` and `(SGX version)`. Same everywhere else.
done
Examples/tensorflow/README.md, line 98 at r3 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Instead of a dot, please use `:` at the end.
done
Examples/tensorflow/README.md, line 99 at r3 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please put `OMP_NUM_THREADS` in backticks.
done
Examples/tensorflow/README.md, line 102 at r3 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Instead of using `'Core(s) per socket'` in three sentences here, maybe something like this?
- Assuming that X is the number of cores per socket, set `OMP_NUM_THREADS=X` and `intra_op_parallelism_threads=X`.
- Specify the whole range of cores available on one of the sockets in `taskset`.
- ... hyperthreading stuff ...
- Note that `OMP_NUM_THREADS` sets the maximum number of threads to use for OpenMP parallel regions, and `KMP_AFFINITY` binds OpenMP threads to physical processing units.
done
Examples/tensorflow/README.md, line 103 at r3 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please remove the redundant whitespace between `enabled` and `:`. Apply everywhere in this PR.
done
Examples/tensorflow/README.md, line 105 at r3 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please put `KMP_AFFINITY` in backticks.
done
Examples/tensorflow/README.md, line 106 at r3 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Remove the `>`.
done
Examples/tensorflow/README.md, line 151 at r3 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please add backticks around option names.
done
Examples/tensorflow/README.md, line 152 at r3 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Remove the `>`.
done
Examples/tensorflow/README.md, line 175 at r3 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
This seems to be exactly the same text as in OpenVINO example, could you check how it is formatted there: #2535 and apply the same changes.
done
Examples/tensorflow/BERT/Makefile, line 41 at r3 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
`PYTHONDISTPATH` and `pythondistpath` are not needed because `graphene-manifest` already provides the `{{ python.distlib }}` convenience macro.
Same as mentioned in the other comment: an extra path besides `{{ python.distlib }}` is needed for the TensorFlow samples.
Examples/tensorflow/BERT/python.manifest.template, line 1 at r3 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
We decided to remove all non-manifest-specific comments from all our examples. Please also do the same here. Check #2566 and #2535
done
Examples/tensorflow/BERT/python.manifest.template, line 10 at r3 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
All these `1` should be replaced with `true`.
done
Examples/tensorflow/BERT/python.manifest.template, line 71 at r3 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
This can be removed.
Same as the earlier comment: the extra path option is needed for the TensorFlow samples.
Examples/tensorflow/BERT/root/.keras/keras.json, line 6 at r1 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Could you add this quick description in the README file? Otherwise it's unclear why these files are here.
done
Reviewable status: 4 of 7 files reviewed, 21 unresolved discussions, not enough approvals from maintainers (2 more required), not enough approvals from different teams (2 more required, approved so far: ), "fixup! " found in commit messages' one-liners (waiting on @dimakuv and @Satya1493)
a discussion (no related file):
Quick note (I will do the proper review tomorrow): could you please add a `.gitignore` file to your example? This file tells git to not track (ignore) files that were downloaded or auto-generated. Basically, all files created during builds and runs (other than your manifest.template and Makefile files) should be git-ignored.
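A minimal sketch of what such a `.gitignore` could contain; the exact entries are assumptions and depend on what this example's Makefile actually generates and downloads:

```sh
cat > .gitignore <<'EOF'
# build artifacts generated by `make` / `make SGX=1`
*.manifest
*.manifest.sgx
*.sig
*.token
python

# downloaded models, datasets and inference output
data/
models/
output/
EOF
```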
Reviewable status: 4 of 7 files reviewed, 23 unresolved discussions, not enough approvals from maintainers (2 more required), not enough approvals from different teams (2 more required, approved so far: ), "fixup! " found in commit messages' one-liners (waiting on @dimakuv and @Satya1493)
Examples/tensorflow/README.md, line 37 at r4 (raw file):
## Run inference on BERT model - To run int8 inference on graphene-sgx (SGX version)
Another quick note: this README is quite hard to read because the command is duplicated three times. Could we replace this copy-paste with another structure? Like you give a command first, and then you describe that to run in non-SGX mode, one replaces `graphene-sgx` with `graphene`, and to run natively, one replaces `graphene-sgx ./python` with `python3`.
Examples/tensorflow/README.md, line 76 at r4 (raw file):
- To run int8 inference on native baremetal (outside Graphene)
OMP_NUM_THREADS=36 KMP_AFFINITY=granularity=fine,verbose,compact,1,0 taskset -c 0-35 python3.6 \
No need to hard-code the version of Python. So replace `python3.6` with simply `python3`. Here and everywhere else in this README.
Reviewable status: 4 of 9 files reviewed, 23 unresolved discussions, not enough approvals from maintainers (2 more required), not enough approvals from different teams (2 more required, approved so far: ), "fixup! " found in commit messages' one-liners (waiting on @dimakuv and @Satya1493)
a discussion (no related file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Quick note (I will do the proper review tomorrow): could you please add a `.gitignore` file to your example? This file tells git to not track (ignore) files that were downloaded or auto-generated. Basically, all files created during builds and runs (other than your manifest.template and Makefile files) should be git-ignored.
Done
Examples/tensorflow/README.md, line 37 at r4 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Another quick note: this README is quite hard to read because the command is duplicated three times. Could we replace this copy-paste with another structure? Like you give a command first, and then you describe that to run in non-SGX mode, one replaces `graphene-sgx` with `graphene`, and to run natively, one replaces `graphene-sgx ./python` with `python3`.
done
Examples/tensorflow/README.md, line 76 at r4 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
No need to hard-code the version of Python. So replace `python3.6` with simply `python3`. Here and everywhere else in this README.
done
Reviewed 2 of 3 files at r4, 3 of 3 files at r5.
Reviewable status: all files reviewed, 27 unresolved discussions, not enough approvals from maintainers (2 more required), not enough approvals from different teams (2 more required, approved so far: ), "fixup! " found in commit messages' one-liners (waiting on @Satya1493)
Examples/tensorflow/README.md, line 35 at r3 (raw file):
Previously, Satya1493 wrote…
The pip version that comes with the distro is pip 9.0.1.
TensorFlow 2.4 can only be installed with the latest pip version (21.2.1).
Once pip is upgraded, it installs all packages into /usr/local/lib/python3.6/dist-packages.
{{ python.distlib }} gives /usr/lib/python3/dist-packages.
In addition to this path, the TensorFlow samples also need /usr/local/lib/python3.6/dist-packages,
or whatever custom path TensorFlow is installed into.
Hence, I've added this as a configuration option during the build.
Ok, got it.
Examples/tensorflow/README.md, line 8 at r5 (raw file):
### Bidirectional Encoder Representations from Transformers (BERT): BERT is a method of pre-training language representations and then use that trained model for downstream NLP tasks like 'question answering'. BERT is an unsupervised, deeply birectional system
What is `birectional`? I guess `bidirectional`.
Examples/tensorflow/README.md, line 10 at r5 (raw file):
downstream NLP tasks like 'question answering'. BERT is an unsupervised, deeply birectional system for pre-training NLP. In this BERT sample, we use 'BERT-Large, Uncased (Whole Word Masking)' model and perform int8
I would prefer to bold this `'BERT-Large, Uncased (Whole Word Masking)'` instead of putting it into quotes. So it will be: `**BERT-Large, Uncased (Whole Word Masking)**`.
Examples/tensorflow/README.md, line 15 at r5 (raw file):
### Residual Network (ResNet): ResNet50 is a convolutional neural network that is 50 layers deep. In this ResNet50(v1.5) sample, we use a pre-trained model and perform int8 inference.
Can you add a space: `ResNet50(v1.5)` -> `ResNet50 (v1.5)` (unless this is how it is supposed to be written, then don't change).
Examples/tensorflow/README.md, line 20 at r5 (raw file):
## Pre-requisites - Upgrade pip/pip3. - Install tensorflow using ``pip install intel-tensorflow-avx512==2.4.0`` or by downloading whl
`tensorflow` -> `TensorFlow`
Examples/tensorflow/README.md, line 20 at r5 (raw file):
## Pre-requisites - Upgrade pip/pip3. - Install tensorflow using ``pip install intel-tensorflow-avx512==2.4.0`` or by downloading whl
`whl package` -> `the Wheel (.whl) package`
Examples/tensorflow/README.md, line 21 at r5 (raw file):
- Upgrade pip/pip3. - Install tensorflow using ``pip install intel-tensorflow-avx512==2.4.0`` or by downloading whl package from https://pypi.org/project/intel-tensorflow-avx512/2.4.0/#files.
Actually, why do you specify two different ways of installing TensorFlow? Why would a user choose one over the other? Could we just specify one way (I would go with `pip` installation then)?
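For reference, the pip-only flow would then presumably boil down to the two commands below (the package and version are the ones already named in the README; the pip upgrade is needed because the distro pip is too old for TensorFlow 2.4, as noted elsewhere in this PR):

```sh
# Upgrade pip/pip3 first
python3 -m pip install --upgrade pip

# Install the Intel-optimized TensorFlow build used by these samples
python3 -m pip install intel-tensorflow-avx512==2.4.0
```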
Examples/tensorflow/README.md, line 24 at r5 (raw file):
## Build BERT or ResNet50 samples - To build BERT sample, do ``cd BERT`` or to build ResNet50 sample, do ``cd ResNet50``.
Can you separate this into two sentences? It's hard to read. Like this:
- To build BERT sample, do ``cd BERT``. To build ResNet50 sample, do ``cd ResNet50``.
Examples/tensorflow/README.md, line 29 at r5 (raw file):
- To build the non-SGX version, do ``make PYTHONDISTPATH=path_to_python_dist_packages/`` - To build the SGX version, do ``make PYTHONDISTPATH=path_to_python_dist_packages/ SGX=1`` - Typically, path_to_python_dist_packages is '/usr/local/lib/python3.6/dist-packages', but can
Please place `path_to_python_dist_packages` in backticks. Also put `/usr/local/...` in backticks (instead of the current single quotes).
Examples/tensorflow/README.md, line 31 at r5 (raw file):
- Typically, path_to_python_dist_packages is '/usr/local/lib/python3.6/dist-packages', but can change based on python's installation directory. - Keras settings are configured in the file root/.keras/keras.json. It is configured to use
Please place `root/.keras/...` in backticks.
Examples/tensorflow/README.md, line 32 at r5 (raw file):
change based on python's installation directory. - Keras settings are configured in the file root/.keras/keras.json. It is configured to use tensorflow as backend.
`tensorflow` -> `TensorFlow`
Examples/tensorflow/README.md, line 37 at r5 (raw file):
## Run inference on BERT model - To run int8 inference on graphene-sgx (SGX version)
Please place `graphene-sgx` into backticks.
Examples/tensorflow/README.md, line 37 at r5 (raw file):
## Run inference on BERT model - To run int8 inference on graphene-sgx (SGX version)
Please add `:` at the end.
Examples/tensorflow/README.md, line 55 at r5 (raw file):
--intra_op_parallelism_threads=36
- To run int8 inference on graphene-direct (non-SGX version), replace
graphene-sgx
with
Please place `graphene-direct` into backticks.
Examples/tensorflow/README.md, line 60 at r5 (raw file):
`python3` in the above command. - Above commands are for a 36 core system. Please set the following options accordingly for optimal
This whole part is almost identical for both BERT and ResNet. Could you extract this part into a separate section (e.g., `## Notes on optimal performance`)? Then you won't have this replication of the same text. The only difference I see is `intra_op_parallelism_threads=X` vs `num_intra_threads=X`. Just add a note that the first one is for BERT and the second one is for ResNet.
Examples/tensorflow/README.md, line 71 at r5 (raw file):
to physical processing units. **NOTE:** To get number of cores per socket, do ``lscpu | grep 'Core(s) per socket'``.
To get the number of cores...
Examples/tensorflow/README.md, line 74 at r5 (raw file):
## Run inference on ResNet50 model - To run inference on graphene-sgx (SGX version)
Please place `graphene-sgx` into backticks.
Examples/tensorflow/README.md, line 74 at r5 (raw file):
## Run inference on ResNet50 model - To run inference on graphene-sgx (SGX version)
Please add `:` at the end.
Examples/tensorflow/README.md, line 85 at r5 (raw file):
--steps=500
- To run inference on graphene-direct (non-SGX version), replace
graphene-sgx
with
Please place `graphene-direct` into backticks.
Examples/tensorflow/README.md, line 117 at r5 (raw file):
improve performance significantly based on the workloads. At any point, only one of these allocators can be used. - TCMalloc (Please update the binary location and name if different from default)
Please add `:`.
Examples/tensorflow/README.md, line 123 at r5 (raw file):
- ``sgx.trusted_files.libtcmalloc = "file:/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4"`` - ``sgx.trusted_files.libunwind = "file:/usr/lib/x86_64-linux-gnu/libunwind.so.8"`` - mimalloc (Please update the binary location and name if different from default)
Please add `:`.
Examples/tensorflow/BERT/Makefile, line 3 at r5 (raw file):
# BERT sample for Tensorflow GRAPHENEDIR ?= ../../..
Please remove `GRAPHENEDIR`. See our newly updated Examples for how we do it now.
Examples/tensorflow/BERT/Makefile, line 6 at r5 (raw file):
SGX_SIGNER_KEY ?= $(GRAPHENEDIR)/Pal/src/host/Linux-SGX/signer/enclave-key.pem include $(GRAPHENEDIR)/Scripts/Makefile.configs
Please remove this include
Examples/tensorflow/BERT/python.manifest.template, line 4 at r5 (raw file):
loader.preload = "file:{{ graphene.libos }}" # Graphene log level
Please remove all comments from this file (see our newly updated Examples -- we decided to remove comments from all files, leaving only really workload-specific ones)
Examples/tensorflow/ResNet50/Makefile, line 3 at r5 (raw file):
# ResNet50 sample for Tensorflow GRAPHENEDIR ?= ../../..
Please remove `GRAPHENEDIR`. See our newly updated Examples for how we do it now.
Examples/tensorflow/ResNet50/Makefile, line 6 at r5 (raw file):
SGX_SIGNER_KEY ?= $(GRAPHENEDIR)/Pal/src/host/Linux-SGX/signer/enclave-key.pem include $(GRAPHENEDIR)/Scripts/Makefile.configs
Please remove this include
Examples/tensorflow/ResNet50/python.manifest.template, line 4 at r5 (raw file):
loader.preload = "file:{{ graphene.libos }}" # Graphene log level
Please remove all comments from this file (see our newly updated Examples -- we decided to remove comments from all files, leaving only really workload-specific ones)
Reviewable status: 4 of 9 files reviewed, 27 unresolved discussions, not enough approvals from maintainers (2 more required), not enough approvals from different teams (2 more required, approved so far: ), "fixup! " found in commit messages' one-liners (waiting on @dimakuv and @Satya1493)
Examples/tensorflow/README.md, line 8 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
What is `birectional`? I guess `bidirectional`.
Done.
Examples/tensorflow/README.md, line 10 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
I would prefer to bold this `'BERT-Large, Uncased (Whole Word Masking)'` instead of putting it into quotes. So it will be: `**BERT-Large, Uncased (Whole Word Masking)**`.
Done.
Examples/tensorflow/README.md, line 15 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Can you add a space: `ResNet50(v1.5)` -> `ResNet50 (v1.5)` (unless this is how it is supposed to be written, then don't change).
Done.
Examples/tensorflow/README.md, line 20 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
`tensorflow` -> `TensorFlow`
Done.
Examples/tensorflow/README.md, line 20 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
`whl package` -> `the Wheel (.whl) package`
Removed installation step from .whl package as per the below comment.
Examples/tensorflow/README.md, line 21 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Actually, why do you specify two different ways of installing TensorFlow? Why would a user choose one over the other? Could we just specify one way (I would go with `pip` installation then)?
Done.
Examples/tensorflow/README.md, line 24 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Can you separate this into two sentences? It's hard to read. Like this:
- To build BERT sample, do ``cd BERT``. To build ResNet50 sample, do ``cd ResNet50``.
Done.
Examples/tensorflow/README.md, line 29 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please place `path_to_python_dist_packages` in backticks. Also put `/usr/local/...` in backticks (instead of the current single quotes).
Done.
Examples/tensorflow/README.md, line 31 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please place `root/.keras/...` in backticks.
Done.
Examples/tensorflow/README.md, line 32 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
`tensorflow` -> `TensorFlow`
Done.
Examples/tensorflow/README.md, line 37 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please place `graphene-sgx` into backticks.
Done.
Examples/tensorflow/README.md, line 37 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please add `:` at the end.
Done.
Examples/tensorflow/README.md, line 55 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please place `graphene-direct` into backticks.
Done.
Examples/tensorflow/README.md, line 60 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
This whole part is almost identical for both BERT and ResNet. Could you extract this part into a separate section (e.g., `## Notes on optimal performance`)? Then you won't have this replication of the same text. The only difference I see is `intra_op_parallelism_threads=X` vs `num_intra_threads=X`. Just add a note that the first one is for BERT and the second one is for ResNet.
Done.
Examples/tensorflow/README.md, line 71 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
To get the number of cores...
Done.
Examples/tensorflow/README.md, line 74 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please place `graphene-sgx` into backticks.
Done.
Examples/tensorflow/README.md, line 74 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please add `:` at the end.
Done.
Examples/tensorflow/README.md, line 85 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please place `graphene-direct` into backticks.
Done.
Examples/tensorflow/README.md, line 117 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please add `:`.
Done.
Examples/tensorflow/README.md, line 123 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please add `:`.
Done.
Examples/tensorflow/BERT/Makefile, line 3 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please remove `GRAPHENEDIR`. See our newly updated Examples for how we do it now.
Done.
Examples/tensorflow/BERT/Makefile, line 6 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please remove this include
Done.
Examples/tensorflow/BERT/python.manifest.template, line 4 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please remove all comments from this file (see our newly updated Examples -- we decided to remove comments from all files, leaving only really workload-specific ones)
Done.
Examples/tensorflow/ResNet50/Makefile, line 3 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please remove `GRAPHENEDIR`. See our newly updated Examples for how we do it now.
Done.
Examples/tensorflow/ResNet50/Makefile, line 6 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please remove this include
Done.
Examples/tensorflow/ResNet50/python.manifest.template, line 4 at r5 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
Please remove all comments from this file (see our newly updated Examples -- we decided to remove comments from all files, leaving only really workload-specific ones)
Done.
… and BERT models Signed-off-by: Satya <satyanaraya.illa@intel.com>
Reviewed 5 of 5 files at r6.
Reviewable status: all files reviewed, 2 unresolved discussions, not enough approvals from maintainers (2 more required), not enough approvals from different teams (2 more required, approved so far: ), "fixup! " found in commit messages' one-liners (waiting on @Satya1493)
Examples/tensorflow/README.md, line 77 at r6 (raw file):
## Notes on optimal performance - Above commands are for a 36 core system. Please set the following options accordingly for optimal
There is no need to make these two sentences a list. Please remove the `-` here.
Reviewable status: 8 of 9 files reviewed, 2 unresolved discussions, not enough approvals from maintainers (2 more required), not enough approvals from different teams (2 more required, approved so far: ), "fixup! " found in commit messages' one-liners (waiting on @dimakuv and @Satya1493)
Examples/tensorflow/README.md, line 77 at r6 (raw file):
Previously, dimakuv (Dmitrii Kuvaiskii) wrote…
There is no need to make these two sentences a list. Please remove the `-` here.
Done.
Reviewed 1 of 1 files at r7.
Reviewable status: all files reviewed, 1 unresolved discussion, not enough approvals from maintainers (2 more required), not enough approvals from different teams (2 more required, approved so far: ), "fixup! " found in commit messages' one-liners (waiting on @Satya1493)
a discussion (no related file):
Thanks for the PR! Good from my side (but still blocking on the "merge two commits into one during final rebase").
…esNet50 and BERT models Signed-off-by: Satya <satyanaraya.illa@intel.com>
Moved to the new gramine 'examples' repository at PR #7.
Signed-off-by: Satya <satyanaraya.illa@intel.com>
Description of the changes
TensorFlow examples for ResNet50 and BERT models. The samples run inference using pre-trained models.
How to test this PR?
Please follow the steps in `Examples/tensorflow/README.md`.