TFLite: some fixes to build with Bazel on Windows
In this commit:
- make the -Wno-implicit-fallthrough compiler flag
  in flatbuffers' BUILD file conditional on
  non-Windows builds, because MSVC doesn't know
  this flag
- fix the Bazel build command in README.md by
  removing the single quotes around --cxxflags,
  because they aren't needed in Bash and are
  harmful on Windows (cmd.exe doesn't strip
  single quotes)
- fix non-ASCII quotes and apostrophes, as well as
  some formatting issues in README.md

See bazelbuild/bazel#4148
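
To illustrate the quoting issue the commit message describes (an editorial sketch, not part of the commit; the build target is the one from the diff below):

```
# Bash: the shell removes the single quotes, so the compiler sees --std=c++11.
bazel build --cxxopt='--std=c++11' //tensorflow/contrib/lite/java/demo/app/src/main:TfLiteCameraDemo

# cmd.exe: the quotes are passed through, so the compiler is handed the
# literal argument '--std=c++11' (quotes included) and rejects it.
# The unquoted form works in both shells, which is why the README was changed:
bazel build --cxxopt=--std=c++11 //tensorflow/contrib/lite/java/demo/app/src/main:TfLiteCameraDemo
```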
laszlocsomor committed Nov 22, 2017
1 parent 5fbda9d commit cbc7791
Showing 2 changed files with 31 additions and 23 deletions.
47 changes: 26 additions & 21 deletions tensorflow/contrib/lite/README.md
@@ -1,5 +1,5 @@
# TensorFlow Lite
TensorFlow Lite is TensorFlows lightweight solution for mobile and embedded devices. It enables low-latency inference of on-device machine learning models with a small binary size and fast performance supporting hardware acceleration.
TensorFlow Lite is TensorFlow's lightweight solution for mobile and embedded devices. It enables low-latency inference of on-device machine learning models with a small binary size and fast performance supporting hardware acceleration.

TensorFlow Lite uses many techniques for achieving low latency like optimizing the kernels for specific mobile apps, pre-fused activations, quantized kernels that allow smaller and faster (fixed-point math) models, and in the future, leverage specialized machine learning hardware to get the best possible performance for a particular model on a particular device.

@@ -20,18 +20,18 @@ In the demo app, inference is done using the TensorFlow Lite Java API. The demo
The fastest path to trying the demo, is to download the pre-built binary
[TfLiteCameraDemo.apk](https://storage.googleapis.com/download.tensorflow.org/deps/tflite/TfLiteCameraDemo.apk)

Once the apk is installed, click the app icon to start the app. The first-time the app is opened, the app asks for runtime permissions to access the device camera. The demo app opens the back-camera of the device and recognizes the objects in the cameras field of view. At the bottom of the image (or at the left of the image if the device is in landscape mode), it shows the latency of classification and the top three objects classified.
Once the apk is installed, click the app icon to start the app. The first-time the app is opened, the app asks for runtime permissions to access the device camera. The demo app opens the back-camera of the device and recognizes the objects in the camera's field of view. At the bottom of the image (or at the left of the image if the device is in landscape mode), it shows the latency of classification and the top three objects classified.

## Building in Android Studio using TensorFlow Lite AAR from JCenter
The simplest way to compile the demo app, and try out changes to the project code is to use AndroidStudio.

- Install the latest version of Android Studio 3 as specified [here](https://developer.android.com/studio/index.html).
- Make sure the Android SDK version is greater than 26 and NDK version is greater than 14 (in the Android Studio Settings).
- Import the tensorflow/contrib/lite/java/demo directory as a new Android Studio project.
- Import the `tensorflow/contrib/lite/java/demo` directory as a new Android Studio project.
- Click through installing all the Gradle extensions it requests.
- Download the quantized Mobilenet TensorFlow Lite model from [here](https://storage.googleapis.com/download.tensorflow.org/models/tflite/mobilenet_v1_224_android_quant_2017_11_08.zip)
- unzip and copy mobilenet_quant_v1_224.tflite to the assets directory:
tensorflow/contrib/lite/java/demo/app/src/main/assets/
`tensorflow/contrib/lite/java/demo/app/src/main/assets/`
- Build and run the demo app
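
A hedged sketch of the unzip-and-copy step in the list above, using the archive name from the download link (exact local paths may differ):

```
unzip mobilenet_v1_224_android_quant_2017_11_08.zip
cp mobilenet_quant_v1_224.tflite \
  tensorflow/contrib/lite/java/demo/app/src/main/assets/
```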

## Building TensorFlow Lite and the demo app from source
@@ -43,7 +43,7 @@ The simplest way to compile the demo app, and try out changes to the project cod
### Install Bazel
If bazel is not installed on your system, install it now by following [these directions](https://bazel.build/versions/master/docs/install.html)

NOTE: Bazel does not currently support building for Android on Windows. Full support for gradle/cmake builds is coming soon, but in the meantime Windows users should download the [prebuilt binary](https://storage.googleapis.com/download.tensorflow.org/deps/tflite/TfLiteCameraDemo.apk) instead.
NOTE: Bazel does not fully support building Android on Windows yet. Full support for Gradle/CMake builds is coming soon, but in the meantime Windows users should download the [prebuilt binary](https://storage.googleapis.com/download.tensorflow.org/deps/tflite/TfLiteCameraDemo.apk) instead.

### Install Android NDK and SDK
Bazel is the primary build system for TensorFlow. Bazel and the Android NDK and SDK must be installed on your system.
@@ -53,25 +53,30 @@ Bazel is the primary build system for TensorFlow. Bazel and the Android NDK and
- In the root of the TensorFlow repository update the `WORKSPACE` file with the `api_level` and location of the SDK and NDK. If you installed it with AndroidStudio the SDK path can be found in the SDK manager, and the default NDK path is:`{SDK path}/ndk-bundle.`

```
Android_sdk_repository (
    name = "androidsdk",
    api_level = 23,
    build_tools_version = "23.0.2",
    path = "/home/xxxx/android-sdk-linux/", )
android_sdk_repository (
    name = "androidsdk",
    api_level = 23,
    build_tools_version = "23.0.2",
    path = "/home/xxxx/android-sdk-linux/",
)
android_ndk_repository(
    name="androidndk",
    path="/home/xxxx/android-ndk-r10e/",
    api_level=19)
    name = "androidndk",
    path = "/home/xxxx/android-ndk-r10e/",
    api_level = 19,
)
```

Additional details on building with Android can be found [here](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/java/demo/README.md).

### Build the source code
Run bazel with the following command to build the demo.

Build the demo app:
bazel build --cxxopt='--std=c++11' //tensorflow/contrib/lite/java/demo/app/src/main:TfLiteCameraDemo

```
bazel build --cxxopt=--std=c++11 //tensorflow/contrib/lite/java/demo/app/src/main:TfLiteCameraDemo
```

### Note

@@ -105,7 +110,7 @@ The [TensorFlow for Poets](https://codelabs.developers.google.com/codelabs/tenso


### Train a custom model
A developer may choose to train a custom model using Tensorflow. TensorFlow documentation has [several tutorials](https://www.tensorflow.org/tutorials/) for building and training models. If the user has written a model using TensorFlows Slim Framework the first step is to export this to a GraphDef file. This is necessary because Slim does not store the model structure outside the code, so to communicate with other parts of the framework it needs to be exported. Documentation for the export can be found [here](https://github.com/tensorflow/models/tree/master/research/slim#Export). The output of this step will be a .pb file for the custom model.
A developer may choose to train a custom model using Tensorflow. TensorFlow documentation has [several tutorials](https://www.tensorflow.org/tutorials/) for building and training models. If the user has written a model using TensorFlow's Slim Framework the first step is to export this to a GraphDef file. This is necessary because Slim does not store the model structure outside the code, so to communicate with other parts of the framework it needs to be exported. Documentation for the export can be found [here](https://github.com/tensorflow/models/tree/master/research/slim#Export). The output of this step will be a .pb file for the custom model.
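
As a hedged sketch of that export step — the script and flags come from the linked Slim documentation, and the model name and output path here are placeholders:

```
python models/research/slim/export_inference_graph.py \
  --alsologtostderr \
  --model_name=mobilenet_v1 \
  --output_file=/tmp/mobilenet_v1_inf_graph.pb
```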

TensorFlow Lite currently supports a subset of TensorFlow operators. Please refer to [this document](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/g3doc/tf_ops_compatibility.md) for details of supported operators and their usage. This
set will continue to expand in future releases of Tensorflow Lite.
@@ -129,7 +134,7 @@ Since we employ several formats, the following definitions may be useful:
- TensorFlow lite model (.lite) - a serialized flatbuffer, containing TensorFlow lite operators and Tensors for the TensorFlow lite interpreter. This is most analogous to TensorFlow frozen GraphDefs.

### Freeze Graph
To use this .pb GraphDef file within TensorFlow Lite, the application developer will need checkpoints containing trained weight parameters. The .pb contains only the structure of the graph. The process of merging the checkpoint values with the graph structure is known as freezing the graph.
To use this .pb GraphDef file within TensorFlow Lite, the application developer will need checkpoints containing trained weight parameters. The .pb contains only the structure of the graph. The process of merging the checkpoint values with the graph structure is known as "freezing" the graph.

The developer should know where the checkpoints folder is present or checkpoints can also be downloaded for a pre-trained model (Example: Here is a link to the [MobileNets](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md)).
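
A minimal sketch of the freezing step, assuming TensorFlow's stock freeze_graph tool; the paths and output node name are placeholders, not values from this README:

```
bazel build tensorflow/python/tools:freeze_graph
bazel-bin/tensorflow/python/tools/freeze_graph \
  --input_graph=/tmp/mobilenet_v1_inf_graph.pb \
  --input_checkpoint=/tmp/checkpoints/mobilenet_v1.ckpt \
  --input_binary=true \
  --output_graph=/tmp/frozen_mobilenet_v1.pb \
  --output_node_names=MobilenetV1/Predictions/Reshape_1
```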

@@ -156,7 +161,7 @@ Here is a sample command line to convert the frozen Graphdef to '.lite' format f
bazel build tensorflow/contrib/lite/toco:toco
bazel run --config=opt tensorflow/contrib/lite/toco:toco -- \
--input_file=(pwd)/mobilenet_v1_1.0_224/frozen_graph.pb \
--input_file=$(pwd)/mobilenet_v1_1.0_224/frozen_graph.pb \
--input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE \
--output_file=/tmp/mobilenet_v1_1.0_224.lite --inference_type=FLOAT \
--input_type=FLOAT --input_arrays=input \
@@ -184,7 +189,7 @@ with tf.Session() as sess:
```
For detailed instructions on how to use the Tensorflow Optimizing Converter, please see [here](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/toco/g3doc/cmdline_examples.md).

You may refer to the [Ops compatibility guide](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/g3doc/tf_ops_compatibility.md) for troubleshooting help. If that doesnt help, please file an [issue](https://github.com/tensorflow/tensorflow/issues).
You may refer to the [Ops compatibility guide](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/g3doc/tf_ops_compatibility.md) for troubleshooting help. If that doesn't help, please file an [issue](https://github.com/tensorflow/tensorflow/issues).

## Step 3. Use the TensorFlow Lite model for inference in a mobile app

@@ -193,9 +198,9 @@ After completion of Step 2 the developer should have a .lite model.
### For Android
Because Android apps need to be written in Java, and core TensorFlow is in C++, a JNI library is provided to interface between the two. Its interface is aimed only at inference, so it provides the ability to load a graph, set up inputs, and run the model to calculate particular outputs. The full documentation for the set of methods can be seen [here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/g3doc/). The demo app is also open sourced on [github](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/java/demo/app).

The [demo app](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/java/demo/app) uses this interface, so its a good place to look for example usage. You can also download the prebuilt binary [here](http://download.tensorflow.org/deps/tflite/TfLiteCameraDemo.apk).
The [demo app](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/java/demo/app) uses this interface, so it's a good place to look for example usage. You can also download the prebuilt binary [here](http://download.tensorflow.org/deps/tflite/TfLiteCameraDemo.apk).

Note that youd need to follow instructions for installing TensorFlow on Android, setting up bazel and Android Studio outlined [here](https://www.tensorflow.org/mobile/android_build).
Note that you'd need to follow instructions for installing TensorFlow on Android, setting up bazel and Android Studio outlined [here](https://www.tensorflow.org/mobile/android_build).

### For iOS
Follow the documentation [here](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/g3doc/ios.md) to integrate a TFLite model into your app.
7 changes: 5 additions & 2 deletions third_party/flatbuffers/flatbuffers.BUILD
@@ -6,8 +6,11 @@ licenses(["notice"]) # Apache 2.0

FLATBUFFERS_COPTS = [
    "-fexceptions",
    "-Wno-implicit-fallthrough",
]
] + select({
    "@bazel_tools//src:windows": [],
    "@bazel_tools//src:windows_msvc": [],
    "//conditions:default": ["-Wno-implicit-fallthrough"],
})

# Public flatc library to compile flatbuffer files at runtime.
cc_library(
