amm - 1 month ago
Android Question

Running a Tensorflow model on Android

I'm trying to figure out the workflow for training and deploying a Tensorflow model on Android. I'm aware of the other questions similar to this one on StackOverflow, but none of them seem to address the problems I've run into.

After studying the Android example from the Tensorflow repository, this is what I think the workflow should be:


  1. Build and train Tensorflow model in Python.

  2. Create a new graph, and transfer all relevant nodes (i.e. not the nodes responsible for training) to this new graph. Trained weight variables are imported as constants so that the C++ API can read them.

  3. Develop Android GUI in Java, using the native keyword to stub out a call to the Tensorflow model.

  4. Run javah to generate the C/C++ stub code for the Tensorflow native call.

  5. Fill in the stub by using the Tensorflow C++ API to read in and access the trained/serialized model.

  6. Use Bazel to build both the Java app and the native Tensorflow interface (as a .so file), and to generate the APK.

  7. Use adb to deploy the APK.
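As a concrete sketch of steps 3-4, the Java side is just a class with a `native` method; javah then derives the C/C++ header from it. The class, method, and library names here are illustrative (not from the original post), and the library load is guarded so the snippet also runs off-device:

```java
// Illustrative stub for the Java side of the JNI boundary (names are hypothetical).
public class MyActivity {
    static {
        // On device this would pull in the cross-compiled libName.so;
        // off-device the load fails, which this sketch tolerates.
        try {
            System.loadLibrary("Name");
        } catch (UnsatisfiedLinkError e) {
            System.out.println("native lib unavailable (expected off-device)");
        }
    }

    // Step 3: stub out the call into the Tensorflow model with `native`.
    // Step 4: running javah on the compiled class (org.tensorflowtest.MyActivity
    // on device) would emit a matching org_tensorflowtest_MyActivity.h header
    // declaring the C++ entry point to fill in during step 5.
    public static native float[] runInference(float[] input);

    public static void main(String[] args) {
        System.out.println("MyActivity stub loaded");
    }
}
```

The guard matters only for desktop experimentation; in the APK the static block runs when the Activity class loads, before any native call.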

Step 6 is the problem. Bazel will happily compile a native (to OS X) .dylib that I can call from Java via JNI. Android Studio, likewise, will generate all the XML that makes the GUI I want. However, Bazel wants all of the Java app code to be inside the 'WORKSPACE' top-level directory (in the Tensorflow repo), and Android Studio immediately links in all sorts of external libraries from the SDK to build GUIs (I know because my Bazel compile fails when it can't find those resources). The only way I can find to force Bazel to cross-compile a .so file is by making it a dependency of an Android rule. I'd rather cross-compile a native lib directly than port my Android Studio code into a Bazel project.

How do I square this? Bazel can supposedly compile Android code, but Android Studio generates code that Bazel can't compile. All the examples from Google simply hand you code from a repo without any clue as to how it was generated. As far as I know, the XML that makes up an Android Studio app is supposed to be generated, not written by hand. If it can be written by hand, how do I avoid needing all those external libraries?

Maybe I'm getting the workflow wrong, or there's some aspect of Bazel/Android Studio that I'm not understanding. Any help is appreciated.



Thanks!

Edit:

There were several things that I ended up doing that might have contributed to the library building successfully:


  1. I upgraded to the latest Bazel.

  2. I rebuilt TensorFlow from source.

  3. I implemented the recommended Bazel BUILD file below, with a few additions (taken from the Android example):

    cc_binary(
        name = "libName.so",
        srcs = [
            "org_tensorflowtest_MyActivity.cc",
            "org_tensorflowtest_MyActivity.h",
            "jni.h",
            "jni_md.h",
            ":libpthread.so",
        ],
        deps = ["//tensorflow/core:android_tensorflow_lib"],
        copts = [
            "-std=c++11",
            "-mfpu=neon",
            "-O2",
        ],
        linkopts = ["-llog -landroid -lm"],
        linkstatic = 1,
        linkshared = 1,
    )

    cc_binary(
        name = "libpthread.so",
        srcs = [],
        linkopts = ["-shared"],
        tags = [
            "manual",
            "notap",
        ],
    )



I haven't verified that this library can be loaded and used in Android yet; Android Studio 1.5 seems to be very finicky about acknowledging the presence of native libs.

Answer

After setting up an Android NDK in your WORKSPACE file, Bazel can cross-compile a .so for Android, like this:

cc_binary(
    name = "libfoo.so",
    srcs = ["foo.cc"],
    deps = [":bar"],
    linkstatic = 1,
    linkshared = 1,
)

$ bazel build foo:libfoo.so \
    --crosstool_top=//external:android/crosstool --cpu=armeabi-v7a \
    --host_crosstool_top=@bazel_tools//tools/cpp:toolchain
$ file bazel-bin/foo/libfoo.so
bazel-bin/foo/libfoo.so: ELF 32-bit LSB  shared object, ARM, EABI5 version 1 (SYSV), dynamically linked (uses shared libs), not stripped
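The NDK setup mentioned above is a one-time WORKSPACE entry. A sketch, where the path and api_level are placeholders for your local install:

```
# WORKSPACE (sketch): point Bazel at an Android NDK so the
# --crosstool_top=//external:android/crosstool flag above resolves.
android_ndk_repository(
    name = "androidndk",
    path = "/path/to/android-ndk-r10e",  # placeholder: your NDK install
    api_level = 21,                      # placeholder: your target API level
)
```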

> Bazel wants all of the java app code to be inside the 'WORKSPACE' top-level directory (in the Tensorflow repo)

When Bazel 0.1.4 is released (I'm pushing it right now) and we have pushed some fixes to TensorFlow and Protobuf, you can start using the TensorFlow repo as a remote repository. After setting it up in your WORKSPACE file, you can refer to TensorFlow rules using @tensorflow//foo/bar labels.
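The remote-repository setup described here would itself be a short WORKSPACE entry. A sketch, assuming a local TensorFlow checkout (the path is a placeholder):

```
# WORKSPACE (sketch): expose a TensorFlow checkout as the external
# repository @tensorflow, so its rules can be referenced from your
# own BUILD files with @tensorflow//foo/bar labels.
local_repository(
    name = "tensorflow",
    path = "/path/to/tensorflow",  # placeholder: your local clone
)
```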
