TensorFlow Lite memory usage and performance
TensorFlow Lite uses FlatBuffers to store models. FlatBuffers is a cross-platform, open source serialization library. Its main advantage is that data can be accessed directly, without a parsing/unpacking step that would require a secondary representation and per-object memory allocation. This in-place access makes FlatBuffers more memory-efficient than Protocol Buffers and helps keep the memory footprint small.
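As an illustration, the following sketch loads a converted model with the Python interpreter; the file name model.tflite is an assumption. Because the model is a FlatBuffer, the interpreter can map it and read tensor data in place rather than unpacking it into a second representation first:

```python
import tensorflow as tf

# A minimal sketch, assuming a converted model file named
# "model.tflite" exists on disk. The interpreter reads the FlatBuffer
# directly; weights are accessed in place, not deserialized.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

# Inspect the model's input and output tensors.
print(interpreter.get_input_details())
print(interpreter.get_output_details())
```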
FlatBuffers was originally developed for gaming platforms and has since been adopted by other performance-sensitive applications. At conversion time, TensorFlow Lite pre-fuses activations and biases into the preceding operations, allowing the resulting model to execute faster. The interpreter uses static memory allocation and a static execution plan, which allows it to load faster. The operation kernels are optimized for ARM processors using the NEON instruction set. A minimal sketch of this conversion-and-inference flow follows.
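The sketch below converts a small Keras model (a stand-in defined inline, not a model from the text) and runs it through the interpreter; the allocate_tensors() call is where the interpreter builds its static allocation plan up front:

```python
import numpy as np
import tensorflow as tf

# A stand-in model for illustration; in practice you would convert
# your own trained tf.keras model. Conversion is the step where
# TensorFlow Lite fuses activations and biases into the preceding ops.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# The interpreter plans all tensor allocations ahead of execution,
# which is part of why it loads and runs quickly.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
interpreter.set_tensor(input_index, np.zeros((1, 4), dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(output_index))
```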
TensorFlow Lite takes advantage of innovations happening at the silicon level on these devices. It supports the Android Neural Networks API (NNAPI); at the time of writing, a few original equipment manufacturers (OEMs) have started using the NNAPI. TensorFlow Lite also uses direct graphics acceleration: the Open Graphics Library (OpenGL) on Android and Metal on iOS.
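As a sketch of how this looks in code, hardware delegates can be attached to the Python interpreter as shown below; the delegate library name is hypothetical and depends on the platform build (NNAPI and GPU delegates are typically used from the Android APIs instead):

```python
import tensorflow as tf

# A minimal sketch; "libtensorflowlite_gpu_delegate.so" is a
# hypothetical library path for a GPU delegate build. Delegates route
# supported operations to the NNAPI or the GPU (OpenGL ES on Android,
# Metal on iOS) instead of the default CPU kernels.
gpu_delegate = tf.lite.experimental.load_delegate(
    "libtensorflowlite_gpu_delegate.so")
interpreter = tf.lite.Interpreter(
    model_path="model.tflite",
    experimental_delegates=[gpu_delegate])
interpreter.allocate_tensors()
```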
To improve performance, there have been changes to quantization, a technique for storing numbers and performing calculations on them at reduced precision (for example, 8-bit integers instead of 32-bit floats). This helps in two ways. First, a smaller model is better suited to smaller devices. Second, many processors have specialized SIMD instruction sets that process fixed-point operands much faster than floating-point numbers. A very naive way to quantize, then, is to simply shrink the weights and activations after training is done; however, this leads to suboptimal accuracy.
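The following sketch shows this simple post-training approach using the converter's Optimize.DEFAULT flag; the inline model is a stand-in for a trained one:

```python
import tensorflow as tf

# A minimal sketch of naive post-training quantization. The model here
# is a stand-in; in practice you would pass your own trained tf.keras
# model. Optimize.DEFAULT asks the converter to quantize the weights
# to 8-bit integers, shrinking the file and enabling fixed-point
# kernels on supported processors.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu", input_shape=(4,)),
])
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(quantized_model)
```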
TensorFlow Lite gives three times the performance of TensorFlow on MobileNet and Inception-v3. While TensorFlow Lite currently supports only inference, a training module is planned for the future. TensorFlow Lite supports around 50 commonly used operations.
It supports models such as MobileNet, Inception-v3, ResNet50, SqueezeNet, DenseNet, Inception-v4, and SmartReply, among others.