Age, gender, and emotion prediction

This chapter covers a complete iOS application that uses Core ML models to predict age, gender, and emotion from a photo, taken either with the iPhone camera or chosen from the user's photo gallery.

Core ML enables developers to ship and run pre-trained models on the device itself, which has several advantages. Because Core ML runs locally, the app does not need to call a cloud service to get prediction results; this removes network latency and saves data bandwidth. The other crucial benefit of on-device inference is privacy: you don't need to send user data to a third party to get results. The main downside of an offline model is that it cannot be updated in place, so it cannot improve with newer inputs unless you ship a new app version. Furthermore, some models have a large storage and memory footprint, which matters on a mobile device where both are limited.

With Core ML, once you import the ML model into your project, Xcode generates a Swift class for it and handles the rest of the integration work. In this project, we are going to build the iOS application based on the following research paper by Gil Levi and Tal Hassner: Age and Gender Classification Using Convolutional Neural Networks (https://ieeexplore.ieee.org/document/7301352), IEEE Workshop on Analysis and Modeling of Faces and Gestures (AMFG), at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, 2015.
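
To give a concrete idea of what that integration looks like, here is a minimal sketch of running an imported Core ML model through the Vision framework. The model class name AgeNet is a placeholder here, not part of the original project: Xcode generates one class per imported .mlmodel file, so substitute the name of the model you actually add.

import UIKit
import CoreML
import Vision

// AgeNet is a hypothetical generated class; Xcode creates one
// automatically for every .mlmodel file added to the project.
func predictAgeGroup(from image: UIImage) {
    guard let cgImage = image.cgImage,
          let visionModel = try? VNCoreMLModel(for: AgeNet().model) else {
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // For a classifier model, the results are
        // VNClassificationObservation values sorted by confidence.
        guard let top = request.results?.first as? VNClassificationObservation else {
            return
        }
        print("Predicted: \(top.identifier), confidence: \(top.confidence)")
    }
    // Crop and scale the input image to the size the model expects.
    request.imageCropAndScaleOption = .centerCrop

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}

Vision takes care of converting the UIImage into the pixel buffer format the model expects, which is why it is usually preferable to calling the generated model class's prediction method directly.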

This project is built on a MacBook Pro with Xcode version 9.3 on macOS High Sierra. Age and gender prediction has become a common feature on social media platforms. While there are multiple algorithms for predicting and classifying age and gender, their performance is still being improved. In this chapter, the classification is done with deep convolutional neural networks (CNNs).

You can find the application developed in this chapter here: https://github.com/intrepidkarthi/MLmobileapps/tree/master/Chapter2. We are going to use the Adience dataset for our application in this chapter, which can be found here: https://talhassner.github.io/home/projects/Adience/Adience-data.html.