Google is attempting to help smartphones recognize images more accurately
while also consuming less power. To that end, the company has introduced a
new family of models called MobileNets: pre-trained image-recognition models
that let developers pick a variant suited to their application's specific
requirements.
Here is the problem: mobile devices aren't well suited to running
machine-learning workloads themselves. So most applications transmit the data
they gather to a cloud service, where the processors that actually derive the
insights are located, and the app simply presents the resulting insights to
the user.
The advantage of this approach is that insights can be computed on massive
workstations and servers, letting you leverage processing power that simply
isn't available on a phone. The disadvantages are latency and privacy: your
data leaves the device, so it is no longer private and becomes vulnerable to
attack.
These issues would disappear if the computation could be performed on the
device itself. Modern mobile processors make this possible, but it drains the
battery — more battery, in short, than most users are willing to sacrifice.
What needs to be done, then, is to optimize every process so that the battery
cost is as low as possible.
With MobileNets, though, you can leave that heavy lifting to Google. The
company has taken care of the optimization beforehand, and developers can
simply drop a model into their application.
These models come in a range of sizes and capabilities, and you can choose
the one that suits your needs. When choosing, keep this trade-off in mind:
the more operations a model performs, the more accurate it tends to be, but
the strain on the device's resources grows as well.
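To make the trade-off concrete, here is a minimal sketch of the cost formula Google's MobileNet paper gives for one depthwise separable convolution layer: the width multiplier (alpha) thins the channel counts, and the cost falls roughly with its square. The layer dimensions below are illustrative, not taken from any specific model.

```python
def sep_conv_mult_adds(dk, df, m, n, alpha=1.0, rho=1.0):
    """Approximate multiply-adds for one depthwise separable convolution:
    dk = kernel size, df = feature-map size, m/n = input/output channels.
    alpha thins the channels; rho shrinks the input resolution."""
    m, n = int(alpha * m), int(alpha * n)
    df = int(rho * df)
    depthwise = dk * dk * m * df * df  # per-channel spatial filtering
    pointwise = m * n * df * df        # 1x1 cross-channel combination
    return depthwise + pointwise

# An illustrative middle layer: 3x3 kernel, 14x14 feature map, 512 channels.
full = sep_conv_mult_adds(3, 14, 512, 512)
half = sep_conv_mult_adds(3, 14, 512, 512, alpha=0.5)
print(full, half)  # cost drops roughly 4x at alpha = 0.5
```

Halving the width multiplier cuts the operation count by about a factor of four, which is exactly the knob MobileNets expose: smaller, faster variants at the price of some accuracy.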
You can deploy these models through TensorFlow Mobile.
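As a rough sketch of that deployment path, here is how a small MobileNet variant can be instantiated and converted for on-device use with the TensorFlow Keras API and the TensorFlow Lite converter (TensorFlow Lite is the successor to TensorFlow Mobile; this assumes TensorFlow 2.x, and the filename is arbitrary):

```python
import tensorflow as tf

# A small MobileNet variant: alpha=0.25 is the width multiplier and
# 128x128 the input resolution. weights=None skips the pretrained
# download here; in practice you would pass weights="imagenet".
model = tf.keras.applications.MobileNet(
    input_shape=(128, 128, 3), alpha=0.25, weights=None)

# Convert the model to a flat buffer that can be bundled into an app.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

with open("mobilenet_0.25_128.tflite", "wb") as f:
    f.write(tflite_bytes)
```

The alpha and input-resolution arguments correspond directly to the size/accuracy trade-off described above: smaller values yield a lighter model file and faster inference on the phone.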