Smartphones already use a variety of different technologies to make everyday tasks easier. Don’t know where you’re going? Fire up Google Maps or any other navigation app that can leverage your phone’s GPS sensor. Need to know the answer to a quick question? Just whip out your phone and ask Google or Siri.
But current technology can only take us so far, according to a professor at Purdue University. Researchers at Purdue are working on technology that could essentially turn your smartphone into a third human-like eye. The tech, which would use a system of algorithms known as deep learning, could enable your smartphone’s camera to immediately understand any object it sees.
Eugenio Culurciello, an associate professor in Purdue University’s Weldon School of Biomedical Engineering who is involved in the project, said the technology would work similarly to that shown in the movie “Her.”
“It would give [smartphones] that capability,” Culurciello told Business Insider, specifically referencing a scene in which Joaquin Phoenix’s character takes out his phone (“Samantha” voiced by Scarlett Johansson) to show it the world around him. “The phone would see the way [Joaquin Phoenix] sees … that’s basically what we’re really going for here.”
According to Culurciello, the technology would dig deeper than current augmented reality apps and contextual computing. Apps such as Nokia City Lens, for example, can use your smartphone camera’s viewfinder to tell you which building is in front of you or which restaurant is nearby. City Lens does that by pulling information from Nokia Maps and overlaying it on your environment. With the technology that Purdue is researching, however, Culurciello says there would be no server communication required. The phone would simply understand the image it’s seeing, just like you would.
It would do this through deep learning algorithms, which process an image in layers to understand its content. As Culurciello notes, the technology might use one layer to recognise the eyes of a person in a photo, while another layer would identify the nose, and so on.
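That layered idea can be sketched in a few lines of code. The snippet below is a toy illustration, not Purdue’s actual system: a first “layer” convolves an image with a hand-written edge-detecting kernel (real deep-learning systems learn their kernels from data), and a second layer pools the result into a coarser summary, the way later layers combine simple features into larger ones. The kernel, image, and layer functions are all illustrative assumptions.

```python
def convolve(image, kernel):
    """Layer 1: slide a 3x3 kernel over a 2D image (valid padding)."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h - 2):
        row = []
        for x in range(w - 2):
            s = sum(image[y + i][x + j] * kernel[i][j]
                    for i in range(3) for j in range(3))
            row.append(max(0, s))  # ReLU: keep only positive responses
        out.append(row)
    return out

def max_pool(image):
    """Layer 2: downsample by taking the max of each 2x2 block."""
    return [[max(image[y][x], image[y][x + 1],
                 image[y + 1][x], image[y + 1][x + 1])
             for x in range(0, len(image[0]) - 1, 2)]
            for y in range(0, len(image) - 1, 2)]

# A vertical-edge detector: responds where brightness changes left to right.
EDGE_KERNEL = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]

# Tiny 6x6 "image": dark left half, bright right half.
image = [[0, 0, 0, 9, 9, 9] for _ in range(6)]

layer1 = convolve(image, EDGE_KERNEL)  # simple features: edge responses
layer2 = max_pool(layer1)              # coarser summary of those features
print(layer2)                          # strong responses at the dark/bright boundary
```

In a real deep network, dozens of such layers are stacked, with learned kernels, so early layers find edges, middle layers find parts like eyes and noses, and late layers recognise whole objects.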
The intention would be to make your smartphone a more intelligent virtual assistant that can perceive your surroundings just as clearly as, or perhaps even more clearly than, you can. For example, if you’re looking for a pair of shoes at a shopping mall, your phone may be able to point you in the right direction. Let’s say you’ve already searched for those shoes online. If you’re in a store and you have your phone out, it may be able to read a sign for those shoes from a distance before you have the chance to see it.
Additionally, since the technology is capable of recognising people and objects in an image, it can tag elements of a photo. For instance, if you’re looking for a photo of your best friends standing under a tree from last fall, you could try typing the word “tree” in your phone’s search bar rather than sifting through thousands of pictures.
Until this point, Culurciello says, it’s been challenging to put this technology in mobile devices because it requires a great deal of processing power. The research group at Purdue, however, said it has developed software and hardware capable of showing how deep learning could work in a conventional smartphone processor. The technology wouldn’t be limited to smartphones, though: it could be implemented in wearable devices such as Google Glass as well, Culurciello explained.
This type of functionality isn’t too far from appearing in everyday mobile devices.
“With the right partnership, we could do this within a year,” he said.
Culurciello said the researchers are currently in talks with device manufacturers such as Samsung and Sony about a potential partnership.