Have you ever taken a photo but wished you could view the scene from a different angle?
In the near future, the cameras and software embedded in your smartphone could let you not only capture photos, but also create, manipulate, and edit 3-D effects in real time.
For instance, let’s say it’s Halloween and you’re taking a photo of your friend.
However, since she's sitting down in the picture, you can't really get a good look at her costume. With future imaging technology, you could alter the image so that your friend is standing instead of sitting.
Mantis Vision, an Israel-based company that makes 3-D software solutions, is already working on ways to inject this type of imaging technology into mobile devices.
The company provides the core 3-D engine behind Google’s Project Tango, which was unveiled earlier this year. Developed in Google’s Advanced Technology and Projects Group (ATAP), Project Tango is the company’s effort to bring true 3-D effects to smartphones and tablets.
Google has already unveiled smartphone and tablet prototypes for Project Tango — meaning these devices are capable of mapping the world around them and rendering objects in a realistic 3-D format.
We spent some time with the CEO of Mantis Vision, Amihai Loven, to learn more about the stunning tech that will power Google’s Project Tango gadgets — and potentially all cameras in our future mobile devices.
Mantis Vision claims its technology is different from the 3-D effects you'll see on current devices like the HTC One M8. While the camera in HTC's new flagship can detect depth, Mantis Vision's technology translates every single image frame into 3-D.
This, in turn, not only lets users view images from different perspectives, but also alter the individual points that make up an image. The same effects are available for video as well.
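Mantis Vision hasn't published how its engine works, but the general idea behind translating a frame into 3-D is well understood: each pixel of a captured depth map is back-projected into a point in space, producing a point cloud that can then be viewed from other angles or edited point by point. Here is a minimal sketch of that step using a standard pinhole camera model; the function name and the intrinsics (`fx`, `fy`, `cx`, `cy`) are illustrative, not Mantis Vision's actual API.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into an Nx3 point cloud
    using a pinhole camera model. fx/fy are focal lengths in pixels;
    (cx, cy) is the principal point. Illustrative only -- not the
    actual Mantis Vision pipeline."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx  # horizontal offset scaled by depth
    y = (v - cy) * z / fy  # vertical offset scaled by depth
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Toy 2x2 depth map, 1 m everywhere, with simple made-up intrinsics.
pts = depth_to_point_cloud(np.ones((2, 2)), fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Once every frame is represented this way, "editing the particles" of an image amounts to moving, recoloring, or deleting entries of that point array before re-rendering.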
Notice how the subject in the video below changes from standing to sitting using Mantis' technology. The clip comes from a developer version of Mantis' software and shows only raw data, so it doesn't reflect how images will look when the technology reaches consumers; Mantis tells us image quality will be much sharper in the final iteration.
Images can be manipulated through both touch input and the device's sensors. For example, you can change the perspective of an image or video by touching it or simply tilting the device from left to right. The same is true of the 3-D image effects in HTC's One M8.
But what's more interesting, Loven says, is the idea that you'll potentially be able to interact with images using all of your phone's sensors, including the microphone, the heart rate monitor in devices like the Galaxy S5, and more. For example, an image could change depending on where on the screen you're looking, with the front camera tracking your eyes.
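The tilt-to-parallax trick described above has a simple geometric core: read the device's tilt angle, rotate the captured point cloud by that angle, and project it back onto the screen. The sketch below shows that math under assumed values; the function name, the choice of rotation axis, and the intrinsics are all hypothetical, not taken from Mantis Vision's software.

```python
import numpy as np

def reproject(points, tilt_deg, fx=500.0, cx=320.0, cy=240.0):
    """Rotate an Nx3 point cloud around the vertical (y) axis by the
    device's tilt angle, then project the points back to 2-D pixel
    coordinates with a pinhole model -- the basic geometry behind
    viewing a 3-D capture from a new angle as the phone tilts."""
    t = np.radians(tilt_deg)
    rot_y = np.array([[np.cos(t), 0.0, np.sin(t)],
                      [0.0,       1.0, 0.0],
                      [-np.sin(t), 0.0, np.cos(t)]])
    rotated = points @ rot_y.T
    z = rotated[:, 2]
    u = fx * rotated[:, 0] / z + cx  # perspective divide, then shift
    v = fx * rotated[:, 1] / z + cy  # to the principal point
    return np.stack([u, v], axis=1)

# With no tilt, a point straight ahead projects to the image center.
uv = reproject(np.array([[0.0, 0.0, 1.0]]), tilt_deg=0.0)
```

In practice the tilt angle would come from the gyroscope or accelerometer; swapping in a gaze direction from eye tracking, as Loven suggests, changes only where the angle comes from, not the projection math.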
The technology works best the closer you are to an object, but Loven says it will eventually be refined enough to capture more prominent 3-D effects from farther away. It isn't meant to replace regular 2-D photography in smartphones, however: devices using Mantis' tech will still take standard, flat photos, and the 3-D capture will be optional.
Here’s an example of the type of 3-D images you’ll be able to take using Mantis Vision’s software and hardware. Again, this is a rough version of what the final product will look like. It’s just sample raw data being used to show what the technology is capable of.
That’s not to say this is exactly what we’ll see in Google’s Project Tango devices. Some of the elements will be there, but Loven notes that Google is more interested in features like indoor mapping than enhancing photography.
Here’s a more realistic example of what Mantis’ technology will look like when it hits consumer mobile devices, based on a concept video released by the company.
A mother takes a 3-D video of her daughter dancing.
After the video is taken, the little girl can choose which backdrop she wants. She can also add in augmented reality elements, such as artificial butterflies flying across the scene.
She then sends the video to her father. Notice how the still preview of the video moves as the phone moves.
Mantis Vision's technology is one of several indications that 3-D effects are about to play a much larger role in mainstream consumer technology. Amazon is expected to unveil its first smartphone this week, which is rumoured to include four front-facing cameras for enabling 3-D effects.
Lytro is also working to bring interactive photography to the web by making its image player open source. The company is already working with the photography community 500px to increase adoption of Lytro’s “living pictures.”
Loven, however, claims that Mantis' ambitions and capabilities are much broader than the competition's, touting its tech as true 3-D while rivals offer 2-D with a 3-D "flavour."
You can check out Mantis Vision’s full concept video below.