Our recently published list of groundbreaking scientists highlighted the smartest and most innovative modern scientists and researchers in the field.
Among them is Abe Davis, an MIT graduate student who has worked with researchers from MIT, Microsoft, and Adobe to publish their findings on “the visual microphone” — an algorithm that can recover sound from silent video of everyday objects.
In a TED talk this spring, Davis broke down the research for a stunned audience using videos and graphics. Here’s what he explained:
If we watch video of a wrist with a pulse or a video of a breathing baby, for instance, the naked eye can’t easily discern movement. But Davis’ team created software that finds this subtle motion in video and amplifies it, making it visible.
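The team’s actual pipeline is far more sophisticated (it filters motion in a multi-scale decomposition of each frame), but the core idea — exaggerating tiny temporal changes so they become visible — can be sketched in a few lines. This is an illustrative toy, not the published algorithm; the function name, the simple mean-subtraction approach, and the amplification factor `alpha` are all assumptions for the example:

```python
import numpy as np

def amplify_motion(frames, alpha=20.0):
    """Crudely exaggerate tiny temporal changes in a video.

    frames: array of shape (T, H, W), grayscale intensities in [0, 1].
    alpha:  amplification factor applied to deviations from the temporal mean.
    """
    frames = np.asarray(frames, dtype=float)
    baseline = frames.mean(axis=0)       # the static content of the scene
    deviation = frames - baseline        # the tiny changing part
    return np.clip(baseline + alpha * deviation, 0.0, 1.0)

# Tiny synthetic example: a one-pixel "video" whose brightness barely wobbles.
video = np.array([[[0.500]], [[0.501]], [[0.499]], [[0.500]]])
out = amplify_motion(video, alpha=100.0)
# A 0.001 wobble, invisible to the eye, becomes a 0.1 swing in brightness.
```

The real system amplifies motion selectively by frequency band, which is why a pulse or a breath can be isolated from everything else in the scene.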
So they went further and asked the question: Could this software be used to recreate sound from motion? If all sound creates vibrations in objects, and they captured those vibrations through video, would they be able to discern original sound?
Davis explains one of the team’s first experiments, in which they played the tune “Mary Had a Little Lamb” from a speaker placed near a potted plant. The plant’s leaves vibrated, as seen in a slowed-down video shot at thousands of frames per second. He calls this motion “perceptually invisible” to human beings.
Crazily enough, Davis and his team were able to create an algorithm that recovers the “sound” (aka the vibrations of the leaves) and plays it back.
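In essence, the algorithm turns motion over time into sound over time: each video frame yields a tiny motion measurement, and that sequence of measurements becomes a sequence of audio samples. The sketch below shows only that final conversion step with assumed inputs — it skips the hard part (extracting sub-pixel motion from the frames), and the function name and normalization choices are illustrative:

```python
import numpy as np

def motion_to_audio(displacements, fps):
    """Turn a per-frame motion signal into a crude audio waveform.

    displacements: 1-D array with one aggregate motion measurement per frame.
    fps: frame rate of the video, which becomes the audio sample rate.
    Returns (samples, sample_rate), with samples normalized to [-1, 1].
    """
    d = np.asarray(displacements, dtype=float)
    d = d - d.mean()              # remove the DC offset (the object's rest position)
    peak = np.abs(d).max()
    if peak > 0:
        d = d / peak              # normalize to a playable range
    return d, fps

# A leaf vibrating at 440 Hz, filmed for one second at 2,200 frames per second.
fps = 2200
t = np.arange(fps) / fps
samples, rate = motion_to_audio(1e-4 * np.sin(2 * np.pi * 440 * t), fps)
```

Note why the high-speed camera matters in this framing: the frame rate is the sample rate, so a camera must capture frames faster than twice the highest sound frequency it hopes to recover.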
Perhaps one of his most practical demonstrations is a silent video of headphones resting on a laptop while playing music. The sound recovered through the algorithm was accurately recognized by the music-identification app Shazam.
While Davis also presents examples from experiments that varied the lighting, noise volume, and camera quality, the crux of his team’s research and findings is this: by using the team’s specialised algorithm, cameras of varied quality can recover sounds from soundless video.
By using video of the vibrations of objects, he’s uncovered a new way to interact with still objects. After recording just five seconds of video (with movement caused by a person’s fist striking the wooden surface), Davis is able to create a simulation of how the object would respond to new forces by clicking and dragging with his mouse.
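The idea behind that demo is that the observed vibrations tell you how the object “likes” to move, so you can replay that behavior in response to a new, imaginary push. A minimal sketch of the physics being simulated — a single damped vibration mode responding to an impulse — assuming the mode’s frequency and damping were already measured from the video (the function and its parameters are hypothetical, not Davis’s implementation):

```python
import math

def impulse_response(freq_hz, damping, duration_s, fps):
    """Displacement of a single damped vibration mode after a unit push.

    freq_hz:  the mode's natural frequency, as observed in the video.
    damping:  fraction of critical damping (0 < damping < 1).
    Returns one displacement sample per rendered frame.
    """
    w = 2 * math.pi * freq_hz               # angular frequency
    wd = w * math.sqrt(1 - damping ** 2)    # damped oscillation frequency
    return [math.exp(-damping * w * t) * math.sin(wd * t)
            for t in (k / fps for k in range(int(duration_s * fps)))]

# An object "plucked" by a simulated mouse drag, rendered at 60 fps:
# it rings at its natural frequency and gradually settles back to rest.
wiggle = impulse_response(freq_hz=3.0, damping=0.05, duration_s=2.0, fps=60)
```

A real object has many such modes at once; summing several of these responses, each shaping the image differently, is what makes the on-screen object appear to sway convincingly under the mouse.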
So while the public sees simulations like this often in video games and 3D models, these findings tell us that we can also achieve them with simple video techniques applied to real-world objects. Davis says these experiments unearth incredible potential for changing the way we see the world.
Davis is also the creator of Caperture for iOS. It allows users to capture, view, and send objects in 3D.