News

Is Your Potato Chip Bag Spying on You?

Researchers at MIT, Microsoft, and Adobe have developed an algorithm that can reconstruct an audio signal by analysing minute vibrations of objects depicted in video. In one set of experiments, they were able to recover intelligible speech from the vibrations of a potato-chip bag photographed from 15 feet away through soundproof glass.

In other experiments, they extracted useful audio signals from videos of aluminium foil, the surface of a glass of water, and even the leaves of a potted plant. The researchers will present their findings in a paper at this year’s Siggraph, the premier computer graphics conference.

“When sound hits an object, it causes the object to vibrate,” says Abe Davis, a graduate student in electrical engineering and computer science at MIT and first author on the new paper. “The motion of this vibration creates a very subtle visual signal that’s usually invisible to the naked eye. People didn’t realize that this information was there.”

Joining Davis on the Siggraph paper are Frédo Durand and Bill Freeman, both MIT professors of computer science and engineering; Neal Wadhwa, a graduate student in Freeman’s group; Michael Rubinstein of Microsoft Research, who did his PhD with Freeman; and Gautham Mysore of Adobe Research.

Reconstructing audio from video requires that the frequency of the video samples, the number of frames of video captured per second, be higher than the frequency of the audio signal. In some of their experiments, the researchers used a high-speed camera that captured 2,000 to 6,000 frames per second. That’s much faster than the 60 frames per second possible with some smartphones, but well below the frame rates of the best commercial high-speed cameras, which can top 100,000 frames per second.
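To make that sampling constraint concrete, here is a minimal sketch (not the researchers' code; the 440 Hz tone and the exact frame rates are illustrative). It applies the standard Nyquist rule of thumb that unambiguous reconstruction needs a sample rate of at least twice the highest frequency of interest, and compares the camera speeds mentioned above.

```python
# Minimal sketch: the video frame rate acts as the sampling rate
# for any audio recovered from the vibrations.
import numpy as np

def sample_tone(tone_hz, frame_rate, duration_s=1.0):
    """Sample a pure tone at the given frame rate (frames per second)."""
    t = np.arange(0, duration_s, 1.0 / frame_rate)
    return np.sin(2 * np.pi * tone_hz * t)

tone_hz = 440  # an audible pitch, chosen only for illustration
for fps in (6000, 2000, 60):
    samples = sample_tone(tone_hz, fps)
    nyquist = fps / 2  # highest frequency this frame rate can represent
    verdict = "recoverable" if tone_hz < nyquist else "aliased (too few samples)"
    print(f"{fps:>5} fps ({len(samples):>4} samples/s) -> "
          f"Nyquist limit {nyquist:>6.0f} Hz: a {tone_hz} Hz tone is {verdict}")
```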

In other experiments, however, they used an ordinary digital camera. Because of a quirk in the design of most cameras’ sensors, the researchers were able to infer information about high-frequency vibrations even from video recorded at a standard 60 frames per second. While this audio reconstruction wasn’t as faithful as that with the high-speed camera, it may still be good enough to identify the gender of a speaker in a room; the number of speakers; and even, given accurate enough information about the acoustic properties of speakers’ voices, their identities.
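The quirk in question is the rolling shutter found in most consumer CMOS sensors: the rows of each frame are read out one after another rather than all at once, so a single 60 fps frame already contains many closely spaced time samples of the vibration. The back-of-the-envelope sketch below estimates the effective sample rate this provides; the row count and readout fraction are assumed, illustrative values, not figures from the paper.

```python
# Minimal sketch, assuming a rolling-shutter sensor: each image row is
# exposed at a slightly different instant, so the per-row readout time,
# not the frame time, sets the effective sampling interval.
frame_rate = 60              # frames per second of an ordinary camera
rows_per_frame = 1080        # rows in a 1080p frame (illustrative)
readout_fraction = 0.9       # assumed share of the frame time spent reading rows

row_time = (1.0 / frame_rate) * readout_fraction / rows_per_frame
effective_sample_rate = 1.0 / row_time

print(f"Per-row sampling interval: {row_time * 1e6:.1f} microseconds")
print(f"Effective sample rate from row readout: {effective_sample_rate:,.0f} Hz")
```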

The researchers’ technique has obvious applications in law enforcement and forensics, but Davis is more enthusiastic about the possibility of what he describes as a “new kind of imaging.”

“We’re recovering sounds from objects,” he says. “That gives us a lot of information about the sound that’s going on around the object, but it also gives us a lot of information about the object itself, because different objects are going to respond to sound in different ways.”

In ongoing work, the researchers have begun trying to determine material and structural properties of objects from their visible response to short bursts of sound. In the experiments reported in the Siggraph paper, they also measured the mechanical properties of the objects they were filming and determined that the motions they were measuring were only about a tenth of a micrometer. That corresponds to five thousandths of a pixel in a close-up image, but from the change of a single pixel's colour value over time, it's possible to infer motions smaller than a pixel.
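As a much simpler illustration of why a five-thousandths-of-a-pixel motion still leaves a measurable trace (this is a toy first-order model, not the algorithm used in the paper), the sketch below shifts a smooth intensity edge by 0.005 pixel and recovers the shift from one pixel's change in value divided by the local spatial gradient. The edge profile and shift size are illustrative assumptions.

```python
# Toy sketch: recovering a sub-pixel shift from a single pixel's
# intensity change, using a first-order brightness-constancy estimate.
import numpy as np

x = np.arange(0.0, 21.0)  # pixel coordinates along one image row

def edge(shift_px, scale=4.0):
    """A smooth intensity edge centred near x = 10, displaced by shift_px."""
    return 1.0 / (1.0 + np.exp(-(x - 10.0 - shift_px) / scale))

reference = edge(0.0)
shifted = edge(0.005)        # the surface moves by 5/1000 of a pixel

p = 10                                                    # a pixel sitting on the edge
gradient = (reference[p + 1] - reference[p - 1]) / 2.0    # local spatial gradient
delta = shifted[p] - reference[p]                         # observed intensity change
estimated_shift = -delta / gradient                       # first-order estimate

print("true shift:      0.00500 px")
print(f"estimated shift: {estimated_shift:.5f} px")
```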

“This is new and refreshing. It’s the kind of stuff that no other group would do right now,” says Alexei Efros, an associate professor of electrical engineering and computer science at the University of California at Berkeley. “We’re scientists, and sometimes we watch these movies, like James Bond, and we think, ‘This is Hollywood theatrics. It’s not possible to do that. This is ridiculous.’ And suddenly, there you have it. This is totally out of some Hollywood thriller. You know that the killer has admitted his guilt because there’s surveillance footage of his potato chip bag vibrating.”

The results are certainly impressive and a little scary. In one example shown in a compilation video, a bag of chips is filmed from 15 feet away, through soundproof glass. The reconstructed audio of someone reciting “Mary Had a Little Lamb” in the same room as the chips isn’t crystal clear, but the words are still easy enough to make out.

Thank you, NakedSecurity, for providing us with this information.

Image and video courtesy of MIT.

Bohs Hansen

