Like pinholes and pinspecks, edges and corners restrict the passage of light rays. Using conventional recording equipment (even iPhones) in broad daylight, Bouman and company filmed a building corner’s “penumbra”: the shadowy area that is illuminated by a subset of the light rays coming from the hidden region around the corner. If there’s a person in a red shirt walking there, for example, the shirt will project a tiny amount of red light into the penumbra, and this red light will sweep across the penumbra as the person walks, invisible to the unaided eye but clear as day after processing.
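The processing that reveals the sweep amounts to subtracting away the static background light. Here is a minimal sketch of that idea (every number below is invented for illustration): a faint moving signal, buried in ambient light and sensor noise, becomes obvious once the temporal mean of the penumbra frames is removed.

```python
import numpy as np

# Toy corner-camera sketch: the penumbra near the corner acts as a 1-D
# "image" of the hidden scene, indexed by angle around the edge.
rng = np.random.default_rng(0)

n_frames, n_angles = 100, 64
ambient = 1000.0                              # strong, constant ambient light
frames = np.full((n_frames, n_angles), ambient)

# A hidden walker adds a tiny amount of light that sweeps across angles.
for t in range(n_frames):
    pos = int(t / n_frames * n_angles)        # walker's angular position
    frames[t, pos] += 0.5                     # far too faint to see raw
frames += rng.normal(0, 0.05, frames.shape)   # sensor noise

# Processing: subtract the per-angle temporal mean; the sweep stands out.
residual = frames - frames.mean(axis=0)
trajectory = residual.argmax(axis=1)          # recovered angular track
```

With the static background removed, the 0.5-unit signal, a 0.05 percent change in brightness, dominates the residual and traces the walker’s path across the penumbra.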
In groundbreaking work reported in June, Freeman and colleagues reconstructed the “light field” of a room — a picture of the intensity and direction of light rays throughout the room — from the shadows cast by a leafy plant near the wall. The leaves act as pinspeck cameras, each blocking out a different set of light rays. Contrasting each leaf’s shadow with the rest reveals its missing set of rays and thus unlocks an image of part of the hidden scene. Accounting for parallax, the researchers can then piece these images together.
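The leaf-by-leaf comparison can be sketched in a toy one-dimensional form (the scene and occluder geometry below are invented for illustration): contrasting the wall’s brightness with a given occluder present versus absent isolates exactly the rays that occluder blocked.

```python
import numpy as np

# Toy pinspeck sketch: five rays from a hidden scene, five "leaves,"
# each leaf blocking exactly one ray.
hidden_scene = np.array([3.0, 1.0, 4.0, 1.0, 5.0])  # unknown ray intensities

# With no occluder, the wall sees the sum of all rays.
wall_full = hidden_scene.sum()

# With leaf i in place, the wall is missing ray i's contribution.
wall_with_leaf = np.array([wall_full - hidden_scene[i] for i in range(5)])

# Subtracting reveals each leaf's blocked ray, one piece of the hidden scene.
recovered = wall_full - wall_with_leaf
```

Each difference recovers one blocked bundle of rays; stitching the bundles together (with parallax corrections, in the real system) rebuilds the light field.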
This light-field approach yields far crisper images than earlier accidental-camera work, because prior knowledge of the world is built into the algorithms. The known shape of the houseplant, the assumption that natural images tend to be smooth, and other “priors” allow the researchers to make inferences about noisy signals, which helps sharpen the resulting image. The light-field technique “requires knowing a lot about the environment to do the reconstruction, but it gives you a lot of information,” Torralba said.
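The effect of a smoothness prior can be illustrated with a standard regularized least-squares inversion, a generic stand-in for the researchers’ algorithms; every quantity below is invented for illustration.

```python
import numpy as np

# Recover a hidden signal x from noisy measurements y = A x + noise.
rng = np.random.default_rng(1)
n = 30
x_true = np.sin(np.linspace(0, np.pi, n))       # smooth hidden signal

A = rng.uniform(0, 1, (n, n))                   # toy light-transport matrix
y = A @ x_true + rng.normal(0, 0.1, n)          # noisy measurement

# Naive inversion amplifies noise. Encoding the prior that natural signals
# are smooth -- minimize ||A x - y||^2 + lam * ||D x||^2, where D takes
# differences of neighboring entries -- stabilizes the inversion.
D = np.diff(np.eye(n), axis=0)                  # (n-1) x n difference matrix
lam = 1.0
x_naive = np.linalg.solve(A, y)
x_reg = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)
```

The regularized solution trades a small bias for a large reduction in noise amplification, which is exactly the bargain the “priors” in the light-field work strike.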
While Freeman, Torralba and their protégés uncover images that have been there all along, elsewhere on the MIT campus, Ramesh Raskar, a TED-talking computer vision scientist who explicitly aims to “change the world,” takes an approach called “active imaging”: He uses expensive, specialized camera-laser systems to create high-resolution images of what’s around corners.
In 2012, realizing an idea he had five years earlier, Raskar and his team pioneered a technique that involves shooting laser pulses at a wall so that a small fraction of the scattered light bounces around a barrier. Moments after each pulse, they use a “streak camera,” which records individual photons at billions of frames per second, to detect the photons that bounce back from the wall. By measuring the flight times of the returning photons, the researchers can tell how far they traveled and thus reconstruct the detailed 3-D geometry of the hidden objects behind the barrier that the photons scattered off. One complication is that you must raster-scan the wall with the laser to form a 3-D image. Say, for instance, there’s a hidden person around the corner. “Then light from a particular point on the head, a particular point on the shoulder, and a particular point on the knee might all arrive [at the camera] at the same exact time,” Raskar said. “But if I shine the laser at a slightly different spot, then the light from the three points will not arrive at the same exact time.” You have to combine all the signals and solve what’s known as the “inverse problem” to reconstruct the hidden 3-D geometry.
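The geometry Raskar describes can be sketched with a toy backprojection in two dimensions (a simplification of the real 3-D, wall-bounce setup; all positions below are hypothetical): each measured flight time confines the hidden point to a thin shell around the laser spot, and scanning several spots lets the shells intersect at the point itself.

```python
import numpy as np

C = 3e8  # speed of light, m/s

# Three laser spots raster-scanned along the wall, one hidden point.
wall_spots = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])
hidden_pt = np.array([0.4, 0.7])                 # unknown target

# Simulated measurement: round-trip flight time from each spot to the target.
dists = np.linalg.norm(wall_spots - hidden_pt, axis=1)
times = 2 * dists / C

# Backprojection: vote for every grid cell consistent with each flight time.
xs = np.linspace(0, 1, 101)
ys = np.linspace(0, 1, 101)
grid = np.stack(np.meshgrid(xs, ys, indexing="ij"), axis=-1)  # (101, 101, 2)
votes = np.zeros((101, 101))
for spot, t in zip(wall_spots, times):
    r = np.linalg.norm(grid - spot, axis=-1)     # distance to every cell
    votes += np.abs(2 * r / C - t) < 1e-11       # cells on the right shell

# The cell where all shells intersect collects the most votes.
i, j = np.unravel_index(votes.argmax(), votes.shape)
estimate = np.array([xs[i], ys[j]])
```

A single spot’s shell is ambiguous, just as Raskar says: many points share one flight time. Only by combining signals from multiple laser positions does the inverse problem pin down the hidden geometry.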
Raskar’s original algorithm for solving the inverse problem was computationally demanding, and his apparatus cost half a million dollars. But significant progress has been made on simplifying the math and cutting costs. In March, a paper published in Nature set a new standard for efficient, cost-effective 3-D imaging of an object — specifically, a bunny figurine — around a corner. The authors, Matthew O’Toole, David Lindell and Gordon Wetzstein at Stanford University, devised a powerful new algorithm for solving the inverse problem and used a relatively affordable single-photon avalanche diode (SPAD) camera — a semiconductor device with a lower frame rate than streak cameras. Raskar, who supervised two of the authors earlier in their careers, called the work “very clever” and “one of my favorite papers.”