Researchers from Stanford University have paired a hardware system similar to those used in autonomous cars with a highly efficient algorithm that can reconstruct three-dimensional hidden scenes from the movement of individual particles of light, or photons.
The paper outlining this research recently appeared in the journal Nature Communications. The lead author of the paper is David Lindell, a graduate student in electrical engineering.
“A lot of imaging techniques make images look a little bit better, a little bit less noisy, but this is really something where we make the invisible visible,” said Gordon Wetzstein, assistant professor of electrical engineering at Stanford and senior author of the paper. “This is really pushing the frontier of what may be possible with any kind of sensing system. It’s like superhuman vision.”
This technique is aimed at applications such as:
1. Navigating self-driving cars through fog or heavy rain.
2. Satellite imaging of the surface of Earth and other planets through a hazy atmosphere.
How does it work?
The system basically consists of a laser and a super-sensitive photon detector.
When the laser scans an obstruction like a wall of foam, as shown above, a few photons manage to pass through the foam, hit the objects hidden behind it, and pass back through the foam to reach the detector.
The reconstruction algorithm then uses those occasional photons, along with information about where and when they hit the detector, to reconstruct the hidden objects in 3D.
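The core idea of using photon arrival times to locate a hidden surface can be sketched in a toy form. This is not the authors' confocal diffuse tomography algorithm; the function names and all numerical values below are illustrative assumptions, reducing the problem to recovering a single depth from noisy round-trip timing.

```python
# Toy sketch only: estimating the depth of a hidden surface from noisy
# photon arrival times. Not the paper's reconstruction algorithm.
import random

C = 3e8  # speed of light, m/s


def simulate_arrivals(depth_m, n_photons=50, jitter_s=1e-10, seed=0):
    """Simulate noisy round-trip arrival times for photons returning
    from a hidden surface at depth_m (toy model, Gaussian timing jitter)."""
    rng = random.Random(seed)
    true_t = 2 * depth_m / C  # round-trip time of flight
    return [true_t + rng.gauss(0, jitter_s) for _ in range(n_photons)]


def estimate_depth(arrival_times_s):
    """Recover depth from the mean arrival time via d = c * t / 2."""
    mean_t = sum(arrival_times_s) / len(arrival_times_s)
    return C * mean_t / 2


times = simulate_arrivals(depth_m=1.0)
print(round(estimate_depth(times), 1))  # recovers roughly 1.0 m
```

Even with only a handful of jittery photon timestamps, averaging recovers the depth well; the real system solves a far harder version of this problem, untangling photons that scattered many times on the way to and from the object.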
Though this is not the only system with such a capability, it overcomes some problems associated with other techniques; for example, some require prior knowledge of how far away the object of interest is.
It is also common that such systems use only information from ballistic photons, which are photons that travel to and from the hidden object through the scattering medium without actually scattering along the way.
“We were interested in being able to image through scattering media without these assumptions and to collect all the photons that have been scattered to reconstruct the image,” said David Lindell. “This makes our system especially useful for large-scale applications, where there would be very few ballistic photons.”
The hardware the researchers used is only slightly more advanced than what autonomous cars already carry; the system's secret lies in the software. Depending on the brightness of the hidden objects, scanning in their tests took anywhere from one minute to one hour, but the algorithm reconstructed the obscured scene in real time and could be run on a laptop.
“You couldn’t see through the foam with your own eyes, and even just looking at the photon measurements from the detector, you really don’t see anything,” said Lindell. “But, with just a handful of photons, the reconstruction algorithm can expose these objects — and you can see not only what they look like, but where they are in 3D space.”
Someday, a descendant of this system could be sent through space to other planets and moons to help see through icy clouds to deeper layers and surfaces. In the nearer term, the researchers would like to experiment with different scattering environments to simulate other circumstances where this technology could be useful.
Journal Reference:
Lindell, D.B., Wetzstein, G. Three-dimensional imaging through scattering media based on confocal diffuse tomography. Nat Commun, 2020. nature.com/articles/s41467-020-18346-3
Press Release: Stanford University