Generating 3D Holograms In Real-Time Using Artificial Intelligence

Virtual reality was once a hot topic, but interest faded over time, and for good reason: it tends to make users feel sick. A VR headset creates the illusion of 3D viewing on a flat, 2D display, which can cause nausea and eye strain. Holograms could be the solution. Yes, this roughly 60-year-old technology could actually solve the problem.

We all know holograms create an exceptional representation of the 3D world around us. Not only are they beautiful, but they also provide us with a shifting perspective based on the viewer’s position, and they allow the eye to adjust the focal depth to alternately focus on foreground and background.

So the question is: why did holograms never become popular when they have such excellent features?

The problem was that traditionally created holograms needed a supercomputer to churn through physics simulations, which was not only time-consuming but could also yield less-than-photorealistic results.

New research from MIT generates holograms almost instantly. The method relies on deep learning and runs efficiently even on a laptop.

“People previously thought that with existing consumer-grade hardware, it was impossible to do real-time 3D holography computations,” says Liang Shi, the study’s lead author and a PhD student in MIT’s Department of Electrical Engineering and Computer Science (EECS). “It’s often been said that commercially available holographic displays will be around in 10 years, yet this statement has been around for decades.”

Shi added that the team's alternative approach, called "tensor holography," could finally make that goal practical, with uses in VR and 3D printing.

Let's look a little deeper. A lens-based photograph encodes only the brightness of each light wave, whereas a hologram encodes both the brightness and the phase of each light wave.

This lets it deliver a truer depiction of a scene's parallax and depth. A hologram brings an object to life, capturing, for example, the 3D texture of every brushstroke in a painting. Holograms are, however, hard to make and share.
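
The difference can be illustrated with a few lines of code. The sketch below is only a toy example with made-up values: a camera sensor records the intensity of a light field, which throws the phase away, while a hologram has to keep both amplitude and phase.

```python
import numpy as np

# A toy complex optical field: amplitude varies across the frame and a
# phase ramp stands in for depth-dependent delays (values are arbitrary).
h, w = 128, 128
y, x = np.mgrid[0:h, 0:w]
amplitude = np.exp(-((x - w / 2) ** 2 + (y - h / 2) ** 2) / (2 * 30 ** 2))
phase = 0.05 * (x + y)
field = amplitude * np.exp(1j * phase)   # the full complex light field

# What a conventional photograph keeps: intensity only; the phase is lost.
photo = np.abs(field) ** 2

# What a hologram must preserve: both amplitude and phase.
holo_amplitude = np.abs(field)
holo_phase = np.angle(field)

print(photo.shape, holo_amplitude.shape, holo_phase.shape)
```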

Turning to the history of holograms, the first ones were developed in the mid-1900s and were recorded optically.

This process required splitting a laser beam, with half the beam used to illuminate the subject and the other half used as a reference for the light waves' phase. This reference is what gives a hologram its unique sense of depth. The resulting images were static, so they couldn't capture motion, and as physical recordings they were hard to copy and share.
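
That optical recording can also be imitated in software by simulating how the two beams interfere, which is essentially the idea behind the computational approach discussed next. Below is a minimal sketch assuming a simplified off-axis geometry and made-up beam parameters, not the exact laboratory setup: the plate records only intensity, yet the interference fringes with the reference wave encode the object's phase.

```python
import numpy as np

h, w = 256, 256
y, x = np.mgrid[0:h, 0:w]

# Object wave: arbitrary amplitude and phase standing in for light
# scattered off the subject (purely illustrative values).
obj = (np.exp(-((x - 100) ** 2 + (y - 120) ** 2) / (2 * 40 ** 2))
       * np.exp(1j * 0.1 * np.sqrt((x - 100) ** 2 + (y - 120) ** 2)))

# Reference wave: a plane wave hitting the plate at a slight angle.
ref = np.exp(1j * 2 * np.pi * 0.05 * x)

# The plate records the intensity of the *sum* of the two beams; the
# cross terms (interference fringes) are what encode the object's phase.
hologram = np.abs(obj + ref) ** 2
print(hologram.shape)
```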

The alternative method, computer-generated holography, skips all this hardship by simulating the optical setup instead. Shi explained that the process, although it sounds simple, hinges on heavy computation: it is demanding even on a supercomputer, because each point in the scene has a different depth and the same operations cannot be applied to all of them.
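
One classic physics-based route, and part of why the computation is so heavy, is the point-source method: every scene point at its own depth contributes a spherical wavelet to every hologram pixel, so the cost grows with (number of points) times (number of pixels). The sketch below is an assumed, naive illustration of that idea with hypothetical scene values, not the paper's algorithm.

```python
import numpy as np

wavelength = 520e-9            # green light, in metres
k = 2 * np.pi / wavelength     # wavenumber
pixel_pitch = 8e-6             # assumed hologram pixel size, in metres
H, W = 256, 256

# A hypothetical scene: a few points, each with x, y, depth z and amplitude.
points = [(-0.2e-3,  0.1e-3, 0.10, 1.0),
          ( 0.3e-3, -0.2e-3, 0.12, 0.8),
          ( 0.0,     0.0,    0.15, 0.5)]

# Physical coordinates of every hologram pixel.
ys, xs = np.mgrid[0:H, 0:W]
xs = (xs - W / 2) * pixel_pitch
ys = (ys - H / 2) * pixel_pitch

# Every point adds a spherical wavelet at every pixel,
# so the cost scales with (number of points) * (number of pixels).
field = np.zeros((H, W), dtype=np.complex128)
for px, py, pz, amp in points:
    r = np.sqrt((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2)
    field += amp * np.exp(1j * k * r) / r

amplitude, phase = np.abs(field), np.angle(field)
print(amplitude.shape, phase.shape)
```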

So the team took a different approach: they let the computer learn the physics by itself.

Using deep learning, they sped up computer-generated holography. They built a convolutional neural network and trained it on a custom dataset of about 4,000 pairs of computer-generated images. Each pair matched a picture, including colour and depth information for each pixel, with its corresponding hologram.
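
To make the setup concrete, here is a hedged PyTorch sketch of what such a network could look like. The class name, layer sizes and output ranges are assumptions for illustration only; the authors' actual tensor holography architecture is described in the paper. The point is the mapping: a 4-channel RGB-D image in, per-pixel amplitude and phase out.

```python
import math
import torch
import torch.nn as nn

class ToyHologramNet(nn.Module):
    """RGB-D in, per-pixel amplitude and phase out.
    Illustrative only; not the tensor holography network from the paper."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, hidden, kernel_size=3, padding=1),   # 4 channels: RGB + depth
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 2, kernel_size=3, padding=1),   # 2 channels: amplitude, phase
        )

    def forward(self, rgbd):
        out = self.net(rgbd)
        amplitude = torch.sigmoid(out[:, :1])        # keep amplitude in [0, 1]
        phase = math.pi * torch.tanh(out[:, 1:])     # keep phase in [-pi, pi]
        return amplitude, phase

model = ToyHologramNet()
amp, phs = model(torch.rand(1, 4, 192, 192))   # one fake RGB-D frame
print(amp.shape, phs.shape)
```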

The team used scenes with complex and variable shapes and colours, with the depth of pixels distributed evenly from the background to the foreground, and with a new set of physics-based calculations to handle occlusion. That approach resulted in photorealistic training data. Next, the algorithm got to work.

By learning from each image pair, the tensor network tweaked the parameters of its own calculations, successively enhancing its ability to create holograms. The fully optimized network operated orders of magnitude faster than physics-based calculations, an efficiency that surprised even the team.
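
That parameter tweaking is ordinary supervised training: predict a hologram from an RGB-D image, compare it with the physics-generated target, and nudge the weights by gradient descent. The loop below is a minimal, self-contained sketch with a stand-in model, random tensors in place of the real dataset, and a plain MSE loss; the paper's training setup is more elaborate.

```python
import torch
import torch.nn as nn

# A stand-in convolutional model (assumed; see the paper for the real one).
model = nn.Sequential(
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 2, 3, padding=1),            # output: amplitude and phase channels
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Random tensors standing in for a batch of the RGB-D / hologram pairs above.
fake_rgbd = torch.rand(8, 4, 128, 128)
fake_holo = torch.rand(8, 2, 128, 128)

for step in range(100):
    pred = model(fake_rgbd)
    loss = loss_fn(pred, fake_holo)   # how far is the prediction from the target hologram?
    optimizer.zero_grad()
    loss.backward()                   # backpropagate the error
    optimizer.step()                  # tweak the network's parameters
```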

The model could craft holograms from images in milliseconds, and the compact tensor network requires less than 1 MB of memory. "It's negligible, considering the tens and hundreds of gigabytes available on the latest cell phone," Shi says.
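
The sub-1 MB figure is easy to sanity-check: with 32-bit weights, a megabyte holds roughly 260,000 parameters. The parameter count below is hypothetical and only shows the arithmetic.

```python
# Hypothetical parameter count for a compact convolutional network.
num_params = 250_000
bytes_per_param = 4                                   # 32-bit floats
size_mb = num_params * bytes_per_param / (1024 ** 2)
print(f"{size_mb:.2f} MB")                            # ~0.95 MB
```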

This development will have an impact on many fields, from VR to 3D printing. Easy to develop and deploy, the technology could help reduce costs, deliver more realistic scenery, and ease the strain on viewers' eyes.

Three-dimensional holography could also boost the development of volumetric 3D printing, the researchers say.

This technology could prove faster and more precise than traditional layer-by-layer 3D printing since volumetric 3D printing allows for the simultaneous projection of the entire 3D pattern. Other applications include microscopy, visualization of medical data, and the design of surfaces with unique optical properties.

Journal Reference:
Liang Shi, Beichen Li, Changil Kim, Petr Kellnhofer, Wojciech Matusik. Towards real-time photorealistic 3D holography with deep neural networks. Nature, 2021; 591 (7849): 234 DOI: 10.1038/s41586-020-03152-0
