Japanese scientists have found a way to generate 3D holograms from 2D images captured by an ordinary camera using deep learning. This opens up numerous possibilities for low-cost 3D displays and immersive virtual worlds.
Holograms have stirred the imagination for decades, conjuring the possibility of immersive virtual worlds. Simple holograms appear on credit cards and ID cards, where they are used to prevent fraud. More elaborate examples feature in countless sci-fi movies.
The 3D imagery offered by holograms holds enormous potential in various fields, including medical imaging, manufacturing, and virtual reality. However, while full-body holograms of humans are possible in Star Wars, they remain out of reach in the real world because of the enormous computational power needed to generate them.
Huge computational power needed
Conventionally, holograms are made by capturing 3D data about how light interacts with an object. However, this approach requires a special camera to record the 3D scene and is very computationally demanding, putting the technology out of reach for most people and limiting its widespread use.
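To see where the computational cost comes from, consider that computing a hologram means simulating how light propagates from the scene to the hologram plane, typically one depth layer at a time. The sketch below is a conceptual illustration, not the researchers' code: it uses the well-known angular spectrum method, with a hypothetical `angular_spectrum` helper, to propagate a single point emitter. A real scene needs one such FFT-based propagation per depth layer, which adds up quickly at display resolutions.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)              # spatial frequencies of the sampled field
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * z * kz)                   # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative parameters: grid size, red laser wavelength, SLM pixel pitch
n, wavelength, dx = 256, 633e-9, 8e-6
layer = np.zeros((n, n), dtype=complex)
layer[n // 2, n // 2] = 1.0                   # a single point emitter in one depth layer
hologram = angular_spectrum(layer, wavelength, dx, z=0.05)
print(hologram.shape)                          # (256, 256)
```

Each propagation costs O(n² log n) per layer, so a scene quantized into dozens of depth layers at megapixel resolution quickly becomes expensive, which is the bottleneck the deep-learning approach is designed to avoid.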
This could all be about to change. Researchers from Chiba University, led by Professor Tomoyoshi Shimobaba, have proposed a game-changing approach that uses neural networks to transform ordinary two-dimensional color images, captured by simple cameras, into 3D holograms, greatly simplifying hologram generation.
Three-dimensional GPS in your car
The scientists believe their method will revolutionize holographic head-up displays in cars, presenting information about people, roads, and signs to passengers in 3D.
“There are several problems in realizing holographic displays, including the acquisition of 3D data, the computational cost of holograms, and the transformation of hologram images to match the characteristics of a holographic display device. We undertook this study because we believe that deep learning has developed rapidly in recent years and has the potential to solve these problems,” said Shimobaba.
The technology will also serve in creating high-fidelity head-mounted 3D displays used for virtual and augmented realities.
Existing deep-learning techniques can already generate holograms from 3D data gathered by RGB-D cameras, which record an object's color and depth. The new method goes further: it needs only an ordinary 2D image, sidestepping many of the computational problems of earlier hologram-generation pipelines.
The new method uses three deep neural networks (DNNs) to transform a normal 2D color image into data that can display a 3D scene as a hologram. The first DNN takes a color image captured by a regular camera and predicts an associated depth map, which encodes the 3D structure of the scene.
Both the original RGB image and the depth map predicted by the first DNN are then fed to the second DNN, which generates a hologram. Finally, the third DNN refines the hologram produced by the second DNN so it can be shown on various display devices.
“Another noteworthy benefit of our approach is that the reproduced image of the final hologram can represent a natural 3D reproduced image. Moreover, since depth information is not used during hologram generation, this approach is inexpensive and does not require 3D imaging devices such as RGB-D cameras after training,” Shimobaba concludes.