Movies have long depicted three-dimensional displays in which characters hold conversations with holograms without the need for glasses or headsets. But making that a reality outside of Hollywood is a completely different story.
Or is it?
A team of researchers led by Ryuji Hirayama, Diego Martinez Plasencia, Nobuyuki Masuda, and Sriram Subramanian from the University of Sussex created the Multimodal Acoustic Trap Display, which can produce visual, auditory, and tactile content all at the same time.
Researchers applied the acoustic tweezer premise
Applying the acoustic tweezers premise, in which small objects can be moved using sound waves, the researchers created a system that traps a particle acoustically and illuminates it with red, green, and blue light to control its color as it scans the display volume. Then, using time multiplexing, the system delivers auditory and tactile content simultaneously.
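The time-multiplexing idea described above can be sketched as a scheduler that interleaves per-modality commands into one ordered timeline, so a single acoustic field serves visual, auditory, and tactile output in turn. This is a minimal illustrative sketch, not the authors' code; the slot names, frame structure, and command strings are all assumptions.

```python
# Hedged sketch of time multiplexing: each display cycle is divided into
# short slots, one per modality, served back to back. Slot order and the
# per-frame command format here are illustrative assumptions only.

def multiplex(frames, slots=("visual", "auditory", "tactile")):
    """Interleave per-modality commands into one ordered timeline."""
    timeline = []
    for frame_index, frame in enumerate(frames):
        for slot in slots:
            # Each frame may supply one command per modality; a missing
            # modality simply skips its slot this cycle.
            if slot in frame:
                timeline.append((frame_index, slot, frame[slot]))
    return timeline

frames = [
    {"visual": "move trap to (0, 0, 1)", "auditory": "tick"},
    {"visual": "move trap to (0, 0, 2)", "tactile": "pulse at hand"},
]
schedule = multiplex(frames)
```

The point of the sketch is the ordering: within every cycle the visual slot runs first, so the levitated particle keeps scanning the volume fast enough to appear as a solid image, while the audio and tactile slots borrow the remaining time.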
"The system demonstrates particle speeds of up to 8.75 meters per second and 3.75 meters per second in the vertical and horizontal directions, respectively, offering particle manipulation capabilities superior to those of other optical or acoustic approaches demonstrated until now," wrote the researchers. "In addition, our technique offers opportunities for non-contact, high-speed manipulation of matter, with applications in computational fabrication and biomedicine."
Researchers make a countdown timer image you can touch
To demonstrate their system, the researchers produced 3D images of a torus knot, a pyramid, and a globe. The images could be seen from any point around the display. Because acoustic fields create the image, the system can also deliver sound and tactile feedback matched to the content being displayed. In one demonstration, the team created an audio-visual countdown timer that users were able to start and stop by tapping the display.
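The start/stop-by-tap interaction from that demonstration can be sketched as a small state machine: a tap toggles whether the timer is running, and a periodic tick decrements it only while it runs. This is a hypothetical sketch of the interaction logic only; the class name, tap detection, and timing are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the countdown-timer demo's interaction logic.
# Tap sensing (via the acoustic field) and real timekeeping are omitted.

class CountdownTimer:
    def __init__(self, seconds):
        self.remaining = seconds
        self.running = False

    def tap(self):
        # In the demo, tapping the display toggles the timer on or off.
        self.running = not self.running

    def tick(self):
        # Called once per second while the content is being rendered.
        if self.running and self.remaining > 0:
            self.remaining -= 1

timer = CountdownTimer(10)
timer.tick()            # not yet started: remaining stays at 10
timer.tap()             # first tap starts the countdown
timer.tick()            # remaining drops to 9
timer.tap()             # second tap pauses it again
```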
"The prototype demonstrated in the work brings us closer to displays that could provide a fully sensorial reproduction of virtual content," the authors said in a report published in the journal Nature.