With the proliferation of 3D gaming, video-based communication, and generally enjoyable gadgets (here’s looking at you, Oculus Rift), it’s safe to say that augmented reality is slowly taking over the culture at large. So far, it has been most successfully implemented as escapist fare, with companies like Snap Inc. creating new ways to fuse budding technology with everyday use.
Enter augmented reality of sound
Google, ever the innovation giant, is currently hoping to lead the charge in augmented reality, and it’s focusing less on what you see and more on what you hear. Resonance Audio is the tech giant’s major push toward bringing a richer sense of hearing into 3D space: a software development kit (SDK) aimed at enhancing VR games and video projects with more nuanced and accurate spatial sound.
According to Google, the goal of Resonance Audio is to properly replicate “how real sound waves interact with human ears and with the environment.” In practice, that means modeling the disruptions and distortions that physical objects create in the sonic landscape, which is a fancy way of saying that the things that fill a space affect the way sound travels through it.
For instance, imagine someone singing out loud while walking through a parking garage, then through an open field. The openness of each space would produce vastly different sounds, to say nothing of the variety of objects one would encounter along the way (stone columns, a flatbed truck, a row of trees, large rock formations), each of which would alter the sound in its own way.
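Two of the physical effects at play here can be sketched in a few lines. The snippet below is purely illustrative (it is not the Resonance Audio API): it models inverse-distance attenuation and the extra delay a wall reflection adds over the direct path, which is part of why a garage and a field sound so different.

```python
SPEED_OF_SOUND = 343.0  # metres per second, in air at roughly 20 degrees C


def distance_gain(distance_m, reference_m=1.0):
    """Inverse-distance attenuation: pressure halves each doubling of distance."""
    return reference_m / max(distance_m, reference_m)


def reflection_delay(direct_path_m, reflected_path_m):
    """Extra time (seconds) a wall reflection takes versus the direct path."""
    return (reflected_path_m - direct_path_m) / SPEED_OF_SOUND


# A listener 2 m from a singer in a garage: the direct sound arrives first,
# and a bounce whose path totals 10 m arrives slightly later and quieter.
print(distance_gain(2.0))            # direct-path gain relative to 1 m
print(reflection_delay(2.0, 10.0))   # reflection lag in seconds
```

In an open field there are few reflecting surfaces, so the reflection term largely disappears; in a garage, dozens of such delayed, attenuated copies pile up into the reverberation we instinctively recognize.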
The sound of the future
Resonance Audio aims to bring that level of sophistication to VR, accounting for many of those same variables. The kit lets developers identify the sources of sound in a scene (whether a subject or something in the environment) and shift the audio so that it moves accurately and directionally. A video game player following a character from behind, for instance, would hear that character differently than when facing them head-on.
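To see why direction matters, here is a hedged sketch (again, not the SDK’s actual API) of the simplest possible directional renderer: it turns a source’s position relative to the listener into left/right ear gains with a constant-power pan.

```python
import math


def azimuth(listener_pos, listener_yaw, source_pos):
    """Angle of the source relative to where the listener faces (radians).

    Listener faces +x when yaw is 0; positive azimuth means "to the left".
    """
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    return math.atan2(dy, dx) - listener_yaw


def stereo_gains(az):
    """Constant-power pan: split signal energy between the left and right ears."""
    pan = math.sin(az)  # -1 = fully right, +1 = fully left
    left = math.sin((pan + 1.0) * math.pi / 4.0)
    right = math.cos((pan + 1.0) * math.pi / 4.0)
    return left, right


# A source directly to the listener's left sends nearly all energy left.
left, right = stereo_gains(azimuth((0.0, 0.0), 0.0, (0.0, 1.0)))
print(left, right)
```

Note the limitation: a plain pan like this gives identical gains for a source straight ahead and one directly behind. Resonance Audio goes further, using head-related transfer functions to encode the subtle spectral cues that let our ears distinguish front from back, which is exactly the behind-versus-in-front difference described above.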
These may seem like minor details, but they go a long way toward producing something more acutely “real.” CPU resources are typically devoted to visual design, meaning that sound is rarely at the forefront of a development kit’s feature set; many projects even ship with only the most basic audio in place.
Getting real in gaming
If successful (and Google is sure to push Resonance Audio through the majority of its products), the SDK could be a quantum leap forward for developers attempting to make their simulated worlds as accurate as possible. Google has steadily been showing interest in the VR and AR world, as made evident by Poly, its attempt at a developer’s one-stop shop for 3D assets and environments. Add in the company’s Android line, Daydream VR platform, and YouTube 360 project, and Resonance Audio fits tidily into Google’s newly forming tech ecosystem.
There are, of course, more wide-ranging possibilities as well. If Resonance Audio delivers a more detailed depiction of how sound behaves in the real world, it could signal a leap forward not just in entertainment, but in digital workflow and telecommunication as well. That level of hyper-detailed rendering of a given environment will be ideal for developers, but it might even be applicable in programs like Google Hangouts, which already uses audio cues to switch camera windows during a conference video call. The possibilities are endless, and with augmented reality slowly becoming the wave of the future, Resonance Audio might help accelerate the process.