Finally! Google has taken a definitive step toward making VR and AR the universally accessible entertainment and information platform we always dreamed they could be.
No, we are not talking about that cardboard, or its flimsy upgrade. And no, this is definitely not the revival of that wonky experimental wearable either. We are talking about Google's latest VR and AR announcements at its recent I/O event.
Freeing VR From Its Shackles
That VR is a dead-end form of entertainment is a criticism repeated ad nauseam throughout its development years. The core of all those complaints is that VR simply requires too much stuff to work. Graphics need hardware. High frame rates need hardware. Optimization requires software. And that doesn't even count control accessories, which can be a nightmare of their own.
Google has instead decided to walk away from all of these baseline requirements by making the VR device completely standalone. That's right: the headset itself is the only thing you will need to experience VR.
There are no specifics on the software and implementation just yet, so the next best question concerns the integrated hardware. Just by looking at a typical VR headset's form factor, we can surmise that optimization will be a particular focus for the standalone device, second only to its specs. After all, it would technically follow in the footsteps of Google Daydream, which was designed primarily around comfortable use.
The upcoming standalone VR headset is currently being developed in collaboration with HTC and Lenovo. According to the announcement, the first commercial sets should ship sometime later this year.
Taking AR to Smart Practicality
Taking another step into the simulated-reality realm, Google also announced upgrades and enhancements to its Tango AR platform. The most interesting is perhaps the item search assistant tool. Sounds like a new Google search function? Quite, but in the physical sense. Think of those hand-holding tutorials in video games that point you at exactly the object you need.
The presentation described the new feature as:
“It’s kind of like GPS. But instead of talking to satellites to figure out where it is, your phone [instead] looks for distinct visual features in the environment, and it triangulates with those.”
Basically, what we have is a smart feature that looks at the user's surroundings and recognizes everything within its field of vision. Google dubbed the feature VPS, or Visual Positioning Service, for its functional resemblance to GPS.
Now that might not sound so amazing, but remember that computers typically cannot analyze visual data without software designed for the specific task. Facial recognition software would have real trouble realizing that it's actually looking at a house or a vehicle. VPS, however, is designed to recognize arbitrary scenes by creating what are called "feature points," which it uses as locational references for anything the device visually comes across. With some database cross-referencing (presumably against data contributed by other VPS users), the user should be able to find the location of the thing they are looking for.
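To make the GPS analogy concrete, here is a toy sketch of the positioning step. Everything in it is hypothetical: the feature names, the database, and the use of measured distances are illustrative stand-ins (real VPS matches high-dimensional visual descriptors and works from camera bearings, not hand-fed distances). The idea is the same, though: cross-reference observed features against a database of known positions, then solve for the spot most consistent with the observations.

```python
import math

# Hypothetical mini-database: feature point -> known (x, y) position in meters.
FEATURE_DB = {
    "shelf_corner": (0.0, 0.0),
    "exit_sign":    (10.0, 0.0),
    "red_pillar":   (0.0, 8.0),
}

def estimate_position(observations):
    """Estimate the device position from observed distances to known
    feature points, via a coarse grid search minimizing squared error.
    `observations` maps feature name -> measured distance in meters."""
    best, best_err = None, float("inf")
    for gx in range(0, 101):          # search x in [0, 10] at 0.1 m steps
        for gy in range(0, 101):      # search y in [0, 10] at 0.1 m steps
            x, y = gx / 10.0, gy / 10.0
            err = 0.0
            for name, dist in observations.items():
                fx, fy = FEATURE_DB[name]
                err += (math.hypot(x - fx, y - fy) - dist) ** 2
            if err < best_err:
                best, best_err = (x, y), err
    return best

# The device "sees" three known features; all three distances here are
# consistent with the device standing at (5, 4).
d = math.sqrt(41)  # distance from (5, 4) to each of the three features
print(estimate_position({"shelf_corner": d, "exit_sign": d, "red_pillar": d}))
# -> (5.0, 4.0)
```

A production system would replace the grid search with a least-squares solver and fuse the result with inertial data, but even this sketch shows why three or more non-collinear reference points pin down a unique position.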
Google believes that VPS can become a very useful tool, universal enough to be incorporated into any mobile OS platform. At the very least, that's what Google has in store for its futuristic visual analysis tool.