Google has posted a video to YouTube of its SXSW presentation detailing different aspects of how it envisions Google Glass coming to life when it launches late this year. The video embedded below highlights how Google Glass actually works over a Wi-Fi and/or Bluetooth connection to a smartphone, which is how it connects to the Internet. The end result is that many of the functions that seem like onboard apps are really cloud-based web apps that can be developed using the Project Glass Mirror API.
In addition to being able to record video and share it to the net, as well as use its onboard GPS to help users navigate their way to a destination, Project Glass users will sign in via OAuth 2, which will allow services like news sources and social networks to push notifications to the Glass heads-up display. Information is presented in Timeline Cards that can contain a combination of text and images, which Glass users will be able to interact with through voice commands.
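To give a sense of how that push model works, here is a minimal sketch of a service inserting a Timeline Card through the Mirror API. It only builds the JSON card and the authorized HTTP request rather than sending it; the endpoint URL and the `READ_ALOUD` menu action reflect the Mirror API as documented, while the token value and helper function names are placeholders for illustration.

```python
import json

# The Mirror API timeline collection; cards POSTed here are pushed to Glass.
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

# Placeholder: a real token comes from the OAuth 2 sign-in the user completes.
ACCESS_TOKEN = "ya29.example-token"

def build_timeline_card(text, image_url=None):
    """Build the JSON body for a simple Timeline Card.

    Cards may mix text and images; a menu item lets the wearer
    respond by voice (here, asking Glass to read the card aloud).
    """
    card = {
        "text": text,
        "menuItems": [{"action": "READ_ALOUD"}],
    }
    if image_url:
        card["attachments"] = [{"contentUrl": image_url}]
    return card

def build_request(card):
    """Assemble the pieces of the authorized insert call (not sent here)."""
    headers = {
        "Authorization": "Bearer " + ACCESS_TOKEN,
        "Content-Type": "application/json",
    }
    return MIRROR_TIMELINE_URL, headers, json.dumps(card)

url, headers, body = build_request(build_timeline_card("Breaking: your headline here"))
print(body)
```

Because the card lives in the cloud and is merely rendered on the device, a news source or social network never runs code on Glass itself; it just POSTs cards like this on the user's behalf.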
“Project Glass is about our relationship to technology. It’s about technology that’s there when you want it but out of the way when you don’t,” says Google developer advocate Timothy Jordan in the 50-minute video. “It feels like technology is getting in the way more than it needs to. And, that’s what we are addressing with Project Glass,” he explains. “It’s so that you can still have access to the technology that you love, but it doesn’t take you out of the moment.”
All of which seems somewhat odd, given that Google Glass is technology that sits directly on your face. However, it does let users engage with technology, information and alerts while still looking directly ahead. Whether this is progress, or even worse than a smartphone user constantly glancing at a handset, will surely be debated. Will it be considered equally rude to sit in a meeting wearing Google Glass, just as it is considered impolite to sneak peeks at your handset from under the table?