At this year’s SXSW Interactive conference, Google’s Timothy Jordan gave attendees a first look at Google Glass and its supporting Mirror API. Google also showed how it is working with early partners to build dedicated apps for the new device: Path, Evernote and The New York Times have each created “mini-apps” for Google Glass.
The New York Times app, for example, shows top story headlines and then lets you listen to the full article by telling Glass to “read aloud”. Google’s own Gmail app has also been ported to Glass; it uses voice recognition to answer emails and shows a visual indicator of who is emailing you. Alongside Gmail, Google+ is built directly into the Glass experience as a sharing platform, though Google’s Jordan said developers will be able to add their own sharing options. Glass also features Google’s text-to-speech technology, a camera for taking pictures and recording video, and voice recognition built directly into the hardware.
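To give a sense of how a feature like the Times’ “read aloud” command is wired up, here is a minimal sketch of the JSON a Mirror API app attaches to a card. The `menuItems` list with built-in actions such as `READ_ALOUD` and `SHARE` comes from the Mirror API’s documented timeline item format; the headline and article text here are invented placeholders.

```python
import json

# Sketch of a timeline card carrying a "read aloud" menu action.
# With the Mirror API, selecting the READ_ALOUD menu item makes Glass
# speak the card's speakableText using the built-in text-to-speech engine.
news_card = {
    "text": "Top story headline",                      # placeholder headline
    "speakableText": "Full article text goes here.",   # placeholder article body
    "menuItems": [
        {"action": "READ_ALOUD"},  # built-in: speak speakableText aloud
        {"action": "SHARE"},       # built-in: share the card with a contact
    ],
}

# The card is sent to Google's servers as a JSON body.
payload = json.dumps(news_card)
```

A real app would POST this body to the Mirror API on the user’s behalf after OAuth authorization; the snippet only builds the request payload.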
Glass works by connecting to Google’s cloud servers through a dedicated app, which pushes and pulls data to the device via Google’s new Mirror API. All of this data is presented on Glass through what Google calls “timeline cards,” which can contain text, images, video and rich HTML. Beyond single cards, Google has also introduced a concept it calls “bundles”: sets of cards that users can navigate either with the small touchpad on the side of Glass or by voice.
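The card-and-bundle model above can be sketched in code. This is a minimal illustration, not a complete client: the endpoint is the Mirror API’s documented timeline URL, but the bundle id, card text and token are placeholder values, and a real app would first obtain an OAuth 2.0 access token and send the body over HTTPS.

```python
import json

# The Mirror API's REST endpoint for inserting timeline items.
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"
TOKEN = "ya29.EXAMPLE"  # placeholder; a real app gets this via OAuth 2.0


def make_card(text, html=None, bundle_id=None):
    """Build the JSON body for one timeline card.

    A card may carry plain text, rich HTML, or both. Cards that share
    the same bundleId are grouped into a bundle the wearer can swipe
    through on the touchpad or navigate by voice.
    """
    card = {"text": text}
    if html:
        card["html"] = html
    if bundle_id:
        card["bundleId"] = bundle_id
    return card


# Three cards sharing one bundleId form a single swipeable bundle.
cards = [make_card(f"Story {n}", bundle_id="top-stories") for n in range(1, 4)]

# Each card is POSTed to MIRROR_TIMELINE_URL with an
# "Authorization: Bearer <TOKEN>" header; here we only build the body.
body = json.dumps(cards[0])
```

The push model is the key design choice: the app never runs on Glass itself, it simply inserts cards into the user’s timeline from the cloud.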
As Glass is a new form factor, Google is pushing a new set of design rules to keep the user experience enjoyable and uncluttered. In a news app, for example, users would not expect to see a full news story; instead, the key headline would be displayed over a relevant picture from the story. Google doesn’t want Glass to be distracting for the user.
There is still no confirmed availability date for Google Glass, but you can see a full demo in action below: