TravGear talks to tech expert Kevin Curran at the IEEE about Google Glass
Put away that map and let your sunglasses show you the way to your hotel. Instead of reaching for your guidebook, have them read aloud to you about the architectural gem in front of you. Then have them read your latest email. And why bother taking your smartphone or camera out when a deliberate blink is all it takes to take a photo, and a tilt of your head is all that’s needed to send it to a loved one? Admittedly, having a face-full of technology isn’t how many of us envisage our trips around the world, but there’s no denying that ‘smart glass’ – effectively having a tiny transparent screen permanently in view – will soon be a common sight at tourist hotspots, in offices and in airports. TravGear.com talked to tech expert Kevin Curran, senior lecturer in Computer Science at the University of Ulster and Technical Expert at the Institute of Electrical and Electronics Engineers (IEEE), about exactly what impact smart glasses will have.
What are smart glasses? How do they work?
KC: Smart glasses are wearable computing devices – primarily a smart pair of glasses with an integrated heads-up display and a power source hidden inside the frame. Smart glasses will allow us to search the Internet and display the results directly in front of our eyes, from maps of where we need to go to the latest stock prices. You can connect the glasses to a phone via Bluetooth or Wi-Fi and use the phone’s 3G or 4G connection, although they are also capable of connecting to the Internet without a phone. A core feature of smart glasses will be the front-facing camera, which can take photos of whatever you are seeing and share them instantly. Most glasses will use a transparent AMOLED display and are location-aware thanks to the inbuilt camera and GPS.
Will smart glasses change people’s behaviour?
KC: Smart glasses will lead to changes in behaviour. They have the potential to be one of the most invasive technologies ever created. The ability to record video and images without the knowledge or permission of those in view will lead to much debate. Already, some bars, strip clubs and restaurants, among others, have announced that they are banning the wearing of smart glasses in their establishments. Much will come down to personal preference: some will not find them an intrusion on their privacy, whilst others will always have their guard up when faced with a Glass user. Apparently a new term, ‘glasshole’, has arisen for someone who is a ‘jerk’ while wearing smart glasses!
What will change in content generation?
KC: Undoubtedly, apps on Glass will need to be designed with its limited navigation and display environment in mind. Much of the interaction is done using voice commands or head tilts, which in many cases severely limits porting traditional apps to the Glass platform. Google has already posted some general guidelines for how apps should interact with Glass. They are keen to emphasise that apps should be designed with Glass in mind and always tested on Glass before release. Apps should concentrate on real-time notifications and react to a user’s actions as soon as possible, and given that Glass is to be worn all day, there should be no “unexpected functionality” surprises. It may seem for now that Google are severely limiting the features of Glass, but perhaps in time many of these restrictions may be lifted.
Take, for instance, Google Glass’s first third-party app, from the New York Times. This app pushes news headlines to the head-mounted display at regular intervals. Early reports state that navigating the news stream is relatively straightforward, and a tilt of the head allows browsing photos and full articles as well.
Google has also released its Glass Mirror API for developers. This allows coders to write what Google calls “Glassware”. Each application communicates with Glass through Google’s servers: the API provides developers with a set of RESTful services and is completely cloud-based, so none of the code actually runs on Glass itself.
Google provides sample projects for Java and Python developers, in addition to client libraries for Go, Ruby, Dart, PHP and .NET. Developers communicate with users through timeline cards, which can include text, rich HTML, images and video. Menu items, such as a system command to read a card aloud, can also be attached to apps.
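To give a flavour of how Glassware works in practice, here is a minimal sketch of the JSON body an app would POST to the Mirror API’s timeline endpoint (https://www.googleapis.com/mirror/v1/timeline) to push a card to a user’s Glass. The field names (`text`, `speakableText`, `menuItems`) and the `READ_ALOUD` action follow the published Mirror API format; the helper function, sample headline and omitted OAuth handling are illustrative only.

```python
import json

def make_timeline_card(text, speakable=True):
    """Build a minimal timeline-card payload with a 'read aloud' menu item."""
    card = {"text": text}
    if speakable:
        # speakableText is what Glass speaks when the user picks READ_ALOUD
        card["speakableText"] = text
        card["menuItems"] = [{"action": "READ_ALOUD"}]
    return card

# In a real Glassware app this JSON would be POSTed (with an OAuth token)
# to https://www.googleapis.com/mirror/v1/timeline by Google's servers.
body = json.dumps(make_timeline_card("Breaking: markets rally on earnings"))
print(body)
```

Because the request goes to Google’s cloud rather than the device, a news app like the New York Times one simply posts a card like this at regular intervals and Google handles delivery to the headset.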
What smart glasses models are available or in development?
KC: Most major companies, such as Apple, Microsoft, Sony and Samsung, have been linked with building a competitor to Google Glass. None, however, seems to have a prototype anywhere near as advanced as Glass. Other offerings come from companies such as Olympus, Innovega, Sensics, Vuzix, Epson, Lumus, Oakley and Baidu.
Vuzix Corp, based in Rochester, New York, has started shipping its M100 smart glasses to developers. The M100 can run any Android app, and Vuzix claims developers are currently working on augmented reality apps in areas such as navigation, gaming and fitness. Similar to Google Glass, the M100 has a built-in video camera and projects images onto an eyepiece in front of the wearer’s view of the real world. Vuzix calls this a wave-guide approach, as it allows developers to overlay information and graphics in the viewer.
Both Google Glass and the M100 however are monocular systems. This means that they both make use of a single eyepiece to deliver an augmented field of view of about 14 degrees. A person’s natural field of view however is more like 180 degrees. To address this limitation, manufacturers are working on binocular systems that look like ordinary sunglasses, in part to achieve a wider field and 3-D viewing.
Epson, for instance, offers the Android-powered Moverio BT-100 smart glasses, which give users the impression of looking at information on an 80-inch screen through a 23-degree field of view. The binocular approach enables Epson to overlay 3-D content in the centre of the field of view, making it much more impressive.
What about Google Glass?
KC: Google Glass is an Android-based augmented reality head-mounted display (HMD) from Google, expected to go on limited release in 2014. The glasses display information in a smartphone-like format, hands-free, and allow users to surf the Web via natural-language voice commands. Aesthetically, they have a minimalist appearance, consisting mostly of an aluminium strip with two nose pads. Project X is the internal Google name for the group responsible for the glasses, led by Google co-founder Sergey Brin. Sergey has indeed shared some photos of himself driving with the device in a mode where a photo is taken every 10 seconds. The glasses can be classified as a wearable computing device, as they are primarily a smart pair of glasses with an integrated heads-up display and a power source hidden inside the frame. One can also scroll and click on information by tilting the head, thanks to the motion-sensing capabilities, and voice can be used for both input and output. No pricing tariffs have been released yet, but we know that the early Explorer editions cost $1,500. Glass works with most Bluetooth-capable phones, but using GPS and SMS through Glass requires the MyGlass companion app, which in turn requires Android 4.0.3+.
The display has a resolution equivalent to a 25-inch high-definition screen viewed from eight feet away; the actual resolution of the Glass display has not been released. The camera is 5 megapixels and is capable of recording 720p video. Photos and videos are uploaded by default to Google+. There is 16GB of on-board flash storage, synced with Google cloud storage, and the device should last a full day in typical usage scenarios. A neat feature is that audio is delivered through a bone conduction transducer, which transmits sound from Glass to the inner ear through the bones of the wearer’s skull. There are adjustable nose pads and a durable frame to fit any face, along with extra nose pads in two sizes. It comes with a micro USB cable and charger, and there is a choice of five colours. Google are also working on versions that can incorporate prescription lenses.
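That “25-inch screen from eight feet” figure is consistent with the roughly 14-degree field of view quoted earlier for monocular displays. A quick back-of-envelope check, assuming a 16:9 screen shape (my assumption, since Google has not specified it):

```python
import math

# Angular width of a 25-inch (diagonal) 16:9 screen viewed from eight feet.
diagonal_in = 25.0
distance_in = 8 * 12  # eight feet, in inches
width_in = diagonal_in * 16 / math.hypot(16, 9)  # horizontal width of the screen
fov_deg = math.degrees(2 * math.atan(width_in / (2 * distance_in)))
print(round(fov_deg, 1))  # roughly 13 degrees
```

This lands close to the ~14-degree augmented field of view mentioned above, which is why binocular designs with wider fields are seen as the next step.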
What are the differences between them?
KC: Difficult to tell at this moment. The main differences are screen resolution, aesthetics, audio, battery life and on-board storage. All that can be ascertained is via patent filings by each of the companies and by comparing the specs of the more advanced smart glasses, such as the Vuzix M100.
Can smart glasses change the business world? How?
KC: Although head-worn displays for augmented reality are not a new idea, these glasses have garnered a lot of attention and, with that, high expectation. Most of this comes from the priority and resources seemingly attached to the project by Google; the rest comes from early reports of the inbuilt features. It really does seem to bring the utopian vision for modern geeks of instant ‘God Mode’ connectivity to the Internet, with the answer to everything at the tip of the nose, as well as perfect navigation updates and an ‘always on’ camera mode. The celebrated blogger Robert Scoble is one of the early advocates of Google Glass, stating “I Just Wore Google’s Glasses For 2 Weeks And I’m Never Taking Them Off”. They have the potential to be one of the most disruptive technology offerings we have ever seen. Some early uses of Glass include translating foreign languages in real time on the display.
What about face-scanning?
KC: One app that I honestly expect to see is one that, upon meeting a person, conducts facial scanning and a Google search in an attempt to ascertain that person’s name. Much easier may be to simply ‘recall’ a person’s name from previous interactions where their image was saved to our private image databases. Basically, apps to help us never forget a name again!
And battery life?
KC: The party pooper as usual may come from the failure of batteries to keep up with the progress in technology. There is also the question of navigation without a keyboard. Both of these are by no means trivial to overcome.