We tested whether wearables like Google Glass could be a viable alternative to mobile handsets for collecting data; this is what we found.
The release of a Google Glass beta has generated speculation about its eventual use and commercial applications. As early recipients of a Google Glass device, my team were somewhat underwhelmed by its ‘clunkiness’ but recognised the potential implications for market research and decided it was worthy of exploratory pilot work.
On the one hand, eye-wear is compelling, perhaps heralding a move away from handset-centric mobile computing and signalling the coming of age of augmented reality. Yet it was important to cut through the hype and make an assessment grounded in the here and now.
Our first challenge was one of definition: the term ‘wearables’ is widely used, yet too broad for anything but the most general statement, since mass-produced products that consumers have always worn (glasses, watches, t-shirts) are being technology-enabled. As well as Google Glass, this umbrella term includes fitness aids like Fitbit, watches like Samsung’s Galaxy Gear and even t-shirts equipped with RFID.
In our paper presented at ESOMAR’s Digital Dimensions event, we proposed two categories – ‘wearable sensors’ (like Fitbit), and ‘wearable user-interface devices’ (such as Google Glass and Samsung Galaxy Gear).
Yet the discussions that followed our presentation suggested that for the purposes of assessing applications to market research, three categories of capability (as opposed to device) would be better. These are ‘wearable sensors’ (measuring biometrics from a worn device), ‘wearable mobile computing’ (the ability to control mobile features from a worn device) and ‘first person media capture’ (capturing video and photos from a worn device).
Each of these capabilities alone has significant and different implications for market research, deserving separate consideration. Since Google Glass and other products are likely to have more than one capability, such categorisation also pre-empts likely semantic difficulties.
Capturing images from a worn device
Our focus was primarily on first-person media capture. Early tests suggested it was the most robust feature of Google Glass, and revealed a number of other, more mature worn devices that capture first-person media. In fact, one of those also has an eye-tracking capability, arguably in the sensor category, suggesting any lines we draw will be subject to at least some blurring.
In assessing the options, we considered three types of device. First, wrist-wear, since Samsung Galaxy Gear had recently been released; but whilst it gives consumers an alternative way to capture media, in essence it offers little more than a mobile handset, and its video capture was limited to 15 seconds.
Second, we considered life-logging cameras, which are worn round the neck and capture photos at intervals. These also capture a continuous, first-person view, albeit less granular and not quite from the same vantage point. Typically, life-logging devices are lower cost and have longer battery life, and respondents would likely feel less self-conscious than those equipped with eye-wear (an issue our initial eye-wear tests had identified). Our chosen brand for the study was “Autographer”.
Third, we considered eye-wear. We consulted Eye Square, a research organisation part-owned by Kantar that specialises in experience research, whose expertise in eye-tracking allowed them to identify a pair of glasses (“Pupil”) similar in size and cost to Google Glass. Pupil captures video as well as data about where a person is looking (eye-tracking).
If you’re an ESOMAR member, you can read the full article in MyESOMAR in the digital copy of Research World. If you are not a member of ESOMAR, you can join and receive a free copy of Research World six times a year, or alternatively you can sign up for a subscription to the magazine in our publications store.