IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications

GlassGesture: Exploring Head Gesture Interface of Smart Glasses

Shanhe Yi, Zhengrui Qin, Ed Novak, Yafeng Yin†, Qun Li
College of William and Mary, Williamsburg, VA, USA
†State Key Laboratory for Novel Software Technology, Nanjing University, China
{syi,zhengrui,ejnovak,liqun}@cs.wm.edu, †yyf@dislab.nju.edu.cn

Abstract—We have seen an emerging trend towards wearables nowadays. In this paper, we focus on smart glasses, whose current interfaces are difficult to use, error-prone, and provide no or insecure user authentication. We thus present GlassGesture, a system that improves Google Glass through a gesture-based user interface, which provides efficient gesture recognition and robust authentication. First, our gesture recognition enables the use of simple head gestures as input.
It is accurate across various wearer activities, regardless of noise. In particular, we significantly improve recognition efficiency by employing a novel similarity search scheme. Second, our gesture-based authentication can identify the owner through features extracted from head movements. We improve authentication performance by proposing new features based on peak analysis and by employing an ensemble method. Finally, we implement GlassGesture and present extensive evaluations. GlassGesture achieves a gesture recognition accuracy of nearly 96%. For authentication, GlassGesture accepts authorized users in nearly 92% of trials and rejects attackers in nearly 99% of trials. We also show that in 100 trials, imitators cannot successfully masquerade as the authorized user even once.

I. INTRODUCTION

In recent years, we have seen an emerging trend towards wearables, which are designed to improve the usability of computers worn on the human body while being more aesthetically pleasing and fashionable at the same time. One category of wearable devices is smart glasses (eyewear), which are usually equipped with a heads-up, near-eye display and various sensors mounted on a pair of glasses. Among many kinds of smart eyewear, Google Glass (Glass for short) is the most iconic product. However, since Glass is a new type of wearable device, the user interface is less than ideal.

On one hand, there is no virtual or physical keyboard attached to Glass. Currently, Glass offers two primary input methods, each of which suffers in many scenarios. First, there is a touchpad mounted on the right-hand side of the device. Tapping and swiping on the touchpad is error-prone for users: 1) The user needs to raise their hand and fingers to the side of their forehead to locate the touchpad and perform actions, which can be difficult or dangerous when the user is walking or driving.
2) Since the touchpad is very narrow and slim, some gestures, such as slide up/down or tap, can be easily confused. 3) When the user puts Glass on their head, or takes it off, it is very easy to accidentally touch the touchpad, causing erroneous input. Second, Glass supports voice commands and speech recognition. A significant drawback is that voice input cannot be used in every scenario; for example, when the user is talking directly with someone, or is in a conference or meeting. An even worse case is that other people can accidentally activate Glass using voice commands, as long as the command is loud enough to be picked up by Glass. Additionally, disabled users are at a severe disadvantage using Glass if they cannot speak, or have lost control of their arms or fine motor skills.

On the other hand, authentication on Glass is very cumbersome and is based solely on the touchpad [1]. As a wearable device, Glass contains rich private information, including point-of-view (POV) photo/video recordings, deep integration of social/communication apps, and personal accounts of all kinds. A severe information leak could occur if Glass were accessed by a malicious user. Thus, any user interface for Glass needs to provide schemes to reject unauthorized access. However, the current authentication on Glass is far from mature: a "password" is set by performing four consecutive swiping or tapping actions on the touchpad, similar to a traditional four-digit PIN code. This system has many problems. First, the entropy is low, as only five touchpad gestures (tap, swipe forward with one or two fingers, or swipe backward with one or two fingers) are available, which form a limited set of permutations. Second, these gestures are difficult to perform correctly on the narrow touchpad, especially when the user is not still. Third, this sort of password is hard to remember because it is unorthodox. Finally, this system is very susceptible to shoulder surfing attacks.
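To make the low-entropy point concrete, a quick back-of-the-envelope calculation (a sketch, not from the paper) compares the Glass gesture password space with that of a standard four-digit PIN:

```python
import math

# Glass lock screen: a "password" is 4 consecutive touchpad actions,
# each drawn from 5 possible gestures (tap, one/two-finger swipe
# forward, one/two-finger swipe backward).
glass_space = 5 ** 4                 # 625 possible passwords
glass_bits = math.log2(glass_space)  # ~9.3 bits of entropy

# Compare with a standard 4-digit numeric PIN.
pin_space = 10 ** 4                  # 10000 possible PINs
pin_bits = math.log2(pin_space)      # ~13.3 bits of entropy

print(glass_space, round(glass_bits, 1))  # 625 9.3
print(pin_space, round(pin_bits, 1))      # 10000 13.3
```

So the Glass scheme offers roughly 6% of the password space of an ordinary PIN, before even accounting for its susceptibility to shoulder surfing.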
Any attacker can easily observe the pattern from possibly several meters away, with no special equipment.

Fig. 1: Head Movements

To solve all of these problems, we propose the use of head gestures (gestures for short) as an alternative user interface for smart eyewear devices like Google Glass. Because head gestures are an intuitive option, we can leverage them as

978-1-4673-9953-1/16/$31.00 ©2016 IEEE