1 INTRODUCTION

With the advancement of rich embedded sensors, mobile or wearable devices (e.g., smartphone, smartwatch) have been widely used in activity recognition [37][21][23][45][26][31][41], and benefit many human-computer interactions, e.g., motion-sensing games [25], sign language recognition [12], in-air writing [1], etc.
As a typical interaction mode, writing in the air has attracted wide attention [9][10][39][36][6]. It allows the user to write characters freely in the air with the arm and hand, without focusing attention on the small screen or tiny keys of a device [2]. As shown in Fig. 1, a user carrying or wearing a sensor-embedded device writes in the air, and the gesture is recognized as a character. Recognizing in-air writing gestures is a key technology for enabling writing gesture-based interactions in the air, and it can be used in many scenarios. For example, a user can "write" commands in the air to control an unmanned aerial vehicle (UAV) while viewing the scene transmitted from the UAV in a virtual reality (VR) headset, thus avoiding having to take off the VR headset and input the commands with a controller. Another example is replacing traditional on-screen text input by "writing" the text message in the air, which allows interaction with mobile or wearable devices that have tiny or no screens. Besides, when one of the user's hands is occupied, typing on a keyboard becomes inconvenient; sensor-assisted in-air input technology can capture hand gestures and lay them out as text or images [1]. Compared with existing handwriting, voice, or camera-based input, in-air writing with inertial sensors tolerates limited screen size, environmental noise, and poor lighting conditions. In this paper, we focus on recognizing in-air writing gestures as characters.

Fig. 1. AirContour: in-air writing gesture recognition based on contours

Many approaches have been proposed for inertial sensor-based gesture recognition. Some data-driven approaches [10][2][7][35][15] tend to extract features from sensor data to train classifiers for gesture recognition, while paying little attention to human activity analysis (see the illustrative sketch at the end of this section). If the user performs gestures with more degrees of freedom, i.e., the gestures have large variations in speed, size, or orientation, this type of approach may fail to recognize them with high accuracy. In contrast, some pattern-driven approaches [1][32][13] try to capture the moving patterns of gestures for activity recognition. For example, Agrawal et al. [1] utilize segmented strokes and a grammar tree to recognize capital letters in a 2D plane. However, due to the complexity of analyzing human activities, this type of approach may predefine the gesture patterns or constrain the gestures to a limited area (e.g., a limited 2D plane), which may degrade the user experience. To track continuous in-air gestures, Shen et al. [29] utilize a 5-DoF arm model and an HMM to track the 3D posture of the arm. However, in 3D space, tracking is not directly linked to recognition, especially when the
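To make the data-driven pipeline mentioned above concrete, the following is a minimal, hypothetical sketch in Python: statistical features are extracted from pre-segmented windows of accelerometer/gyroscope data and fed to a generic classifier. The feature set, window shape, and SVM classifier are illustrative assumptions, not the method of any cited work.

import numpy as np
from sklearn.svm import SVC

def extract_features(window):
    # window: array of shape (T, 6), i.e., T samples of 3-axis accelerometer
    # plus 3-axis gyroscope readings for one segmented gesture.
    feats = [
        window.mean(axis=0),         # per-axis mean
        window.std(axis=0),          # per-axis spread
        np.abs(window).max(axis=0),  # per-axis peak magnitude
        (window ** 2).mean(axis=0),  # per-axis signal energy
    ]
    return np.concatenate(feats)     # 24-dimensional feature vector

# Train on pre-segmented gesture windows (synthetic placeholders here).
rng = np.random.default_rng(0)
windows = [rng.standard_normal((100, 6)) for _ in range(40)]
labels = rng.integers(0, 26, size=40)  # one class per letter 'a'..'z'

X = np.stack([extract_features(w) for w in windows])
clf = SVC(kernel="rbf").fit(X, labels)
pred = clf.predict(extract_features(windows[0])[None, :])

Because such features summarize the signal without modeling how the gesture was performed, variations in writing speed, size, or orientation shift the feature values, which illustrates why purely data-driven classifiers can lose accuracy under unconstrained in-air writing.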