trajectory (e.g., handwriting trajectory) lies in different planes. Therefore, it remains a challenging task to apply existing approaches to recognize in-air writing gestures, which occur in 3D space with more degrees of freedom, while guaranteeing the user experience.

To address the aforementioned issues, in this paper we explore contours to represent in-air writing gestures and propose a novel contour-based gesture model, where the 'contour' is represented as a sequence of coordinate points over time. We use an off-the-shelf wrist-worn device (e.g., a smartwatch) to collect sensor data, and our basic idea is to build a 3D contour model for each gesture and utilize the contour feature to recognize gestures as characters, as illustrated in Fig. 1. Since the gesture contour preserves the essential movement patterns of in-air gestures, it can tolerate the intra-class variability of gestures. It is worth noting that while the proposed contour-based gesture model is applied to in-air writing gesture recognition in this work, it can also be used in sign language recognition and remote control with hand gestures [40]. However, different from 2D contours, building 3D contours presents several challenges, i.e., contour distortion caused by different viewing angles, contour differences caused by different writing directions, and contour distribution across different planes, which make it difficult to recognize 3D contours as 2D characters. To solve this problem, we first describe the range of viewing angles based on the way the device is worn, which indicates the possible writing directions. We then apply Principal Component Analysis (PCA) to detect the principal/writing plane, i.e., the plane in or close to which most of the contour is located. After that, we calibrate the 2D projected contour in the principal plane for gesture/character recognition, while considering the distortion caused by dimensionality reduction and the difference in gesture sizes.

We make the following contributions in this paper.

∙ To the best of our knowledge, we are the first to propose the contour-based gesture model to recognize in-air writing gestures. The model is designed to solve the new challenges in 3D gesture contours, e.g., observation ambiguity and the uncertain orientation and distribution of 3D contours, and to tolerate the intra-class variability of gestures. The contour-based gesture model can be applied not only to in-air writing gesture recognition, but also to many other scenarios such as sign language recognition, motion-sensing games, and remote control with hand gestures.

∙ To recognize gesture contours in 3D space as characters in a 2D plane, we introduce PCA for dimensionality reduction and a series of calibrations for 2D contours. Specifically, we first utilize PCA to detect the principal/writing plane, and then project the 3D contour onto the principal plane for dimensionality reduction. After that, we calibrate the 2D contour in the principal plane through reversing, rotating, and normalizing operations, so that it has the right orientation and a normalized size under a uniform view, i.e., so that the 2D contour is suitable for character recognition (see the sketch after this list).

∙ We conduct extensive experiments to verify the efficiency of the proposed contour-based gesture model. In addition, based on the model, we propose an online approach AC-Vec and an offline approach AC-CNN to recognize 2D contours as characters. The experimental results show that AC-Vec and AC-CNN achieve an accuracy of 91.6% and 94.3%, respectively, for gesture/character recognition, and both outperform the existing approaches.
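To make the plane-detection and projection steps concrete, the following is a minimal sketch rather than the paper's implementation. It assumes the gesture contour is available as an (N, 3) NumPy array of coordinate points, fits the principal/writing plane with PCA (the two leading eigenvectors of the contour's covariance matrix), projects the 3D contour onto that plane, and normalizes the scale of the resulting 2D contour. The function names are ours, and the reversing/rotating calibration, which depends on the wearing orientation and writing direction discussed above, is omitted here.

```python
import numpy as np

def detect_principal_plane(points_3d):
    """Fit the principal/writing plane of a 3D gesture contour via PCA.

    points_3d: (N, 3) array of contour coordinate points over time.
    Returns the contour centroid and a (3, 2) orthonormal basis of the
    plane in or close to which most of the contour lies.
    """
    centroid = points_3d.mean(axis=0)
    cov = np.cov(points_3d - centroid, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    basis = eigvecs[:, [2, 1]]               # two leading principal directions
    return centroid, basis

def project_and_normalize(points_3d):
    """Project the 3D contour onto its principal plane and normalize its size.

    The reversing/rotating calibration that resolves mirror ambiguity and
    writing orientation is omitted in this sketch.
    """
    centroid, basis = detect_principal_plane(points_3d)
    contour_2d = (points_3d - centroid) @ basis        # (N, 2) projection
    # Scale to a uniform size so gesture size does not affect recognition.
    span = np.ptp(contour_2d, axis=0).max()
    return (contour_2d - contour_2d.mean(axis=0)) / max(span, 1e-9)

if __name__ == "__main__":
    # Example: a noisy 'V'-like stroke written roughly in a tilted plane.
    t = np.linspace(0.0, 1.0, 100)
    stroke = np.stack([t, np.abs(t - 0.5),
                       0.1 * t + 0.01 * np.random.randn(100)], axis=1)
    print(project_and_normalize(stroke).shape)          # -> (100, 2)
```

In this sketch, the two largest-eigenvalue directions span the writing plane, while the smallest-eigenvalue direction captures the out-of-plane deviation of the stroke, which is why discarding it performs the dimensionality reduction described above.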
2 RELATED WORK
In this section, we describe and analyze the state-of-the-art related to in-air gesture recognition, tracking, writing in the air, and handwritten character recognition, with a particular focus on inertial-sensor-based techniques.