Fig. 16. Components and workflow of AirContour.

the coordinates $(x^{v}_{p_i}, y^{v}_{p_i})$ of the 2D contour are updated as $(x_{p_i}, y_{p_i})$ based on Equation (11).

$$\Delta\theta_c = \arccos\frac{v_c \cdot v_y}{|v_c|\,|v_y|} \cdot \operatorname{Sgn}(v_c \times v_y). \tag{10}$$

$$\begin{bmatrix} x_{p_i} \\ y_{p_i} \end{bmatrix} = \begin{bmatrix} \cos\Delta\theta_c & -\sin\Delta\theta_c \\ \sin\Delta\theta_c & \cos\Delta\theta_c \end{bmatrix} \begin{bmatrix} x^{v}_{p_i} \\ y^{v}_{p_i} \end{bmatrix}. \tag{11}$$

Normalizing: Considering the size differences among gesture contours, we introduce a normalization operation to mitigate the recognition errors caused by these differences. Specifically, we use $(x_{p_i}, y_{p_i}), i \in [1, n]$ to represent the points in the 2D contour after rotation. Then, we use Equation (12) to update the coordinates of each point, i.e., to normalize the 2D contour. At this point, the 2D contour has been calibrated and will be used for the following character recognition.

$$D = \max_{i \in [1,n]} \sqrt{(x_{p_i})^2 + (y_{p_i})^2}, \qquad x_{p_i} = \frac{x_{p_i}}{D}, \quad y_{p_i} = \frac{y_{p_i}}{D}. \tag{12}$$
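To make the calibration steps above concrete, the following Python sketch applies Equations (10)-(12) to a projected 2D contour: it computes the signed rotation angle between the contour's reference direction and the y-axis, rotates every point, and then scales the contour by its maximum distance from the origin. This is a minimal illustration rather than the authors' implementation: the function name calibrate_contour is ours, and the reference direction v_c is assumed to be supplied by the preceding rotation-calibration step.

```python
import numpy as np

def calibrate_contour(points, v_c, v_y=(0.0, 1.0)):
    """Rotate and normalize a projected 2D gesture contour (Equations (10)-(12))."""
    points = np.asarray(points, dtype=float)   # (n, 2) contour points after reversal
    v_c = np.asarray(v_c, dtype=float)         # reference direction of the contour (given)
    v_y = np.asarray(v_y, dtype=float)         # target direction: positive y-axis

    # Equation (10): rotation angle between v_c and v_y, signed by the
    # z-component of the 2D cross product v_c x v_y.
    cos_angle = np.dot(v_c, v_y) / (np.linalg.norm(v_c) * np.linalg.norm(v_y))
    cos_angle = np.clip(cos_angle, -1.0, 1.0)           # guard against round-off
    sign = np.sign(v_c[0] * v_y[1] - v_c[1] * v_y[0])   # Sgn(v_c x v_y)
    d_theta = np.arccos(cos_angle) * (sign if sign != 0 else 1.0)

    # Equation (11): rotate every contour point by d_theta.
    rot = np.array([[np.cos(d_theta), -np.sin(d_theta)],
                    [np.sin(d_theta),  np.cos(d_theta)]])
    rotated = points @ rot.T

    # Equation (12): normalize by the largest distance D from the origin.
    D = np.max(np.linalg.norm(rotated, axis=1))
    return rotated / D
```

Dividing by D maps the farthest contour point to unit distance from the origin, so characters written at different sizes become directly comparable before recognition.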
5 SYSTEM DESIGN

In Figure 16, we show the key components and workflow of our proposed system AirContour. There are three key components: data collection and pre-processing, contour calculation, and contour recognition. We first collect sensor data and pre-process the data. We then compute gesture contours in 3D space and utilize PCA to select the principal plane onto which the 3D contours are projected as 2D contours. After that, we calibrate the 2D contour through reversing, rotating, and normalizing operations. Finally, we propose an online approach, AC-Vec, and an offline approach, AC-CNN, to recognize 2D contours as characters.

5.1 Data Collection and Pre-processing

In AirContour, sensor data are collected using a wrist-worn device (i.e., a smartwatch) equipped with an accelerometer, a gyroscope, and a magnetometer, as shown in Figure 1. From the acceleration measured by the accelerometer, we further obtain the linear acceleration (linear-acc for short) and the gravity acceleration (gravity-acc for short), according to the API provided by the Android platform [19]. We then pre-process the sensor data by data offset correction [38], noise removal [38], coordinate system transformation, and so on. In coordinate system transformation, we first transform the sensor data from the device coordinate system (device-frame for short) to the fixed earth coordinate frame (earth-frame for short) [34]. Then, we introduce the initial gestures, i.e., extending the arm to the front and dropping the arm downward [34], to establish the human-frame shown