Fig. 16. Components and workflow of AirContour: data collection and pre-processing (offset correction, noise removal, coordinate transformation), contour calculation (3D contour calculation, principal plane selection, projected 2D contour with calibration, reversal and normalization), and contour recognition (CNN-based recognition for real-time character recognition, vector-sequence-based recognition for higher accuracy).

5.1 Data Collection and Pre-processing

In AirContour, sensor data are collected using a wrist-worn device (i.e., a smartwatch) equipped with an accelerometer, a gyroscope and a magnetometer, as shown in Fig. 1. From the measured acceleration, we further obtain the linear acceleration (linear-acc for short) and the gravity acceleration (gravity-acc for short), according to the API provided by the Android platform [19]. We then pre-process the sensor data with data offset correction [38], noise removal [38], coordinate system transformation, etc. In the coordinate system transformation, we first transform the sensor data from the device coordinate system (device-frame for short) to the fixed earth coordinate system (earth-frame for short) [34]. Then, we introduce the initial gestures, i.e., extending the arm to the front and then dropping the arm downward [34], to establish the human-frame shown in Fig. 5(a). After that, we transform the sensor data from the earth-frame to the human-frame [34], to tolerate variations in the orientation of the human body.

5.2 Contour Calculation

After data pre-processing, we calculate the gesture contour in the human-frame. This consists of three main steps: extracting the activity data, calculating the gesture contour in 3D space, and transforming the 3D contour into a 2D contour.

5.2.1 Extracting Activity Data. Intuitively, the start and the end of a writing gesture mean that the hand transforms from the static state to the active state and from the active state back to the static state, respectively. The sensor data between the static-to-active point and the active-to-static point are extracted as the activity data. Suppose the linear-acc at time t is a_t. If a_t ≤ ε_l, then a_t indicates a static state; otherwise, it indicates an active state. Here, ε_l is a constant and is set to 0.8 m/s² by default. If the ratio of active states in a window w_a is larger than ρ_a, the end of this window indicates the start of a writing gesture. On the contrary, if the ratio of static states in a window w_a is larger than ρ_a, the start of the window indicates the end of a writing gesture. In this paper, we set w_a = 15 (i.e., the number of samples in a window) and ρ_a = 85% by default. Similarly, we can extract the activity data based on the gyroscope data. Finally, we select the sensor data in the common segment extracted from the linear-acc and the gyroscope data as the activity data.

5.2.2 Calculating Gesture Contour in 3D Space. With the extracted activity data, we calculate the contour of the in-air writing gesture. Considering the uncontrollable accumulated error of continuous double integration, we introduce segmented integration and velocity compensation [17][5] for contour calculation. We utilize the points where the gyroscope data are close to zero (i.e., below a threshold) to split the writing process into multiple segments. Then, we reset the velocity at the start and the end of each segment to zero, to suppress velocity drift. In

ACM Trans. Sensor Netw., Vol. 1, No. 1, Article 1. Publication date: January 2019.
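To make the window-based detection of Sec. 5.2.1 concrete, the following sketch (our own illustration, not the authors' code) flags a sample as active when its linear-acc magnitude exceeds ε_l, then scans sliding windows for the gesture start and end, using the defaults ε_l = 0.8 m/s², w_a = 15 and ρ_a = 85% from the text; the function name and array layout are assumptions:

```python
import numpy as np

EPS_L = 0.8    # m/s^2, linear-acc threshold for "static" (paper default)
W_A = 15       # window size in samples (paper default)
RHO_A = 0.85   # ratio threshold within a window (paper default)

def detect_activity(linear_acc):
    """Return (start, end) sample indices of one writing gesture.

    linear_acc: (N, 3) array of linear-acc samples in m/s^2.
    A sample is 'active' when its magnitude exceeds EPS_L.
    """
    active = np.linalg.norm(linear_acc, axis=1) > EPS_L
    start = end = None
    for i in range(len(active) - W_A + 1):
        win = active[i:i + W_A]
        if start is None and win.mean() > RHO_A:
            # window dominated by active samples: its end marks the gesture start
            start = i + W_A - 1
        elif start is not None and (~win).mean() > RHO_A:
            # window dominated by static samples: its start marks the gesture end
            end = i
            break
    return start, end
```

Per the text, the same scan would also be run on the gyroscope magnitudes, and the overlap of the two detected segments taken as the activity data.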
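The segmented integration with zero-velocity compensation described in Sec. 5.2.2 can be sketched as follows. This is a simplified illustration under our own assumptions (fixed sampling interval, a single gyroscope-magnitude threshold, velocity reset whenever the angular rate is near zero); the compensation in [17][5] is more elaborate:

```python
import numpy as np

def integrate_segments(linear_acc, gyro_mag, dt=0.01, gyro_eps=0.05):
    """Double-integrate linear-acc into a 3D trajectory, resetting the
    velocity to zero wherever the angular rate is (nearly) zero, which
    approximates the paper's per-segment zero-velocity compensation.

    linear_acc: (N, 3) linear acceleration in the human-frame, m/s^2
    gyro_mag:   (N,) angular-rate magnitude, rad/s
    dt:         sampling interval in seconds (assumed constant)
    """
    n = len(linear_acc)
    vel = np.zeros((n, 3))
    pos = np.zeros((n, 3))
    for i in range(1, n):
        if gyro_mag[i] < gyro_eps:
            vel[i] = 0.0  # segment boundary: suppress accumulated velocity drift
        else:
            vel[i] = vel[i - 1] + linear_acc[i] * dt
        pos[i] = pos[i - 1] + vel[i] * dt
    return pos
```

Resetting the velocity at segment boundaries keeps the integration error from one stroke segment from leaking into the next, which is the point of the segmented integral.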