We evaluate the person-dependent performance and the person-independent performance of each approach in character recognition. In the experiments, we use an LG Watch Urbane running on the Android platform (Android Wear 1.1.0 and Android OS 5.1.1) as the wrist-worn device, which captures in-air writing gestures with its embedded accelerometer, gyroscope, and magnetometer. The sampling rate of each sensor is set to 50 Hz. We recruit 14 subjects and collect data over a period of four weeks. Subjects write in the air with the smartwatch, as shown in Figure 1 and Figure 6.

6.1 Efficiency of Contour-based Gesture Model

As shown in Figure 16, AirContour converts sensor data to contours for gesture recognition. To verify the efficiency of the proposed contour-based gesture model, we first evaluate the components related to the model, including sensor data processing and contour calculation, i.e., coordinate system transformation, data-to-contour transformation, 3D-contour-to-2D-contour transformation, principal plane selection, and 2D contour calibration. In each test, we alter only one of these five aspects while keeping the rest unchanged. In the experiments, we invite six subjects to write in the air, and each subject writes each character five times. We allow a certain degree of variation in writing, i.e., a subject may hold the device in different ways, write slowly or quickly, use different gesture sizes, write toward different directions, and so on. To evaluate the contour-based gesture model, we keep AC-Vec and AC-CNN unchanged; the only difference lies in the input fed to AC-Vec and AC-CNN. Unless otherwise specified, we use 5-times 5-fold cross-validation to evaluate the effect of each aspect.
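To make the evaluation protocol concrete, the sketch below shows 5-times 5-fold cross-validation (5-fold cross-validation repeated five times) in Python with scikit-learn. The feature matrix X, the label vector y, and the logistic-regression classifier are illustrative placeholders standing in for the contour inputs and the AC-Vec/AC-CNN recognizers; they are not the paper's implementation.

```python
# Sketch of the 5-times 5-fold cross-validation protocol used in the
# experiments. X holds one feature row per gesture sample and y the
# character labels; the classifier is a placeholder, not AC-Vec/AC-CNN.
import numpy as np
from sklearn.model_selection import RepeatedKFold
from sklearn.linear_model import LogisticRegression

def cross_validate(X: np.ndarray, y: np.ndarray) -> float:
    """Return the mean accuracy over 5 repetitions of 5-fold CV."""
    rkf = RepeatedKFold(n_splits=5, n_repeats=5, random_state=0)
    accuracies = []
    for train_idx, test_idx in rkf.split(X):
        clf = LogisticRegression(max_iter=1000)  # placeholder classifier
        clf.fit(X[train_idx], y[train_idx])
        accuracies.append(clf.score(X[test_idx], y[test_idx]))
    return float(np.mean(accuracies))
```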
6.1.1 Coordinate System Transformation. In AirContour, we transform the sensor data from the device-frame to the human-frame. To verify the efficiency of the coordinate system transformation, we compare gesture contours calculated in the device-frame with those calculated in the human-frame in terms of character recognition accuracy. As shown in Figure 19, the accuracy in the device-frame is inferior to that in the human-frame. Take AC-CNN as an example: the accuracy in the device-frame is 77.6%, while in the human-frame it is 89.1%. This is mainly because the device-frame changes continuously as the wrist moves, which makes it difficult to calculate the gesture contour accurately, distorts the 3D contours, and disturbs the contour calibration. Therefore, the coordinate system transformation is necessary; it paves the way for the subsequent contour calculation and calibration.

6.1.2 Data-to-Contour Transformation. Instead of directly using the collected sensor data for gesture recognition, AirContour transforms the sensor data to contours. To evaluate the efficiency of the data-to-contour transformation, we compare the raw sensor data and the final calibrated 2D contour in terms of character recognition accuracy. As shown in Figure 20, the performance using sensor data is inferior to that using the gesture contour. Take AC-Vec as an example: the accuracy using sensor data is 62.4%, while using the gesture contour it is 91.0%. This is mainly because the raw sensor data can hardly tolerate the intra-class variability of gestures, e.g., gestures of the same character may differ in size, orientation, and so on. Therefore, transforming sensor data to gesture contours is meaningful.
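The two steps above can be illustrated with a short sketch: rotating device-frame accelerations into the human-frame, then doubly integrating the result into a normalized 3D contour. The per-sample rotation matrices R are assumed to come from a separate orientation-estimation stage, the accelerations are assumed to be gravity-free, and the drift correction a practical tracker needs is omitted; this is a minimal illustration under those assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the steps evaluated in Sections 6.1.1-6.1.2:
# (1) rotate device-frame accelerations into the human-frame using
#     per-sample rotation matrices R[i] (assumed to come from a
#     separate orientation/sensor-fusion stage, not shown here), and
# (2) doubly integrate acceleration -> velocity -> position to obtain
#     a 3D gesture contour, then normalize its scale so that gesture
#     size no longer matters. Integration drift correction is omitted.
import numpy as np

FS = 50.0        # sampling rate (Hz), as in the experiments
DT = 1.0 / FS    # sampling interval (s)

def to_human_frame(acc_dev: np.ndarray, R: np.ndarray) -> np.ndarray:
    """Rotate each device-frame sample a_i by its rotation matrix R_i.

    acc_dev: (N, 3) gravity-free accelerations; R: (N, 3, 3) matrices.
    """
    return np.einsum('nij,nj->ni', R, acc_dev)

def to_contour(acc_human: np.ndarray) -> np.ndarray:
    """Doubly integrate acceleration into a centered, size-normalized
    3D position trace, i.e., the 3D gesture contour."""
    vel = np.cumsum(acc_human, axis=0) * DT
    pos = np.cumsum(vel, axis=0) * DT
    pos -= pos.mean(axis=0)                   # center the contour
    scale = np.abs(pos).max()
    return pos / scale if scale > 0 else pos  # normalize gesture size
```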
6.1.3 3D-Contour-to-2D-Contour Transformation. In Section 4, we describe how to transform a 3D contour into a 2D contour for character recognition. To verify the necessity of this transformation, we use both the 3D contours and the calibrated 2D contours to test character recognition accuracy. Figure 21 shows that the performance of 3D contours is worse than that of 2D contours, for both AC-Vec and AC-CNN. Take AC-Vec as an example: the character recognition accuracy of 3D contours is 62.3%, while that of 2D contours is 91.0%. This is mainly because 3D contours suffer from problems such as confused viewing angles and uncertain writing directions, as described in Section 3.2. It is difficult to directly compare the similarity between 3D contours.
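As a rough illustration of how a 3D contour can be reduced to a 2D one, the sketch below finds the principal (writing) plane of the 3D points via PCA and projects the contour onto it. The paper's principal plane selection and 2D calibration in Section 4 involve additional rules (e.g., resolving the viewing angle and reversal ambiguity), so this shows only the core projection step.

```python
# Sketch of reducing a 3D contour to a 2D contour: estimate the
# principal (writing) plane of the 3D points via PCA and project the
# points onto it. This illustrates the idea of principal plane
# selection; the paper's full method (Section 4) goes beyond plain PCA.
import numpy as np

def project_to_principal_plane(contour3d: np.ndarray) -> np.ndarray:
    """Project an (N, 3) contour onto the plane spanned by its two
    largest principal components, returning an (N, 2) contour."""
    centered = contour3d - contour3d.mean(axis=0)
    # Eigen-decomposition of the covariance gives the principal axes.
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    plane_axes = eigvecs[:, -2:]   # eigh sorts ascending: take top two
    return centered @ plane_axes
```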