in Figure 5(a). After that, we transform the sensor data from the earth-frame to the human-frame [34] to tolerate the direction variation of the human body.

5.2 Contour Calculation

After data pre-processing, we calculate the gesture contour in the human-frame. This consists of three main steps: extracting activity data, calculating the gesture contour in 3D space, and transforming the 3D contour to a 2D contour.

5.2.1 Extracting Activity Data. Intuitively, the start and the end of a writing gesture mean that the hand transforms from the static state to the active state and from the active state to the static state, respectively. The sensor data between the static-to-active point and the active-to-static point are extracted as the activity data. Suppose the linear-acc at time t is a_t. If a_t ≤ ϵ_l, then a_t indicates a static state; otherwise, it indicates an active state. Here, ϵ_l is a constant, set to 0.8 m/s² by default. If the ratio of active states in a window w_a is larger than ρ_a, then the end of this window indicates the start of a writing gesture. On the contrary, if the ratio of static states in a window w_a is larger than ρ_a, then the start of the window indicates the end of a writing gesture. In this article, we set w_a = 15 (i.e., the number of sampling data in a window) and ρ_a = 85% by default. Similarly, we can extract the activity data based on the gyroscope data. Finally, we select the sensor data in the common extracted segment from the linear-acc and gyroscope data as the activity data.
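To make the windowed thresholding concrete, the following Python sketch extracts the activity segment from linear-acc data. It is a minimal illustration, not the article's implementation: the function name, the (N, 3) array layout, and the use of the acceleration magnitude against ϵ_l are our assumptions; the thresholds follow the defaults above.

```python
import numpy as np

EPS_L = 0.8    # static/active threshold on linear-acc (m/s^2), as in the article
W_A = 15       # window size in samples (w_a)
RHO_A = 0.85   # required ratio of active (or static) states in a window (rho_a)

def extract_activity_segment(linear_acc):
    """Return (start, end) sample indices of the writing gesture.

    linear_acc: (N, 3) array of linear acceleration in the human-frame.
    A sample is 'active' when its acceleration magnitude exceeds EPS_L.
    """
    active = np.linalg.norm(linear_acc, axis=1) > EPS_L
    start = end = None
    for i in range(len(active) - W_A + 1):
        ratio_active = active[i:i + W_A].mean()
        if start is None and ratio_active > RHO_A:
            start = i + W_A           # end of this window marks the gesture start
        elif start is not None and (1.0 - ratio_active) > RHO_A:
            end = i                   # start of this window marks the gesture end
            break
    return start, end
```

The same routine can be run on the gyroscope magnitudes, and the overlap of the two extracted segments gives the final activity data.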
5.2.2 Calculating Gesture Contour in 3D Space. With the extracted activity data, we calculate the contour of the in-air writing gesture. Considering the uncontrollable accumulated error of continuous double integration, we introduce segmented integration and velocity compensation [5, 17] for contour calculation. We utilize the gyroscope data close to zero (i.e., below a threshold) to split the writing process into multiple segments. Then, we reset the velocity at the start and the end of each segment to zero to suppress velocity drift. Within each segment, we use velocity compensation to mitigate the computation error of the velocity. With the calibrated velocity, we calculate the gesture contour in 3D space by integration. In this way, although the calculated contour can be smaller or larger than the actual contour, it keeps the important contour features (e.g., shape and orientation), which are essential to recognizing contours as characters, as shown in Figures 10(a)–14(a).
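The sketch below illustrates one possible realization of this segmented integration, assuming a linear de-drifting of the velocity within each segment (so that the velocity returns to zero at the pause that closes the segment); the gyroscope threshold eps_g, the function name, and the exact compensation scheme are assumptions, not the article's code.

```python
import numpy as np

def contour_3d(linear_acc, gyro, dt, eps_g=0.1):
    """Double-integrate acceleration into a 3D contour, segment by segment.

    Segments are split where the gyroscope magnitude is near zero (< eps_g);
    the residual velocity drift inside each segment is removed linearly so
    that the velocity is zero again at each segment boundary.
    """
    still = np.linalg.norm(gyro, axis=1) < eps_g
    bounds = [0] + [i for i in range(1, len(still)) if still[i] and not still[i - 1]] + [len(still)]
    pos = np.zeros_like(linear_acc)
    p = np.zeros(3)                                   # position carried across segments
    for s, e in zip(bounds[:-1], bounds[1:]):
        if e - s < 2:
            pos[s:e] = p
            continue
        v = np.cumsum(linear_acc[s:e] * dt, axis=0)   # velocity by first integration
        v -= np.outer(np.linspace(0.0, 1.0, e - s), v[-1])  # linear velocity compensation
        pos[s:e] = p + np.cumsum(v * dt, axis=0)      # position by second integration
        p = pos[e - 1]
    return pos
```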
5.2.3 Transforming 3D Contour to 2D Contour. As described in Section 4, we first introduce Principal Component Analysis (PCA) to detect the principal/writing plane of the 3D contour. Then, we calibrate the projected 2D contour in the principal plane through reversing, rotating, and normalizing operations. After that, the calibrated 2D contour will be used for character recognition.
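As a minimal sketch of the PCA-based plane detection, the following code projects a 3D contour onto the plane spanned by its two leading principal components; the subsequent reversing, rotating, and normalizing calibration of Section 4 is not shown, and the function name is our own.

```python
import numpy as np

def project_to_principal_plane(contour):
    """Project an (N, 3) contour onto its principal (writing) plane via PCA."""
    centered = contour - contour.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    order = np.argsort(eigvals)[::-1]      # sort eigenvectors by descending variance
    basis = eigvecs[:, order[:2]]          # the two axes spanning the writing plane
    return centered @ basis                # (N, 2) projected contour
```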
5.3 Contour Recognition

To recognize the calibrated 2D contours as characters, we utilize the overall space-time distribution of the contour, i.e., the relative positions among the contour points and the shape changes over time. Specifically, we propose an online approach, AC-Vec, and an offline approach, AC-CNN.

Vector sequence-based recognition approach: Considering the distribution of the contour, we first propose a vector sequence-based recognition approach, AC-Vec. As shown in Figure 17, we sequentially and evenly select m points in the contour. Suppose the origin of coordinates in the principal plane is (x_p0, y_p0) and the coordinates of the ith selected point are (x_pi, y_pi). Then, we can get the vector from the origin of coordinates to the ith selected point as nd_i = (x_pi − x_p0, y_pi − y_p0), as shown in Figure 17. By putting the m coordinate vectors together, we get a feature vector (nd_1, nd_2, ..., nd_{m−1}, nd_m) that describes the distribution of the contour in the principal plane.
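A sketch of this feature construction is given below, assuming the calibrated contour is an (N, 2) array and that the coordinate origin of the principal plane is (0, 0) after calibration; the value m = 32 is a placeholder, as the article does not fix m at this point.

```python
import numpy as np

def ac_vec_features(contour_2d, m=32, origin=(0.0, 0.0)):
    """Stack the m vectors nd_i = (x_pi - x_p0, y_pi - y_p0) into one feature vector."""
    idx = np.linspace(0, len(contour_2d) - 1, m).round().astype(int)
    vectors = contour_2d[idx] - np.asarray(origin)   # (m, 2) vectors from the origin
    return vectors.reshape(-1)                       # 2m-dimensional feature vector
```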