Fig. 14. Evaluation results: (a) recognition accuracy of letters; (b) recognition accuracy of different users; (c) recognition accuracy comparison with LeapMotion; (d) illustration of finger tracking; (e) distance error of tracking different shapes; (f) confusion matrix of multi-touch gestures; (g) accuracy of multi-touch gestures across users; (h) training accuracy of the CNN.

The letters "a", "f", "h", "k", and "y" are correctly recognized with 100% accuracy due to their distinct shapes.

Moreover, we evaluate the robustness of RF-finger by comparing the recognition accuracy across different users. We also vary the size of the candidate set produced by LipiTk for comparison. As shown in Figure 14(b), all the users achieve more than 75% accuracy based on the first candidate alone. As we increase the number of candidates to three, the accuracy rises above 85%, meaning that the letters can be correctly recognized from the first three candidates with more than 85% probability. In particular, user 3 achieves the highest recognition accuracy of 94%, while the lowest accuracy, 84%, belongs to user 4. Therefore, RF-finger is robust in recognizing letters written by different users.
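The candidate-based evaluation above is essentially a top-k accuracy computed over the ranked candidate lists returned by the handwriting recognizer. The following is a minimal sketch of that metric, assuming each writing sample comes with the ordered candidate list produced by LipiTk; the function names and data layout are illustrative assumptions, not part of RF-finger.

```python
def top_k_accuracy(samples, k):
    """Fraction of samples whose true letter appears among the first k candidates.

    `samples` is a list of (true_letter, ranked_candidates) pairs, where
    ranked_candidates is the ordered candidate list from the recognizer
    (e.g., LipiTk). The data layout is an assumption for illustration.
    """
    hits = sum(1 for truth, candidates in samples if truth in candidates[:k])
    return hits / len(samples)

def per_user_accuracy(samples_by_user, k=3):
    """Compute top-k accuracy separately for each user (cf. Figure 14(b))."""
    return {user: top_k_accuracy(samples, k)
            for user, samples in samples_by_user.items()}

# Hypothetical usage: two writing samples from one user.
samples_by_user = {
    "user1": [("a", ["a", "d", "o"]), ("k", ["h", "k", "x"])],
}
print(per_user_accuracy(samples_by_user, k=1))  # {'user1': 0.5}
print(per_user_accuracy(samples_by_user, k=3))  # {'user1': 1.0}
```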
Additionally, we compare the letter recognition accuracy of RF-finger with that of LeapMotion by varying the number of candidates. As shown in Figure 14(c), it is encouraging to find that the accuracy of RF-finger is only 3% to 6% lower than that of LeapMotion, which validates the accuracy of RF-finger. In particular, RF-finger achieves about 89% recognition accuracy with three candidates, while LeapMotion achieves 92%. Therefore, RF-finger recognizes finger writings with accuracy comparable to the video-based technique (i.e., LeapMotion).

C. Finger Tracking of Shapes

Next, we evaluate the accuracy of finger tracking by comparing the shapes tracked by RF-finger with the ground-truth shapes drawn on paper. In particular, we test four basic shapes, i.e., rectangle, triangle, circle, and heart (□, △, ○, ♥). Figure 14(d) illustrates the traces recovered by RF-finger, which include □, △, ○, ♥ and the letters "a", "k", "m", "s", and "z". All the finger traces can be easily recognized with little distortion. Besides, all the traces are written within a 15 cm × 15 cm square, indicating that RF-finger can track the trajectory with fine-grained resolution.

Furthermore, we compare the traces of RF-finger with the ground truth on the paper. In particular, we use DTW to map each location in the trace of RF-finger to the ground truth on the paper, and use the average DTW distance to characterize the tracking accuracy of RF-finger, as shown in Figure 14(e). We find that three of the shapes have an average error as low as 1 cm, while the error for rectangles is about 2.3 cm. Through in-depth investigation, we find that all the tracked rectangles are easily recognized (similar to Figure 14(d)), but they are distorted by some rotation, leading to a slightly higher tracking error than the other shapes. Overall, RF-finger is able to accurately track the finger trace with a small error.
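To make the DTW-based error metric concrete, the sketch below aligns a tracked 2-D trace with its ground-truth shape using the standard DTW dynamic program and reports the average Euclidean distance along the warping path. This is one plausible reading of the "average DTW distance" described above, not the authors' implementation, and the sampled traces in the usage example are made up.

```python
import math

def dtw_average_distance(trace, truth):
    """Average Euclidean distance between DTW-matched points of two 2-D traces.

    `trace` and `truth` are lists of (x, y) tuples (e.g., in cm). The total
    alignment cost is divided by the warping-path length to obtain an average
    per-match error, which serves as the tracking-error metric here.
    """
    n, m = len(trace), len(truth)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]   # accumulated DTW cost
    steps = [[0] * (m + 1) for _ in range(n + 1)]    # length of the best path
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(trace[i - 1], truth[j - 1])
            best = min((cost[i - 1][j], steps[i - 1][j]),
                       (cost[i][j - 1], steps[i][j - 1]),
                       (cost[i - 1][j - 1], steps[i - 1][j - 1]))
            cost[i][j] = d + best[0]
            steps[i][j] = 1 + best[1]
    return cost[n][m] / steps[n][m]

# Hypothetical usage: a slightly noisy square versus the ideal 15 cm square.
truth = [(0, 0), (15, 0), (15, 15), (0, 15), (0, 0)]
trace = [(0.5, -0.4), (14.8, 0.3), (15.6, 14.7), (-0.2, 15.3), (0.4, 0.2)]
print(round(dtw_average_distance(trace, truth), 2))  # about 0.5 cm
```

In practice the tracked trace contains many more samples than the drawn shape, which is exactly the situation DTW handles by allowing one-to-many matches along the path.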
D. Multi-touch Gesture Recognition

Finally, we evaluate the performance of multi-touch gesture recognition using the CNN-based classification algorithm. Figure 14(f) presents the confusion matrix for classifying the 6 gestures. We find that 5 of the 6 gestures achieve over 90% recognition accuracy. Even though these gestures are not performed at exactly the same position over the tag array, the CNN model can still classify them correctly via the local properties of the images, e.g., the relative positions of the fingers in different periods. The average accuracy over all gestures is as high as 92%, indicating that RF-finger can be used to accurately recognize multi-touch gestures.

We also show the robustness of the CNN model by comparing the recognition accuracy across different users. All 10 users perform the 6 gestures in front of the tag array, each choosing a random position over the tag array. As shown in Figure 14(g), the proposed method achieves around 90% accuracy for most of the users. In particular, the lowest accuracy is as high as 89%, while the highest is 94%. Therefore, RF-finger can accurately classify the multi-touch gestures based on the features extracted by the CNN model.

Besides, we also present the learning curve of our CNN model, as shown in Figure 14(h). We randomly choose 1440 gestures out of the 1800 collected to train the CNN model. The parameters in each CNN layer are updated automatically in each epoch to improve the recognition accuracy on the training dataset. In particular, we find that the CNN model achieves as high
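The text above does not spell out the network configuration, so the following PyTorch sketch is only illustrative: a small CNN that classifies single-channel tag-array feature "images" into the 6 gestures and is trained on the 1440-sample split mentioned above. The input resolution, channel counts, kernel sizes, and number of epochs are our assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    """Small CNN for 6-way multi-touch gesture classification (illustrative only)."""
    def __init__(self, num_classes: int = 6):
        super().__init__()
        # Two conv/pool stages over an assumed 32x32 single-channel input.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Hypothetical training loop over a 1440-sample training split.
model = GestureCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
images = torch.randn(1440, 1, 32, 32)   # placeholder tag-array feature images
labels = torch.randint(0, 6, (1440,))   # placeholder gesture labels
for epoch in range(10):                 # the actual training runs for many more epochs
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        accuracy = (model(images).argmax(dim=1) == labels).float().mean()
    print(f"epoch {epoch}: loss={loss.item():.3f}, train accuracy={accuracy.item():.3f}")
```

The key property exploited above is translation tolerance: because convolution and pooling respond to local patterns such as the relative finger positions, the gesture can be classified even when it is performed at different positions over the tag array.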