1 INTRODUCTION

Gesture-based Human-Computer Interaction (HCI) embraces a growing number of practical applications, enabled by the increasing popularity of electronic devices with gesture recognition capabilities. A recent survey reveals that the global gesture recognition market is anticipated to reach USD 48.56 billion by 2024 [4]. In particular, the success of Microsoft Kinect [8] in tracking human gestures on gaming consoles has prompted many emerging applications to adopt gesture recognition solutions in fields such as healthcare, smart homes, and mobile robot control. For example, numerous applications have been developed to monitor people's well-being based on their activities (such as fitness, drinking, and sleeping) with either wearable devices or smartphones. The success of gesture and activity recognition has led to growing interest in developing new approaches and technologies to track body movement in 3D space, which can further facilitate behavior recognition in various scenarios, such as VR gaming, mobile healthcare, and user access control.

Existing solutions for body movement recognition fall into three main categories: (i) Computer vision-based solutions, such as Kinect and LeapMotion [5, 8], leverage depth sensors or infrared cameras to recognize body gestures and allow the user to interact with machines in a natural way. However, these methods suffer from several inherent disadvantages of computer vision, including dependence on lighting conditions, blind spots, high computational cost, and ambiguity when multiple people are present. (ii) Sensor-based solutions, such as smartwatches and wristbands [3], track the movement of the limbs based on accelerometer or gyroscope readings. However, these systems usually require the user to wear several kinds of sensing devices, which have short operating lifetimes due to their high energy consumption. Further, some products (e.g., Vicon [6]) integrate information from both cameras and wearable sensors to accurately track body movement, but the high price of the infrastructure makes them unaffordable for many deployments. (iii) Wireless signal-based solutions [17, 25] capture specific gestures based on changes in wireless signal features, such as the Doppler frequency shift and signal amplitude fluctuation. However, only a limited number of gestures can be correctly identified, due to the high cost of training data collection and the lack of multi-user identification capabilities.
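For context on the scale of the signal features these wireless systems exploit, the Doppler shift relates the frequency perturbation to the radial velocity of the moving body part. A standard monostatic form (our illustration, not a formula taken from [17, 25]) is

$$ f_D = \frac{2v}{c}\, f_c, $$

where $v$ is the radial velocity of the reflecting body part, $f_c$ the carrier frequency, and $c$ the speed of light. A hand moving at 1 m/s toward a 5.8 GHz transceiver thus shifts the reflection by only about 39 Hz, which hints at why fine-grained gesture vocabularies are hard to separate from such features alone.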
With the rapid development of RFID techniques [34, 41], the RFID tag is now not only an identification device, but also a low-power, battery-free wireless sensor serving various applications, such as localization and motion tracking. Previous studies, such as Tagoram [41] and RF-IDraw [35], achieve cm-level accuracy in tracking an individual RFID tag in 2D space (e.g., on a tagged object or finger). Further, Tagyro [39] can accurately track the 3D orientation of objects attached with an array of RFID tags, but it only works for objects with a fixed geometry and rotation center. However, because complicated body movement involves multiple degrees of freedom, the problem of 3D RFID tag tracking associated with human body movement, including both limb orientation and joint displacement (e.g., elbow displacement), remains elusive.
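To illustrate how such systems reach cm-level accuracy, the sketch below shows the phase-distance relation that commercial UHF readers expose: the reported tag phase wraps every half wavelength of tag-antenna distance, so unwrapped phase differences reveal relative motion at centimeter scale. This is a minimal illustration of the general principle, not the actual pipeline of Tagoram or RF-IDraw; the channel frequency and phase readings below are hypothetical.

```python
import numpy as np

# Minimal sketch of phase-based RFID ranging (the general principle, not
# the exact pipeline of Tagoram [41] or RF-IDraw [35]). A UHF reader
# reports tag phase theta = (4*pi*d / wavelength + offset) mod 2*pi, so
# the phase wraps every half wavelength (~16 cm at 920 MHz) of distance d.

C = 3e8                    # speed of light (m/s)
FREQ = 920.625e6           # hypothetical UHF channel (Hz)
WAVELENGTH = C / FREQ      # ~0.326 m

def relative_distance(phases_rad):
    """Map a sequence of reported phases to relative tag-antenna
    distance changes in meters (known only up to a constant offset)."""
    unwrapped = np.unwrap(np.asarray(phases_rad, dtype=float))
    # Round-trip propagation: phase advances 4*pi per wavelength moved.
    d = unwrapped * WAVELENGTH / (4 * np.pi)
    return d - d[0]

# Hypothetical phase reports as a tag retreats by ~2 cm per read.
print(relative_distance([0.10, 0.87, 1.64]))   # ~[0.0, 0.02, 0.04] m
```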
Inspired by these advanced schemes, we explore the possibility of tracking human body movement in 3D space via an RFID system. In particular, we propose a wearable RFID-based approach, as shown in Figure 1, which investigates new opportunities for tracking body movement by attaching lightweight RFID tags onto the human body. Wearable RFID refers to gesture recognition for a human body wearing multiple RFID tags on different parts of the limbs and torso. In actual applications, these tags can be easily embedded into fabric [11], e.g., T-shirts, at fixed positions to avoid complicated configuration. During human motion, we are able to track body movement, including both rigid-body [7] movement (e.g., torso movement) and non-rigid-body movement (e.g., arm/leg movement), by analyzing the relationship between these movements and the RF signals from the corresponding tag sets. Due to the inherent identification function of RFID, wearable RFID solves the problem of distinguishing multiple tracked subjects that besets most device-free sensing schemes. For example, when tracking the body movement of multiple human subjects, different subjects, or even different arms/legs, can be easily distinguished according to the tag ID, which is usually difficult to achieve in computer vision or wireless-based sensing schemes. Even RF-IDraw [35] makes the first
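The tag-ID-based disambiguation described above is, at its core, a static lookup: because every tag carries a globally unique EPC, each reader report can be attributed directly to a subject and body segment. The sketch below illustrates this; the EPC values, subject names, and layout table are hypothetical, not taken from the paper.

```python
# Sketch of disambiguation by tag ID: each read (EPC, phase, RSSI) maps
# directly to a (subject, body segment) pair via a fixed layout table
# established when the tags are embedded into the garment. All EPCs,
# names, and segments below are hypothetical.

TAG_LAYOUT = {
    "E200-0001": ("subject_1", "left_forearm"),
    "E200-0002": ("subject_1", "right_forearm"),
    "E200-0003": ("subject_1", "torso"),
    "E200-0101": ("subject_2", "right_forearm"),
}

def attribute_read(epc: str, phase: float, rssi: float) -> dict:
    """Attribute one reader report to a subject and body segment."""
    subject, segment = TAG_LAYOUT[epc]
    return {"subject": subject, "segment": segment,
            "phase": phase, "rssi": rssi}

# Two subjects moving at once remain trivially separable by EPC.
print(attribute_read("E200-0101", phase=1.32, rssi=-58.0))
```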