41 RF-Kinect: A Wearable RFID-based Approach Towards 3D Body Movement Tracking CHUYU WANG, Nanjing University, CHN JIAN LIU, Rutgers University, USA YINGYING CHEN∗, Rutgers University, USA LEI XIE∗, Nanjing University, CHN HONGBO LIU, Indiana University-Purdue University Indianapolis, USA SANGLU LU, Nanjing University, CHN The rising popularity of electronic devices with gesture recognition capabilities makes gesture-based human-computer interaction more attractive. Along this direction, tracking the body movement in 3D space is desirable to further facilitate behavior recognition in various scenarios. Existing solutions attempt to track the body movement based on computer vision or wearable sensors, but they either depend on light conditions or incur high energy consumption. This paper presents RF-Kinect, a training-free system which tracks the body movement in 3D space by analyzing the phase information of wearable RFID tags attached to the limbs. Instead of locating each tag independently in 3D space to recover the body postures, RF-Kinect treats each limb as a whole, and estimates the corresponding orientations by extracting two types of phase features: the Phase Difference between Tags (PDT) on the same part of a limb and the Phase Difference between Antennas (PDA) of the same tag. It then reconstructs the body posture from the determined limb orientations grounded on a human body geometric model, and exploits a Kalman filter to smooth the body movement results, i.e., the temporal sequence of body postures. Real-world experiments with 5 volunteers show that RF-Kinect achieves an 8.7° angle error for determining the orientation of limbs and a 4.4 cm relative position error for the position estimation of joints compared with a Kinect 2.0 testbed.
CCS Concepts: • Networks → Sensor networks; Mobile networks; • Human-centered computing → Mobile devices; Additional Key Words and Phrases: RFID; Body movement tracking ACM Reference Format: Chuyu Wang, Jian Liu, Yingying Chen, Lei Xie, Hongbo Liu, and Sanglu Lu. 2018. RF-Kinect: A Wearable RFID-based Approach Towards 3D Body Movement Tracking. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2, 1, Article 41 (March 2018), 28 pages. https://doi.org/10.1145/3191773 ∗Yingying Chen and Lei Xie are the co-corresponding authors, Email: yingche@scarletmail.rutgers.edu, lxie@nju.edu.cn. Authors’ addresses: Chuyu Wang, Nanjing University, State Key Laboratory for Novel Software Technology, 163 Xianlin Ave, Nanjing, 210046, CHN; Jian Liu, Rutgers University, Department of Electrical and Computer Engineering, North Brunswick, NJ, 08902, USA; Yingying Chen, Rutgers University, Department of Electrical and Computer Engineering, North Brunswick, NJ, 08902, USA; Lei Xie, Nanjing University, State Key Laboratory for Novel Software Technology, 163 Xianlin Ave, Nanjing, 210046, CHN; Hongbo Liu, Indiana University-Purdue University Indianapolis, Department of Computer, Information and Technology, Indianapolis, IN, 46202, USA; Sanglu Lu, Nanjing University, State Key Laboratory for Novel Software Technology, 163 Xianlin Ave, Nanjing, 210046, CHN. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. © 2018 Association for Computing Machinery. 
2474-9567/2018/3-ART41 $15.00 https://doi.org/10.1145/3191773 Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Vol. 2, No. 1, Article 41. Publication date: March 2018
41:2 • C. Wang et al. 1 INTRODUCTION Gesture-based Human-Computer Interaction (HCI) finds an increasing number of practical uses, enabled by the growing popularity of electronic devices with gesture recognition capabilities. A recent survey reveals that the global gesture recognition market is anticipated to reach USD 48.56 billion by 2024 [4]. In particular, the success of Microsoft Kinect [8] in tracking human gestures in gaming consoles has induced many emerging applications to adopt gesture recognition solutions in fields like healthcare, smart homes, mobile robot control, etc. For example, numerous applications are developed to monitor users' well-being based on their activities (such as fitness, drinking, sleeping, etc.) with either wearable devices or smartphones. The success of gesture and activity recognition leads to a growing interest in developing new approaches and technologies to track the body movement in 3D space, which can further facilitate behavior recognition on various occasions, such as VR gaming, mobile healthcare, and user access control. Existing solutions for body movement recognition fall into three main categories: (i) Computer vision-based solutions, such as Kinect and LeapMotion [5, 8], leverage depth sensors or infrared cameras to recognize body gestures and allow the user to interact with machines in a natural way. However, these methods suffer from several inherent disadvantages of computer vision, including light dependence, dead corners, high computational cost, and ambiguity among multiple people. (ii) Sensor-based solutions, such as the smartwatch and wristband [3], are designed to track the movement of the limbs based on accelerometer or gyroscope readings. But these systems usually require the user to wear different kinds of sensing devices, which present short life cycles due to their high energy consumption.
Further, there are also some products (e.g., Vicon [6]) that integrate information from both cameras and wearable sensors to accurately track the body movement; however, the high price of the infrastructure makes them unaffordable for many deployments. (iii) Wireless signal-based solutions [17, 25] capture specific gestures based on changes in wireless signal features, such as the Doppler frequency shift and signal amplitude fluctuation. But only a limited number of gestures can be correctly identified, due to the high cost of training data collection and the lack of multi-user identification capabilities. With the rapid development of RFID techniques [34, 41], the RFID tag is now not only an identification device, but also a low-power, battery-free wireless sensor serving various applications, such as localization and motion tracking. Previous studies, such as Tagoram [41] and RF-IDraw [35], achieve cm-level accuracy in tracking an individual RFID tag in 2D space (i.e., a tagged object or finger). Further, Tagyro [39] can accurately track the 3D orientation of objects attached with an array of RFID tags, but it only works for objects with a fixed geometry and rotation center. However, due to the complicated body movement involving multiple degrees of freedom, the problem of 3D RFID tag tracking associated with human body movement, including both limb orientation and joint displacement (e.g., elbow displacement), remains elusive. Inspired by these advanced schemes, we explore the possibility of tracking the human body movement in 3D space via an RFID system. In particular, we propose a wearable RFID-based approach, as shown in Figure 1, which investigates new opportunities for tracking the body movement by attaching lightweight RFID tags onto the human body. Wearable RFID refers to gesture recognition for a human body wearing multiple RFID tags on different parts of the limbs and torso.
In actual applications, these tags can be easily embedded into fabric [11], e.g., T-shirts, at fixed positions to avoid complicated configuration. During human motion, we are able to track the human body movement, including both the rigid body [7] movement (e.g., the torso movement) and the non-rigid body movement (e.g., the arm/leg movement), by analyzing the relationship between these movements and the RF signals from the corresponding tag sets. Due to its inherent identification function, wearable RFID solves the problem of distinguishing multiple subjects that affects most device-free sensing schemes. For example, when tracking the body movement of multiple human subjects, different subjects, or even different arms/legs, can be easily distinguished according to the tag ID, which is usually difficult to achieve in computer vision or wireless-based sensing schemes. Although RF-IDraw [35] makes the first
RF-Kinect: A Wearable RFID-based Approach Towards 3D Body Movement Tracking • 41:3 Fig. 1. RF-Kinect: Tracking the body movement based on wearable RFID tags. attempt to track the finger by wearing one RFID tag on the finger, we are the first to systematically explore the use of wearable RFID for tracking the whole body movement (i.e., the limb orientation and joint displacement) in 3D space, which is more complicated and challenging. To investigate the applicability of wearable RFID, we present RF-Kinect, which consists of multiple wearable RFID tags and one dual-antenna RFID reader measuring the RF signal variations from these tags. In particular, RF-Kinect focuses on tracking the complicated 3D limb orientation and movement with multiple degrees of freedom, rather than the simple trajectory of the finger or hand, which has been well studied in previous work [27, 31, 35]. The key novelty of RF-Kinect lies in (i) being training-free and (ii) requiring minimal hardware. First, it is impractical to traverse numerous body movements (e.g., [13, 17]) to build a complete training dataset. Thus, we build a geometric model of the human body to assist the body movement tracking with little effort on training data collection. Second, it is also impractical to place a large number of reader antennas around the user, which would make tracking the body movement cumbersome. Existing RFID-based localization systems require either at least three static antennas or a moving antenna [34–36, 41] to accomplish the task, posing a big challenge on the hardware design for body movement tracking. Therefore, we aim to design RF-Kinect with the minimum hardware requirement by leveraging a single dual-antenna RFID reader.
The basic idea of RF-Kinect is to derive the limb orientation by leveraging the phase information of RF signals collected from multiple wearable RFID tags, and then construct the body movement, represented as a temporal sequence of limb orientation estimations, grounded on the predefined human body geometric model. Specifically, RF-Kinect extracts two types of phase features to perform the limb orientation estimation: (i) the Phase Difference between any two Tags (PDT) attached to the same part of a limb (e.g., the upper arm), and (ii) the Phase Difference between the two Antennas (PDA) of the same tag. By regarding the two tags on one skeleton as a rigid body, we can formulate the PDT as a function of the skeleton orientation with respect to the transmitting direction of the antenna. The possible orientations derived from a single antenna thus form a conical surface in 3D space, where the apex of the cone is the rotation center of the limb [2]. When two antennas are employed to perform the orientation estimation, the possible range of orientations can be largely reduced by examining the overlap of the two conical surfaces. However, two antennas still leave two ambiguous orientations on mirroring sides. Therefore, we further model the relationship between two rigid bodies to filter out the ambiguity. In particular, we calculate the relative distance between the tags on different skeletons to describe this relationship. Since the relative distance describes the relative posture between two skeletons, the ambiguous orientation on the mirroring side can thus be filtered out due to its unmatched relative distances. Finally, we can correctly estimate the orientation of each skeleton of the limb.
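The single-antenna cone constraint and the two-antenna intersection described above can be sketched numerically. The following is a minimal far-field sketch, not the paper's implementation: the carrier frequency, the 8 cm tag spacing, and the brute-force grid search over the unit sphere are illustrative assumptions. It predicts the PDT for a given skeleton orientation, inverts one PDT into a cone half-angle, and intersects the cones observed at two antennas, which typically leaves the two mirror-image candidates that the paper resolves with inter-skeleton tag distances.

```python
import numpy as np

C = 3e8
FREQ = 920.625e6                 # assumed UHF carrier frequency; not from the paper
LAM = C / FREQ                   # wavelength, ~32.6 cm
SEP = 0.08                       # assumed spacing of the two tags on one skeleton (m),
                                 # kept below lambda/4 so the PDT does not wrap

def pdt(u, a, sep=SEP):
    """Far-field PDT between two tags `sep` apart along unit orientation u,
    seen from antenna direction a (unit vector). Backscatter doubles the
    path difference, hence the 4*pi/lambda factor."""
    return (4 * np.pi / LAM * sep * np.dot(u, a)) % (2 * np.pi)

def cone_half_angle(phase, sep=SEP):
    """Invert the PDT model: every orientation at this angle to the antenna
    direction is consistent, forming a cone whose apex is the rotation center."""
    x = phase - 2 * np.pi if phase > np.pi else phase   # map back to [-pi, pi]
    return np.arccos(np.clip(x * LAM / (4 * np.pi * sep), -1.0, 1.0))

def candidates(p1, p2, a1, a2, sep=SEP, tol=0.2, n=180):
    """Grid-search the unit sphere for orientations whose predicted PDT matches
    the measurements at both antennas; two mirror-image clusters usually survive."""
    th = np.linspace(0, np.pi, n)
    ph = np.linspace(0, 2 * np.pi, 2 * n, endpoint=False)
    T, P = np.meshgrid(th, ph, indexing="ij")
    U = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)], -1)
    k = 4 * np.pi / LAM * sep
    err = lambda pred, meas: np.abs((pred - meas + np.pi) % (2 * np.pi) - np.pi)
    ok = (err((k * (U @ a1)) % (2 * np.pi), p1) < tol) \
       & (err((k * (U @ a2)) % (2 * np.pi), p2) < tol)
    return U[ok]
```

Intersecting the two cones narrows tens of thousands of sphere samples down to two small mirror-image clusters; as noted above, the remaining mirror ambiguity cannot be resolved from the two PDTs alone and requires the relative distances between tags on different skeletons.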
The key contributions of this work are summarized as follows: 1) To the best of our knowledge, we are the first to propose wearable RFID research and systematically investigate its applicability by presenting
RF-Kinect. It is the first training-free and low-cost human body movement tracking system, covering both the limb orientation and joint displacement, that leverages multiple wearable RFID tags, and it overcomes many drawbacks of existing light-dependent works. 2) We demonstrate that RF-Kinect can accurately track the 3D body movement, rather than simply tracking one joint on the body, with minimum hardware requirements involving only a dual-antenna RFID reader and several low-cost wearable RFID tags. 3) Instead of locating the absolute position of each joint for tracking, we regard the human body as a combination of several rigid bodies (i.e., skeletons) and use a kinematic method to connect the skeletons into a human body model. Then, we exploit the PDT and PDA features to estimate the orientation of each skeleton, and use the relative distances to measure the relationship between different skeletons for tracking. 4) The fast adoption and low-cost deployment of RF-Kinect are also validated through our prototype implementation. Given the ground truth from the Kinect 2.0 testbed, our systematic evaluation shows that RF-Kinect achieves average angle and position errors as low as 8.7° and 4.4 cm for the limb orientation and joint position estimation, respectively. 2 RELATED WORK Existing studies on gesture/posture recognition can be classified into three main categories: Computer Vision-based. The images and videos captured by cameras can faithfully record the human body movement at different levels of granularity, so there have been active studies on tracking and analyzing human motion based on computer vision. For example, Microsoft Kinect [8] provides fine-grained body movement tracking by fusing RGB and depth images. Other works try to communicate or sense the human location and activities based on visible light [15, 16, 18].
LiSense [20] reconstructs the human skeleton in real time by analyzing the shadows produced by the human body's blockage of encoded visible light sources. Clearly, the computer vision-based methods are highly light-dependent, so they can fail to track the body movement if the line-of-sight (LOS) light channel is unavailable. Besides, the videos may raise privacy concerns for users in some sensitive scenarios. Unlike the computer vision-based approaches, RF-Kinect relies on an RF device, which works well in most non-line-of-sight (NLOS) channel environments. Moreover, given the unique ID of each tag, it can also be easily extended to body movement tracking scenarios involving multiple users. Motion Sensor-based. Previous research has shown that the built-in motion sensors on wearable devices can also be utilized for body movement recognition [19, 44]. Wearable devices such as the smartwatch and wristband can detect a variety of body movements, including walking, running, jumping, arm movement, etc., based on accelerometer and gyroscope readings [22, 23, 33, 40, 45]. For example, ArmTrack [30] proposes to track the posture of the entire arm relying solely on the smartwatch. However, the motion sensors in wearable devices are only able to track the movement of a particular part of the human body, and, more importantly, their availability is highly limited by the battery life. Some academic studies [32] and commercial products (e.g., Vicon [6]) attach special sensors over the whole human body, and then rely on high-speed cameras to capture the motion of the different sensors for accurate gesture recognition. Nevertheless, the high-speed cameras are usually so expensive that they are not affordable for everyone, and the camera-based tracking process is also highly light-dependent.
Different from the above motion sensor-based systems, RF-Kinect aims to track the body movement with RFID tags, which are battery-free and lower-cost. Moreover, since each RFID tag costs only 5 to 15 U.S. cents today, the price is affordable for almost everyone, even if the tags are embedded into clothes.

Wireless Signal-based. More recently, several studies propose to utilize wireless signals to sense human gestures [10, 17, 25, 35, 37, 38, 42, 43]. Pu et al. [25] leverage the Doppler shifts of Wi-Fi signals caused by body gestures to recognize several pre-defined gestures; Kellogg et al. [17] recognize a set of gestures by analyzing the amplitude changes of RF signals without the user wearing any device; Adib et al. [9] propose to reconstruct a human

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Vol. 2, No. 1, Article 41. Publication date: March 2018
figure by analyzing the RF signals' reflections through walls and occlusions, thereby accurately locating each part of the human body. Nowadays, with the rapid development of RFID-based localization techniques [28, 34, 41], more systems have been developed to sense human activities based on RFID. Wang et al. [35] recover the moving trace of a tagged finger on a surface plane based on an AoA model. Shangguan et al. [29] track a tagged object in the 2D plane for user feedback with only one antenna. However, these methods work only in 2D space by tracking a rigid body, and thus are not suitable for tracking the complicated movement of the human body. Lin et al. [21] track the motion status of a tagball based on the phase variations read from the attached tags. Tagyro [39] estimates the orientation of passive objects with constant geometry by analyzing the Phase Difference of Arrival of attached tags. Ding et al. [13] detect fitness gestures leveraging the Doppler profile extracted from the phase trend of RF signals. These RFID-based methods mainly focus on estimating the position/orientation of a single passive rigid body or recognizing gestures via pattern matching. In contrast, RF-Kinect is designed to track the whole body movement through a model-based approach, which involves several related rigid bodies (i.e., skeletons) and is thus more challenging to design.

Fig. 2. RF-Kinect illustration of the RFID-based human body movement tracking: the RFID antennas read the wearable RFID tags, and an arm rotation leads to tag displacement and rotation.

3 APPLICATIONS & CHALLENGES

In this section, we first present the application scenario of RF-Kinect, and introduce the preliminaries of tracking human body movements using RF signals. We then describe the main challenges of the proposed RF-Kinect.
3.1 RF-Kinect Application Scenario

The wireless information gathered from wearable RFID tags opens a new research opportunity for developing gesture recognition systems and supporting related applications. RF-Kinect is such a system, aiming to track human body movements based on the RF signals emitted from the wearable RFID tags attached to the human body. Taking virtual reality (VR) gaming as one example, we can utilize RF-Kinect to recognize the user's gestures during the game; meanwhile, RF-Kinect can also identify specific users based on the wearable tag IDs at any time. Therefore, RF-Kinect can easily support multi-player games by identifying users from the tag IDs and automatically reloading each user's game progress. In contrast, even though traditional vision-based approaches can provide good accuracy in games, they usually need to be manually configured for different users and may also suffer from the interference of surrounding people, leading to a poor user experience. Personal fitness, as another example, could also rely on RF-Kinect to associate the recognized activities with the subject for fitness monitoring. Due to the capability of the wearable RFID tags to harvest energy from the backscattered signal, RF-Kinect could operate for a long term without the battery supply issues
like wearable sensors. Compared with vision-based approaches, RF-Kinect can efficiently filter out other users based on the tag IDs, and usually works well even when some objects block the line-of-sight path.

As shown in Figure 2, RF-Kinect utilizes a dual-antenna RFID reader to continuously scan the wearable RFID tags attached on the user (e.g., on the clothes) for body movement tracking. In actual applications, we embed the tags into T-shirts at fixed positions to avoid complicated configuration. Changes of the user's posture (e.g., an arm rotation) lead to corresponding displacements and rotations of the wearable tags, thus producing unique RF signal patterns. Specifically, during each scan of the RFID reader, the RF signals collected from the multiple wearable RFID tags are combined to estimate the orientation of each limb (e.g., the upper arm, lower arm) and the position of each joint (e.g., the elbow, wrist), from which the body posture is reconstructed. By concatenating the body postures derived from multiple scans, the entire body movement is uniquely determined. To avoid exhaustive training efforts to cover all body movements, a training-free framework is thus essential to reduce the complexity of the body movement tracking.

Fig. 3. Phase vs. tag orientation: the phase value is related to the tag orientation. (a) Experiment deployment for examining the influence of tag orientation. (b) The phase value changes according to the tag orientation.

3.2 Preliminaries

In order to track the body movement, we need to identify reliable RF signal features for distinguishing different postures and their changes. Several RF signal features, such as the phase, RSSI and reading rate, are available from the RFID system.
According to recent studies [21, 28, 35, 41], the phase information has proved to be a better choice than the other features for localization and many other sensing applications. In particular, the phase indicates the offset of the received signal from the original transmitted signal, ranging from 0 to 360 degrees. Assuming the distance between the antenna and the tag is d, the phase θ can be represented as:

θ = (2π · 2d/λ + θ_dev) mod 2π,    (1)

where λ is the wavelength, and θ_dev is the system noise caused by factory imperfections of tags.

Since the body movement unavoidably changes the tag orientation in 3D space, we first conduct controlled experiments to study the influence of the tag orientation on the phase, as illustrated in Figure 3(a). The RFID tag spins 180° on a fixed spot along three different axes in front of an RFID reader, and the corresponding phase change is presented in Figure 3(b). We find that the phase changes linearly as the tag rotates along the X-axis, and remains stable along the Z-axis. When rotating along the Y-axis, most phase values are similar except at the perpendicular directions (i.e., −90° and 90°). We observe a similar phase variation trend when conducting the same experiment with different tags placed at different locations. The reason behind this phenomenon is that RFID tags are commonly equipped with a linear-polarized dipole antenna, while the reader antenna works
Fig. 4. Preliminary study of the PDT trend when the tag-attached user stretches the arm forward. (a) A user stretches the arm forward with two tags attached on the arm. (b) PDT trend when the user stretches the arm forward.

in circular polarization mode, which combines two perpendicular polarizations with a 90° phase difference to identify tags of different orientations. Rotating the RFID tag along the X-axis changes the polarization of the dipole antenna, and thereby affects the phase measurements due to the changes in the electromagnetic coupling between the tag and the reader antenna. As a result, rotating 180° along the X-axis leads to a 2π change in the phase measurement, whereas rotating along the Y- or Z-axis leads to no phase change due to the constant electromagnetic coupling. Note that when the tag rotates 90° along the Y-axis, the tag faces the direction of minimum gain of the RFID reader antenna, thus producing erroneous phase measurements due to multi-path effects. The above observation implies that the tag orientation has a significant impact on the extracted phase, and results in ambiguity in body movement tracking. In order to eliminate such negative impacts of tag orientation changes, we utilize the Phase Difference between Tags (PDT) to track the body movement. Specifically, we deploy multiple RFID tags in parallel and then measure the phase difference between these tags. Since all the tags have the same phase offset due to their consistent orientation, we can cancel this offset by taking the phase difference between different tags.
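This cancellation can be sketched numerically with Eq. (1). In the snippet below, the UHF carrier wavelength, the tag-antenna distances, and the rotation-induced offset are all hypothetical values chosen for illustration:

```python
import numpy as np

WAVELENGTH = 0.326  # meters; assumed ~920 MHz UHF carrier, not specified in the text

def measured_phase(distance, rotation_offset=0.0, theta_dev=0.0):
    """Eq. (1) with an extra additive term modeling the phase shift
    caused by tag rotation along the X-axis."""
    return (2 * np.pi * 2 * distance / WAVELENGTH
            + theta_dev + rotation_offset) % (2 * np.pi)

# Two parallel tags on the same limb: hypothetical distances to one antenna,
# sharing the same (unknown) rotation-induced phase offset.
d1, d2 = 1.20, 1.27
rotation_offset = 1.3  # radians, arbitrary

theta1 = measured_phase(d1, rotation_offset)
theta2 = measured_phase(d2, rotation_offset)

# PDT: the shared rotation offset cancels, leaving only the distance difference.
pdt = (theta1 - theta2) % (2 * np.pi)
expected = (2 * np.pi * 2 * (d1 - d2) / WAVELENGTH) % (2 * np.pi)
assert np.isclose(pdt, expected)
```

Whatever value the rotation offset takes, the PDT depends only on the difference of the two tag-antenna distances, which is what makes it a usable feature for limb orientation.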
Moreover, even if the tags have slightly different orientations due to deployment or manufacturing errors, the phase offset due to rotation can still be canceled, because the tags undergo the same rotation and hence share the same phase offset. To validate its effectiveness, we further conduct another experiment with the posture where the user stretches his/her arm forward, as shown in Figure 4(a). Two tags with the same orientation are attached on the lower arm at a fixed distance, and the corresponding phase difference (PDT) is presented in Figure 4(b). We find that the PDT first increases and then slightly decreases. This coincides with the varying trend of the distance difference between the two tags and one antenna (the difference between the red and yellow lines), which is small at first and increases to its maximum as the arm points to the antenna (e.g., the blue dashed line). Therefore, it is feasible to track the body movement based on the PDT.

3.3 Challenges

To achieve accurate and training-free body movement tracking with minimum hardware support, we identify three key challenges as follows:

Tracking with a Dual-antenna RFID Reader. With only a dual-antenna RFID reader as the minimum hardware requirement, tracking the human body movement is a challenging task. Existing RFID-based localization methods (e.g., [26, 41]) are not applicable to our problem, since they require at least three antennas
or moving antennas to locate a target tag in a 2D environment. Other related studies such as [27] and [29] can locate a tagged object with two antennas or even one antenna, but they work only in the 2D plane and track only one object with one or more tags attached on it. They are thus not applicable to tracking the complex body movement in 3D space in our application scenario, so a dual-antenna-based solution needs to be designed to facilitate 3D body movement tracking.

Imperfect Phase Measurements. Unlike previous RFID-based localization studies [35, 41] that track the tag movement in 2D space, our work aims at a more challenging goal, i.e., tracking the movement in 3D space, which poses even higher requirements on the phase measurements. Multiple factors may affect the uniqueness and accuracy of phase measurements related to the body movement. According to our preliminary study, the phase change of the RF signal is determined by both the tag-antenna distance and the tag orientation. Moreover, both the water-rich human body and the muscle deformation during the body movement may also affect the phase measurements of RF signals. All these factors together make it much harder to track the human body movement in 3D space leveraging the phase information in RF signals.

Training-free Body Movement Tracking. Existing studies on gesture tracking usually spend significant effort on training a classification model by asking the users to perform each specific gesture multiple times [13, 25]. However, the number of gestures that can be recognized highly depends on the size of the training set, so the scalability to unknown gestures is greatly limited. Some other methods [29, 31, 35] are designed to recover the trace of a rigid body (e.g., a finger or a box) from signal models, but they are not suitable for the complex human body, which consists of several rigid bodies.
In order to flexibly identify diverse gestures or postures of the complex human body, it is critical to develop a body movement tracking system that does not rely on any training dataset.

4 SYSTEM DESIGN

In this section, we first introduce the architecture of our RF-Kinect system, and then present the building modules of RF-Kinect for tracking the 3D body movement.

4.1 System Architecture

The basic idea of RF-Kinect is to derive the body posture in each scanning round by analyzing the RF signals from the wearable RFID tags attached on the limbs and chest, and then reconstruct the body movement from the series of body postures in consecutive scans. Figure 5 illustrates the architecture of RF-Kinect. We first extract the phase information of M RFID tags from the two antennas in consecutive scanning rounds as the Phase Stream, where all the attached tags are read in each scanning round. The system is then initialized by requiring the user to stand still with his/her arms hanging down naturally. Since the chest is nearly a rigid object, the tags on it enable the Body Position/Orientation Estimation module to determine the position and facing orientation of the user relative to the antennas, based on a model-based approach from previous work (e.g., [21]). Then, the Coordinate Transformation module converts the relative positions of the antennas into the Skeleton Coordinate System (SCS), which is defined based on the human body geometric structure in Section 4.2, so that the coordinates of both the tags and the antennas can be expressed properly. Based on the coordinates of the antennas and of the tags attached on the body when the user stands still, the theoretical phase value of each tag is calculated from Eq. (1). The Phase Deviation Elimination module then computes the phase offset between the theoretical and the measured phase values, which is used to eliminate the phase deviation in the subsequent biased phase stream.
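The deviation-elimination step can be sketched as follows; the wavelength, the tag/antenna coordinates, and the phase readings below are hypothetical values for illustration only:

```python
import numpy as np

WAVELENGTH = 0.326  # meters; assumed carrier wavelength

def theoretical_phase(tag_pos, antenna_pos):
    """Geometric part of Eq. (1): phase expected from the tag-antenna distance."""
    d = np.linalg.norm(np.asarray(tag_pos, float) - np.asarray(antenna_pos, float))
    return (2 * np.pi * 2 * d / WAVELENGTH) % (2 * np.pi)

def calibrate_offset(measured, tag_pos, antenna_pos):
    """During initialization (arms hanging down, tag coordinates known),
    the residual between the measured and theoretical phase is the
    per-tag, per-antenna deviation to be removed from later readings."""
    return (measured - theoretical_phase(tag_pos, antenna_pos)) % (2 * np.pi)

def corrected_phase(measured, offset):
    """Remove the calibrated deviation from a subsequent biased reading."""
    return (measured - offset) % (2 * np.pi)

# Hypothetical still-pose geometry in the skeleton coordinate system (meters).
tag0 = (0.20, -0.60, 0.0)   # a wrist tag while the arm hangs down
ant = (0.0, 0.0, 1.5)       # one reader antenna
measured0 = 2.9             # phase read during the still pose (radians)

offset = calibrate_offset(measured0, tag0, ant)
corrected = corrected_phase(3.4, offset)  # a later biased reading, now offset-free
```

By construction, applying the correction to the calibration reading itself recovers exactly the theoretical phase of the still pose, so later corrected readings can be compared directly against Eq. (1).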
After the above preprocessing, the Phase Difference Extraction module extracts two phase-related features from the RF signal measurements in each scanning round: (i) the Phase Difference between any two Tags (PDT) attached to the same part of a limb (e.g., the upper arm), and (ii) the Phase Difference between the two Antennas (PDA) of the same tag. The two phase-related features are then utilized to estimate the limb postures based on the 3D Limb
Fig. 5. System architecture of training-free RF-Kinect.

Orientation Estimation, AoA-based Orientation Refinement and Relative Distance-based Orientation Calibration methods in the Body Posture Estimation module. The first two methods determine the limb postures by comparing the extracted PDT/PDA with the theoretical PDT/PDA derived from the Human Body Geometric Model. Moreover,
the Relative Distance-based Orientation Filter removes impossible orientations by measuring the relationship between different skeletons, which shrinks the search range in the orientation estimation. Specifically, the arm posture estimation proceeds from the upper arm to the lower arm, while the leg posture estimation follows the order from the thigh to the shank. Then, the individual postures estimated from multiple scanning rounds constitute a Recognized Body Posture Stream representing the body movement, which is further smoothed with a Kalman Filter. After smoothing, the position of each tag can be calculated from the estimated body posture. We then compute the theoretical phase values of each tag and extract the theoretical PDA as a constraint condition to calibrate the next body posture estimation. Finally, the 3D body movement is reconstructed accordingly, and can be applied to many interesting applications, such as gaming, healthcare, etc.

4.2 Human Body Geometric Model

Before diving into the details of the body movement tracking, we first introduce the human body geometric model in RF-Kinect. Inspired by robotics studies, which model the human arm with 7 rotational degrees of freedom (DoFs) [24], we use 4 of the 7 DoFs to model the joints of a single arm, ignoring the other 3 DoFs on the wrist, which are beyond the detection capability of our system. Similarly, we extend the method to model the joints of the leg with 3 DoFs.
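Such a 4-DoF arm model can be sketched with a short forward-kinematics routine. The axis conventions, the angle names t1..t4, and the segment lengths below are illustrative assumptions, not the paper's exact definition:

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix about a principal axis ('x', 'y' or 'z')."""
    c, s = np.cos(angle), np.sin(angle)
    if axis == 'x':
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == 'y':
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def arm_joints(t1, t2, t3, t4, l_upper=0.30, l_lower=0.25):
    """Forward kinematics for a 4-DoF arm: three shoulder DoFs composed at
    the shoulder (flexion/extension, abduction/adduction, internal/external
    rotation) and one elbow DoF (flexion/extension). With all angles zero
    the arm hangs straight down (here along -Z)."""
    shoulder = np.zeros(3)
    r_shoulder = rot('x', t1) @ rot('y', t2) @ rot('z', t3)
    elbow = shoulder + r_shoulder @ np.array([0.0, 0.0, -l_upper])
    r_elbow = r_shoulder @ rot('x', t4)  # elbow flexion in the rotated frame
    wrist = elbow + r_elbow @ np.array([0.0, 0.0, -l_lower])
    return elbow, wrist

elbow, wrist = arm_joints(0.0, 0.0, 0.0, 0.0)  # arm hanging down naturally
```

Once the four angles are estimated from the PDT/PDA features, the elbow and wrist positions follow directly from this chain, which is why the system only needs limb orientations rather than absolute joint positions.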
Figure 6 illustrates the model of the right side of human body,while the left side follows a similar model.We use red and blue arrows to indicate the DoFs on the arm and leg.respectively.Specifically,,2 and 3 are the 3 DoFs on the shoulder joint,which correspond to flexion/extension,abduction/adduction and internal/external rotation of the shoulder,respectively,and represents flexion/extension on the elbow joint.Here,=2=3==0 refers to the posture where the arm is naturally hanging down with the palm facing forward.When the user lifts up one arm with bent elbow as shown in Figure 6,both o and will change accordingly.Specifically, Proceedings of the ACM on Interactive,Mobile,Wearable and Ubiquitous Technologies,Vol.2,No.1,Article 41.Publication date:March 2018
Fig. 5. System architecture of training-free RF-Kinect.

Orientation estimation, AoA-based Orientation Refinement and Relative Distance-based Orientation Calibration methods in the Body Posture Estimation. The first two methods determine the limb postures by comparing the extracted PDT/PDA with the theoretical PDT/PDA derived from the Human Body Geometric Model. Moreover, the Relative Distance-based Orientation Filter removes impossible orientations by measuring the relationship between different skeleton segments, which shrinks the search range of the orientation estimation. Specifically, the arm posture estimation proceeds from the upper arm to the lower arm, while the leg posture estimation follows the order from the thigh to the shank. The individual postures estimated from multiple scanning rounds then form a Recognized Body Posture Stream representing the body movement, which is further smoothed with a Kalman Filter. After smoothing, the position of each tag can be calculated from the estimated body posture. We then compute the theoretical phase value of each tag and extract the theoretical PDA as a constraint to calibrate the next body posture estimation. Finally, the 3D body movement is reconstructed accordingly and can serve many interesting applications, such as gaming and healthcare.
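To illustrate the smoothing step, the following is a minimal sketch (not the paper's implementation) of smoothing a stream of estimated joint angles with a per-angle constant-velocity Kalman filter; the sampling interval and the process/measurement noise values are illustrative assumptions.

```python
import numpy as np

def kalman_smooth(angles, dt=0.1, q=1.0, r=25.0):
    """Smooth a 1D sequence of joint-angle estimates (degrees).

    State x = [angle, angular_velocity]; measurement z = angle.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity model
    H = np.array([[1.0, 0.0]])                 # we only observe the angle
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],  # process noise
                      [dt**3 / 2, dt**2]])
    R = np.array([[r]])                        # measurement noise (deg^2)

    x = np.array([angles[0], 0.0])             # initial state
    P = np.eye(2) * 10.0                       # initial covariance
    out = []
    for z in angles:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        y = z - H @ x                          # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

# Noisy measurements around a slow arm lift from 0 to 90 degrees
rng = np.random.default_rng(0)
true = np.linspace(0, 90, 50)
noisy = true + rng.normal(0, 5, 50)
smoothed = kalman_smooth(noisy)
```

In the actual system, one such filter (or a joint multi-dimensional filter) would run over each rotation angle of the posture stream produced by the Body Posture Estimation.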
4.2 Human Body Geometric Model
Before diving into the details of the body movement tracking, we first introduce the human body geometric model in RF-Kinect. Inspired by robotics studies, which model the human arm with 7 rotational degrees of freedom (DoFs) [24], we use 4 of the 7 DoFs to model the joints of a single arm, ignoring the other 3 DoFs on the wrist, which are beyond the detection capability of our system. Similarly, we extend the method to model the joints of the leg with 3 DoFs. Figure 6 illustrates the model of the right side of the human body; the left side follows a similar model. We use red and blue arrows to indicate the DoFs on the arm and leg, respectively. Specifically, ϕ1, ϕ2 and ϕ3 are the 3 DoFs on the shoulder joint, which correspond to flexion/extension, abduction/adduction and internal/external rotation of the shoulder, respectively, and ϕ4 represents flexion/extension on the elbow joint. Here, ϕ1 = ϕ2 = ϕ3 = ϕ4 = 0° refers to the posture where the arm hangs down naturally with the palm facing forward. When the user lifts up one arm with a bent elbow as shown in Figure 6, both ϕ1 and ϕ4 change accordingly. Specifically,
Fig. 6. Human body geometric model with rotation angle on each joint.

ϕ1 represents the lift angle of the upper arm and ϕ4 represents the bent angle of the elbow. In fact, ϕ1 and ϕ2 together determine the orientation of the upper arm in 3D space, while ϕ3 and ϕ4 determine the lower arm orientation. Interestingly, ϕ3, the DoF on the shoulder, does not affect the orientation of the upper arm, but that of the lower arm instead, because ϕ3 measures the angle of internal/external rotation around the arm. Further, ϕ5, ϕ6 and ϕ7 are the 3 DoFs on the leg. Since the lower body can be modeled in a similar way as the upper body, we focus on the upper body to demonstrate the human body geometric model. Given the length of the upper arm $l_u$ and that of the lower arm $l_l$, the positions of the elbow and wrist are determined by the rotation values of ϕ1, ϕ2, ϕ3 and ϕ4 in the Skeleton Coordinate System (SCS). Figure 6 illustrates the SCS in our system: the plane where the user's torso locates (i.e., the chest) serves as the XZ plane, and the line emanating from the shoulder in the frontward direction indicates the Y axis. The midpoint between the two feet is the origin of the SCS. Therefore, according to the mechanism model with the Denavit-Hartenberg transformation [12], we can express the posture of the arm by calculating the positions of the elbow and wrist from ϕ1, ϕ2, ϕ3 and ϕ4. For example, since the orientation of the upper arm is determined by ϕ1 and ϕ2, the position of the elbow $p_e$ is a function of ϕ1, ϕ2 and $l_u$ as follows:

$$p_e = p_s + f(\phi_1, \phi_2, l_u) = p_s + l_u \begin{pmatrix} \sin\phi_2 \\ \cos\phi_2 \sin\phi_1 \\ -\cos\phi_1 \cos\phi_2 \end{pmatrix}, \qquad (2)$$

where $p_s$ represents the position of the shoulder and the function $f(\cdot)$ calculates the vector pointing from the shoulder to the elbow.
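Eq. (2) translates directly into code. The sketch below computes the elbow position from the shoulder position, the upper-arm length, and the two shoulder angles; the numeric shoulder position and arm length are illustrative assumptions, and angles are in radians.

```python
import numpy as np

def elbow_position(p_s, l_u, phi1, phi2):
    """Eq. (2): p_e = p_s + l_u * (sin phi2, cos phi2 sin phi1, -cos phi1 cos phi2)."""
    direction = np.array([
        np.sin(phi2),                      # X: abduction/adduction moves the arm sideways
        np.cos(phi2) * np.sin(phi1),       # Y: flexion/extension moves the arm frontward
        -np.cos(phi1) * np.cos(phi2),      # Z: the arm hangs straight down when both are 0
    ])
    return np.asarray(p_s, dtype=float) + l_u * direction

# Reference posture (phi1 = phi2 = 0): the elbow lies l_u below the shoulder.
p_s = np.array([0.2, 0.0, 1.4])            # illustrative shoulder position in the SCS (m)
p_e = elbow_position(p_s, 0.3, 0.0, 0.0)   # -> [0.2, 0.0, 1.1]
```

Sanity checks against the model: at ϕ1 = ϕ2 = 0 the direction vector is (0, 0, −1), matching the naturally hanging arm, and at ϕ1 = π/2, ϕ2 = 0 it is (0, 1, 0), a horizontal arm pointing frontward along the Y axis.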
Similarly, the position of the wrist $p_w$ can be represented as:

$$p_w = p_e + g(\phi_1, \phi_2, \phi_3, \phi_4, l_l), \qquad (3)$$

where $g(\cdot)$ computes the vector pointing from the elbow to the wrist.1

4.3 Body Posture Estimation
As the core module of the RF-Kinect system, three key techniques for estimating the body posture, namely 3D Limb Orientation Estimation, AoA-based Orientation Refinement and Relative Distance-based Orientation Calibration, are proposed in this subsection.

1The details of the function $g(\cdot)$ can be found in [12].