approach, where both the depth camera and the RFID antenna(s) are deployed in a fixed position without moving. The system scans the objects and tags simultaneously and respectively collects the depth values and RF-signals from these tagged objects. We can further pair the tags with the objects accordingly. However, when multiple tagged objects are placed at a close vertical distance to the system, this solution cannot effectively distinguish multiple tagged objects at different horizontal distances.

To address this problem, we propose a rotate-scanning-based solution as follows: we continuously rotate the scanning system (including the depth camera and RFID antennas), and simultaneously sample the depth of field and RF-signals from the multiple tagged objects. Hence, we are able to collect a continuous series of features such as depth, RSSI and phase values during rotate scanning. While the scanning system is rotating, the vertical distances between the multiple objects and the scanning system are continuously changing, from which we can further derive the differences among multiple tagged objects at different horizontal distances. In this way, we are able to further distinguish multiple tagged objects with a close vertical distance but in different positions.

5.2 Pair the Tags with Objects via Rotate Scanning

5.2.1 Extract Depth via Rotate Scanning

During the rotate scanning, we continuously rotate the depth camera from the angle of −θ to +θ and use it to scan the multiple tagged objects. During this process, as the vertical distance between the specified objects and the depth camera is continuously changing, the depth values collected from these objects are also continuously changing. We conduct experiments to validate this judgment. As shown in Fig. 6(a), we arbitrarily deploy multiple tagged objects within the effective scanning range; the coordinates of these objects are also labeled. We continuously rotate the depth camera from the angle of −40° to +40° and collect the depth values from the multiple tagged objects every 5∼6 degrees. Fig. 6(b) shows the experiment results. Note that the series of depth values for each object actually forms a convex curve with a peak value. For each depth value obtained at a certain rotation angle, we can use k-Nearest Neighbor (kNN) to classify it into the corresponding curve according to the distance between this depth value and the other depth values in the curve, and then use quadratic curve fitting to connect the corresponding depth values into a curve. In this way, we are able to continuously identify and track the depth values of a specified object. The peak value of the convex curve denotes the snapshot when the vertical distance reaches its maximum. It appears only when the perpendicular bisector of the depth camera crosses the specified object, since the vertical distance then equals the absolute distance between the object and the depth camera, which is the theoretical upper bound it can achieve. In other words, the peak value appears when the depth camera faces right towards the object; we call this the perpendicular point.

In this way, according to the peak value of depth, we are able to further distinguish multiple objects with the same vertical distance but different positions. The solution is as follows: after the system finishes rotate scanning, it extracts the peak value from the depth curve of each object. Then, we label each object with the coordinate of its peak value, i.e., ⟨θ, d⟩, where θ represents the rotation angle and d represents the depth value. Therefore, as the depth d denotes the vertical distance of the object, we can use the depth to distinguish the objects in the vertical dimension; as the rotation angle θ denotes the angle at which the camera meets the perpendicular point, we can use the angle to distinguish the objects in the horizontal dimension. For example, in Fig. 6(a), we deploy object 4 and object 5 at the same vertical distance to the depth camera; according to the results in Fig. 6(b), these two objects can be distinguished since the peak values of their depth curves occur at different angles, i.e., −17° and +22°, respectively. Hence, they can be easily distinguished in the horizontal dimension.

Fig. 6. The experiment results of rotate scanning: (a) the deployment of multiple tagged objects, with the 3D camera and RFID antenna rotating over the scale [−θ, +θ] and the objects placed at (in meters) Object1 (0, 0.85), Object2 (−0.23, 1.2), Object3 (0.35, 1.3), Object4 (−0.6, 1.8), Object5 (0.6, 1.8); (b) variation of the depth value (mm) with the rotation angle for each object.
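To make the peak extraction concrete, the following is a minimal sketch rather than the paper's implementation: it assumes the depth samples of one object have already been associated into a single curve by the kNN step above, and it only performs the quadratic fit that recovers the peak coordinate ⟨θ, d⟩. The function name and the synthetic numbers are ours for illustration.

```python
import numpy as np

def peak_from_depth_curve(angles_deg, depths_mm):
    """Fit a quadratic depth(theta) = a*theta^2 + b*theta + c to one object's
    depth samples and return the vertex <theta, d>, i.e. the rotation angle of
    the perpendicular point and the depth (vertical distance) at that angle."""
    theta = np.asarray(angles_deg, dtype=float)
    depth = np.asarray(depths_mm, dtype=float)
    a, b, c = np.polyfit(theta, depth, 2)        # least-squares quadratic fit
    peak_theta = -b / (2.0 * a)                  # vertex of the fitted parabola
    peak_depth = np.polyval([a, b, c], peak_theta)
    return peak_theta, peak_depth

# Toy usage with synthetic samples taken every 5 degrees: an object whose
# depth peaks around 1800 mm when the camera points at it near -17 degrees.
angles = np.arange(-40, 41, 5)
depths = 1800 - 0.3 * (angles + 17) ** 2 + np.random.normal(0, 5, angles.size)
print(peak_from_depth_curve(angles, depths))     # roughly (-17.0, 1800.0)
```

With such peaks, two objects at the same vertical distance (e.g., objects 4 and 5 in Fig. 6) map to clearly different peak angles and can therefore be separated in the horizontal dimension.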
5.2.2 Estimate the tag's position with hyperbolas

According to the analysis shown in Fig. 5, given the two phase values of the RF-signals extracted from the two antennas separated by a distance d (d = 25 cm in our implementation), there could be multiple solutions for the tag's position, which can be represented by multiple hyperbolas in the two-dimensional space. In fact, we can leverage rotate scanning to figure out a unique solution by filtering out the unqualified solutions. The idea is as follows: for each snapshot t_i (i = 1 ∼ m) of the rotate scanning, for a specified tag T, we respectively extract the phase values (θ1, θ2) from the two antennas, and then compute the feasible distances (d1, d2) between the tag and the two antennas. We further compute the set of feasible positions in the global coordinate system as S_i. Then, by computing the intersection of the different sets S_i over all snapshots, we are able to figure out a unique solution for the tag's position as follows: S = ∩_{i=1}^{m} S_i.
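As a rough prototype of this set-intersection idea, the sketch below replaces the analytic hyperbolas with a grid-based approximation: for each snapshot, the feasible set S_i is approximated by the grid cells whose predicted inter-antenna phase difference matches the measured one, and these sets are intersected across snapshots. This is our own illustration rather than the paper's implementation; the carrier frequency, the round-trip phase model, the grid bounds, the matching tolerance and all function names are assumptions.

```python
import numpy as np

C = 3e8
FREQ = 920.5e6                     # assumed UHF carrier, not taken from the paper
LAM = C / FREQ                     # wavelength, about 0.33 m

def backscatter_phase(dist):
    """Assumed round-trip phase model: phi = (4*pi*d / lambda) mod 2*pi.
    Reader- and tag-specific phase offsets are ignored in this sketch."""
    return (4 * np.pi * dist / LAM) % (2 * np.pi)

def feasible_mask(grid_xy, ant1, ant2, phi1, phi2, tol=0.4):
    """Grid-based stand-in for one snapshot's feasible set S_i: keep the cells
    whose predicted phase difference between the two antennas matches the
    measured one, i.e. the cells lying near the candidate hyperbolas."""
    d1 = np.linalg.norm(grid_xy - ant1, axis=-1)
    d2 = np.linalg.norm(grid_xy - ant2, axis=-1)
    predicted = (backscatter_phase(d1) - backscatter_phase(d2)) % (2 * np.pi)
    measured = (phi1 - phi2) % (2 * np.pi)
    diff = np.abs(predicted - measured)
    diff = np.minimum(diff, 2 * np.pi - diff)          # wrap-around distance
    return diff < tol

def locate_tag(snapshots, grid_xy):
    """S = intersection of S_i over all snapshots: a cell survives only if it
    is feasible in every snapshot, which singles out the tag's position."""
    mask = np.ones(grid_xy.shape[:-1], dtype=bool)
    for ant1, ant2, phi1, phi2 in snapshots:
        mask &= feasible_mask(grid_xy, ant1, ant2, phi1, phi2)
    return grid_xy[mask]                               # surviving candidate cells

# Toy usage: an antenna pair with 0.25 m separation rotating about the origin,
# with synthetic phases generated from a ground-truth tag at (0.4, 1.5).
xs, ys = np.meshgrid(np.linspace(-1.5, 1.5, 301), np.linspace(0.2, 2.5, 231))
grid = np.stack([xs, ys], axis=-1)
tag = np.array([0.4, 1.5])
snapshots = []
for deg in range(-40, 41, 5):
    r = np.deg2rad(deg)
    rot = np.array([[np.cos(r), -np.sin(r)], [np.sin(r), np.cos(r)]])
    a1, a2 = rot @ np.array([-0.125, 0.0]), rot @ np.array([0.125, 0.0])
    snapshots.append((a1, a2,
                      backscatter_phase(np.linalg.norm(tag - a1)),
                      backscatter_phase(np.linalg.norm(tag - a2))))
print(locate_tag(snapshots, grid).mean(axis=0))        # clusters near (0.4, 1.5)
```

In such a toy run, the cells that remain feasible in every snapshot cluster around the true tag position, mirroring the uniqueness argument behind S = ∩_{i=1}^{m} S_i.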
5.2.3 Estimate the tag's position with angle of arrival

In some situations, it could be difficult to directly derive the tag's candidate position using the intersections of multiple hyperbolas, since the hyperbolas must be exactly plotted in the two-dimensional space, which might be computationally expensive for some mobile devices. Nevertheless, it is found that, as long as the tagged objects are relatively far from the antenna pair, we can use the method of angle of arrival at the antenna pair [21] to simplify the solution. Specifically, suppose the distance between the antenna pair A1 and A2 is d, and the distances between the tag and the antennas A1/A2 are respectively d1 and d2. As Fig. 7 shows, when the distance between the tag and the antenna pair is significantly larger than the distance between the antenna