Fig. 6. PAMF
Fig. 7. Floor-map of the postal center
(a) Sparse (b) Moderate (c) Dense
Fig. 8.
Random RCGs

We employ the real EMS trace of this center, which delivers 2,456 items, including express mail, parcels, and boxes, to a medium-sized city each day on average. We use the delivery records of December 2009 as our basic dataset; it contains 78,606 records.

Object tracking: We collect the tracking dataset from the RFID Ecosystem project [24]; the deployment map is shown in [25]. There are 30 readers (or antennas) in the deployment area. The tracking dataset has 1,653 records, each of which includes the tag locations, source, and identification time.

Random Topologies: Without loss of generality, we also randomly generate three separate RCGs with 100 readers each, labeled "Sparse", "Moderate", and "Dense", respectively. They have different maximum degrees to reflect the three deployment topologies, as summarized in Table I. The topologies of these three RCGs are also illustrated in Fig. 8.

3) Performance Metrics: Let Z(ri) denote the set of time slots consumed by a single reader ri; the identification time of this reader is then I(ri) = |Z(ri)|. Besides the identification time, we also measure the following four metrics:

(i) Throughput: the ratio of the total number of tags to the overall identification time, denoted λ. Namely, λ = |T| / |∪ri∈R Z(ri)|.
(ii) Average Delay: the delay of tag t, denoted D(t), is the expected number of time slots tag t spends waiting for its identification. The average delay is Davg = (Σti∈T D(ti)) / |T|.
(iii) Read Rate: in practice, a reader cannot accurately collect all the tags in its interrogation region even when there are no reader collisions, due to environmental noise, multi-path effects, signal attenuation, and other factors. We use the read rate, defined as the ratio of the number of correctly collected tags to the total number of tags in the interrogation region, to measure the feasibility of our approach.
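As a quick illustration, the throughput, average-delay, and read-rate definitions above can be sketched in Python. This is a minimal sketch, not the evaluation code: the names Z, delays, collected, and present, and the toy data, are our own assumptions.

```python
# Sketch only (not the paper's implementation): computing the first three
# metrics from per-reader slot logs and per-tag delays.

def throughput(Z, num_tags):
    """lambda = |T| / |union over readers of Z(ri)| (slot sets may overlap)."""
    total_slots = len(set().union(*Z.values()))
    return num_tags / total_slots

def average_delay(delays):
    """D_avg = (sum of D(ti) over all tags) / |T|."""
    return sum(delays.values()) / len(delays)

def read_rate(collected, present):
    """Ratio of correctly collected tags to tags in the interrogation region."""
    return len(collected & present) / len(present)

# Toy example: two readers whose slot sets overlap on slots 5-8,
# so the union covers 15 distinct slots in total.
Z = {"r1": set(range(1, 9)), "r2": set(range(5, 16))}
print(throughput(Z, 6))                             # 6 tags / 15 slots = 0.4
print(average_delay({"t1": 2, "t2": 4}))            # 3.0
print(read_rate({"a", "b"}, {"a", "b", "c", "d"}))  # 0.5
```

Note that the union in the throughput formula matters: slots shared by neighboring readers are counted once, which is why overall throughput can exceed any single reader's.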
TABLE I
SUMMARY OF APPLICATIONS

Scenario    # of readers   Max degree   # of edges   # of tags
Warehouse   72             4            126          2,456
Tracking    30             3            29           1,653
Sparse      100            3            133          50,000
Moderate    100            8            343          50,000
Dense       100            16           495          50,000

(iv) Scheduling Round: we also evaluate the efficiency of anti-reader-collision by the total number of scheduling rounds.

B. Implementation Results

For testing joint identification, we employ an NI PXI-1044 testing tool with a PXI 5600 receiver as the passive reader, and an Alien reader as the active reader. The tags are placed midway between the two readers, which are 2.5 m apart. The CDF of the read rate of our passive reader is shown in Fig. 9. We observe that the passive reader achieves a read rate of 0.73 in 60% of testing cases. Its average read rate reaches 0.71, which is nearly as good as that in the single-reader deployment scenario.

C. Simulation Results

1) Identifying tags without reader collisions: We first simulate a single-reader environment to show the performance of identifying non-contentious tags. We compare Season-I with prior anti-tag-collision protocols, with the number of tags ranging from 1 to 1,000. The identification times of the three types of protocols, the FSA-based approach, the Balanced Tree (BT)-based approach, and Season, are shown in Fig. 10. From the figure, we observe that the identification time of each protocol is proportional to the number of tags. Among them, Season-I is much faster than both the FSA- and BT-based approaches. In particular, when the number of tags exceeds 100, Season-I saves 30.6% and 42.2% of identification time on average compared with BT and FSA, respectively. Furthermore, we also evaluate the throughput of these anti-tag-collision protocols, as illustrated in Fig. 11. The results show that Season-I is the best anti-tag-collision protocol: its maximum throughput reaches 0.4, and in 60% of cases its throughput exceeds 0.37. In contrast, the throughput of FSA and BT is typically lower than 0.29.
We also observe that BT is the most stable protocol: in 90% of cases its throughput stays around 0.25 to 0.26.

2) Identifying tags with reader collisions: In this experiment, we simulate multi-reader environments to show the performance of Season under reader collisions. We compare Season with DCS and Colorwave via the number of scheduling rounds needed for anti-reader-collision; DCS and Colorwave employ graph coloring to schedule the readers. The results are shown in Fig. 12. Season needs the fewest scheduling rounds among the anti-reader-collision protocols in all five scenarios. For example, Season only needs 4 scheduling rounds in the 'warehouse' scenario where the maximum degree