[Fig. 7: Experimental Evaluation. (a) Realistic Experiment Settings; (b) Success Ratio of AGC Rule Checking; (c) CDF of Localization Error; (d) Bounds of Localization Error; (e) Error with Different Reference Tags; (f) Time Delay of Each Method.]

Algorithm 5 Localization Algorithm - Integrated Method
1: Call the APS method to get the estimated position P.
2: if P does NOT match the rules then
3:    Calibrate P by calling the AGC method.
4: end if
5: P is the estimated position of the target tag.

We present the integrated method, which combines the two methods. We use all the ideas of the two methods to design the final new method. We still use the reference tags to assemble the fingerprint database. Firstly, we determine the appropriate transmitting power and estimate the position of the target tag with the APS method. Then we check the result against the rules of the AGC method. If the result does not match the rules, it is calibrated by the AGC method.
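As an illustrative sketch of Algorithm 5 (not the paper's implementation), the integrated method reduces to one decision around the two methods; the callables aps_estimate, matches_rules, and agc_calibrate are placeholder names for the APS estimation, AGC rule-checking, and AGC calibration steps described above.

```python
from typing import Callable, Sequence

def integrated_localize(
    fingerprint: Sequence[float],
    aps_estimate: Callable[[Sequence[float]], int],
    matches_rules: Callable[[int, Sequence[float]], bool],
    agc_calibrate: Callable[[Sequence[float]], int],
) -> int:
    """Algorithm 5: estimate with APS, then validate and, if needed, calibrate with AGC."""
    grid = aps_estimate(fingerprint)          # step 1: APS gives the estimated position P
    if not matches_rules(grid, fingerprint):  # step 2: check P against the rules of its grid
        grid = agc_calibrate(fingerprint)     # step 3: AGC calibrates the estimate
    return grid                               # P, the estimated position of the target tag
```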
IX. PERFORMANCE EVALUATION

A. Experiment Settings

We evaluate the performance in realistic settings. The basic experiment settings are the same as the realistic settings in Section III. A customized bookshelf serves as the testbed for localization, as shown in Fig. 7(a). We divide the localization area into 8 grids, and the size of each grid is 55 cm × 75 cm; the entire localization area is 120 cm × 310 cm. As the model in Fig. 3 shows, 15 reference tags are deployed in the localization area in a 3 × 5 array. A target tag is attached inside a book for localization. In each evaluation procedure, we place the book with the target tag randomly in 8 positions of each grid, so the measurement is repeated 64 times in total. We evaluate each method and the functional modules of the integrated method in several dimensions. The basic KNN method is used as the baseline method.

B. Performance of Rule Checking in AGC Method

We measure 40 feedback fingerprints before the checking procedure. The rules of each grid are generated from subsets of these fingerprints: we use all 40 of them, 32 of them, 24 of them, and 16 of them, respectively, to generate the rules. For example, Table I lists the pairwise differences for grid 2 obtained from 4 history records.

TABLE I: Comparison of History Data in Grid 2
s1 − s2    s1 − s3    s1 − s4    s2 − s3     s2 − s4    s3 − s4
2921.276   2373.08    2481.614   -548.196    -439.662   108.534
1317.026   105.258    952.91     -1211.76    -364.116   847.652
634.608    55.638     776.14     -578.97     141.532    720.502
542.878    9.976      1028.972   -532.902    486.094    1018.996

With the threshold 200, we obtain the first rule, s1 ≥ (s2 + 542.878), because the smallest value in the s1 − s2 column, 542.878, is larger than the threshold. Similarly, we obtain the rules s1 ≥ (s4 + 776.14) and s2 ≤ (s3 − 532.902). The comparison between s2 and s4 cannot become a rule because the sign of its history data is inconsistent, and the other two comparisons cannot become rules because their smallest absolute values are below the threshold.
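To make the rule-generation step concrete, the following sketch derives the grid-2 rules directly from the difference columns of Table I; the threshold of 200 comes from the text, while the function and variable names are illustrative only.

```python
# Sketch of AGC rule generation from pairwise history differences (threshold = 200).
# Numbers are copied from Table I; function/variable names are illustrative.

def rules_from_differences(diff_columns, threshold=200.0):
    """diff_columns maps a fingerprint-entry pair (i, j) to the list of s_i - s_j
    values observed in the history data. A pair yields a rule only if every
    difference clears the threshold with a consistent sign."""
    rules = {}
    for (i, j), diffs in diff_columns.items():
        if all(d > threshold for d in diffs):
            rules[(i, j)] = (">=", min(diffs))   # rule: s_i >= s_j + min(diffs)
        elif all(d < -threshold for d in diffs):
            rules[(i, j)] = ("<=", max(diffs))   # rule: s_i <= s_j + max(diffs), max < 0
        # otherwise: inconsistent sign or margin below threshold -> no rule
    return rules

# Difference columns of grid 2, taken from Table I (1-based indices).
grid2_diffs = {
    (1, 2): [2921.276, 1317.026, 634.608, 542.878],
    (1, 3): [2373.08, 105.258, 55.638, 9.976],
    (1, 4): [2481.614, 952.91, 776.14, 1028.972],
    (2, 3): [-548.196, -1211.76, -578.97, -532.902],
    (2, 4): [-439.662, -364.116, 141.532, 486.094],
    (3, 4): [108.534, 847.652, 720.502, 1018.996],
}

print(rules_from_differences(grid2_diffs))
# {(1, 2): ('>=', 542.878), (1, 4): ('>=', 776.14), (2, 3): ('<=', -532.902)}
```

The three rules reported above, s1 ≥ (s2 + 542.878), s1 ≥ (s4 + 776.14), and s2 ≤ (s3 − 532.902), fall out of this procedure, while the other three column pairs are rejected.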
We check whether the rules can correctly determine if the estimated position of the target tag is in the right grid. Fig. 7(b) shows the success ratio over 64 checking procedures. We note that more feedback fingerprints provide a higher success ratio: nearly 60% is achieved with only 16 fingerprints, and 86.5% is achieved with 40 feedback fingerprints.
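For illustration only, a checking step in the spirit of this procedure tests a freshly measured fingerprint against the generated rules of the estimated grid; the sample values below are invented placeholders, not measurements from the experiment.

```python
# Sketch of the checking step: an APS estimate is accepted only if the target's
# fingerprint satisfies every rule of the estimated grid. Sample values are invented.

def satisfies_rules(fingerprint, rules):
    """fingerprint: [s1, s2, ...] with the 1-based indices used in the rules.
    rules: {(i, j): ('>=' | '<=', bound)} as produced by rules_from_differences."""
    for (i, j), (op, bound) in rules.items():
        diff = fingerprint[i - 1] - fingerprint[j - 1]
        if op == ">=" and diff < bound:
            return False
        if op == "<=" and diff > bound:
            return False
    return True

grid2_rules = {(1, 2): (">=", 542.878), (1, 4): (">=", 776.14), (2, 3): ("<=", -532.902)}
sample = [3000.0, 2300.0, 2900.0, 2100.0]    # s1-s2 = 700, s1-s4 = 900, s2-s3 = -600
print(satisfies_rules(sample, grid2_rules))  # True: all three grid-2 rules hold
```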