Pr[X = u] = P_0^u (1 − P_0). The expectation of X is

    E(X) = \sum_{u=0}^{f-1} u Pr(X = u) = \sum_{u=0}^{f-1} u P_0^u (1 − P_0)
         = \frac{(f−1)P_0^{f+1} − f P_0^f + P_0}{1 − P_0}
         = \frac{P_0}{1 − P_0} (1 − P_0^f) − f P_0^f.

Since 0 < P_0 < 1, both P_0^f → 0 and f P_0^f → 0 when f is large. So E(X) can be further simplified to

    E(X) ≈ \frac{P_0}{1 − P_0} = \frac{1}{e^ρ − 1}.    (2)
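The large-f simplification of E(X) can be checked numerically. The sketch below (not part of the paper's algorithm; the values of t and f are illustrative) compares the exact truncated sum with the approximation 1/(e^ρ − 1), where P_0 = e^{−t/f}:

```python
import math

def mean_empty_run_exact(t, f):
    """Exact E(X) = sum_{u=0}^{f-1} u * P0^u * (1 - P0), with P0 = e^{-t/f}."""
    p0 = math.exp(-t / f)
    return sum(u * p0 ** u * (1 - p0) for u in range(f))

def mean_empty_run_approx(t, f):
    """Approximation E(X) ~ P0 / (1 - P0) = 1 / (e^rho - 1), rho = t / f."""
    return 1.0 / (math.exp(t / f) - 1.0)

# For f large enough that P0^f and f * P0^f vanish, the two values coincide.
print(mean_empty_run_exact(500, 1000), mean_empty_run_approx(500, 1000))
```

Here the dropped terms are of order f·e^{−t}, so even moderate load factors make the approximation error negligible.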
Correspondingly, the variance of X is

    Var(X) = \sum_{u=0}^{f-1} (u − E(X))^2 Pr(X = u) ≈ \frac{P_0}{(1 − P_0)^2}.    (3)

Given the relation between E(X) and t, an observation of X can be used to estimate t. However, any single observation of X deviates from E(X). By the law of large numbers [26], the estimate becomes more accurate as the number of observations grows. We therefore define the random variable

    Y = \frac{1}{n} \sum_{i=1}^{n} X_i

as the mean of n observations, where X_i is the random variable X for the i-th observation. Note that E(X_i) = E(X) and Var(X_i) = Var(X). Since the reader uses a different random seed in each broadcast, the X_i (1 ≤ i ≤ n) are mutually independent. Therefore, we have

    E(Y) = \frac{\sum_{i=1}^{n} E(X_i)}{n} = \frac{n E(X)}{n} = E(X)

and

    Var(Y) = \frac{Var(\sum_{i=1}^{n} X_i)}{n^2} = \frac{n Var(X)}{n^2} = \frac{Var(X)}{n}.

Since E(Y) = E(X), solving Eq. 2 for t gives

    t = f · \ln\frac{1 + E(Y)}{E(Y)}.    (4)

Then, according to Eq. 1, substituting the observed mean Y for E(Y) yields the estimator

    \tilde{t} = f · \ln\frac{1 + Y}{Y}.

Next, we show how to use Var(Y) to compute a tight bound on the parameter n.

Theorem 1. Given δ, ε, and ρ, if the number of rounds n is not less than

    \frac{c^2 e^{−ρ} (e^ρ − e^{−ερ})^2}{(1 − e^{−ερ})^2},

the algorithm described above can guarantee the accuracy requirement, that is, Pr[|\tilde{t} − t| ≤ εt] ≥ 1 − δ.

Proof: We use µ and σ to denote the expectation and standard deviation of Y, i.e., µ = E(Y) and σ = \sqrt{Var(Y)} = \sqrt{Var(X)/n}. By the central limit theorem,

    Z = \frac{Y − µ}{σ}

is asymptotically normal with mean 0 and variance 1; that is, Z follows the standard normal distribution, whose cumulative distribution function is

    Φ(x) = \frac{1}{\sqrt{2π}} \int_{−∞}^{x} e^{−u^2/2} du.

We can find a constant c such that

    Pr[−c ≤ Z ≤ c] = Φ(c) − Φ(−c) = erf(c/\sqrt{2}) = 1 − δ,

where erf is the error function [27]. Solving this equation gives the value of c. For example, if δ = 1%, then c = 2.576.
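The constant c and the round bound of Theorem 1 are both easy to reproduce numerically. The following sketch (illustrative only; eps and f are assumed example values, and the bisection on math.erf is our own way of inverting erf, not the paper's method) recovers c ≈ 2.576 for δ = 1% and evaluates the bound, whose growth in ρ = t/f is what later justifies computing n at t = t_max:

```python
import math

def confidence_constant(delta):
    """Solve erf(c / sqrt(2)) = 1 - delta for c by bisection (erf is increasing)."""
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if math.erf(mid / math.sqrt(2.0)) < 1.0 - delta:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def min_rounds(c, rho, eps):
    """Theorem 1: n >= c^2 e^{-rho} (e^rho - e^{-eps*rho})^2 / (1 - e^{-eps*rho})^2."""
    num = c * c * math.exp(-rho) * (math.exp(rho) - math.exp(-eps * rho)) ** 2
    den = (1.0 - math.exp(-eps * rho)) ** 2
    return math.ceil(num / den)

c = confidence_constant(0.01)   # delta = 1%
print(round(c, 3))              # 2.576

# The bound grows with rho = t/f, so evaluating it at t = tmax covers any t <= tmax.
eps, f = 0.05, 1000             # illustrative accuracy target and frame size
bounds = [min_rounds(c, t / f, eps) for t in (200, 500, 1000, 2000)]
assert bounds == sorted(bounds)
```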
Thus, the desired accuracy can be rewritten as

    Pr[|\tilde{t} − t| ≤ εt] = Pr[(1 − ε)t ≤ \tilde{t} ≤ (1 + ε)t]
                             = Pr[(1 − ε)t ≤ f \ln\frac{1 + Y}{Y} ≤ (1 + ε)t]
                             = Pr[\frac{e^{−(1+ε)ρ}}{1 − e^{−(1+ε)ρ}} ≤ Y ≤ \frac{e^{−(1−ε)ρ}}{1 − e^{−(1−ε)ρ}}].

Therefore, if we have

    \frac{1}{σ}(\frac{e^{−(1+ε)ρ}}{1 − e^{−(1+ε)ρ}} − µ) ≤ −c  and  \frac{1}{σ}(\frac{e^{−(1−ε)ρ}}{1 − e^{−(1−ε)ρ}} − µ) ≥ c,

then we can guarantee Pr[|\tilde{t} − t| ≤ εt] ≥ 1 − δ. Combining σ with Eq. 3 and solving the inequalities, we get

    n ≥ \frac{c^2 e^{−ρ} (e^ρ − e^{−ερ})^2}{(1 − e^{−ερ})^2}.  ∎

In practice, the number of tags t is not known a priori, making it difficult to predict the exact number of rounds. However, the minimum number of rounds n is a monotonically increasing function of the load factor ρ; that is, the number of rounds calculated with t = t_max is large enough for the actual t. Therefore, n in line 2 of Algorithm 3 is computed as

    n = \frac{c^2 · e^{−t_max/f} · (e^{t_max/f} − e^{−ε t_max/f})^2}{(1 − e^{−ε t_max/f})^2}.

C. Determine Optimal Parameters f and k

The estimation time of our algorithms is affected by two factors: the number of rounds and the time cost of each round, where the time cost is measured in slots. From the discussion above, the number of rounds n depends on the frame size f. The time cost of a round is either x + 1* (if the number of empty slots observed in that round is smaller than k) or k + log_2 f, which depends on both f and k. Hence, if we select inappropriate values of f and k, the performance of our scheme will be adversely affected. Our remaining problem is to determine the "best" values of the parameters f and k for a given upper bound t_max.

Recall that the probability that the random variable X equals u is P_0^u (1 − P_0), where P_0 = e^{−t/f}. We use the

*Note that one additional slot is needed for the first non-empty slot.