The greedy approximation algorithm is guaranteed to achieve at least a value of Emax/2.

Proof. In the greedy approximation algorithm, we observe that the selected item(s) must occupy at least half of the resource, i.e., the number of storages, in the sacks; otherwise, we could continue to insert items into the corresponding sacks and attain a better solution. Suppose in the worst case the first selected item occupies more than half of the resource in a sack; then the rest of that sack cannot hold any other kind of item. As the reduced energy per storage unit, E[i][j]/N[j], is maximized for every sack in the approximate solution, the approximate solution achieves a total reduced energy of E* ≥ (1/2)·max{E[i][j]·K/N[j]}. It is apparent that in the optimal solution the total reduced energy Emax should be less than max{E[i][j]·K/N[j]}; therefore E* ≥ Emax/2, which proves the theorem.

Now consider the compute complexity of Algorithm 2. We know that the number of sack-item pairs (i, j) is m·M, so the compute complexity of the sorting is O(mM log(mM)); moreover, the compute complexity of the subsequent iteration is O(mM). Therefore, the overall compute complexity of the greedy approximation algorithm is O(mM log(mM)), which is independent of the number of storages K. As K can be a rather large number, the compute complexity is greatly reduced compared with the compute complexity O(m·M·K·π/θ) of the dynamic programming based algorithm.
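To make the sort-then-iterate structure behind the O(mM log(mM)) + O(mM) bound concrete, the following is a minimal Python sketch of a density-based greedy for the unbounded-knapsack formulation. It is an illustration under assumptions, not a reproduction of Algorithm 2: the arrays E and N, the single shared budget of K storages, and the toy numbers are hypothetical placeholders for the paper's actual data structures.

```python
# Illustrative sketch (not the paper's exact Algorithm 2): a density-based
# greedy for the unbounded-knapsack formulation. E[i][j] denotes the reduced
# energy obtained by placing candidate j in fan-shaped partition (sack) i,
# N[i][j] the number of storage nodes that choice consumes, and K the total
# storage budget; a single shared budget is assumed here for simplicity.

def greedy_storage_placement(E, N, K):
    """Return (total_reduced_energy, placement), where placement maps
    (sack, item) -> number of copies selected."""
    # Collect all m*M sack-item pairs with their per-storage gain.
    pairs = [(E[i][j] / N[i][j], i, j)
             for i in range(len(E)) for j in range(len(E[i]))
             if N[i][j] > 0]
    # O(mM log(mM)) sort by reduced energy per storage unit, best first.
    pairs.sort(reverse=True)

    remaining = K
    total = 0.0
    placement = {}
    # O(mM) pass: take as many copies of each pair as still fit.
    for density, i, j in pairs:
        copies = remaining // N[i][j]
        if copies > 0:
            placement[(i, j)] = copies
            total += copies * E[i][j]
            remaining -= copies * N[i][j]
        if remaining == 0:
            break
    return total, placement


# Toy usage with hypothetical numbers (m = 2 sacks, M = 3 candidate items).
E_toy = [[4.0, 9.0, 5.0], [6.0, 3.0, 8.0]]
N_toy = [[2, 5, 3], [4, 2, 5]]
print(greedy_storage_placement(E_toy, N_toy, K=10))
```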
6. PERFORMANCE ANALYSIS

In this section we compare the compute complexity of our optimization methodology with that of the former research work in [5].

According to the dynamic programming based algorithms depicted in [5], for the unlimited storage problem they maintain a table E*[1..N] over all N nodes, which holds the minimum energy cost of every subtree rooted at node i = 1, ..., N. They also maintain a second table Ef[1..N], which records the energy cost of every subtree when all nodes in the subtree are forwarding nodes. According to their analysis, the time complexity of their dynamic programming based algorithm is O(N), where N is the number of nodes. For the K-limited storage problem, they maintain a two-dimensional (K + 1)·(N − 1) table, Ei[m, l], at each node i. According to their analysis, given a communication tree with N nodes and at most C children per parent, the compute complexity of their dynamic programming based algorithm is O(K·N^2·(max{K, C})^(C−1)).

Since the overall number of sensor nodes N can be huge in a large-scale sensor network, the computing complexity of the above algorithms becomes extremely large. Moreover, maintaining a two-dimensional table at each node is generally intolerable for sensors with limited resources. In large-scale sensor networks the number of hops, n, is much smaller than the number of sensor nodes, i.e., n ≪ N; therefore, in this paper we use the hop as the computing unit instead of the node. According to our optimized methodologies, for the unlimited storage problem we can calculate the optimized hop-count kopt through Eq.(6) with constant time and space complexity, which we denote as O(1). For the limited storage problem, we reduce it to the unbounded knapsack problem and propose, respectively, a dynamic programming based algorithm and a greedy approximation algorithm. The dynamic programming based algorithm maintains a four-dimensional table and its compute complexity is O(m·M·K·π/θ), where m is the number of fan-shaped partitions, M is the size of the reduced search space, K is the number of storages used, and θ is the angle of the unit zone. Moreover, the greedy approximation algorithm has a compute complexity of O(mM log(mM)), which is independent of the number of storages K. As K can be a rather large number, the compute complexity is greatly reduced.

7. PERFORMANCE EVALUATION

We have implemented a simulator to evaluate the performance of our solutions. In the simulation, we consider a situation where sensors are uniformly deployed in a region with a randomly generated shape. We randomly select a sensor node inside the deployment area as the sink. The overall number of sensor nodes is N = 1000. The deployment region covers an area of 6000 m²; hence on average each sensor node covers an area of 6 m². We set the average hop distance d = 3 m. Based on the above settings, in the simulations we observe that the maximum number of hops n lies within the range [10, 15] with a 95% confidence interval. Unless otherwise stated, we use the following parameters in our simulations: query rate rq = 1, query size sq = 1, data generation rate rd = 1, and data size sd = 1; we set the reply reduction rate α = 0.5, and for ease of computation we set the energy costs for transmitting and receiving a unit of data to etr = ere = 1. In addition, we use the relative energy cost as the performance metric. We take the case in which no storage node except the sink is deployed as the baseline. Let the energy cost for this no-storage scenario be Ef, and let the energy cost after the storage nodes are deployed be Es. The relative energy cost is defined as Es/Ef. In the following, each result is obtained by averaging over 1000 independent experiments.
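As a concrete illustration of how the relative energy cost metric is computed, the sketch below shows the averaging procedure over independent trials. The functions energy_no_storage and energy_with_storage, the toy traffic model, and the toy placement are hypothetical placeholders for the simulator's actual energy model (which depends on rq, sq, rd, sd, α, etr, and ere as defined above); only the Es/Ef ratio and the averaging over independent experiments are taken from the text.

```python
import random

# Hypothetical stand-ins for the simulator's energy model; in the real
# simulator these depend on the generated topology and on the parameters
# rq = sq = rd = sd = 1, alpha = 0.5, etr = ere = 1 given in the text.
def energy_no_storage(topology):
    return sum(node["traffic"] for node in topology)            # baseline Ef

def energy_with_storage(topology, placement):
    # Placeholder model: storage nodes cut their reply traffic by alpha.
    alpha = 0.5
    return sum(node["traffic"] * (alpha if node["id"] in placement else 1.0)
               for node in topology)                             # Es

def relative_energy_cost(trials=1000):
    """Average Es/Ef over independently generated random deployments."""
    ratios = []
    for _ in range(trials):
        topology = [{"id": i, "traffic": random.uniform(1, 10)}
                    for i in range(1000)]                        # N = 1000 nodes
        placement = set(random.sample(range(1000), 50))          # toy placement
        ef = energy_no_storage(topology)
        es = energy_with_storage(topology, placement)
        ratios.append(es / ef)
    return sum(ratios) / len(ratios)

print(relative_energy_cost(trials=10))  # small run for illustration
```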
7.1 Effect of the Optimized Placement with Unlimited Number of Storages

Fig.6(a) shows the energy cost with varying values of the storage area bound; here we set rq = 1.6 and rd = 1. We select a specific fan-shaped partition divided from the deployment area as the test case, in which the topology is fairly regular. As the maximum number of hops in this partition is n = 10, we vary the storage area bound from the 1st hop to the 10th hop. We observe that the energy cost starts at 0.97 and decreases as the storage area bound increases; after it reaches the lowest value of 0.91, it increases again with the storage area bound and finally reaches 1.01 for the full-storage case. The lowest point of the energy cost curve lies at a bound of 5, which exactly matches our optimized parameter kopt = ⌊5.3⌋ = 5, demonstrating the effectiveness of the optimized parameter kopt calculated by our methodologies.

In Fig.6(b) we compare the optimized storage placement with the random storage placement in terms of energy cost as the reply reduction rate α varies. In the random storage