IEEE TRANSACTIONS ON INFORMATION THEORY

TABLE III: Structural information, von Neumann graph entropy, and entropy gap of specific graphs.

Graph Types                    | Structural information H1   | von Neumann graph entropy Hvn                                         | Entropy gap ΔH
Complete graph Kn              | log2 n                      | log2(n − 1)                                                           | log2(1 + 1/(n − 1))
Complete bipartite graph Ka,b  | 1 + (1/2) log2(ab)          | 1 + (1/2) log2(ab) − log2(1 + b/a)/(2b) − log2(1 + a/b)/(2a)          | log2(1 + b/a)/(2b) + log2(1 + a/b)/(2a)
Path Pn                        | log2(n − 1) + 1/(n − 1)     | log2(n − 1) + 1 − log2 e                                              | log2 e − 1
Ring Rn                        | log2 n                      | log2 n + 1 − log2 e                                                   | log2 e − 1

H1(Gl+1) is maximized, where Gl+1 = (V, El ∪ {el+1}). Since H1(Gl+1) can be rewritten as

    H1(Gl+1) = log2(2|El| + 2) − [f(du + 1) + f(dv + 1) + Σ_{i≠u,v} f(di)] / (2|El| + 2),

the edge el+1 maximizing H1(Gl+1) should also minimize the edge centrality EC(u, v) = f(du + 1) − f(du) + f(dv + 1) − f(dv), where di is the degree of node i in Gl.

We present the pseudocode of our fast algorithm EntropyAug in Algorithm 1, which leverages the pruning strategy to accelerate the process of finding a single new edge that creates the largest increase of the von Neumann entropy. EntropyAug starts by initiating an empty set F used to contain the node pairs to be found and an entropy value H used to record the maximum structural information in the graph evolution process (line 1). In each following iteration, it sorts the set of nodes V in non-decreasing degree order (line 3). Note that the edge centrality EC(u, v) has a nice monotonic property: EC(u1, v1) ≤ EC(u2, v2) if min{du1, dv1} ≤ min{du2, dv2} and max{du1, dv1} ≤ max{du2, dv2}.
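As a minimal sketch of the edge centrality above, assume f(d) = d log2 d (with f(0) = 0), which is consistent with the rewriting of H1 in terms of node degrees; the function names here are illustrative, not from the paper:

```python
import math

def f(d: int) -> float:
    """f(d) = d * log2(d), with the convention f(0) = 0."""
    return d * math.log2(d) if d > 0 else 0.0

def edge_centrality(du: int, dv: int) -> float:
    """EC(u, v) = f(du + 1) - f(du) + f(dv + 1) - f(dv),
    where du, dv are the current degrees of the endpoints."""
    return f(du + 1) - f(du) + f(dv + 1) - f(dv)

# Monotonic property: since f is convex and increasing, raising either
# endpoint degree never lowers EC, which is what enables the pruning below.
print(edge_centrality(1, 2) <= edge_centrality(1, 3))  # True
print(edge_centrality(2, 3) <= edge_centrality(4, 5))  # True
```

Because f(d + 1) − f(d) grows with d, the pair of lowest-degree non-adjacent nodes is a natural candidate for the minimizer, which is why sorting by degree pays off.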
With the sorted list of nodes Vs, the monotonicity of EC(u, v) can be translated into EC(Vs[i1], Vs[j1]) ≤ EC(Vs[i2], Vs[j2]) if the indices satisfy i1 < j1, i2 < j2, i1 < i2, and j1 < j2. Thus, using the two pointers {head, tail} and a threshold T, it can prune the search space and find the desired non-adjacent node pair as fast as possible (lines 4-12). It then adds the non-adjacent node pair minimizing EC(u, v) into F and updates the graph G (line 13). The structural information of the updated graph is computed to determine whether F is the optimal subset up to the current iteration (lines 14-15).

Algorithm 1: EntropyAug
Input: The graph G = (V, E) of order n, the budget k
Output: A set of node pairs
 1  F ← ∅, H ← 0;
 2  while |F| < k do
 3      Vs: list ← sort V in non-decreasing degree order;
 4      head ← 0, tail ← |Vs| − 1, T ← +∞;
 5      while head < tail do
 6          for i = head + 1, head + 2, . . . , tail do
 7              if EC(Vs[head], Vs[i]) ≥ T then
 8                  tail ← i − 1; break;
 9              if (Vs[head], Vs[i]) ∉ E then
10                  u ← Vs[head], v ← Vs[i], T ← EC(u, v);
11                  tail ← i − 1; break;
12          head ← head + 1;
13      E ← E ∪ {(u, v)}, F ← F ∪ {(u, v)};
14      if H1(G) > H then H ← H1(G), F* ← F;
15      if H = log2 n then break;
16  return F*;

B. Graph Similarity Measure

Entropy-based graph similarity measures aim to compare graphs using the Jensen-Shannon divergence. The Jensen-Shannon divergence, a symmetrized and smoothed version of the Kullback-Leibler divergence, is defined formally in the following Definition 5.

Definition 5 (Jensen-Shannon divergence): Let P and Q be two probability distributions on the same support set ΩN = {1, . . . , N}, where P = (p1, . . . , pN) and Q = (q1, . . . , qN). The Jensen-Shannon divergence between P and Q is defined as

    DJS(P, Q) = H((P + Q)/2) − H(P)/2 − H(Q)/2,

where H(P) = −Σ_{i=1}^{N} pi log pi is the entropy of the distribution P.

The square root of DJS(P, Q), also known as the Jensen-Shannon distance, has been proved [51] to be a bounded metric on the space of distributions over ΩN, with its maximum value √log 2 being attained when min{pi, qi} = 0 for each i ∈ ΩN. However, the Jensen-Shannon distance cannot measure the similarity between high-dimensional objects such as matrices and graphs. Therefore, Majtey et al. [24] introduce a quantum Jensen-Shannon divergence to measure the similarity between mixed quantum states, the formal definition of which is presented in the following.

Definition 6 (Quantum Jensen-Shannon divergence between quantum states): The quantum Jensen-Shannon divergence between two quantum states ρ and σ is defined as

    DQJS(ρ, σ) = tr( (ρ log ρ + σ log σ)/2 − ((ρ + σ)/2) log((ρ + σ)/2) ),

where ρ, σ are symmetric and positive semi-definite density matrices with tr(ρ) = 1 and tr(σ) = 1.

The square root of DQJS(ρ, σ) has also been proved to be a metric on the space of density matrices [52], [53]. Since the Laplacian matrix L of a weighted undirected graph G is symmetric and positive semi-definite, we can view L/tr(L) as a density matrix. Therefore, the quantum Jensen-Shannon divergence can be used to measure the similarity between two graphs G1 and G2, given that they share the same node set, i.e., V(G1) = V(G2).

Definition 7 (Quantum Jensen-Shannon divergence between graphs): The quantum Jensen-Shannon divergence between
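The quantum Jensen-Shannon divergence of Definition 6, applied to graphs via the density matrix L/tr(L), can be sketched as follows. This is an illustrative computation (NumPy assumed, not the paper's implementation) that uses the equivalent form DQJS(ρ, σ) = S((ρ + σ)/2) − S(ρ)/2 − S(σ)/2, where S is the von Neumann entropy evaluated from eigenvalues; natural log is used, and the toy graphs are hypothetical:

```python
import numpy as np

def density_matrix(adj: np.ndarray) -> np.ndarray:
    """Laplacian L = D - A scaled to unit trace, viewed as a density matrix."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return lap / np.trace(lap)

def vn_entropy(rho: np.ndarray) -> float:
    """Von Neumann entropy S(rho) = -tr(rho log rho), with 0 log 0 = 0."""
    eig = np.linalg.eigvalsh(rho)
    eig = eig[eig > 1e-12]  # drop zero eigenvalues
    return float(-(eig * np.log(eig)).sum())

def qjsd(rho: np.ndarray, sigma: np.ndarray) -> float:
    """D_QJS(rho, sigma) = S((rho + sigma)/2) - S(rho)/2 - S(sigma)/2."""
    return vn_entropy((rho + sigma) / 2) - vn_entropy(rho) / 2 - vn_entropy(sigma) / 2

# Two toy graphs on the same 4-node set, as Definition 7 requires: a path and a ring.
path = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)
ring = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], dtype=float)

print(qjsd(density_matrix(path), density_matrix(path)))  # identical graphs -> 0
print(qjsd(density_matrix(path), density_matrix(ring)) > 0)  # distinct graphs -> positive
```

The eigenvalue route avoids computing matrix logarithms directly: since each density matrix is symmetric, tr(ρ log ρ) reduces to a sum over its spectrum.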