become NP hard, see Wagner and Wagner (1993) for a discussion. Spectral clustering is a way to solve relaxed versions of those problems. We will see that relaxing Ncut leads to normalized spectral clustering, while relaxing RatioCut leads to unnormalized spectral clustering (see also the tutorial slides by Ding (2004)).

5.1 Approximating RatioCut for k = 2

Let us start with the case of RatioCut and $k = 2$, because the relaxation is easiest to understand in this setting. Our goal is to solve the optimization problem

$$\min_{A \subset V} \operatorname{RatioCut}(A, \bar{A}). \tag{1}$$

We first rewrite the problem in a more convenient form. Given a subset $A \subset V$ we define the vector $f = (f_1, \ldots, f_n)' \in \mathbb{R}^n$ with entries

$$f_i = \begin{cases} \sqrt{|\bar{A}| / |A|} & \text{if } v_i \in A \\ -\sqrt{|A| / |\bar{A}|} & \text{if } v_i \in \bar{A}. \end{cases} \tag{2}$$

Now the RatioCut objective function can be conveniently rewritten using the unnormalized graph Laplacian. This is due to the following calculation:

$$\begin{aligned}
f'Lf &= \frac{1}{2} \sum_{i,j=1}^{n} w_{ij} (f_i - f_j)^2 \\
&= \frac{1}{2} \sum_{i \in A,\, j \in \bar{A}} w_{ij} \left( \sqrt{\frac{|\bar{A}|}{|A|}} + \sqrt{\frac{|A|}{|\bar{A}|}} \right)^2 + \frac{1}{2} \sum_{i \in \bar{A},\, j \in A} w_{ij} \left( -\sqrt{\frac{|\bar{A}|}{|A|}} - \sqrt{\frac{|A|}{|\bar{A}|}} \right)^2 \\
&= \operatorname{cut}(A, \bar{A}) \left( \frac{|\bar{A}|}{|A|} + \frac{|A|}{|\bar{A}|} + 2 \right) \\
&= \operatorname{cut}(A, \bar{A}) \left( \frac{|A| + |\bar{A}|}{|A|} + \frac{|A| + |\bar{A}|}{|\bar{A}|} \right) \\
&= |V| \cdot \operatorname{RatioCut}(A, \bar{A}).
\end{aligned}$$

Additionally, we have

$$\sum_{i=1}^{n} f_i = \sum_{i \in A} \sqrt{\frac{|\bar{A}|}{|A|}} - \sum_{i \in \bar{A}} \sqrt{\frac{|A|}{|\bar{A}|}} = |A| \sqrt{\frac{|\bar{A}|}{|A|}} - |\bar{A}| \sqrt{\frac{|A|}{|\bar{A}|}} = 0.$$

In other words, the vector $f$ as defined in Equation (2) is orthogonal to the constant one vector $\mathbb{1}$. Finally, note that $f$ satisfies

$$\|f\|^2 = \sum_{i=1}^{n} f_i^2 = |A| \frac{|\bar{A}|}{|A|} + |\bar{A}| \frac{|A|}{|\bar{A}|} = |\bar{A}| + |A| = n.$$

Altogether we can see that the problem of minimizing (1) can be equivalently rewritten as

$$\min_{A \subset V} f'Lf \quad \text{subject to } f \perp \mathbb{1},\ f_i \text{ as defined in Eq. (2)},\ \|f\| = \sqrt{n}. \tag{3}$$

This is a discrete optimization problem as the entries of the solution vector $f$ are only allowed to take two particular values, and of course it is still NP hard. The most obvious relaxation in this setting is
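As a quick numerical sanity check of the identity $f'Lf = |V| \cdot \operatorname{RatioCut}(A, \bar{A})$ and of the side constraints in (3), the following sketch evaluates both sides on a small random weighted graph. It is an illustration only, not part of the tutorial; the variable names (`W`, `A`, `Abar`) are our own choices.

```python
import numpy as np

# Sanity check of f'Lf = |V| * RatioCut(A, Abar) on a random weighted graph.
rng = np.random.default_rng(0)
n = 8
W = rng.random((n, n))
W = (W + W.T) / 2            # symmetric weight matrix (w_ij)
np.fill_diagonal(W, 0)       # no self-loops

L = np.diag(W.sum(axis=1)) - W   # unnormalized graph Laplacian L = D - W

A = np.array([0, 1, 2])                # an arbitrary subset A of V
Abar = np.setdiff1d(np.arange(n), A)   # its complement

# cut(A, Abar) = sum of w_ij over i in A, j in Abar
cut = W[np.ix_(A, Abar)].sum()
ratio_cut = cut / len(A) + cut / len(Abar)

# The indicator-like vector f from Eq. (2).
f = np.where(np.isin(np.arange(n), A),
             np.sqrt(len(Abar) / len(A)),
             -np.sqrt(len(A) / len(Abar)))

assert np.isclose(f @ L @ f, n * ratio_cut)   # f'Lf = |V| * RatioCut(A, Abar)
assert np.isclose(f.sum(), 0)                 # f is orthogonal to the 1 vector
assert np.isclose(f @ f, n)                   # ||f||^2 = n
```

Running the checks for any choice of the subset `A` confirms the calculation above: the objective in (3) differs from RatioCut only by the constant factor $|V|$, so minimizing one minimizes the other.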