Scalable Graph Hashing with Feature Transformation: Model and Learning
Sequential Learning Strategy

• Direct relaxation may lead to poor performance.
• We adopt a sequential learning strategy in a bit-wise complementary manner.
• Residual definition:
  $$R_t = c\widetilde{S} - \sum_{i=1}^{t-1} \mathrm{sgn}(K(X)w_i)\,\mathrm{sgn}(K(X)w_i)^T, \qquad R_1 = c\widetilde{S}$$
• Objective function:
  $$\min_{w_t}\ \big\|R_t - \mathrm{sgn}(K(X)w_t)\,\mathrm{sgn}(K(X)w_t)^T\big\|_F^2 \quad \text{s.t.}\ w_t^T K(X)^T K(X) w_t = 1$$
• By relaxation, we can get:
  $$\min_{w_t}\ -\mathrm{tr}\big(w_t^T K(X)^T R_t K(X) w_t\big) \quad \text{s.t.}\ w_t^T K(X)^T K(X) w_t = 1$$

Li (http://cs.nju.edu.cn/lwj), Learning to Hash, LAMDA, CS, NJU, 26/43
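The slide jumps from the Frobenius-norm objective to the trace form. A short expansion (standard algebra, not shown on the slide) fills in the step. Writing $b_t = \mathrm{sgn}(K(X)w_t) \in \{-1,+1\}^n$ and using the symmetry of $R_t$:

$$\|R_t - b_t b_t^T\|_F^2 = \|R_t\|_F^2 - 2\, b_t^T R_t b_t + \|b_t b_t^T\|_F^2 = \mathrm{const} - 2\, b_t^T R_t b_t + n^2,$$

since $\|b_t b_t^T\|_F^2 = (b_t^T b_t)^2 = n^2$ and $\|R_t\|_F^2$ does not depend on $w_t$. Minimizing the objective is therefore equivalent to maximizing $b_t^T R_t b_t$; replacing the discrete $b_t$ with the real-valued relaxation $K(X)w_t$ gives $\max_{w_t} \mathrm{tr}\big(w_t^T K(X)^T R_t K(X) w_t\big)$, i.e., the stated minimization of its negative, with the constraint $w_t^T K(X)^T K(X) w_t = 1$ fixing the scale that the relaxation would otherwise leave free.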
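The relaxed per-bit problem is a generalized eigenproblem: the optimizer of $\max_{w}\ w^T K(X)^T R_t K(X) w$ subject to $w^T K(X)^T K(X) w = 1$ is the top generalized eigenvector of $K(X)^T R_t K(X)\, w = \lambda\, K(X)^T K(X)\, w$. Below is a minimal NumPy/SciPy sketch of the bit-wise loop, assuming the kernel feature matrix K (i.e., $K(X)$), the transformed similarity matrix S_tilde, the scaling constant c, and num_bits are given; the names and the ridge term reg are illustrative, not from the slide.

    # A minimal sketch of the sequential (bit-wise complementary) learning loop.
    # Assumptions (hypothetical, not from the slide): K is K(X) of shape (n, m),
    # S_tilde is the dense transformed similarity matrix, c scales the residual.
    import numpy as np
    from scipy.linalg import eigh

    def sequential_learning(K, S_tilde, c, num_bits, reg=1e-6):
        """Learn projection vectors w_1..w_Q one bit at a time.

        Each step solves the relaxed problem
            max_w  w^T K^T R_t K w   s.t.  w^T K^T K w = 1,
        i.e. the top eigenpair of the generalized eigenproblem
            (K^T R_t K) w = lambda (K^T K) w.
        """
        n, m = K.shape
        R = c * S_tilde                      # R_1 = c * S_tilde
        KtK = K.T @ K + reg * np.eye(m)      # ridge term keeps KtK positive definite
        W = np.zeros((m, num_bits))
        for t in range(num_bits):
            A = K.T @ R @ K
            A = (A + A.T) / 2                # symmetrize against round-off
            # Largest generalized eigenvector of A w = lambda * KtK w.
            _, vecs = eigh(A, KtK, subset_by_index=[m - 1, m - 1])
            w = vecs[:, 0]
            W[:, t] = w
            b = np.sign(K @ w)               # t-th bit for all n points
            b[b == 0] = 1                    # break ties away from zero
            R = R - np.outer(b, b)           # residual update: R_{t+1}
        return W

In this loop the t-th bit is trained on the residual left by the first t − 1 bits, which is exactly the "bit-wise complementary manner" the slide refers to.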