[Figure 4: MAP results with different code lengths. (a) CIFAR-10; (b) NUS-WIDE. Methods compared: LFH, KSH, MLH, SPLH, ITQ, AGH, LSH, PCAH, SH, SIKH.]

[Figure 2: Convergence curve (objective function vs. iteration, for LFH-Full and LFH-Stochastic).]

[Figure 3: MAP during the iterations (LFH-Full and LFH-Stochastic).]

achieve better accuracy than data-independent and unsupervised data-dependent methods. Furthermore, the accuracy of our LFH method is much higher than that of the other methods, including the supervised data-dependent methods KSH, SPLH, and MLH.

The precision-recall curves with different code lengths are illustrated in the Appendix at the end of this paper (refer to Figure 9 and Figure 10), which also show that our LFH method can significantly outperform other state-of-the-art hashing methods.

3.5 Computational Cost

Figure 5(a) and Figure 5(b) show the average training time of different hashing methods with different code lengths on the CIFAR-10 and NUS-WIDE datasets, respectively. The reported values are in seconds on a logarithmic scale.

We can find that the data-independent hashing methods require the least training time, while the supervised data-dependent hashing methods need the most. Compared to other supervised data-dependent hashing methods, the training time of LFH is much smaller than that of MLH and is comparable to that of KSH and SPLH. For large code lengths, our LFH is even faster than KSH and SPLH. This is because the number of iterations needed to learn U decreases as the code length increases.
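The MAP values reported in the figures above can be computed by ranking the database by Hamming distance to each query and averaging the precision at each rank where a relevant item appears. A minimal illustrative sketch (not the authors' evaluation code; function names and the 0/1 code representation are assumptions):

```python
def hamming(a, b):
    """Number of bit positions where two binary codes differ."""
    return sum(x != y for x, y in zip(a, b))

def mean_average_precision(query_codes, db_codes, relevance):
    """MAP over Hamming-ranked retrieval.

    query_codes / db_codes: sequences of equal-length 0/1 tuples.
    relevance[i][j]: True if database item j is relevant to query i
    (e.g., shares a class label with the query).
    """
    aps = []
    for q, rel in zip(query_codes, relevance):
        # Rank database items by Hamming distance to the query
        # (Python's sort is stable, so ties keep database order).
        order = sorted(range(len(db_codes)),
                       key=lambda j: hamming(q, db_codes[j]))
        hits, precisions = 0, []
        for rank, j in enumerate(order, start=1):
            if rel[j]:
                hits += 1
                precisions.append(hits / rank)  # precision at this rank
        if precisions:
            aps.append(sum(precisions) / len(precisions))
    return sum(aps) / len(aps) if aps else 0.0
```

For example, a query with code (0, 0) against database codes (0, 0), (1, 1), (0, 1), where the first two items are relevant, yields the ranking [item 0, item 2, item 1] and an average precision of (1/1 + 2/3)/2 = 5/6.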
3.6 Performance using Full Supervised Information

For the results reported in Section 3.4 and Section 3.5, we adopt the same strategy as that in [17] to train KSH, SPLH, and MLH by sampling only 2,000 labeled points due to their high time complexity. To get a deeper comparison