[Figure 5: Training time with different code lengths on (a) CIFAR-10 and (b) NUS-WIDE. x-axis: code length (8-128); y-axis: log training time. Methods: LFH, KSH, MLH, SPLH, ITQ, AGH, LSH, PCAH, SH, SIKH.]

we perform another experiment on smaller datasets where the full supervised information can be used for training. We randomly sample a subset of CIFAR-10 with 5000 points for evaluation. We also include LFH with stochastic learning to better demonstrate its effectiveness. Figure 6 and Figure 7 show the accuracy and computational cost for these methods.

[Figure 6: Accuracy (MAP) on CIFAR-10 subset with full labels. x-axis: code length (32-96); methods: LFH-Full, LFH-Stochastic, KSH-Full, SPLH-Full, MLH-Full.]

We can see that our LFH, even with stochastic learning, achieves higher MAP than the other methods with full labels used. The training speed of LFH with full labels is comparable to that of KSH and SPLH, and much faster than that of MLH. LFH with stochastic learning beats all other methods in training time.

[Figure 7: Computational cost (log training time) on CIFAR-10 subset with full labels. Same methods and code lengths as Figure 6.]

Hence, we can conclude that our LFH method outperforms the other supervised hashing methods in terms of both accuracy and computational cost.

3.7 Case Study

In Figure 8, we demonstrate the Hamming ranking results for some example queries on the CIFAR-10 dataset. For each query image, the top (nearest) ten images returned by the different hashing methods are shown. We use red rectangles to indicate the images that are not in the same class as the query image; that is, the images with red rectangles are wrongly returned results. It is easy to see that our LFH method exhibits minimal errors compared with the other supervised hashing methods.
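The evaluation protocol used throughout this section (rank database items by Hamming distance to a query's hash code, treat same-class items as relevant, and report MAP) can be sketched as follows. This is a minimal illustration, not the paper's code: the random codes and labels stand in for the output of a trained hashing model such as LFH or KSH.

```python
import numpy as np

# Stand-in data: {0,1} hash codes and CIFAR-10-style class labels.
# A real experiment would obtain db_codes/query_codes from a trained model.
rng = np.random.default_rng(0)
n_db, n_query, code_len = 100, 5, 32
db_codes = rng.integers(0, 2, size=(n_db, code_len))
query_codes = rng.integers(0, 2, size=(n_query, code_len))
db_labels = rng.integers(0, 10, size=n_db)
query_labels = rng.integers(0, 10, size=n_query)

def hamming_rank(query, db):
    """Return database indices sorted by Hamming distance to the query."""
    dists = np.count_nonzero(db != query, axis=1)
    return np.argsort(dists, kind="stable")

def average_precision(ranked_relevance):
    """AP over a ranked 0/1 relevance list; 0 if nothing is relevant."""
    rel = np.asarray(ranked_relevance, dtype=float)
    if rel.sum() == 0:
        return 0.0
    cum_hits = np.cumsum(rel)
    precision_at_k = cum_hits / np.arange(1, len(rel) + 1)
    return float((precision_at_k * rel).sum() / rel.sum())

# MAP: mean of per-query AP, with relevance = same class label.
aps = []
for q in range(n_query):
    order = hamming_rank(query_codes[q], db_codes)
    rel = (db_labels[order] == query_labels[q]).astype(int)
    aps.append(average_precision(rel))
print("MAP:", np.mean(aps))
```

Taking `order[:10]` from `hamming_rank` gives exactly the top-ten retrieval lists visualized in the case study, where a red rectangle corresponds to `rel == 0` at that rank.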