JIANG et al.: DEEP DISCRETE SUPERVISED HASHING

TABLE V
MAP OF THE HAMMING RANKING TASK ON CIFAR-10 DATASET.
THE BEST ACCURACY IS SHOWN IN BOLDFACE.

Method     12 bits   24 bits   32 bits   48 bits
DDSH       0.7695    0.8289    0.8352    0.8194
DSDH       0.7442    0.7868    0.7991    0.8142
DPSH       0.6844    0.7225    0.7396    0.7460
DSH        0.6457    0.7492    0.7857    0.8113
DHN        0.6725    0.7107    0.7045    0.7135
NDH        0.5620    0.6404    0.655     0.6772
COSDISH    0.6085    0.6827    0.6959    0.7158
SDH        0.5200    0.6461    0.6577    0.6688
FastH      0.6202    0.6731    0.6870    0.7163
LFH        0.4009    0.647     0.6571    0.6995
ITQ        0.2580    0.2725    0.2834    0.2936
LSH        0.1468    0.1725    0.1798    0.1929

TABLE VI
MAP OF THE HAMMING RANKING TASK ON SVHN DATASET.
THE BEST ACCURACY IS SHOWN IN BOLDFACE.

Method     12 bits   24 bits   32 bits   48 bits
DDSH       0.5735    0.6744    0.7031    0.7184
DSDH       0.5121    0.5670    0.5866    0.5839
DPSH       0.3790    0.4216    0.4337    0.4557
DSH        0.3702    0.4802    0.5232    0.5828
DHN        0.3800    0.4096    0.4158    0.4302
NDH        0.2177    0.2710    0.2563    0.2803
COSDISH    0.2381    0.2951    0396      0.3408
SDH        0.1509    0.2996    0.3202    0.3335
FastH      0.2516    0.2961    0.3177    0.3436
LFH        0.1933    0.2558    0.2839    0.3253
ITQ        0.1108    0.1138    0.1149    0.1159
LSH        0.1074    0.1082    0.1093    0.1109

TABLE VII
MAP OF THE HAMMING RANKING TASK ON NUS-WIDE DATASET.
THE BEST ACCURACY IS SHOWN IN BOLDFACE.

Method     12 bits   24 bits   32 bits   48 bits
DDSH       0.7911    0.8165    0.8217    0.8259
DSDH       0.7916    0.8059    0.8063    0.8180
DPSH       0.7882    0.8085    0.8167    0.8234
DSH        0.7622    0.7940    0.7968    0.8081
DHN        0.7900    0.8101    0.8092    0.8180
NDH        0.7015    0.7351    0.7447    0.7449
COSDISH    0.733     0.7643    0.7868    0.7993
SDH        0.7385    0.7616    0.7697    0.7720
FastH      0.7412    0.7830    0.7948    0.8085
LFH        0.7049    0.7594    0.7778    0.7936
ITQ        0.5053    0.5037    0503      0.5054
LSH        0.3407    0.3506    0.3509    0.3706

TABLE VIII
MAP OF THE HAMMING RANKING TASK ON CLOTHING1M DATASET.
THE BEST ACCURACY IS SHOWN IN BOLDFACE.

Method     12 bits   24 bits   32 bits   48 bits
DDSH       0.2763    0.3667    0.3878    0.4008
DSDH       0.2903    0.3285    0.3413    0.3475
DPSH       0.1950    0.2087    0.2162    0.2181
DSH        0.1730    0.1870    0.1912    0.2021
DHN        0.1909    0.2243    0.2120    0.2488
NDH        0.1857    0.2276    0.2338    0.2354
COSDISH    0.1871    0.2358    0.2567    0.2756
SDH        0.1518    0.1865    0.1941    0.1973
FastH      0.1736    0.2066    0.2167    0.2440
LFH        0.1548    0.1591    0.2128    0.2579
ITQ        0.50      0.1214    0.1228    0.1259
LSH        0.0834    0.0894    0.0914    0.0920

In our experiment, ground-truth neighbors are defined based on whether two data points share at least one class label. We carry out the Hamming ranking task and the hash lookup task to evaluate DDSH and the baselines. For the Hamming ranking task, we report the Mean Average Precision (MAP), top-k precision, precision-recall curves and a case study.
Specifically, given a query x_q, we can calculate its average precision (AP) as

AP(x_q) = \frac{1}{R} \sum_{k=1}^{N} P(k) I_1(k),

where R is the number of relevant samples, N is the number of returned samples, P(k) is the precision at cut-off k in the returned sample list, and I_1(k) is an indicator function which equals 1 if the k-th returned sample is a ground-truth neighbor of x_q and 0 otherwise. Given Q queries, we can compute the MAP as

MAP = \frac{1}{Q} \sum_{q=1}^{Q} AP(x_q).

Because NUS-WIDE is relatively large, the MAP value on NUS-WIDE is calculated based on the top 5000 returned neighbors. The MAP values for the other datasets are calculated over the whole retrieval set.

For the hash lookup task, we report the mean hash lookup success rate (SR) within Hamming radius 0, 1 and 2 [28]. When at least one ground-truth neighbor is retrieved within the given Hamming radius, we call the lookup a success. The hash lookup success rate can be calculated as

SR = \frac{1}{Q} \sum_{q=1}^{Q} I(\#\text{retrieved ground-truth neighbors of } x_q > 0).

Here, I(\cdot) is an indicator function, i.e., I(true) = 1 and I(false) = 0, and Q is the total number of query images.

B. Experimental Result

1) Hamming Ranking Task: Table V, Table VI, Table VII and Table VIII report the MAP results on the CIFAR-10, SVHN, NUS-WIDE and Clothing1M datasets, respectively. We can easily find that our DDSH achieves state-of-the-art retrieval accuracy in most cases compared with all baselines, including deep hashing methods, non-deep supervised hashing methods, non-deep unsupervised hashing methods and data-independent methods.
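As a concrete illustration, the evaluation metrics above (AP/MAP and the hash-lookup success rate) can be sketched in plain Python. This is a minimal sketch under our own conventions: the function names and the 0/1-list representations of hash codes and relevance labels are ours, not from the paper.

```python
def average_precision(ranked_hits, num_relevant):
    """AP for one query. ranked_hits[k-1] is 1 if the k-th returned
    sample is a ground-truth neighbor of the query, else 0.
    num_relevant is R, the total number of relevant samples."""
    hits, ap = 0, 0.0
    for k, hit in enumerate(ranked_hits, start=1):
        if hit:                      # I_1(k) = 1
            hits += 1
            ap += hits / k           # P(k): precision at cut-off k
    return ap / num_relevant if num_relevant else 0.0

def mean_average_precision(ranked_hits_per_query, num_relevant_per_query):
    """MAP over Q queries: the mean of the per-query AP values."""
    aps = [average_precision(h, r)
           for h, r in zip(ranked_hits_per_query, num_relevant_per_query)]
    return sum(aps) / len(aps)

def hamming(a, b):
    """Hamming distance between two binary codes given as 0/1 lists."""
    return sum(x != y for x, y in zip(a, b))

def lookup_success_rate(query_codes, db_codes, relevance, radius=2):
    """SR: fraction of queries for which at least one ground-truth
    neighbor lies within the given Hamming radius.
    relevance[q][i] is 1 if database item i is a true neighbor of query q."""
    success = sum(
        1 for q, code in enumerate(query_codes)
        if any(rel and hamming(code, db) <= radius
               for db, rel in zip(db_codes, relevance[q]))
    )
    return success / len(query_codes)
```

Note that only positions with I_1(k) = 1 contribute to the AP sum, which is why precision is accumulated only at hits; this matches the cut-off formulation of the equation above.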
By comparing ITQ to LSH, we can find that data-dependent hashing methods can significantly outperform data-independent hashing methods. By comparing NDH, COSDISH, SDH, FastH and LFH to ITQ, we can find that supervised methods can outperform unsupervised methods because they exploit supervised information.