Table 2: Training time on TINY-1M (in seconds).

Method   32 bits            64 bits            96 bits            128 bits            256 bits
SGH      34.49              52.37              71.53              89.65               164.23
ITQ      31.72              60.62              89.01              149.18              322.06
AGH      18.60 + 1438.60    19.40 + 1438.60    20.08 + 1438.60    22.48 + 1438.60     25.09 + 1438.60
DGH-I    187.57 + 1438.60   296.99 + 1438.60   518.57 + 1438.60   924.08 + 1438.60    1838.30 + 1438.60
DGH-R    217.06 + 1438.60   360.18 + 1438.60   615.74 + 1438.60   1089.10 + 1438.60   2300.10 + 1438.60
PCAH     4.29               4.54               4.75               5.85                6.49
LSH      1.68               1.77               1.84               2.55                3.76

Table 3: Training time on MIRFLICKR-1M (in seconds).

Method   32 bits            64 bits            96 bits            128 bits            256 bits
SGH      41.51              59.02              74.86              97.25               168.35
ITQ      36.17              64.61              89.50              132.71              285.10
AGH      17.99 + 1564.86    18.80 + 1564.86    20.30 + 1564.86    19.87 + 1564.86     21.60 + 1564.86
DGH-I    85.81 + 1564.86    143.68 + 1564.86   215.41 + 1564.86   352.73 + 1564.86    739.56 + 1564.86
DGH-R    116.25 + 1564.86   206.24 + 1564.86   308.32 + 1564.86   517.97 + 1564.86    1199.44 + 1564.86
PCAH     7.65               7.90               8.47               9.23                10.42
LSH      2.44               2.43               2.71               3.38                4.21

[Figure 3: Top-1000 precision using different numbers of kernel bases (m). Panels: (a) TINY-1M; (b) MIRFLICKR-1M.]

... and 128 bits. The cases with other numbers of bits are similar to these reported results and are omitted to save space. We can see that SGH achieves the best accuracy for different values of K.

4.4 Speed

We report the time consumption on the two datasets in Table 2 and Table 3, respectively. Note that the "+1438.60" and "+1564.86" entries in the two tables denote the anchor graph construction time in AGH and DGH. Including this construction time makes the comparison fair, because the reported time for SGH likewise covers its entire training procedure. We can see that SGH is much faster than AGH and DGH in all cases, and faster than ITQ in most cases. LSH and PCAH are faster than SGH, but their accuracy is not satisfactory.

4.5 Sensitivity to Parameters

Figure 3 shows the Top-1000 precision on TINY-1M and MIRFLICKR-1M for different numbers of kernel bases (m). We can see that better accuracy is achieved with larger m, which is consistent with intuition. However, larger m also incurs higher computation cost, so in real applications a suitable m must be chosen to trade off accuracy against cost; a sketch of the feature construction that m controls follows below.
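To make the role of m concrete, here is a minimal sketch of constructing RBF kernel features from m randomly sampled bases. This is an illustrative reading, not the paper's exact feature map (SGH's centering and bandwidth rules may differ), and the names kernel_features, bases, and sigma are ours:

    import numpy as np

    def kernel_features(X, bases, sigma):
        """Map each row of X to m RBF kernel features:
        K[i, j] = exp(-||x_i - b_j||^2 / (2 * sigma^2))."""
        # Pairwise squared distances via ||x||^2 - 2 x.b + ||b||^2.
        sq = (X ** 2).sum(axis=1, keepdims=True) - 2.0 * X @ bases.T + (bases ** 2).sum(axis=1)
        np.maximum(sq, 0.0, out=sq)  # clamp tiny negatives from round-off
        return np.exp(-sq / (2.0 * sigma ** 2))

    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 64))                       # toy stand-in for TINY-1M features
    m = 300                                                   # number of kernel bases
    bases = X[rng.choice(X.shape[0], size=m, replace=False)]  # bases sampled from the data
    sigma = np.sqrt(X.var(axis=0).sum())                      # heuristic bandwidth; the paper may use another rule
    K = kernel_features(X, bases, sigma)                      # (1000, m) features fed to the hash functions

Both the (n, m) feature matrix and the downstream projections grow linearly in m, which is the computation cost that the tradeoff above refers to.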
[Figure 4: Top-1000 precision using different values of ρ. Panels: (a) TINY-1M; (b) MIRFLICKR-1M.]

Figure 4 shows the Top-1000 precision on TINY-1M and MIRFLICKR-1M for different values of ρ. We can see that the accuracy is not sensitive to ρ for 1 ≤ ρ ≤ 5. One nice phenomenon is that our method achieves the best accuracy at ρ = 2 on both datasets.
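This excerpt does not restate how ρ enters the model; in the SGH formulation it scales the exponential pairwise similarity that the codes are trained to preserve, S_ij = e^{-||x_i - x_j||^2 / ρ}. Under that assumption, the toy sketch below (hypothetical names) shows how S flattens as ρ grows; note that SGH's point is precisely that the full n x n matrix S is never materialized for large n:

    import numpy as np

    def exp_similarity(X, rho):
        """Toy n x n similarity with S[i, j] = exp(-||x_i - x_j||^2 / rho)."""
        sq = (X ** 2).sum(axis=1, keepdims=True) - 2.0 * X @ X.T + (X ** 2).sum(axis=1)
        np.maximum(sq, 0.0, out=sq)  # clamp round-off negatives
        return np.exp(-sq / rho)

    rng = np.random.default_rng(1)
    X = 0.5 * rng.standard_normal((500, 16))  # small toy set; real runs would use normalized image features
    n = X.shape[0]
    for rho in (1.0, 2.0, 5.0):               # the reported range where accuracy is stable
        S = exp_similarity(X, rho)
        mean_off_diag = (S.sum() - n) / (n * n - n)  # exclude the unit diagonal
        print(f"rho={rho}: mean off-diagonal similarity {mean_off_diag:.4f}")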
5 Conclusion

In this paper, we have proposed a novel method, called SGH, for large-scale graph hashing. SGH is scalable because it avoids explicitly computing the similarity matrix, and at the same time it can preserve the entire similarity information in the dataset. Experiments show that SGH can outperform state-of-the-art methods in terms of both accuracy and scalability.

6 Acknowledgements

This work is supported by the NSFC (No. 61472182) and the Fundamental Research Funds for the Central Universities (No. 20620140510).