[Figure: eight precision-recall panels (I→T and T→I at 8, 16, 32 and 64 bits; x-axis: Recall, y-axis: Precision) comparing CCA-ITQ, CMFH, MLBE, SCM, SePH_rnd, SePH_km, GSPH, DLFH and KDLFH.]
Fig. 2. Precision-recall curves on the IAPR-TC12 dataset.

[Figure: eight precision-recall panels (I→T and T→I at 8, 16, 32 and 64 bits; x-axis: Recall, y-axis: Precision), same methods as Fig. 2.]
Fig. 3. Precision-recall curves on the MIRFLICKR-25K dataset.

E. Hash Lookup Task

In real applications, retrieval with hash lookup can usually achieve constant or sub-linear search speed. For the hash lookup protocol, we report precision-recall curves to evaluate the proposed DLFH, KDLFH and the baselines on three datasets. Figure 2, Figure 3 and Figure 4 show the precision-recall curves on the IAPR-TC12, MIRFLICKR-25K and NUS-WIDE datasets, respectively. Once again, we find that DLFH and KDLFH significantly outperform all baselines in all cases.
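For concreteness, the hash-lookup evaluation can be sketched as follows. This is a minimal illustration rather than the authors' code: it assumes binary codes stored as ±1 matrices (for the I→T task, query codes come from the image modality and database codes from the text modality), a boolean ground-truth relevance matrix in which two items are relevant if they share at least one label, and a sweep of the Hamming radius from 0 to the code length, yielding one precision-recall point per radius.

```python
import numpy as np

def pr_curve_hash_lookup(query_codes, db_codes, relevance):
    """Precision-recall curve over Hamming radii for hash-lookup retrieval.

    query_codes: (m, c) array with entries in {-1, +1}.
    db_codes:    (n, c) array with entries in {-1, +1}.
    relevance:   (m, n) boolean array; relevance[i, j] is True when database
                 item j shares at least one label with query i.
    Returns (precision, recall) arrays with one entry per radius 0..c.
    """
    m, c = query_codes.shape
    # Hamming distance between +/-1 codes: d = (c - <u, v>) / 2.
    dist = (c - query_codes @ db_codes.T) / 2            # (m, n)

    precisions, recalls = [], []
    for radius in range(c + 1):
        retrieved = dist <= radius                       # lookup within radius
        hits = np.logical_and(retrieved, relevance)
        n_retrieved = retrieved.sum(axis=1)
        # Convention choice: precision is set to 1.0 for queries whose
        # bucket is empty (some implementations use 0 instead).
        prec = np.where(n_retrieved > 0,
                        hits.sum(axis=1) / np.maximum(n_retrieved, 1), 1.0)
        rec = hits.sum(axis=1) / np.maximum(relevance.sum(axis=1), 1)
        precisions.append(prec.mean())
        recalls.append(rec.mean())
    return np.array(precisions), np.array(recalls)
```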
F. Training Speed

To evaluate the training speed of DLFH, we sample different numbers of data points from the retrieval set to construct the training set, and then report the training time. Table VI presents the training time (in seconds) for our DLFH and the baselines, where "–" denotes that the corresponding experiment could not be carried out due to out-of-memory errors. We find that the unsupervised method CCA-ITQ is the fastest because it does not use supervised information for training. Although CCA-ITQ is fast to train, its accuracy is low.
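The timing protocol amounts to fitting each method on subsamples of the retrieval set of increasing size and recording wall-clock time. A rough sketch is given below; `fit_fn` is a hypothetical training routine and the subsample sizes are illustrative, not the interface or the settings used in the paper.

```python
import time
import numpy as np

def benchmark_training(fit_fn, retrieval_features, retrieval_labels,
                       sizes=(1000, 5000, 10000, 50000)):
    """Time fit_fn on training sets sampled from the retrieval set."""
    rng = np.random.default_rng(0)
    for n in sizes:
        idx = rng.choice(len(retrieval_features), size=n, replace=False)
        start = time.perf_counter()
        try:
            fit_fn(retrieval_features[idx], retrieval_labels[idx])
            print(f"n = {n}: {time.perf_counter() - start:.1f} s")
        except MemoryError:
            # Corresponds to the '-' entries reported in Table VI.
            print(f"n = {n}: - (out of memory)")
```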