but in different images, may be partitioned into different numbers of regions, we simply set the parameter m in Algorithm 1 to 3. The parameter C and the Gaussian kernel parameter r for the SVM in LIBSVM [2] are set to 1 and 2^{-3}, respectively. Table 3 lists the average AUC values (in percent) for all 20 categories, with 95% confidence intervals, over 50 rounds of test.
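Before turning to the results, the evaluation setup just described can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses scikit-learn's SVC (which wraps LIBSVM) with C = 1 and an RBF kernel, assuming the paper's kernel parameter r corresponds to the RBF width gamma (gamma = 2^{-3}); the bag-level feature vectors are random placeholders, and the 95% confidence interval is a normal approximation over the 50 rounds.

```python
# Sketch of the evaluation protocol: RBF-kernel SVM (C = 1, gamma = 2^-3,
# assumed to match the paper's r), mean AUC in percent with a 95% confidence
# interval over 50 random train/test splits. Features are placeholders.
import numpy as np
from sklearn.svm import SVC                      # libsvm-backed SVM
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                   # placeholder bag-level features
y = np.repeat([1, -1], 100)                      # 100 positive, 100 negative bags

aucs = []
for round_id in range(50):                       # 50 rounds of test
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=round_id)
    clf = SVC(C=1.0, kernel="rbf", gamma=2 ** -3)
    clf.fit(X_tr, y_tr)
    scores = clf.decision_function(X_te)         # real-valued scores for AUC
    aucs.append(100.0 * roc_auc_score(y_te, scores))

mean_auc = np.mean(aucs)
# 95% confidence interval of the mean (normal approximation)
half_width = 1.96 * np.std(aucs, ddof=1) / np.sqrt(len(aucs))
print(f"AUC = {mean_auc:.1f} +/- {half_width:.1f} (percent)")
```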
Once again, EC-SVM achieves better results than MILES.

Table 3. Average AUC values (in percent) for all 20 categories with 95% confidence interval over 50 rounds of test on the COREL image set.

n          1             2             4
MILES      64.4 ± 1.1    72.2 ± 0.8    79.6 ± 0.6
EC-SVM     76.4 ± 0.6    80.0 ± 0.4    83.2 ± 0.3

Figure 5 shows the average AUC values with 95% confidence intervals for each category when n = 1.

[Figure 5 plot: average AUC with 95% confidence interval vs. category ID (0-19); curves for MILES and EC-SVM.]
Figure 5. Comparison on the COREL data set with one positive and one negative example labeled.

5.2. Sensitivity to Labeling Noise

We use the same setting as that in MILES [3] to evaluate the noise sensitivity on the COREL data set. We add d% of noise by changing the labels of d% of the positive bags and d% of the negative bags. We compare EC-SVM with DD-SVM and MILES under different noise levels, based on 200 images from Category 2 (“Historical buildings”) and Category 7 (“Horses”). The training and test sets are of the same size. The average classification accuracy over five randomly generated test sets is shown in Figure 6. We can see that MILES and EC-SVM are much more robust than DD-SVM, and that the robustness of EC-SVM is comparable with that of MILES.

[Figure 6 plot: classification accuracy vs. noise level d (0-20); curves for DD-SVM, MILES, and EC-SVM.]
Figure 6. Comparison of sensitivity to labeling noise on the COREL data set.
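The noise-injection protocol described above admits a simple implementation. The following is a hedged sketch, not the authors' code: flip_fraction negates the labels of d% of the positive bags and d% of the negative bags (the COREL protocol), while flip_count negates a fixed number n per class (the SIVAL protocol used next). The function names and the {+1, -1} label encoding are assumptions made for illustration.

```python
import numpy as np

def flip_fraction(labels, d, rng):
    """Negate the labels of d% of the positive and d% of the negative bags."""
    noisy = labels.copy()
    for cls in (1, -1):
        idx = np.flatnonzero(labels == cls)      # index w.r.t. the clean labels
        k = int(round(len(idx) * d / 100.0))
        flipped = rng.choice(idx, size=k, replace=False)
        noisy[flipped] = -cls
    return noisy

def flip_count(labels, n, rng):
    """Negate the labels of exactly n positive and n negative training images."""
    noisy = labels.copy()
    for cls in (1, -1):
        idx = np.flatnonzero(labels == cls)
        flipped = rng.choice(idx, size=n, replace=False)
        noisy[flipped] = -cls
    return noisy

# Example: 10% label noise on a balanced set of 200 training bags.
rng = np.random.default_rng(0)
y_train = np.repeat([1, -1], 100)
y_noisy = flip_fraction(y_train, d=10, rng=rng)
```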
We further test the noise sensitivity of EC-SVM on the SIVAL data set. We compare EC-SVM with MILES under different noise levels (n/30, n = 1, ..., 9) by negating the labels of n positive and n negative training images, based on 120 images from Category 7 (“CheckeredScarf”) and Category 12 (“FabricSoftenerBox”). The training and test sets are of the same size. The average classification accuracy with 95% confidence intervals over 30 randomly generated test sets is shown in Figure 7. We can see that EC-SVM is much more robust than MILES on the SIVAL data set.

[Figure 7 plot: average accuracy with 95% confidence interval vs. number of training images with negated labels for each class (0-9); curves for MILES and EC-SVM.]
Figure 7. Comparison of sensitivity to labeling noise on the SIVAL data set.

The SIVAL data set differs from the COREL data set in many aspects. In COREL, the target object occupies a large portion of the whole image, while in SIVAL the main part of an image is the background. Furthermore, the background of a given category in COREL is always specific to that category of images. For example, in general, the background in the images of “Historical buildings” (Category 2) is very different from the background in the images of “Horses” (Category 7), which can easily be seen from Figure 4. In SIVAL, by contrast, the background of one category can also appear in another category. MILES uses all the instances, from both the positive training bags and the negative training bags, as the basis for feature construction [3]. This will make the effect