[Figure 3: six candidate regions (CR1-CR6) with their mean/std statistics, ten template regions (T1-T10), and the sparse coefficients produced by the l1-, NLSSSC-, and ELSSC-trackers.]

Fig. 3 Comparison of sparsity and stability with the original l1-, NLSSSC-, and our ELSSC-optimization. The sparse coefficients are accurate only to the second decimal place.

3.3 Experimental results for visual target tracking

We evaluate the investigated algorithms comparatively, using the center location errors, the average success rates, and the average frames per second. The results are shown in Figs. 4 and 5 and in Tables 3 and 4. The templates of NLSSST, the original ELSST, and the improved ELSST are shown in Fig. 4(g-o). Overall, our original and improved trackers outperform the other state-of-the-art algorithms.
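The paper does not show how these metrics are computed; the following is a minimal sketch under common conventions, assuming axis-aligned boxes in (x, y, w, h) format and a hypothetical overlap threshold of 0.5 for the success rate (the box format and threshold are illustrative assumptions, not taken from the paper).

# Sketch (not the authors' evaluation code) of the three metrics used here:
# center location error, success rate, and average frames per second.
import numpy as np

def center_location_error(box_a, box_b):
    # Euclidean distance (in pixels) between the centers of two boxes.
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    center_a = np.array([ax + aw / 2.0, ay + ah / 2.0])
    center_b = np.array([bx + bw / 2.0, by + bh / 2.0])
    return float(np.linalg.norm(center_a - center_b))

def overlap_ratio(box_a, box_b):
    # Intersection-over-union of two axis-aligned (x, y, w, h) boxes.
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def evaluate_sequence(tracked, ground_truth, total_seconds, iou_threshold=0.5):
    # Returns (average center error, success rate, average frames per second)
    # over one sequence of per-frame bounding boxes.
    errors = [center_location_error(t, g) for t, g in zip(tracked, ground_truth)]
    hits = [overlap_ratio(t, g) >= iou_threshold for t, g in zip(tracked, ground_truth)]
    fps = len(tracked) / total_seconds
    return float(np.mean(errors)), float(np.mean(hits)), fps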
For occlusion, the five algorithms other than IVT function satisfactorily, especially at #206 and #366 of the Dudek sequence in Fig. 4(b) (the tracked head is covered by the hand and glasses), #143, #265, and #496 of the Faceocc2 sequence in Fig. 4(c) (the tracked head is covered by the book), #85, #108, and #433 of the Girl sequence in Fig. 4(e) (the tracked head turns right, turns back, and is blocked by someone else), and #56, #104, and #301 of the Face sequence in Fig. 4(i) (the tracked head is also covered by the book). After the target recovers from occlusion, these five trackers can find it again quickly. IVT works poorly and even loses the target at #10 of the Girl sequence (Fig. 5(e)), because the number of positive and negative samples is limited (considering the learning efficiency) and the incremental updating of the classifier in IVT is less effective. CovTracking uses a large set of candidates (based on the definition of the integral image, the feature extraction of these candidates is so fast that its cost can be ignored), which makes it robust to occlusion, scale variation, and blur. NLSSST and our original and improved trackers all work well when the targets are occluded; our two trackers work even better.

For motion blur, our two trackers work better than IVT and the original l1-tracker. Moreover, CovTracking also reveals its ability to handle blur (e.g., #4, #9, and #38 in Fig. 4(d, o)). In the former sequence, the animal runs and jumps fast (motion blur) with a lot of water splashing (occlusion), while in the latter, the man skips rope and the camera cannot capture a clear image of his face. IVT and the l1-tracker both fail from #4 in Fig. 4(d) and never recover after that. Our original and improved ELSST lost the target at #31 and #41, then recovered at #33 and #44, respectively (Fig. 4(d)). From #12 to #21 and from #44 to #71, the improved ELSST works better than the original ELSST, CovTracking, the l1-tracker, and NLSSST.

For rotation and scale variation, our trackers also perform robustly (Figs. 4(a, c, e, g, j) and 5(a, c, e, g, j)). When the surfer falls forward and backward, the girl turns left and right and moves towards and away from the camera, the man turns left and right, the car turns