IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, VOL. X, NO. X, XXX 200X    9

From Definition 1, we have

\[
\Pr(l(B_i) = +1 \mid t) - \Pr(l(B_i) = -1 \mid t)
= \exp\!\left(-\frac{d^2(t, B_i)}{\sigma_t^2}\right) - \left(1 - \exp\!\left(-\frac{d^2(t, B_i)}{\sigma_t^2}\right)\right)
= 2\exp\!\left(-\frac{d^2(t, B_i)}{\sigma_t^2}\right) - 1.
\]
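As a quick numeric sanity check of this expression, the following sketch (with an arbitrary choice of \(\sigma_t\), not a value from the paper) evaluates the posterior difference \(2\exp(-d^2/\sigma_t^2) - 1\) from Definition 1 and confirms that it changes sign exactly at \(d = \sigma_t\sqrt{\ln 2}\):

```python
import math

def posterior_margin(d, sigma):
    """Pr(l(B_i)=+1 | t) - Pr(l(B_i)=-1 | t) under the model of Definition 1."""
    p_pos = math.exp(-d**2 / sigma**2)  # Pr(l(B_i)=+1 | t)
    return 2 * p_pos - 1

sigma = 1.5                                  # arbitrary bandwidth for illustration
threshold = sigma * math.sqrt(math.log(2))   # sigma_t * sqrt(ln 2)

# The margin is non-negative exactly when d <= sigma_t * sqrt(ln 2).
assert abs(posterior_margin(threshold, sigma)) < 1e-9   # zero at the threshold
assert posterior_margin(0.5 * threshold, sigma) > 0     # positive inside
assert posterior_margin(2.0 * threshold, sigma) < 0     # negative outside
```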
Furthermore, we have

\[
\Pr(l(B_i) = +1 \mid t) \ge \Pr(l(B_i) = -1 \mid t)
\;\Leftrightarrow\; \Pr(l(B_i) = +1 \mid t) - \Pr(l(B_i) = -1 \mid t) \ge 0
\;\Leftrightarrow\; 2\exp\!\left(-\frac{d^2(t, B_i)}{\sigma_t^2}\right) - 1 \ge 0
\;\Leftrightarrow\; d(t, B_i) \le \sigma_t \sqrt{\ln 2}. \tag{5}
\]

Hence, if \(\theta_t = \sigma_t \sqrt{\ln 2}\), the decision function defined in (4) will label the bags in accordance with the Bayes decision rule. \(\square\)

Therefore, if t is a true positive instance, there must exist a decision function as defined in (4) that labels the bags well, meaning that the distances from t to the positive bags are expected to be smaller than those to the negative bags.

For a negative instance (i.e., a false positive instance), however, the distances to the positive and negative bags do not exhibit this distribution. Since some positive bags may also contain negative instances, just like the negative bags, the distances from a negative instance to the positive bags may be as random as those to the negative bags. This distributional difference provides an informative hint for identifying the true positive instances.

3.2.2 Disambiguation Method

Unlike in the previous subsection, t does not necessarily refer to a true positive instance here. However, we still define a decision function as in (4) even when t is a negative instance. The difference is that if t is a negative instance, the corresponding decision function cannot label the bags well. It is this very phenomenon that forms the basis of our disambiguation method.

Definition 2: The empirical precision of the decision function in (4) is defined as follows:

\[
P_t(\theta_t) = \frac{1}{n^+ + n^-} \sum_{i=1}^{n^+ + n^-} \frac{1 + h_{\theta_t}^{t}(B_i)\, l(B_i)}{2}. \tag{6}
\]

March 1, 2009                                                                DRAFT
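Since \((1 + h\,l)/2\) equals 1 when \(h = l\) and 0 otherwise, the empirical precision in (6) is simply the fraction of bags whose labels the decision function reproduces. A minimal sketch of this computation, assuming the decision function in (4) takes the distance-threshold form \(h^t_{\theta_t}(B_i) = +1\) iff \(d(t, B_i) \le \theta_t\) (the paper defines its exact form earlier), with toy distances and labels:

```python
def empirical_precision(distances, labels, theta):
    """Empirical precision (6): the fraction of bags whose label the
    thresholded decision function reproduces.

    distances[i] = d(t, B_i); labels[i] is the bag label in {+1, -1}.
    """
    correct = 0.0
    for d, l in zip(distances, labels):
        h = 1 if d <= theta else -1   # decision function (4), assumed form
        correct += (1 + h * l) / 2    # 1 if h == l, else 0
    return correct / len(distances)

# Toy example: three positive bags close to t, two negative bags far away.
dists = [0.2, 0.4, 0.6, 1.5, 2.0]
labels = [+1, +1, +1, -1, -1]
print(empirical_precision(dists, labels, theta=1.0))  # -> 1.0
```

A true positive instance t tends to admit a threshold with precision near 1, while a negative instance does not, which is exactly the signal the disambiguation method exploits.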