106 G. Pitsilis et al.

3 Background Research

Trust has long been a concern for scientists and much work has been done to formalize it in computing environments [11, 12]. As well as being context specific, it has important characteristics such as asymmetry, subjectivity, and, under specific circumstances, transitivity. It is also related to tasks in the sense that entities are trusted to perform a particular task. A simplistic approach would be to determine the levels of trust and distrust that should be placed on some entity from its probabilistic behavior as seen from the trustor's point of view. In this sense, trust can be thought of as the level of belief established between two entities in relation to a certain context.

In uncertain probabilities theory [13] the metric which expresses this belief is called an opinion. Because knowledge is always imperfect, as opinions are based on observations, lack of knowledge should be considered when assessing them. The Subjective Logic framework deals with the absence of both trust and distrust by introducing the uncertainty property in opinions. This framework uses a simple, intuitive representation of uncertain probabilities: a three-dimensional metric that comprises belief (b), disbelief (d) and uncertainty (u). Between b, d and u the equation b + d + u = 1 holds, which is known as the Belief Function Additivity Theorem. Building up opinions requires the existence of evidence, but even though opinions in the form (b, d, u) are more manageable, due to the quite flexible calculus that opinion space provides, evidence is usually available only in other forms that are essentially more understandable to humans. With this in mind, we could use the ratings given by the users as evidence, also called behavioral data, for forming trust relationships between them in a CF system.
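As an illustrative sketch (not part of the original framework description), an opinion can be modeled as a (b, d, u) triple whose components are constrained by the additivity theorem; the class name and validation tolerance below are our own choices:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Opinion:
    """A Subjective Logic opinion over a proposition."""
    b: float  # belief
    d: float  # disbelief
    u: float  # uncertainty

    def __post_init__(self):
        # Belief Function Additivity Theorem: b + d + u = 1
        if abs(self.b + self.d + self.u - 1.0) > 1e-9:
            raise ValueError("b + d + u must equal 1")

# A vacuous opinion: no evidence at all, hence full uncertainty.
vacuous = Opinion(b=0.0, d=0.0, u=1.0)
```

The constraint makes the three components redundant in one degree of freedom, which is why an opinion can be visualized as a point in a triangle with b, d and u as barycentric coordinates.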
The Beta Distribution Probability Function can offer an alternative representation of uncertain probabilities [14], making it possible to approximate opinions from behavioral data. However, data in that evidence space are treated as sets of observations and therefore must be provided strictly in binary form, representing the two possible outcomes of a process, x or x̄. A behavior is thus described by the numbers of x and x̄ outcomes that derive from the set of observations. In [10] there is a mapping between evidence spaces and opinion spaces in which the uncertainty property (u) depends solely on the quantity of observations. In contrast, other similarity-based approaches, such as that in [15], are based on the idea of linking users indirectly using predictability measures, but, to our knowledge, these have not been tested in real environments.

As we mentioned above, the requirement for trust to become transitive in long chains is that a common purpose exists along the chain. Accordingly, only the last relationship in the chain should be concerned with trust for a certain purpose, and all the other trust relationships should concern the ability to recommend for the given purpose. The former is called functional trust and the latter recommender trust. It is worth mentioning the existence of other approaches to making recommender systems trust-enabled, such as [16], where there is no distinction between functional and recommender trust. Also, in some other solutions [17] that are used for predicting scores in recommender systems using webs of trust, the notion of trust is conflated with similarity even though they are essentially different.
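The evidence-to-opinion mapping sketched above can be illustrated as follows. The formulas are the standard Beta-distribution mapping from Jøsang's framework, with r counting positive (x) and s counting negative (x̄) observations; the function name is our own:

```python
def evidence_to_opinion(r: int, s: int) -> tuple[float, float, float]:
    """Map binary evidence (r positive, s negative observations)
    to an opinion (b, d, u) via the Beta-distribution mapping:
        b = r / (r + s + 2)
        d = s / (r + s + 2)
        u = 2 / (r + s + 2)
    Uncertainty depends only on the quantity of observations:
    it is 1 with no evidence and shrinks as evidence accumulates."""
    total = r + s + 2
    return r / total, s / total, 2 / total

# No observations -> fully uncertain opinion:
# evidence_to_opinion(0, 0) -> (0.0, 0.0, 1.0)
# Eight positive, zero negative observations:
# evidence_to_opinion(8, 0) -> (0.8, 0.0, 0.2)
```

Note that b + d + u = 1 holds by construction, so the mapped triple is always a valid opinion.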
Subjective logic provides a useful algebra for calculating trust in long chains of neighbors, but it requires that opinions be expressed in (b, d, u) format, which existing modeling tech-
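As a hypothetical sketch of that chain algebra: Subjective Logic's discounting operator combines A's recommender opinion about B with B's functional opinion about C to derive A's opinion about C. The formulas below are the standard discounting operator from Jøsang's algebra; the variable names are assumptions of ours:

```python
def discount(rec: tuple[float, float, float],
             fun: tuple[float, float, float]) -> tuple[float, float, float]:
    """Discount B's functional opinion `fun` about C through A's
    recommender opinion `rec` about B:
        b = b1 * b2
        d = b1 * d2
        u = d1 + u1 + b1 * u2
    The less A believes in B as a recommender, the more uncertain
    the derived opinion about C becomes."""
    b1, d1, u1 = rec
    b2, d2, u2 = fun
    return b1 * b2, b1 * d2, d1 + u1 + b1 * u2

# A's recommender trust in B, and B's functional trust in C:
a_b = (0.8, 0.1, 0.1)
b_c = (0.6, 0.2, 0.2)
a_c = discount(a_b, b_c)  # (0.48, 0.16, 0.36)
```

Applying the operator repeatedly along a chain shows why long chains degrade: each discounting step transfers belief mass into uncertainty.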