EXPLANATION IN RECOMMENDER SYSTEMS 181

effects on user acceptance of the system's recommendations. The most convincing explanation of why a movie was recommended was one in which users were shown a histogram of the ratings of the same movie by similar users. Moreover, grouping together of good ratings (4 or 5) and bad ratings (1 or 2) and separation of ambivalent ratings (3) was found to increase the effectiveness of the histogram approach. Interestingly, the second most convincing explanation was a simple statement of the system's performance in the past, e.g. MovieLens has predicted correctly for you 80% of the time in the past.

Another important finding was that some of the explanations evaluated had a negative impact on acceptance, the goal of explanation in this instance, showing that no explanation may be better than one that is poorly designed.

CBR recommender systems that can explain their recommendations include Shimazu's (2002) ExpertClerk and McSherry's (2003b) First Case. ExpertClerk can explain why it is proposing two contrasting products in terms of the trade-offs between their positive and negative features, e.g. This blouse is more expensive but the material is silk. That one is cheaper but the material is polyester. Its explanations are based on assumed preferences with respect to attributes not mentioned in the user's query. For example, a blouse made of silk is assumed to be preferred to one made of polyester.

In a similar way, First Case can explain why one case is more highly recommended than another by highlighting the benefits it offers (McSherry, 2003b). As the following example illustrates, it can also explain why a given product, such as a personal computer, is recommended in terms of the compromises it involves with respect to the user's preferences. Case 38 differs from your query only in processor speed and monitor size. It is better than Case 50 in terms of memory and price.

However, the potential role of explanation in recommender systems is not limited to explaining why a particular item is recommended. In this paper, we present a CBR recommender system that can also explain the relevance of any question the user is asked in terms of its ability to discriminate between competing cases. Reilly et al.'s (2005) dynamic approach to critiquing in recommender systems differs from traditional critiquing approaches (e.g. Burke, 2002)
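The grouped-histogram explanation described above (good ratings 4 or 5, bad ratings 1 or 2, ambivalent rating 3 kept separate) can be sketched as follows. This is only an illustrative reconstruction; the function name and input format are not taken from the study.

```python
from collections import Counter


def grouped_rating_histogram(neighbour_ratings):
    """Summarise similar users' ratings of a movie (1-5 scale) into the
    three groups reported as most effective: good (4-5), bad (1-2),
    and ambivalent (3). Illustrative sketch, not the original code."""
    counts = Counter()
    for r in neighbour_ratings:
        if r >= 4:
            counts["good (4-5)"] += 1
        elif r <= 2:
            counts["bad (1-2)"] += 1
        else:
            counts["ambivalent (3)"] += 1
    return dict(counts)
```

For example, ratings [5, 4, 4, 3, 1, 2, 5] from similar users would be summarised as four good, two bad, and one ambivalent rating, which could then be presented to the user as a three-bar histogram explaining the recommendation.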