L. Chen, P. Pu

be returned. Among the initial set of recommendations, the user either accepts a result, or takes a near solution to activate the critiquing panel (by clicking on the button "Value Comparison" shown with the product, see Fig. 3). Once the critiquing criteria have been built in the critiquing panel, the system will refine the user's preference model and adjust the relative importance of all critiqued attributes (i.e., the weight of improved attribute(s) will be increased and that of compromised attribute(s) will be decreased). The search engine will then apply a combination of the elimination-by-aspects (EBA) and weighted additive (WADD) strategies (Payne et al. 1993).
The combined strategy begins with EBA to first eliminate products that do not reach the minimal acceptable value (i.e., cutoff) of the improved attribute(s); WADD is then applied to examine the remaining alternatives in more detail and select the ones that best satisfy all of the user's tradeoff criteria. This example-and-critiquing process completes one cycle of interaction, and it continues as long as the user wants to refine the results.

3 Control variables

In summary, the components of both DynamicCritiquing and ExampleCritiquing can be characterized by two independent variables: the number of recommendations that users can examine at a time as the basis for critiquing, and the critiquing aid by which users can specify their feedback criteria. As introduced before, two typical combinations of the two variables are single-item system-suggested critiquing and k-item user-initiated critiquing, but more combinations are possible. In this section, we discuss each variable's possible values.

3.1 Critiquing coverage (the number of recommendations)

We refer to the critiquing coverage as the number of example products that are recommended to users, from which they choose their final choice or the object to critique. In the ExampleCritiquing system, multiple examples are displayed during each recommendation cycle, because its objective is to stimulate users to make self-initiated critiques. On the contrary, the FindMe and DynamicCritiquing agents return only one product, based on which system-suggested critiques are generated. This simple display strategy has the advantage of not overwhelming users with too much information, but it deprives users of the chance to choose the critiquing product they are interested in, and potentially exposes them to the risk of a longer interaction session.
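The combined EBA-plus-WADD search described above can be sketched in a few lines of code. This is an illustrative sketch only, not the authors' implementation: the catalog, attribute names, cutoffs, and weights are hypothetical, and attribute values are assumed to be normalized so that higher is better.

```python
# Sketch of the combined strategy: EBA first prunes products that fail
# the cutoff of every improved attribute, then WADD ranks the survivors
# by a weighted sum of attribute values (weights reflect the adjusted
# preference model, where improved attributes have been up-weighted).

def eba_filter(products, cutoffs):
    """Keep only products meeting the minimal acceptable value (cutoff)
    of each improved attribute."""
    return [p for p in products
            if all(p[attr] >= cut for attr, cut in cutoffs.items())]

def wadd_rank(products, weights):
    """Rank the remaining alternatives by weighted additive utility."""
    def utility(p):
        return sum(w * p[attr] for attr, w in weights.items())
    return sorted(products, key=utility, reverse=True)

def combined_search(products, cutoffs, weights, k=7):
    """EBA to prune, then WADD to select the top-k tradeoff alternatives."""
    return wadd_rank(eba_filter(products, cutoffs), weights)[:k]

# Hypothetical catalog: laptops scored on normalized attributes.
catalog = [
    {"name": "A", "cpu": 0.9, "battery": 0.4, "price": 0.3},
    {"name": "B", "cpu": 0.6, "battery": 0.8, "price": 0.7},
    {"name": "C", "cpu": 0.3, "battery": 0.9, "price": 0.9},
]
# The user critiqued to improve "cpu" (cutoff 0.5), so its weight is raised.
result = combined_search(catalog,
                         cutoffs={"cpu": 0.5},
                         weights={"cpu": 0.5, "battery": 0.3, "price": 0.2},
                         k=2)
print([p["name"] for p in result])  # ['B', 'A']: C fails the cutoff
```

Product C is eliminated by EBA for missing the cpu cutoff; WADD then ranks B above A because B's overall weighted utility (0.68) exceeds A's (0.63) despite A's stronger cpu value, which is exactly the tradeoff-balancing behavior the text attributes to WADD.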
The critiquing coverage can be further separated into two sub-variables: the number of recommendations in the first round, right after users' initial preference specification (called NIR), and the number of items (i.e., tradeoff alternatives) in each later cycle after a critiquing action (called NCR). The two numbers can be equal or different. For example, in DynamicCritiquing and ExampleCritiquing they are both equal to 1 and 7, respectively. It is also possible to set them differently, for example, NIR as 1 and NCR as 7, if users are only interested in the one best matching product according to their initial preferences, but would like to see multiple alternatives comparable with their critiqued reference once they critique a product.
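The distinction between the two sub-variables can be made concrete with a small configuration sketch. The class name and the idea of indexing cycles from zero are assumptions for illustration; the NIR/NCR values mirror the settings discussed in the text.

```python
# Hypothetical sketch of the two critiquing-coverage sub-variables:
# NIR = number of recommendations in the first round (after the initial
# preference specification); NCR = number of tradeoff alternatives shown
# after each critiquing action. The two may be equal or differ.

from dataclasses import dataclass

@dataclass
class CritiquingCoverage:
    n_ir: int  # first-round recommendations
    n_cr: int  # alternatives per post-critique cycle

    def display_count(self, cycle: int) -> int:
        """Number of products to show in a given interaction cycle
        (cycle 0 is the initial recommendation round)."""
        return self.n_ir if cycle == 0 else self.n_cr

# DynamicCritiquing-style coverage: a single product in every cycle.
dynamic = CritiquingCoverage(n_ir=1, n_cr=1)
# ExampleCritiquing-style coverage: seven products in every cycle.
example = CritiquingCoverage(n_ir=7, n_cr=7)
# Mixed setting from the text: one best match first, then seven
# comparable alternatives after each critique.
mixed = CritiquingCoverage(n_ir=1, n_cr=7)

print(mixed.display_count(0), mixed.display_count(1))  # 1 7
```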