L. Chen, P. Pu

To achieve our goal, we have conducted a series of three user trials. In the first trial, we compared two well-known critiquing-based recommender agents, each representing a typical combination of critiquing coverage and critiquing aid. Concretely, one is the DynamicCritiquing system, which shows one recommended product during each interaction cycle, accompanied by a user-initiated unit critiquing area and a list of system-suggested compound critiques. The other is the ExampleCritiquing system, which returns multiple products in a display and stimulates users to build and compose critiques of one of the shown products in a self-motivated way. The experimental results show that the ExampleCritiquing agent achieved significantly higher decision accuracy (in terms of both objective and subjective measures) and stronger behavioral intentions (i.e., intention to purchase and to return), while requiring a lower level of interaction and cognitive effort.

In the second trial, we modified ExampleCritiquing and DynamicCritiquing to make their critiquing coverage (i.e., the number of recommended items during each cycle) constant, so that they differed only in their critiquing aids. The results surprisingly showed no significant difference between the two modified versions in terms of either objective or subjective measures. Further analysis of participants' comments revealed the pros and cons of system-suggested critiques and user-initiated critiquing support. Additionally, combining these results with those of the first trial, we found that giving users the choice of critiquing one of multiple items (as opposed to just one) has a significantly positive impact on their decision accuracy and confidence, particularly in the first recommendation cycle, and saves objective effort in the later critiquing rounds.

The third user trial measured users' performance in a hybrid critiquing system in which system-suggested critiques and a user-initiated critiquing aid were combined on one screen. Analyzing users' critiquing application frequency in such a system shows that users applied the user-initiated critiquing support to create their own critiques more often than they picked suggested critique options. Moreover, the respective practical effects of the user-initiated and system-suggested critiquing facilities were identified: both contribute significantly to improving users' decision confidence and return intention, and system-suggested critiques are additionally effective in reducing perceived effort.

Therefore, all of our trial results suggest that giving users multiple recommended products as critiquing candidates, and providing both system-suggested and user-initiated critiquing aids for specifying concrete critiquing criteria, can yield substantial benefits.

Another contribution of our work is a user-evaluation framework. It contains both objective variables, such as decision accuracy, task completion time and interaction effort, and subjective measures, such as perceived cognitive effort, decision confidence and trusting intentions. All of these factors are fundamentally important, given that a recommender system's ultimate goal should be to allow its users to achieve high decision accuracy and build high trust in it, while requiring minimal effort to obtain these benefits (Häubl and Trifts 2000; Chen and Pu 2005; Pu and Chen 2005).

The rest of this paper is organized as follows. We first introduce existing critiquing-based recommender systems, with DynamicCritiquing and ExampleCritiquing as two
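To make the critiquing mechanisms discussed above concrete, the following is a minimal sketch of how unit and compound critiques narrow a product catalog. The product schema and helper names here are illustrative assumptions, not taken from either system's actual implementation; a compound critique is modeled, as in the literature, as a conjunction of unit critiques relative to the currently recommended item.

```python
# Illustrative sketch only: the catalog, attribute names, and helpers
# are hypothetical, not the paper's actual implementation.

catalog = [
    {"name": "A", "price": 900, "weight": 2.5},
    {"name": "B", "price": 700, "weight": 3.0},
    {"name": "C", "price": 650, "weight": 1.8},
]

def satisfies(product, attr, direction, reference):
    """A unit critique asks for an attribute value below ('less') or
    above ('more') that of the reference (currently shown) product."""
    if direction == "less":
        return product[attr] < reference[attr]
    return product[attr] > reference[attr]

def apply_critiques(catalog, reference, critiques):
    """A compound critique is a conjunction of unit critiques: a product
    survives only if it satisfies every unit critique simultaneously."""
    return [p for p in catalog
            if p is not reference
            and all(satisfies(p, attr, d, reference) for attr, d in critiques)]

reference = catalog[0]  # the currently recommended product

# Unit critique: "cheaper than the current recommendation".
cheaper = apply_critiques(catalog, reference, [("price", "less")])

# Compound critique: "cheaper AND lighter".
cheaper_lighter = apply_critiques(
    catalog, reference, [("price", "less"), ("weight", "less")])
```

Here `cheaper` retains products B and C, while the compound critique retains only C, illustrating why system-suggested compound critiques can shorten interaction cycles: one click prunes along several attributes at once.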