Interaction design guidelines

and more recent DynamicCritiquing agents (Reilly et al. 2004; McCarthy et al. 2005c). The main advantage, as detailed in the related literature (Reilly et al. 2004; McCarthy et al.
2004b; McSherry 2004), is that system-suggested critiques can not only expose the remaining recommendation opportunities, but also potentially accelerate the user's critiquing process if they correspond well to the user's intended feedback criteria.

An alternative critiquing mechanism does not propose pre-computed critiques, but instead provides a facility that stimulates users to freely create and combine critiques themselves (so-called user-initiated critiquing support in this paper). As a typical application, the ExampleCritiquing agent has been developed for this goal; its focus is on showing examples and helping users compose their self-initiated critiques (Pu and Kumar 2004). In essence, the ExampleCritiquing agent allows users to choose which feature(s) to critique and how to critique them, under their own control. Previous work showed that it enabled users to obtain significantly higher decision accuracy and preference certainty than non-critiquing-based systems such as a ranked list (Pu and Kumar 2004; Pu and Chen 2005).

In addition to the nature of its critiquing support (i.e., system-suggested critiques or user-initiated critiquing support), another important factor characterizing a critiquing-based recommender system is the number of items the system returns during each recommendation cycle for users to critique. For example, FindMe and DynamicCritiquing systems return one item per cycle, whereas ExampleCritiquing agents show multiple k items (e.g., k = 7). Multi-item display gives users a chance to choose the product to be critiqued after comparing several options.

Thus, two crucial design components are by nature contained in a critiquing-based recommender system. One is its critiquing aid: suggesting critiques for users to select, or aiding them in constructing their own critiques. The other is the number of recommended items (called critiquing coverage in this paper): suggesting a single vs. multiple products for users to critique.

These options are inherently related to different levels of user control, in either the process of identifying the critiqued reference or the process of specifying concrete critiquing criteria. As a matter of fact, perceived behavioral control has been regarded as an important determinant of user beliefs and actual behavior (Ajzen 1991). In the context of e-commerce, it has been found to have a positive effect on customers' attitudes, including their perceived ease of use, perceived usefulness, and trust (Novak et al. 2000; Koufaris and Hampton-Sosa 2002). User control has also been identified as one of the fundamental principles of general user-interface design (Shneiderman 1997) and Web usability (Nielsen 1994). However, few works have studied the effect of the locus of user initiative in critiquing-based recommender systems. There is indeed a complex tradeoff underlying a successful design: giving users too much control may cause them to perform unnecessarily complex critiquing, whereas giving little or no control may force users to accept system-suggested items even when those items do not match users' truly intended choices. The goal of this paper is therefore to investigate the different degrees of user control vs. system support in both critiquing aid and critiquing coverage, so as to identify the optimal combination of components that could positively influence users' actual decision performance and subjective attitudes.
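The two design dimensions discussed above can be sketched in code: a user-initiated critique modelled as a predicate over one or more product features, and critiquing coverage as the number k of items returned per recommendation cycle. This is only an illustrative sketch; the product fields (price, weight), the helper names (critique, recommend), and the ranking by price are assumptions for the example, not part of any of the systems cited.

```python
from dataclasses import dataclass

# Hypothetical product record; field names are illustrative only.
@dataclass
class Laptop:
    price: float
    weight: float

def critique(max_price=None, max_weight=None):
    """Build a user-initiated critique: a predicate composed from the
    feature constraints the user chose to critique (cf. ExampleCritiquing)."""
    def accepts(item):
        if max_price is not None and item.price >= max_price:
            return False
        if max_weight is not None and item.weight >= max_weight:
            return False
        return True
    return accepts

def recommend(catalog, accepts, k=7):
    """One recommendation cycle: filter the catalog by the critique and
    return the top-k candidates. k = 1 mimics single-item systems such as
    FindMe; k > 1 gives multi-item critiquing coverage."""
    return sorted((c for c in catalog if accepts(c)), key=lambda c: c.price)[:k]

catalog = [Laptop(999, 2.4), Laptop(1299, 1.1), Laptop(799, 3.0), Laptop(1099, 1.8)]
# A compound critique: "cheaper than 1200 and lighter than 2.5 kg".
result = recommend(catalog, critique(max_price=1200, max_weight=2.5), k=2)
```

Under this sketch, the system-suggested variant would differ only in who authors the predicate: the system would mine promising constraint combinations from the remaining catalog and present them for selection, instead of the user composing them.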