For instance, a message becomes more persuasive if a source is perceived as having expertise or authority (Briñol & Petty, 2009; Cialdini, 2001). Recommender systems can trigger expertise cues by providing recommendations on a wide range of topics, and authority cues can be generated through third-party endorsements, references to awards, or explanations that the recommended items were suggested by experts. If an educational recommender system is based on a pool of strong arguments, and if it generates authority and/or expertise cues, it is likely to become persuasive irrespective of learner characteristics like motivation and ability. While persuasion makes productive use of information processing biases, such as the tendency to follow strong or authority-endorsed arguments, biases can also represent a hindrance to certain forms of learning. For instance, a robust finding in communication science holds that people are prone to selective exposure, i.e. they attend to only parts of the information presented to them (Knobloch-Westerwick & Meng, 2009). In particular, many people exhibit confirmation bias, the tendency to actively seek information that confirms their initial preferences (Jonas, Schulz-Hardt, Frey, & Thelen, 2001).
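The cue-generating strategies named above (expert endorsements, references to awards) can be illustrated with a minimal sketch. The item schema, function name, and cue wording are hypothetical illustrations, not an interface from the literature:

```python
def with_authority_cue(item, expert_name=None, award=None):
    """Attach an authority/expertise cue to a recommended item.

    item: dict with a 'title' key (hypothetical schema).
    The cue text mirrors the strategies discussed in the text:
    expert endorsements and references to awards.
    """
    cues = []
    if expert_name:
        cues.append(f"recommended by {expert_name}")
    if award:
        cues.append(f"winner of the {award}")
    # Fall back to a plain popularity cue when no authority cue is available.
    explanation = "; ".join(cues) if cues else "popular with other learners"
    return {**item, "explanation": explanation}

rec = with_authority_cue({"title": "Intro to Statistics"},
                         expert_name="a domain expert")
# rec["explanation"] == "recommended by a domain expert"
```

The point of the sketch is only that the explanation accompanying a recommendation is an independent design lever: the same item can be presented with or without an authority cue.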
The reason for this bias can be traced back to dissonance theory (Festinger, 1957), which posits that people tend to avoid stimuli that create cognitive dissonance. While confirmation bias is rarely addressed in the learning sciences, we believe that it can play a pivotal role in those areas of education where the goal is to challenge existing beliefs and opinions. One such area is critical and open-minded thinking, which requires that learners question not only other opinions but also their own (Stanovich & West, 1997). Critical thinking and unbiased reasoning can be linked to educationally relevant constructs like multiperspectivity (Spiro & Jehng, 1990) and informational diversity (De Wit & Greer, 2008). However, in order to become critical thinkers, learners must overcome confirmation bias, and a classical recommender system would do little to avert this bias, as it would suggest items that are consistent with a learner's preferences. What would be needed, then, is a recommender system that does the opposite: one that captures the preferred opinions of learners and confronts them with opposing viewpoints. For instance, if the student in our digital library example wants to write her Master's thesis on a particular theory, it might be useful to recommend at least some publications that are critical of this theory. The effectiveness of preference-inconsistent recommendations in critical thinking contexts was investigated in our empirical work (Schwind, Buder, & Hesse, 2011a, 2011b). Our experimental paradigm involved presenting preference-consistent and preference-inconsistent information to learners who searched for information on the controversial topic of neuro-enhancement.
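The core mechanism of such a preference-inconsistent recommender can be sketched in a few lines. This is an illustrative toy under strong assumptions (items carry an explicit 'pro'/'con' stance label, and a learner's preference is simply the majority stance of past selections), not the design used in the cited studies:

```python
from collections import Counter

def preference_inconsistent_recs(items, selected_ids, k=3):
    """Recommend items whose stance opposes the learner's inferred preference.

    items: list of dicts with 'id' and 'stance' ('pro' or 'con') -- hypothetical schema.
    selected_ids: ids of items the learner has already chosen.
    """
    selected = [it for it in items if it["id"] in selected_ids]
    if not selected:
        return []  # no preference data yet (cold start)
    # Infer the preferred stance as the majority stance of past selections.
    preferred = Counter(it["stance"] for it in selected).most_common(1)[0][0]
    opposing = "con" if preferred == "pro" else "pro"
    # Surface (e.g. visually highlight) up to k unseen preference-inconsistent items.
    pool = [it for it in items
            if it["stance"] == opposing and it["id"] not in selected_ids]
    return pool[:k]

# Example: a learner who has mostly selected 'pro' items gets 'con' recommendations.
catalog = [
    {"id": 1, "stance": "pro"}, {"id": 2, "stance": "pro"},
    {"id": 3, "stance": "con"}, {"id": 4, "stance": "con"},
]
recs = preference_inconsistent_recs(catalog, selected_ids={1, 2})
# recs contains items 3 and 4 (both 'con')
```

Note that this simply inverts the final ranking step of a classical recommender; everything else (preference capture, candidate selection) stays the same.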
Simply making preference-inconsistent information available was not a sufficient strategy to overcome confirmation bias: participants in a no-recommendation control group selected preference-consistent information more frequently than preference-inconsistent information. However, when preference-inconsistent information was not only made available but also recommended through visual highlighting, confirmation bias was strongly reduced. Moreover, preference-inconsistent recommendations improved elaboration, as evidenced by less confirmation-biased item recall and by more divergent thinking patterns in subsequent essays. However, our studies have also shown that preference-inconsistent items were liked less by learners than preference-consistent items, a finding that mirrors Tang and McCalla's (2005) viewpoint that the information that is most useful from an educational perspective is often not the information that is liked most. While this problem might be averted by making preference-inconsistent recommendations more appealing (e.g. by verbally framing them as "challenges"), our empirical results are promising signs that these counter-intuitive recommendations can help to facilitate critical thinking. This leads to our second conjecture.

3.1.2.1. How to achieve social adaptation for recipients? This can be accomplished by considering biases in human information processing. Learners are more likely to accept strong arguments and arguments that are accompanied by expertise cues and authority cues. Detrimental processing biases like confirmation bias can be mitigated by explicitly recommending items that run counter to a learner's preference. Actually recommending these items is a better strategy than just making preference-inconsistent items available. The use of preference-inconsistent recommendations is helpful for educational settings that try to challenge a learner's existing viewpoints and beliefs.

3.2.
Producer role

In Section 2 on recommender systems and the learning sciences it was argued that recommender systems are peer technologies that exhibit collective intelligence through the input of many individuals. However, in order to generate accurate predictions and unfold collective intelligence, recommender systems are strongly dependent on data that express how users think about a given item. Consequently, the role of users as producers of these data is a central issue in research on personalized recommender systems. Sections 3.2.1 and 3.2.2 explore how recommender systems should be adapted to cater to the peculiarities of educational scenarios. As in the section on the recipient role, the discussion is structured around two issues. The first issue pertains to system-centered adaptation and is guided by the design consideration of whether a recommender system should use explicit learner output like ratings, or whether implicit methods of capturing user data should be preferred. The second issue, on social adaptation, then explores how detrimental learner behaviors like low contribution to a recommender system can be averted by making productive use of those biases that are conducive to the production of rating data.

3.2.1. System-centered adaptation

Recommender systems rely on output from users. This creates a basic design decision of how these user data can be obtained. There are two fundamental ways to accomplish this goal, viz. implicit vs. explicit elicitation methods (Xiao & Benbasat, 2007). Implicit elicitation involves capturing user navigation (site visits, dwell times on sites, purchases) and taking these data as indicators of user preferences. In contrast, explicit elicitation requires a dedicated response from users, typically in the form of ratings on items. There are a number of advantages associated with implicit preference elicitation: it is unobtrusive, i.e.
it does not burden the user with the additional task of rating, thereby reducing the cold-start problems that occur until the system has gathered enough preference data to yield accurate predictions (Schein, Popescul, Ungar, & Pennock, 2002). Moreover, implicit preference elicitation might be more objective than explicit ratings, as it does not involve users' bias to respond in a socially desirable way (Fisher, 1993). However, explicit rating methods have a number of advantages as well. Kramer (2007) reported that explicit methods of eliciting user preferences led to higher acceptance rates for recommendations than implicit and opaque methods. This led Xiao and Benbasat (2007) to conclude that explicit methods might help users gain insight into their preferences and therefore increase decision quality. Implicit methods might lead to psychological reactance, a negative reaction to a restriction of autonomy (Dillard & Shen, 2005), whereas explicit ratings are associated with user control, which in turn is related to satisfaction and trust in recommender systems (McNee et al., 2003). Another potential advantage of explicit rating

J. Buder, C. Schwind / Computers in Human Behavior 28 (2012) 207–216
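The implicit vs. explicit elicitation distinction discussed in Section 3.2.1 can be sketched concretely. The log and rating formats below are hypothetical illustrations (not taken from any cited system): implicit elicitation derives a preference signal from navigation data such as accumulated dwell time, while explicit elicitation reads a dedicated rating directly.

```python
def implicit_preferences(events, dwell_threshold=30.0):
    """Infer preferences from navigation logs (implicit elicitation).

    events: list of (item_id, dwell_seconds) tuples -- hypothetical log format.
    An item counts as 'liked' if total dwell time reaches the threshold.
    """
    totals = {}
    for item_id, dwell in events:
        totals[item_id] = totals.get(item_id, 0.0) + dwell
    return {item_id: t >= dwell_threshold for item_id, t in totals.items()}

def explicit_preferences(ratings, like_cutoff=4):
    """Read preferences from dedicated user ratings (explicit elicitation).

    ratings: dict mapping item_id -> rating on a 1-5 scale (hypothetical).
    """
    return {item_id: r >= like_cutoff for item_id, r in ratings.items()}

# Implicit: the user never rated anything, but lingered on item 'a'.
implicit = implicit_preferences([("a", 25.0), ("a", 10.0), ("b", 5.0)])
# Explicit: the same preference stated directly via ratings.
explicit = explicit_preferences({"a": 5, "b": 2})
```

The trade-offs from the text map directly onto the sketch: the implicit path costs the user nothing but silently interprets behavior (risking reactance), while the explicit path demands effort but keeps the user in control of the signal.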