Learning with personalized recommender systems: A psychological view

Jürgen Buder, Christina Schwind
Knowledge Media Research Center, Konrad-Adenauer-Str. 40, 72072 Tübingen, Germany

Article history: Available online 29 September 2011
Keywords: Recommender systems; Learning

Abstract: This paper explores the potentials of recommender systems for learning from a psychological point of view. It is argued that main features of recommender systems (collective responsibility, collective intelligence, user control, guidance, personalization) fit very well to principles in the learning sciences. However, recommender systems should not be transferred from commercial to educational contexts on a one-to-one basis, but rather need adaptations in order to facilitate learning. Potential adaptations are discussed both with regard to learners as recipients of information and learners as producers of data. Moreover, it is distinguished between system-centered adaptations that enable proper functioning in educational contexts, and social adaptations that address typical information processing biases. Implications for the design of educational recommender systems and for research on educational recommender systems are discussed. © 2011 Elsevier Ltd. All rights reserved.

1. Introduction When we ponder over the movie that we would like to see next weekend, or whether the new restaurant in town is worth checking out, we often rely on the experience and recommendations of friends and other people who we trust to be knowledgeable about our tastes and preferences. Getting good recommendations becomes an important issue when the number of viable options is too large to be perused by an individual person. Internet servers provide access to vast amounts of information, and consequently, offering recommendations is one of the most pressing problems for the design of electronic environments. It can be said that search engines provide recommendations, as a list of search results is ordered through link analysis algorithms that show most linked-to, and thereby most relevant Web pages on top (Brin & Page, 1998). Similarly, a bestseller list on a commercial Website can be regarded as providing recommendations. However, in these cases the recommendations are generic, i.e. different users receive the same or highly similar output. In contrast, personalized recommender systems try to achieve the gold standard of recommendations in real life by mimicking a person who is not only very knowledgeable about a topic, but also takes the individual tastes and preferences of users into account. Personalized recommender systems capture the traces that users leave in an environment, either through page visits or explicit ratings of items, and they are based on the assumption that page visits or high ratings are indicative of user preferences. From data about visited or rated items, preferences on not-visited or unrated items can be predicted. A common method to predict preferences is through collaborative filtering (Sarwar, Karypis, Konstan, & Riedl, 2001) which mostly comes in two varieties: In user-based filtering, the behavioral profile of a target user will be compared to the profiles of other users, and recommendations for a particular item will be derived from those users who are most similar to the target user. The second method is item-based filtering where the overall rating differences among items will be set against the profile of a target user to arrive at personalized recommendations.
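To make the two filtering variants concrete, here is a minimal sketch in Python; the toy rating matrix and the function names are invented for illustration and are not taken from the paper or from any cited system. It predicts one missing rating twice: once from the most similar users, and once from the most similar items, using cosine similarity over co-rated entries.

    from math import sqrt

    # Toy user-item rating matrix (rows = users, columns = items); 0 = unrated.
    # Purely illustrative data.
    R = [
        [5, 4, 0, 1],
        [4, 5, 1, 0],
        [1, 0, 5, 4],
        [0, 1, 4, 5],
    ]

    def cosine(a, b):
        """Cosine similarity computed over co-rated entries only."""
        pairs = [(x, y) for x, y in zip(a, b) if x > 0 and y > 0]
        if not pairs:
            return 0.0
        dot = sum(x * y for x, y in pairs)
        norm_a = sqrt(sum(x * x for x, _ in pairs))
        norm_b = sqrt(sum(y * y for _, y in pairs))
        return dot / (norm_a * norm_b)

    def predict_user_based(R, user, item, k=2):
        """Weight the ratings that the k most similar users gave to the target item."""
        sims = sorted(((cosine(R[user], R[v]), v) for v in range(len(R))
                       if v != user and R[v][item] > 0), reverse=True)[:k]
        if not sims:
            return None
        return sum(s * R[v][item] for s, v in sims) / sum(s for s, _ in sims)

    def predict_item_based(R, user, item, k=2):
        """Weight the user's own ratings of the k items most similar to the target item."""
        cols = list(zip(*R))  # columns of the rating matrix
        sims = sorted(((cosine(cols[item], cols[j]), j) for j in range(len(cols))
                       if j != item and R[user][j] > 0), reverse=True)[:k]
        if not sims:
            return None
        return sum(s * R[user][j] for s, j in sims) / sum(s for s, _ in sims)

    print(round(predict_user_based(R, user=0, item=2), 2))  # prediction from similar users
    print(round(predict_item_based(R, user=0, item=2), 2))  # prediction from similar items

The numbers themselves are meaningless; the sketch is only meant to show where the two variants look for similarity (across users vs. across items).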
Personalized recommender systems are often used in e-commerce (Schafer, Konstan, & Riedl, 1999), as the ability to suggest products that are tailored to the needs and preferences of customers provides a unique selling point. However, in recent years the potential of personalized recommender systems for non-commercial purposes has begun to be explored, e.g. in educational contexts. Several educational recommender systems have been designed that recommend a broad range of items, among them software functionalities (Linton & Schaefer, 2000), learning resources on the Web (Geyer-Schulz, Hahsler, & Jahn, 2001; Recker, Walker, & Lawless, 2003), Web 2.0 resources (Drachsler et al., 2010), foreign language lessons (Hsu, 2008), learning objects (Lemire, Boley, McGrath, & Ball, 2005), test items and assignments (Rafaeli, Barak, Dan-Gur, & Toch, 2004), lecture notes (Farzan & Brusilovsky, 2005), or entire courses (Farzan & Brusilovsky, 2006). The applications cover very different areas of learning and education like use of library systems (Geyer-Schulz, Hahsler, Neumann, & Thede, 2003), informal learning (Drachsler, Hummel, & Koper, 2009), mobile learning (Andronico et al., 2003), learning at the workplace (Aehnelt, Ebert, Beham, Lindstaedt, & Paschen, 2008), or within health education (Fernandez-Luque, Karlsen, & Vognild, 2009). Many papers on personalized recommender systems focus on technical issues and problems, the ultimate question being: How
do we manage to deliver the most accurate recommendation for the current purposes? This paper, however, takes a somewhat different approach: It explores the psychological aspects of personalized recommender systems, with the ultimate question being: How do people react to and act upon recommender systems? This question will be addressed with particular emphasis on the use of recommender systems in educational contexts. Knowing the psychological impact of recommendations on users can be helpful for practitioners and researchers alike. If we have a better idea of how people react to recommender systems, we can improve algorithms and interfaces in ways that make using the system more efficient and satisfactory. Understanding how users contribute data to recommender systems is important for practitioners, as problems like low participation can impede system performance. From a research perspective, a better understanding of the psychological impacts of recommender systems can inform various fields, such as educational psychology (instructional design, educational technology), social psychology (persuasion, trust building), business administration (marketing), or computer science (machine learning, HCI). The paper is structured as follows: Section 2 explores how the key characteristics of personalized recommender systems fit into current thought in the learning sciences. Section 3 discusses specific requirements that recommender systems must fulfill in order to support learning processes, both with regard to two learner roles and two types of adaptation. This discussion leads to four conjectures about how recommender systems should be adapted for educational contexts. Section 4 integrates the findings, and provides an outlook on future research. 2. Recommender systems and the learning sciences Designing and implementing workable recommender systems can be quite burdensome. Apart from a technological infrastructure that needs to store data about each possible combination of items and user, thereby generating substantial server load, a critical mass of users is one of the main roadblocks towards successful implementation (Glance, Arregui, & Dardenne, 1999). If the community of people who generate data is too small, recommendations become less precise. This raises the question of whether it is useful to implement personalized recommender systems in educational contexts. In order to answer this question, an example of a fictitious educational recommender system is introduced. This example will be used to illuminate main principles of recommender systems design, and these principles will be compared to principles in the learning sciences. In our example, a psychology student is trying to find good research literature for her Master's thesis. She logs into a digital library Website which operates a recommender system on academic publications. Let's assume that she has never used the system before. The recommender might provide her with a list of the most popular publications on her thesis topic. This list would be similar to a bestseller list. Adjacent to each entry is a slider where she can rate each publication on a range from 1 (uninteresting) to 5 (highly interesting). She reads through the list, and selects some publications that she knows. Interestingly, she dislikes some of the popular publications, and expresses this through low ratings.
Though our student does not interact with other users of the recommender system, she is part of a larger community of others who have also selected and rated thousands of publications. As shown in Fig. 1, selecting entries and rating them constitutes the activity of individuals within the community. The recommender system then aggregates all the ratings from the community's rating database, and filters this information according to specified algorithms. For instance, if the recommender system employs user-based collaborative filtering algorithms, one step is to define a so-called neighborhood for our student, i.e. a number of users that gave the most similar ratings to her. Once this set of neighbors is established, the system goes through all publications that our student has not rated yet, and identifies those publications that received the highest average ratings from the student's neighborhood. In the recommender interface, the system provides an output of the top 10 publications; these items constitute the recommendations. As the student (and, by the use of similarity metrics, her neighborhood) has a non-standard taste, this list might differ strongly from the original, bestseller-like list. If a publication is recommended that the student does not know, she might order it. If she likes it (gives a high rating), she will become more similar to her neighbors; if the recommendation was bad and she gives a low rating, a new neighborhood might emerge, resulting in slightly different, adjusted recommendations. This ongoing cycle between individual activities (selecting, rating) and system activities (aggregating, filtering) rests on five principles of recommender system design (see Fig. 1). First, recommender systems rely on collective responsibility. In our digital library example, the data on which book recommendations are based were generated by a community of peers (Resnick & Varian, 1997). This is in contrast to offline contexts where recommendations often come from dedicated individuals like teachers, mentors, or reviewers. Recommender systems do not hand particular power to dedicated individuals, but shift responsibility and accountability towards the user collective. A similar principle of collective responsibility can be found in the learning sciences (Scardamalia, 2002) where many scholars have suggested moving from a traditional, teacher-centered education towards a peer-centered education (Brown et al., 1993). In peer-centered education as well as in recommender systems, a power structure with flat hierarchies emerges. Moreover, in both fields it is assumed that peer efforts will lead to high-quality output: The learning from peer-centered education should be at least as high as in teacher-centered education; similarly, recommendations derived from a community should at least be as good as those from dedicated experts. Second, recommender systems exhibit collective intelligence. For instance, if a particular book is recommended to our student, this recommendation cannot be traced back to the behavior of any individual user. Rather, it is the behavior of the user collective (or in the case of user-based collaborative filtering, the neighborhood) that is responsible for the recommendation. As it was shown empirically that computed recommendations are sufficiently correlated with the actual ratings of a user (Herlocker, Konstan, Borchers, & Riedl, 1999), it can be argued that these systems exhibit collective intelligence (Malone, Laubacher, & Dellarocas, 2009).
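How such a recommendation emerges from the ratings of many users rather than from any single one can be illustrated with a small sketch along the lines of the digital library example; the data structures, the similarity measure, and all names below are invented for illustration only.

    # Illustrative sketch of the user-based recommendation cycle from the
    # digital library example; all data and names are invented.

    def similarity(a, b):
        """Agreement score over co-rated publications (higher = more similar)."""
        shared = set(a) & set(b)
        if not shared:
            return 0.0
        # Mean absolute rating difference, turned into a similarity in [0, 1].
        diff = sum(abs(a[p] - b[p]) for p in shared) / len(shared)
        return 1.0 - diff / 4.0          # ratings range from 1 to 5

    def recommend(ratings, student, k_neighbors=2, top_n=10):
        """Rank publications the student has not rated by the average rating
        they received from the student's neighborhood."""
        me = ratings[student]
        neighbors = sorted(
            (u for u in ratings if u != student),
            key=lambda u: similarity(me, ratings[u]),
            reverse=True,
        )[:k_neighbors]
        candidates = {}
        for u in neighbors:
            for pub, score in ratings[u].items():
                if pub not in me:
                    candidates.setdefault(pub, []).append(score)
        ranked = sorted(candidates.items(),
                        key=lambda kv: sum(kv[1]) / len(kv[1]),
                        reverse=True)
        return [pub for pub, _ in ranked[:top_n]]

    # Toy community: ratings from 1 (uninteresting) to 5 (highly interesting).
    ratings = {
        "student": {"paper_a": 2, "paper_b": 5},
        "user_1":  {"paper_a": 2, "paper_b": 5, "paper_c": 5, "paper_d": 1},
        "user_2":  {"paper_a": 5, "paper_b": 1, "paper_d": 5},
    }
    print(recommend(ratings, "student"))   # -> ['paper_c', 'paper_d']

In this toy community the recommendation is an aggregate of the neighborhood's ratings, so no individual user determines the outcome.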
This idea resonates with the notion of ''group cognition'' in the learning sciences, particularly in research on computer-supported collaborative learning (Stahl, 2006). According to this view, the output of a collaborative learning group, e.g. their discussions or the constructed artifacts, cannot be meaningfully or completely traced back to individual group members, but rather arises through complex interactions among the constituents (group members). It can be said that these emergent properties of groups can also be found in the way that recommender systems operate. Third, recommender systems are based on user control. A book that is suggested by a recommender system differs from a book that is a mandatory part of a course syllabus. Our student has the choice to follow the recommendation or not. Recommender systems preserve user autonomy, and they do not prescribe courses of action to be taken by a person. They typically support information search and retrieval, i.e. tasks of a self-directed, exploratory and often open-ended nature. In this regard, they cater to modern constructivist epistemologies in the learning sciences
that also stress the importance of self-regulated learning (Boekaerts & Minnaert, 1999), or discovery learning (Bruner, 1961). Fourth, recommender systems provide guidance. The student in the digital library example is not faced with a huge list of all publications on her thesis topic, but already receives a filtered list of those titles that are most relevant to her search. By giving directions and offering hints that a user may or may not take into account, recommender systems are equivalent to an information signpost (Konstan & Riedl, 2003). Providing guidance is also a central issue in the learning sciences as too much learner autonomy can be perceived as burdensome without some form of explicit or implicit structuring. As a consequence, principles in the learning sciences often suggest using scaffolds (Vygotsky, 1978), scripts (Kollar, Fischer, & Hesse, 2006), or awareness functionalities (Engelmann, Dehler, Bodemer, & Buder, 2009) in order to provide guidance for self-regulated activities. The key is to strike a delicate balance between autonomy and guidance so that guidance neither becomes too immaterial nor too directive. Fifth, recommender systems are personalized, i.e. they suggest items that are adaptively tailored to the needs, interests, and preferences of a user. As mentioned in the digital library example, the recommendations for our student were not a generic, bestseller-like list of most popular publications, but consisted of items that were personalized with regard to her taste. The notion of personalization also plays an important role in the learning sciences: Different learners do not benefit to the same degree from uniform types of instruction (Cronbach & Snow, 1977), and there is general consensus that instructional material should be adapted to the knowledge, the needs, and the abilities of learners. Consequently, learning technologies such as intelligent tutoring systems (Anderson, Boyle, & Reiser, 1985) or adaptive hypermedia environments (Brusilovsky, 2001) tailor information to the needs and abilities of learners. Recommender systems are based on the same general idea by matching their output to a user’s historically developed profile. Of course, the identified principles of the learning sciences – shifting responsibility towards peers, harnessing collective intelligence, enabling user control, providing scaffolds, and tailoring to needs, abilities, and interests – are embedded within many information technologies, but personalized recommender systems combine all of these principles. In this regard, they exhibit features that have the potential to leverage learning processes. However, the fit of recommender systems into learning contexts by no means implies that they can be transferred from their current, mostly commercial context into educational contexts on a one-to-one basis. Rather, they must be adapted to the peculiarities of educational scenarios. Section 3 addresses the issues that have to be taken into account in order to fruitfully apply recommender systems in the educational realm. 3. A psychological account of educational recommender systems Much attention on recommender systems has been devoted to issues of technical implementation, mathematical modeling, and performance metrics (Adomavicius & Tuzhilin, 2005). However, there is a growing awareness that non-technical issues should be taken into account in order to improve personalized recommender systems, particularly if these systems are applied in non-standard settings like education. 
Consequently, some authors began theorizing about recommender systems by including educational considerations (Drachsler, Hummel, et al., 2009; Manouselis, Drachsler, Vuorikari, Hummel, & Koper, 2011; Tang & McCalla, 2005; Wang, 2007). The present paper also focuses on recommender systems in educational contexts, but it is novel in taking a psychological point of view on the topic. Relatively little is known about how people react to and act upon information presented via recommender systems, and a psychological account might offer valuable hints on barriers and potentials.

Fig. 1. Flow chart of the recommendation process. Principles of recommender systems are embedded.

In the following, we propose a conceptualization that sheds light on various issues that have to be taken into account when designing educational recommender systems. The account is structured along a distinction that was made by Xiao and Benbasat (2007) in a conceptual paper on recommender systems in e-commerce contexts. However, while these authors put the technology into the center by distinguishing between input characteristics (data that a recommender system gets) and output characteristics (data that a recommender system displays), we make the same distinction from a learner viewpoint. In other words, our account distinguishes between a recipient role where learners are confronted with recommended items and a producer role where learners generate data that are the basis for system computations. The distinction between different roles (recipient vs. producer) serves as a structural element for the remainder of this paper. For each role, two issues of recommender system adaptation for educational contexts will be discussed. The first issue refers to system-centered adaptations: In order to work properly in educational contexts, recommender systems must provide the right kind of information so that learning from recommendations is enabled (recipient role). Moreover, proper functioning of recommender systems requires that users generate data on which system computations can be performed (producer role). Apart from these basic, system-centered adaptations, the second issue explored for recipient roles and producer roles pertains to social adaptations. Human information processing in general, and learning in particular, can be characterized by bounded rationality (Simon, 1959). Navigation and selection of items in a recommender system (recipient role) and rating of items (producer role) are influenced by a number of social psychological factors that can be linked to
bounded rationality. For instance, we do not always attend to the information from which we learn most (Tang & McCalla, 2005); and we do not always contribute information even if an entire community could benefit from such an activity (Dawes, 1980). Therefore, educational recommender systems can be improved by introducing social adaptations that facilitate those information processing biases that are conducive to learning or attenuate those biases that are detrimental to learning. These distinctions result in four structural elements (recipient role vs. producer role; system-centered adaptation vs. social adaptation) for the following sections. In Sections 3.1 and 3.2 these issues will be discussed based on theoretical considerations as well as empirical results from various fields of research. Table 1 gives an overview of the literature that informed the following discussion.

Table 1
Overview of reviewed studies about recommender systems.

Study | Type | Field | Finding

Recipient role and system-centered adaptation
Adomavicius and Tuzhilin (2011) | Conceptual (review) | Computer science | Introduces context-aware algorithms
Burke (2002) | Empirical (simulation) | Computer science | Compares different hybrid recommender algorithms
Drachsler, Hummel, & Koper (2009) | Conceptual | Educational technology | Reflects on differences between recommenders for learning vs. commerce
Drachsler, Hummel, van den Berg, et al. (2009) | Empirical (field experiment, N = 250) | Educational technology | Hybrid system leads to higher efficiency in learning
Nadolski et al. (2009) | Empirical (simulation) | Computer science / educational technology | Collaborative filtering and hybrid systems outperform no recommendations

Recipient role and social adaptation
McNee et al. (2006) | Conceptual | HCI | Makes a case that personalities are ascribed to recommender systems (basis for persuasion)
Schwind et al. (2011a) | Empirical (online experiment, N = 123) | Educational psychology | Preference-inconsistency reduces confirmation bias, but leads to lower evaluation
Schwind et al. (2011b) | Empirical (lab experiments, N = 210) | Educational psychology | Preference-inconsistency reduces confirmation bias and leads to better elaboration
Tang and McCalla (2005) | Conceptual | Educational technology | Argues that educational recommendations are not always liked most (preference-inconsistency)
Yoo and Gretzel (2011) | Conceptual | Social psychology | Discusses persuasion of recommender systems through source characteristics

Producer role and system-centered adaptation
Kramer (2007) | Empirical (experiments; N = 363) | Marketing | Task transparency leads to higher acceptance (makes a case for explicit ratings)
McNee et al. (2003) | Empirical (field experiment, N = 163) | HCI | User control in sign-up increases loyalty (makes a case for explicit ratings)
Schein et al. (2002) | Empirical (simulation) | Computer science | Argues for implicit elicitation to overcome cold-start
Xiao and Benbasat (2007) | Conceptual | Marketing | Introduces distinction between implicit vs. explicit elicitation

Producer role and social adaptation
Harper et al. (2007) | Empirical (field experiment, N = 268) | Social psychology | Social comparison increases rating activity
Herlocker et al. (2004) | Conceptual | HCI | Makes a case that motivation for contribution can differ strongly
Ling et al. (2005) | Empirical (field experiments, N = 2715) | Social psychology | Goal setting and utility instructions increase rating activity
Ludford et al. (2004) | Empirical (field experiment, N = 245) | Social psychology | Utility instruction increases rating activity
Rashid et al. (2006) | Empirical (field experiment, N = 160) | HCI/social psychology | Utility interface increases rating activity

Note: Classifications into type of study and findings are reported only as they pertain to this paper.

3.1. Recipient role Relatively little is known about how recommendations are perceived by users. Sections 3.1.1 and 3.1.2 describe issues pertaining to the learners' roles as recipients of information. First, Section 3.1.1 on system-centered adaptation addresses the fact that in classical e-commerce scenarios recommendations are tailored to user taste (Schafer et al., 1999). In contrast, for educational contexts recommendations must be tailored to learner knowledge and learner activities. Second, Section 3.1.2 on social adaptation refers to the fact that humans show preferences for particular types of information, and these inherent biases are not always conducive to learning. Several ways of adapting recommender systems are explored that are based on ideas such as increasing the persuasiveness of recommendations, or providing counter-intuitive recommendations. 3.1.1. System-centered adaptation Whereas classical recommender systems in e-commerce try to adapt to the taste of a user, educational recommender systems should be personalized with regard to learner knowledge and learning activities. For a number of reasons, learner knowledge and learning activities are more difficult to assess than user taste (Drachsler, Hummel, & Koper, 2009): Learning is a gradual process extending over a longer stretch of time. In commercial contexts, effectiveness of a recommender system can be assessed by capturing whether a customer has purchased a recommended item. In contrast, learning does not have clearly defined and measurable ''learning events'' that immediately provide information about recommender system effectiveness. Not only are constructs like knowledge and activities difficult to assess, they are also constantly changing, and they rest on multiple sequential dependencies, i.e. at any given time there can be items that are too easy or too difficult for a learner. This creates numerous situational constraints: An expert in a domain needs different recommendations than a novice; different learning styles (e.g. reproducing
orientation, achieving orientation, and meaning orientation) (Entwistle, 1988) might require different recommendations; recommendations might ideally take metacognitive skills and strategies (Weinstein & Mayer, 1986) into account; and they should be adapted to the goals of a learner (Boekaerts, 1998) – for instance, a learner who wants to find an explanation on a specific algebraic problem needs different recommendations than a learner who seeks general resources on algebra. As Drachsler, Hummel, & Koper (2009) maintained, educational recommender systems would ideally be able to situationally identify those items that correspond to a learner's zone of proximal development (Vygotsky, 1978), the level of ability that the learner is able to master through scaffolding. In other words, in order to adapt to learner knowledge and learning activities, recommender systems must be context-aware (Adomavicius & Tuzhilin, 2011). The classical approach of collaborative filtering through the analysis of simple ratings might not be very helpful, as a high rating could mean that a learner found an item easy, or challenging, or fun. However, there are different ways to achieve context-awareness. A first strategy to make recommender systems context-aware is to use machine intelligence, e.g. through so-called hybrid recommender systems (Burke, 2002). These combine recommender algorithms like collaborative filtering with content-based filters and/or learning modeling techniques (Brusilovsky, 2001). By way of our digital library example from Section 2, a publication recommender system could use a hybrid model that combines rating information with metadata. For instance, if our student gave a high rating for an article, the system could automatically increase the probability that other publications from the same author are recommended. Ontologies, tags and metadata can be used to describe learning items in more detail, and modeling techniques can be used to describe learners in more detail. Having ontologies can also help to address the problem of sequential dependencies among learning items, and they might pave the way for systems that do not recommend isolated items, but actual learning paths (Drachsler, Hummel, & Koper, 2009). To date, there are few examples of hybrid educational recommender systems that go beyond a prototype development, let alone a full system evaluation. However, in a detailed computer simulation study, Nadolski et al. (2009) found that different types of recommender systems yielded much better results (graduation percentages, user satisfaction, graduation times) than no recommendations. Further, the authors found that hybrid recommender systems outperformed purely rating-based and purely ontology-based recommender systems, although not by a significant margin. A real-world investigation of hybrid educational recommender systems compared a group using a hybrid personalized recommender system for learning activities with a no-recommendation control group (Drachsler, Hummel, van den Berg, et al., 2009). In a usage study covering 4 months, they found that groups using the recommender system did not complete more activities, but completed them faster, exhibited a greater variety of learning paths, and expressed higher satisfaction. These examples show that hybrid educational recommender systems are likely to have measurable effects on learning-related variables.
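As a hedged illustration of the hybrid idea mentioned above (e.g. boosting publications that share metadata, such as the author, with items a learner rated highly), the following sketch blends a collaborative-filtering prediction with a simple metadata match. The weighting scheme, the data, and the function names are invented and do not describe any of the cited systems.

    # Minimal sketch of a hybrid score: a weighted blend of a collaborative
    # prediction and a metadata (content) match. All values are invented.

    PUBLICATION_META = {
        "pub_1": {"author": "Smith", "topic": "memory"},
        "pub_2": {"author": "Smith", "topic": "attention"},
        "pub_3": {"author": "Jones", "topic": "perception"},
    }

    def content_score(candidate, liked_items):
        """Highest fraction of metadata fields shared with a highly rated item."""
        if not liked_items:
            return 0.0
        meta = PUBLICATION_META[candidate]
        overlaps = [
            sum(meta[f] == PUBLICATION_META[liked][f] for f in meta) / len(meta)
            for liked in liked_items
        ]
        return max(overlaps)

    def hybrid_score(candidate, cf_prediction, liked_items, weight_cf=0.7):
        """Blend the collaborative prediction (scaled to 0..1) with the metadata match."""
        return weight_cf * (cf_prediction / 5.0) + (1 - weight_cf) * content_score(candidate, liked_items)

    # The learner gave pub_1 a high rating; pub_2 (same author) gets a metadata boost.
    liked = ["pub_1"]
    for pub, cf_pred in [("pub_2", 3.0), ("pub_3", 3.0)]:
        print(pub, round(hybrid_score(pub, cf_pred, liked), 2))
    # pub_2 scores higher than pub_3 despite identical collaborative predictions.

The fixed weight is only a placeholder; a real hybrid system would have to tune or learn how much the metadata component should count.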
A second strategy to increase context-awareness does not rely on machine intelligence, but on involving learners into the recommendation process. For instance, learners could choose among different learning paths depending on their learning styles or concrete learning goals. Moreover, dialogs could be provided that give learners an opportunity to feed back on the situational adequacy of received recommendations. While it appears that the learner involvement strategy is less popular among system designers than the use of hybrid recommender systems, it should be noted that active learner involvement might have additional benefits. For instance, learner control and customizability of system output are related to higher satisfaction and trust (McNee, Lam, Konstan, & Riedl, 2003). Moreover, involving learners into the very process of recommendations caters well to the spirit of learning as a constructive and collaborative activity. This leads to our first conjecture: 3.1.1.1. How to achieve system-centered adaptation for recipients? Recommender systems must be context-aware in order to correctly diagnose learner knowledge and learning activities. This can be accomplished either through machine intelligence (hybrid recommender systems) or through involvement of learners into the recommendation process itself (customization, feedback loops). It is an empirical question which of these two strategies is superior, but a tentative conclusion could be that learner involvement has additional educational benefits. 3.1.2. Social adaptation Information processing has a social dimension: It is colored by attitudes, judgments, stereotypes, and affective reactions (Bandura, 1986). This lends a social dimension to the design of educational recommender systems as well. Some scholars hold that computers are social actors (Nass, Moon, Morkes, Kim, & Fogg, 1997), or that they are persuasive technologies that can exert social influence (Fogg, 2003). More specifically, the idea that recommender systems are perceived as social actors is supported by the observation that users ascribe a personality to them (McNee, Riedl, & Konstan, 2006). Personalized recommender systems mimic a knowledgeable person, a person who not only has information about a huge number of items, but also about the tastes and preferences of a user. However, we do not always follow a recommendation by a human being, and of course the same might apply to recommendations from a recommender system. This raises questions about the conditions under which the selection of recommended items can be influenced. In order to answer these questions, it is helpful to reflect on biases in human information processing. Some of these biases are conducive to learning and can be put to good use by making recommendations more appealing. Other biases in information processing are detrimental to particular types of learning, so recommender systems should be designed to overcome these biases. We now turn to conducive biases in the context of literature on persuasion, followed by detrimental biases in the context of selective exposure literature. Dual-process models of persuasion have outlined the boundary conditions that determine whether people are more or less inclined to follow a persuasive message such as a recommendation. According to the elaboration likelihood model (Petty & Cacioppo, 1986), the degree to which a persuasive message is elaborated depends on a recipient's motivation and ability to process the message.
Low personal relevance of the message topic undermines motivation, whereas distraction during processing impedes ability. If motivation and ability are high, messages are carefully scrutinized, and persuasion mainly depends on so-called message characteristics; in contrast, if motivation and/or ability are low, persuasion mainly depends on so-called source characteristics (McGuire, 1969). As motivation and ability are not directly controllable, design of educational recommender systems should try to unfold persuasive power through message characteristics and source characteristics. As to message characteristics, the variable that is most often associated with them is argument strength. For instance, the elaboration likelihood model predicts that under conditions of high elaboration (high motivation and ability), a strong argument becomes persuasive, whereas a weak argument is likely to be rejected. As a consequence, the item pool of an educational recommender system should contain as many strong arguments as possible. A second way to influence the persuasiveness of a recommender system is through source characteristics, i.e. perceived attributes of a sender (Yoo & Gretzel, 2011). For
instance, a message becomes more persuasive if a source is perceived as having expertise or authority (Briñol & Petty, 2009; Cialdini, 2001). Recommender systems can trigger expertise cues by providing recommendations on a wide range of topics, and authority cues can be generated through third-party endorsements, reference to awards, or explanations that the recommended items were suggested by experts. If an educational recommender system is based on a pool of strong arguments, and if the system generates authority cues and/or expertise cues, it is likely to become persuasive irrespective of learner characteristics like motivation and ability. While persuasion makes productive use of information processing biases like the tendency to follow strong arguments, or authority-endorsed arguments, biases can also represent a hindrance to certain forms of learning. For instance, a robust finding in the literature on communication science holds that people are prone to selective exposure, i.e. they attend to only parts of the information that is presented to them (Knobloch-Westerwick & Meng, 2009). In particular, many people exhibit confirmation bias, the tendency to actively seek for information that confirms initial preferences (Jonas, Schulz-Hardt, Frey, & Thelen, 2001). The reason for this bias can be traced back to dissonance theory (Festinger, 1954) which posits that people tend to avoid stimuli that create cognitive dissonance. While confirmation bias is rarely addressed in the learning sciences, we believe that it can play a pivotal role in those areas of education where the goal is to challenge existing beliefs and opinions. One such area is critical and open-minded thinking, which involves learners questioning not only other opinions, but also their own opinion (Stanovich & West, 1997). Critical thinking and unbiased reasoning can be linked to educationally relevant constructs like multiperspectivity (Spiro & Jehng, 1990) and informational diversity (De Wit & Greer, 2008). However, in order to become critical thinkers, learners must overcome confirmation bias, but a classical recommender system would do little to avert this bias, as it would suggest items that are consistent with a learner's preference. What would be needed, then, is a recommender system that does the opposite, i.e. trying to capture the preferred opinions of learners, and confronting them with opposing viewpoints. For instance, if the student in our digital library example wants to write her Master's thesis on a particular theory, it might be useful to recommend at least some publications that are critical of this theory. The efficiency of preference-inconsistent recommendations in critical thinking contexts was investigated in our empirical work (Schwind, Buder, & Hesse, 2011a; Schwind, Buder, & Hesse, 2011b). Our experimental paradigm involved presenting preference-consistent and preference-inconsistent information to learners who searched for information on the controversial topic of neuro-enhancement. Simply making preference-inconsistent information available was not a sufficient strategy to overcome confirmation bias, as participants in a no-recommendation control group selected preference-consistent information more frequently than preference-inconsistent information. However, when preference-inconsistent information was not only made available, but was recommended through visual highlighting, confirmation bias was strongly reduced.
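A minimal sketch of this strategy, with invented items, stance labels, and ratings: the learner's currently preferred stance is inferred from her ratings, and unrated items arguing the opposing stance are selected for recommendation (in the studies described here, such items were additionally highlighted visually).

    # Illustrative sketch of preference-inconsistent recommendation. Items,
    # stance labels and the example ratings are invented.

    items = {
        "text_1": "pro", "text_2": "pro", "text_3": "contra",
        "text_4": "contra", "text_5": "pro",
    }

    def preferred_stance(ratings, items):
        """Return the stance ('pro' or 'contra') with the higher mean rating."""
        means = {}
        for stance in ("pro", "contra"):
            scores = [r for item, r in ratings.items() if items[item] == stance]
            means[stance] = sum(scores) / len(scores) if scores else 0.0
        return max(means, key=means.get)

    def preference_inconsistent_recommendations(ratings, items, n=2):
        """Recommend unrated items whose stance opposes the learner's preferred one."""
        preferred = preferred_stance(ratings, items)
        opposing = [item for item, stance in items.items()
                    if stance != preferred and item not in ratings]
        return opposing[:n]

    learner_ratings = {"text_1": 5, "text_2": 4, "text_3": 1}   # clearly prefers 'pro'
    print(preference_inconsistent_recommendations(learner_ratings, items))
    # -> ['text_4'] (a 'contra' text the learner has not seen yet)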
Moreover, preference-inconsistent recommendations improved elaboration, as exemplified by a less confirmation-biased item recall, and by more divergent thinking patterns in subsequent essays. However, our studies have also shown that preference-inconsistent items were less liked by learners than preference-consistent items, a finding that mirrors Tang and McCalla's (2005) viewpoint according to which information that is most useful from an educational perspective is often not the one that is liked most. While this problem might be averted by making preference-inconsistent recommendations more appealing (e.g. by verbally framing them as ''challenges''), our empirical results are promising signs that these counter-intuitive recommendations can help to facilitate critical thinking. This leads to our second conjecture. 3.1.2.1. How to achieve social adaptation for recipients? This can be accomplished by considering biases in human information processing. Learners are more likely to accept strong arguments and arguments that are accompanied by expertise cues and authority cues. Detrimental processing biases like confirmation bias can be mitigated by explicitly recommending items that run counter to a learner's preference. Actually recommending these items is a better strategy than just making preference-inconsistent items available. The use of preference-inconsistent recommendations is helpful for educational settings that try to challenge a learner's existing viewpoints and beliefs. 3.2. Producer role In Section 2 on recommender systems and the learning sciences it was argued that recommender systems are peer technologies that exhibit collective intelligence through the input of many individuals. However, in order to generate accurate predictions and unfold collective intelligence, recommender systems are strongly dependent on data that express how users think about a given item. Consequently, the role of users as producers of these data is a central issue in research on personalized recommender systems. In Sections 3.2.1 and 3.2.2 it is explored how recommender systems should be adapted to cater to the peculiarities of educational scenarios. As in the section on the recipient role, the discussion is structured by two issues. The first issue pertains to system-centered adaptation, and it is guided by the design consideration of whether a recommender system should use explicit learner output like ratings, or whether implicit methods of capturing user data should be preferred. The second issue on social adaptation then explores how detrimental learner behaviors like low contribution to a recommender system can be averted by making productive use of those biases that are conducive to the production of rating data. 3.2.1. System-centered adaptation Recommender systems rely on the output from users. This creates a basic design decision of how these user data can be gathered. There are two fundamental ways to accomplish this goal, viz. implicit vs. explicit elicitation methods (Xiao & Benbasat, 2007). Implicit elicitation requires capturing user navigation (site visits, dwell times on sites, purchases) and taking these data as indicators of user preferences. In contrast, explicit elicitation requires a dedicated response of users, typically in the form of ratings on items. There are a number of advantages associated with implicit preference elicitation: It is unobtrusive, i.e.
it does not burden a user with the additional task of rating, thereby reducing the cold start problems that occur until the system has gathered enough preference data to yield accurate predictions (Schein, Popescul, Ungar, & Pennock, 2002). Moreover, implicit preference elicitation might be more objective than explicit ratings, as it is not affected by users' tendency to respond in a socially desirable way (Fisher, 1993). However, explicit rating methods have a number of advantages as well. Kramer (2007) reported that explicit methods of eliciting user preferences led to higher acceptance rates for recommendations than implicit and opaque methods. This led Xiao and Benbasat (2007) to conclude that explicit methods might help users to gain insights into their preferences and therefore increase decision quality. Implicit methods might lead to psychological reactance, a negative reaction to a restriction of autonomy (Dillard & Shen, 2005), whereas explicit ratings are associated with user control, which in turn is related to satisfaction and trust in recommender systems (McNee et al., 2003). Another potential advantage of explicit rating
methods is that users might be better able than computers to make judgments on subjective and fuzzy rating categories (Norman, 1993). For instance, in the digital library example it would be difficult for implicit elicitation methods to differentiate those publications that our student selected and subsequently regarded as inappropriate from those publications that she found useful after reading the abstract. This problem would not have occurred if both the inappropriate and the useful item had been explicitly rated.

Recommender systems in e-commerce have the primary goal of making users aware of items that might be interesting to them, thereby increasing cross-sell of items. In this light, it is quite reasonable not to burden users with additional rating activities and to opt for implicit preference elicitation. In contrast, for educational recommender systems the use of ratings might have additional benefits, as explicit elicitation can be regarded as a form of participation (Ling et al., 2005). Participation is a highly pervasive notion in the learning sciences. For instance, the number of contributions that a learner produces during interaction is regarded as an important antecedent of learning outcomes (Cohen, 1994). Participation requires learners to reflect on an issue, thereby leading to deeper elaboration (Kollar et al., 2006), and in this regard explicit rating instructions can serve as a valuable metacognitive prompt (Palincsar & Brown, 1984). For this reason, explicit elicitation of user data through ratings appears to be a promising approach for educational recommender systems. This leads to our third conjecture:

3.2.1.1. How to achieve system-centered adaptation for producers?
Recommender systems can be fueled either by explicit elicitation methods (ratings) or by implicit methods. The positive link between participation and learning speaks in favor of explicit elicitation methods rather than unobtrusive and implicit methods. Through rating activities, learners are required to reflect on the merits of a recommended item, and this might function as a valuable metacognitive prompting strategy.

3.2.2. Social adaptation

In Section 3.1.2 on the recipient role it was argued that human information processing has a social dimension and is colored by biases, preferences, and habits. A similar case can be made for the producer role, particularly in cases of explicit elicitation through ratings. Rating an item represents a social dilemma (Dawes, 1980). Such a dilemma occurs when (a) it appears rational for each individual to withhold rather than produce or share information, and (b) it is better for the collective if every member contributed rather than withheld information. This is the case for recommender systems, where rating requires some effort, but the immediate benefit of rating is not evident to a user. Social dilemmas can lead to detrimental behaviors like social loafing and free-riding (Karau & Williams, 1993). As social loafing directly impedes the quality of a recommender system, this raises the question of how this detrimental behavior can be averted. As users are highly likely to respond to social cues, the basic idea here is to emphasize the social aspects of a recommender system. While recommender systems are peer technologies, there is no direct peer-to-peer interaction, and the community of users remains anonymous and invisible to an individual. However, by making the community more visible, powerful social psychological mechanisms can be evoked.
Two strategies build on these mechanisms, and their impact on the quantity of ratings has been investigated empirically. The first strategy makes use of the normative power of groups, either through the introduction of social comparisons or through goal setting. Depending on prevalent social identity (Tajfel & Turner, 1986), individuals adhere to group norms, and this adherence can even be stronger when members are anonymous (Reicher, Spears, & Postmes, 1995). If the group norm is about member productivity, rating quantity can be increased by introducing social comparisons (Festinger, 1954). This was investigated by Harper, Li, Chen, and Konstan (2007). They gave bogus feedback to participants of the movie recommender system MovieLens, indicating that the number of ratings an individual had provided was lower than, the same as, or higher than that of a comparable group of community members, and contrasted this with a condition without such feedback. It was shown that upward comparison (feedback about under-performance) led to the highest number of produced ratings in the following week. Even downward comparison (feedback about over-performance) led to a higher number of ratings than the control condition without social comparison information. A different normative approach was investigated by Ling et al. (2005), who reported that setting concrete norms and goals, like rating a fixed number of items, led to higher productivity than setting unspecific ''do your best'' goals, at least when the goal seemed attainable. Taken together, it appears that making the norms of a community salient can exert normative power which in turn increases productivity.

The second strategy for appealing to the social nature of recommender systems is to make the value of one's contribution salient. The collective effort model (Karau & Williams, 1993) posits that social loafing will be reduced when people believe that their contribution is useful to a community. Moreover, it states that people contribute more when they identify with similar others. In the digital library example of Section 2, our student had a taste that differed from the mainstream. In this regard, her ratings are particularly valuable for the sub-group of like-minded people. But she might know neither that her taste is special nor that there are like-minded people. Therefore it would be helpful if some information on the utility of ratings were provided. This issue has been investigated in three recommender-related studies. Two of these studies manipulated utility by telling subjects that they had either a very unique taste (high utility) or a very typical taste (low utility) (Ling et al., 2005; Ludford, Cosley, Frankowski, & Terveen, 2004). The authors confirmed the prediction of the collective effort model that uniqueness instructions lead to more contributions. A third study, conducted by Rashid et al. (2006), used a more technology-oriented approach to employing utility information. The authors created a recommender interface where each unrated item had a display that indicated how helpful it would be for target persons if it were rated. In line with the collective effort model, it was found that displaying the rating utility led to higher contribution rates than a control condition. It was also confirmed that people felt more motivated to contribute for the good of similar target persons than for dissimilar others. Contrary to expectations, fewer items were rated when the benefit to oneself was stressed.
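A minimal sketch may help to illustrate how such a rating-utility display could be computed. The heuristic below (counting taste-similar users who have not yet rated an item) is our own simplification for purposes of illustration, not the metric used by Rashid et al. (2006), and all names in the code are hypothetical.

```python
# Illustrative sketch of a "rating utility" display: an unrated item is the more
# useful to rate, the more taste-similar users there are who have not rated it
# themselves and could therefore receive predictions based on the new rating.
from typing import Dict, Set

Ratings = Dict[str, Dict[str, float]]  # user -> {item: rating}


def jaccard(a: Set[str], b: Set[str]) -> float:
    """Overlap of two sets of rated items as a crude taste-similarity measure."""
    return len(a & b) / len(a | b) if (a | b) else 0.0


def rating_utility(target: str, item: str, ratings: Ratings, sim_threshold: float = 0.2) -> int:
    """Number of users similar to `target` who have not yet rated `item`."""
    target_items = set(ratings.get(target, {}))
    utility = 0
    for user, profile in ratings.items():
        if user == target:
            continue
        if jaccard(target_items, set(profile)) >= sim_threshold and item not in profile:
            utility += 1
    return utility


if __name__ == "__main__":
    data = {
        "anna": {"i1": 4.0, "i2": 5.0},
        "ben": {"i1": 4.0, "i3": 3.0},
        "chris": {"i2": 5.0, "i3": 2.0},
    }
    # Displayed next to the unrated item "i4" in anna's interface:
    print("Rating i4 would help", rating_utility("anna", "i4", data), "similar users")
```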
That fewer items were rated when personal benefit was stressed is somewhat surprising, as the quality of recommendations for an individual increases with the number of ratings that this individual has produced. However, Herlocker, Konstan, Terveen, and Riedl (2004) have pointed out that motivations for user ratings can be quite different: Some users simply want to express themselves; some users are driven by social motivations like helping others, or manipulating others; and for some users, gaming the system is the main motivation to provide ratings in a recommender system. In all these examples, utility for oneself does not play a major role. In contrast, making the social impact of one's ratings salient is an effective method to boost rating activities. This leads to our fourth conjecture.

3.2.2.1. How to achieve social adaptation for producers?
This can be accomplished either by making group norms visible (through social comparisons or concrete goal setting), or by providing information about the usefulness that rating provides for (similar) others. For educational contexts, the strategy of stressing the utility of ratings might be superior to the social comparison strategy, as both upward and downward comparisons were reported to be associated with negative affect (Buunk, Collins, Taylor, van Yperen, &
Dakof, 1990). Moreover, providing utility information is more likely to appeal to the collaborative spirit that fuels recommender systems.

4. Conclusions

This paper explored the potentials of personalized recommender systems in educational settings. It was argued that recommender systems fit nicely with important principles in the learning sciences: (1) Recommender systems are peer technologies that shift responsibility away from dedicated experts. (2) Recommender systems are technologies where the quality of content is not traceable to any individual output, but rather to the collective behavior of a community. (3) Recommender systems provide user control, thereby facilitating self-regulated, exploratory, and autonomous learning. (4) Recommender systems provide guidance to learning activities. (5) Recommender systems are adaptively tailored to the needs and requirements of learners.

However, it would clearly be a mistake to apply standard recommender systems to learning scenarios without adapting them to educational needs. Section 3 of this paper structured issues surrounding educational applications of recommender systems with regard to two roles that learners exhibit, viz. as recipients of information and as producers of data. For each of these two roles, two issues were discussed: one with regard to system-oriented adaptations that enable proper functioning of educational recommender systems, the other with regard to social adaptations that exert an influence on how learners react to and act upon recommendations. On the basis of theoretical and empirical findings from various research fields, design-related questions were posed and answered. A summary of that discussion can be found in Table 2. The leftmost column of this table lists the four issues that were raised in the discussion. The second and third columns contrast the requirements for classical recommender systems in e-commerce vs. educational recommender systems. And the rightmost column proposes design strategies for educational recommender systems.

Rather than repeating the issues discussed in preceding sections, we would like to point out some recurring thoughts, particularly about the differences between e-commerce recommender systems and educational recommender systems. In classical e-commerce scenarios, the main goal of designers is to increase cross-sell of items. The corresponding recommender systems can be characterized by a number of typical features. These systems are adapted to user taste, and they try to keep the potential burden of system usage to a minimum. As a consequence, the solutions rely heavily on constraining system output, preferably through machine intelligence. Although there is some awareness among designers that recommender systems should support serendipity and diversity of result lists (Ziegler, McNee, Konstan, & Lausen, 2005), the general consensus seems to be that a recommender system should not come up with anything that is unexpected by a user. Most importantly, user activity should be minimized, e.g. by implicit preference elicitation, or by employing hybrid systems with ontologies and user modeling techniques. This strategy of minimal interference might be useful in commercial contexts, though it can be argued that it appears somewhat outdated in an age where user-generated content has become so pervasive. However, it is quite evident that such a strategy should not be adopted for learning contexts.
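To make this contrast concrete, the design strategies summarized in Table 2 can also be read as a set of configuration decisions. The sketch below spells them out; every field name, and the example value for the share of preference-inconsistent recommendations, is our own illustration rather than the interface of any existing system.

```python
# Purely illustrative: the design strategies of Table 2 restated as configuration
# defaults for a hypothetical educational recommender system.
from dataclasses import dataclass


@dataclass
class RecommenderConfig:
    context_aware_personalization: bool   # personalize on knowledge/activities, not only taste
    allow_learner_feedback: bool          # learners can correct inaccurate recommendations
    allow_customization: bool             # user control over how recommendations are generated
    preference_inconsistent_share: float  # fraction of recommendations that challenge the learner
    explicit_ratings: bool                # ratings as participation and metacognitive prompt
    show_rating_utility: bool             # make the value of rating for (similar) others visible


ECOMMERCE_DEFAULTS = RecommenderConfig(
    context_aware_personalization=False, allow_learner_feedback=False,
    allow_customization=False, preference_inconsistent_share=0.0,
    explicit_ratings=False, show_rating_utility=False,
)

EDUCATIONAL_DEFAULTS = RecommenderConfig(
    context_aware_personalization=True, allow_learner_feedback=True,
    allow_customization=True, preference_inconsistent_share=0.3,
    explicit_ratings=True, show_rating_utility=True,
)
```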
Educational recommender systems are not geared toward selling items, but toward facilitating learning. Learning is an active and constructive process (Vygotsky, 1978); therefore, it seems only natural to involve and engage learners in the very process they undergo. This is reflected in the suggested design strategies of Table 2: It can be helpful both for system performance and for learner satisfaction to provide customization options, and to give opportunities to leave feedback on system accuracy. Rating of items should not be regarded as a burden to learning, but rather as an opportunity for learning. Making the community visible by emphasizing the utility that rating has for others fuels the collaborative spirit that is needed for effective recommender systems. And finally, educational recommender systems should involve learners by challenging them. Rather than providing learners with a strongly constrained environment, they should leave ample room for exploration and confront learners with unexpected content, thus allowing for serendipity and learning through discovery. All these differences between e-commerce and educational contexts call for an adaptation of personalized recommender systems so that they can become powerful tools for learning.

Table 2
Contrast between e-commerce requirements and educational requirements with regard to recommender system design, and resulting design strategies for educational contexts.

Issue | Requirement for e-commerce | Educational requirement | Design strategy for educational recommender systems
Recipient role/system-centered adaptation | Personalization with regard to taste | Context-aware personalization with regard to knowledge/activities | Hybrid systems; learner feedback; customization
Recipient role/social adaptation | Persuasiveness; preference-consistency | Persuasiveness; challenges and critical thinking | Strong arguments; expertise/authority cues; preference-inconsistent recommendations
Producer role/system-centered adaptation | Low burden; implicit elicitation | Participation; metacognitive stimulation | Explicit ratings
Producer role/social adaptation | High number of ratings | High number of ratings | Providing utility information

Of course, this overview of the potentials of recommender systems in educational contexts is not exhaustive. For instance, we did not cover interface issues, basically because we think that they do not require specific adaptations for educational scenarios. Readers who are interested in these aspects might refer to usability studies (Herlocker, Konstan, & Riedl, 2000), studies showing serial position effects for recommended items (Felfernig et al., 2007), or work on the display of ratings (Cosley, Lam, Albert, Konstan, & Riedl, 2003). Our overview was also restricted to the impact of recommendations during learning, and we did not address general aspects of recommender use, most of which are covered by the literature on trust (Swearingen & Sinha, 2002). Finally, we focused on the most common types of recommender systems, thereby excluding variations like recommender systems for groups (Buder & Schwind, 2011), or people recommenders (Cai et al., 2011).

Many of the design considerations that were suggested in this paper rest on speculation. This is due to the scarcity of research in a relatively new field. We hold that research on personalized recommender systems should not only focus on the development of better algorithms and implementations, but should be complemented by sound, empirical work on how learners react to and act upon recommender systems. In order to see whether the assumptions of this paper can be confirmed, must be rejected, or require refinement, three types of research are needed. First, we need more practical implementations of recommender systems
in educational contexts. Though some applications are around, many of them do not move beyond prototype development. Implementation studies would typically be case studies or experiments with a ''tool vs. no tool'' condition. Second, we need more applied experimental research that varies boundary conditions, but keeps technology constant. For instance, it would be interesting to see how one and the same recommender system works with different tasks, or with different learners with different learning styles and proficiency levels. This would help to illuminate the potentials and the limitations of educational recommender systems. And third, we need more research on the basic psychological mechanisms that are addressed when learners use a recommender system. For instance, our own empirical work on the effectiveness of preference-inconsistent recommendations, while not employing a full-blown recommender system, can be regarded as a step towards uncovering the psychological dynamics that specific types of recommendations create.

Technologies need to have an added value in order to become incorporated in everyday learning settings. Technologies must make us capable of accomplishing things that are impossible to achieve by any other means. Recommender systems have many fascinating features, among them providing learners with access to information that no other method can provide. In light of these potentials it is more than likely that recommenders for learning are here to stay. However, their exploration by social scientists has just begun.

References

Adomavicius, G., & Tuzhilin, A. (2005). Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. IEEE Transactions on Knowledge and Data Engineering, 17, 734–749. doi:10.1109/ TKDE.2005.99. Adomavicius, G., & Tuzhilin, A. (2011). Context-aware recommender systems. In F. Ricci, L. Rokach, B. Shapira, & P. B. Kantor (Eds.), Recommender systems handbook: A complete guide for research scientists and practitioners (pp. 217–253). New York, NY: Springer. Aehnelt, M., Ebert, M., Beham, G., Lindstaedt, S., & Paschen, A. (2008). A sociotechnical approach towards supporting intra-organizational collaboration. In P. Dillenbourg & M. Specht (Eds.). Proceedings of the Third European Conference on Technology Enhanced Learning (Vol. 5192, pp. 33–38). Berlin, Germany: Springer. doi:10.1007/978-3-540-87605-2_4. Anderson, J. R., Boyle, C. F., & Reiser, B. J. (1985). Intelligent tutoring systems. Science, 228, 456–462. doi:10.1126/science.228.4698.456. Andronico, A., Carbonaro, A., Casadei, G., Colazzo, L., Molinari, A., & Ronchetti, M. (2003). Integrating a multi-agent recommendation system into a mobile learning management system. Proceedings of Artificial Intelligence in Mobile System, 123–132. Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice Hall. Boekaerts, M. (1998). Boosting students' capacity to promote their own learning: A goal theory perspective. Learning and Instruction, 8, 13–22. doi:10.1016/S0959- 4752(98)00008-5. Boekaerts, M., & Minnaert, A. (1999). Self-regulation with respect to informal learning. International Journal of Educational Research, 31, 533–544. doi:10.1016/ S0883-0355(99)00020-8. Brin, S., & Page, L. (1998). The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems, 30, 107–117. doi:10.1016/S0169- 7552(98)00110-X. Briñol, P., & Petty, P. E. (2009).
Source factors in persuasion: A self-validation approach. European Review of Social Psychology, 20, 49–96. doi:10.1080/ 10463280802643640. Brown, A. L., Ash, D., Rutherford, M., Nakagawa, K., Gordon, A., & Campione, J. C. (1993). Distributed expertise in the classroom. In G. Salomon (Ed.), Distributed cognitions: Psychological and educational considerations (pp. 188–228). New York, NJ: Cambridge University Press. Bruner, J. S. (1961). The act of discovery. Harvard Educational Review, 31, 21–32. Brusilovsky, P. (2001). Adaptive hypermedia. User Modeling and User-Adapted Interaction, 11, 87–110. doi:10.1023/A:1011143116306. Buder, J., & Schwind, C. (2011). Recommender systems: A technology to foster individual and collaborative learning. In H. Spada, G. Stahl, N. Miyake, & N. Law (Eds.). Proceedings of the Ninth International Conference on Computer-Supported Collaborative Learning (Vol. 2, pp. 796–800). Hong Kong, China: International Society of the Learning Sciences. Burke, R. (2002). Hybrid recommender systems: Survey and experiments. User Modeling and User-Adapted Interaction, 12, 331–370. doi:10.1023/ A:1021240730564. Buunk, B. P., Collins, R. L., Taylor, S. E., van Yperen, N. W., & Dakof, G. A. (1990). The affective consequences of social comparison: Either direction has its ups and downs. Journal of Personality and Social Psychology, 59, 1238–1249. doi:10.1037/ 0022-3514.59.6.1238. Cai, X., Bain, M., Krzywicki, A., Wobcke, W., Kim, Y. S., Compton, P., et al. (2011). Collaborative filtering for people to people recommendation in social networks. Lecture Notes in Computer Science, 6464, 476–485. doi:10.1007/978-3-642- 17432-2_48. Cialdini, R. B. (2001). Influence: Science and practice (4th ed.). Boston, MA: Allyn & Bacon. Cohen, E. G. (1994). Restructuring the classroom: Conditions for productive small groups. Review of Educational Research, 64, 1–35. doi:10.3102/ 00346543064001001. Cosley, D., Lam, S. K., Albert, I., Konstan, J. A., & Riedl, J. (2003). Is seeing believing? How recommender system interfaces affect users’ opinions. In G. Cockton & P. Korhonen (Eds.), Proceedings of the ACM CHI Conference on Human Factors in Computing Systems (pp. 585–592). New York, NY: ACM Press. doi:10.1145/ 642611.642713. Cronbach, L. J., & Snow, R. E. (1977). Aptitudes and instructional methods: A handbook for research on interactions. New York, NY: Irvington. Dawes, R. M. (1980). Social dilemmas. Annual Review of Psychology, 31, 169–193. doi:10.1146/annurev.ps.31.020180.001125. De Wit, F.R.C., Greer, L.L., 2008. The black-box deciphered: A meta-analysis of team diversity, conflict, and team performance. Academy of Management Best Paper Proceedings. Dillard, J., & Shen, L. (2005). On the nature of reactance and its role in persuasive health communication. Communication Monographs, 72, 144–168. doi:10.1080/ 03637750500111815. Drachsler, H., Hummel, H. G. K., & Koper, R. (2009). Identifying the goal, user model and conditions of recommender systems for formal and informal learning. Journal of Digital Information, 10. Drachsler, H., Hummel, H. G. K., van den Berg, B., Eshuis, J., Waterink, W., Nadolski, R., et al. (2009). Effects of the ISIS recommender system for navigation support in self-organised learning networks. Journal of Educational Technology & Society, 12, 122–135. Drachsler, H., Rutledge, L., van Rosmalen, P., Hummel, H., Pecceu, D., Arts, T., et al. (2010). ReMashed – An usability study of a recommender system for mash-ups for learning. 
International Journal of Emerging Technologies in Learning, 5, 7–11. doi:10.3991/ijet.v5s1.1191. Engelmann, T., Dehler, J., Bodemer, D., & Buder, J. (2009). Knowledge awareness in CSCL: A psychological perspective. Computers in Human Behavior, 25, 949–960. doi:10.1016/j.chb.2009.04.004. Entwistle, N. (1988). Motivational factors in students' approaches to learning. In R. R. Schmeck (Ed.), Learning strategies and learning styles (pp. 21–51). New York, NY: Plenum. Farzan, R., Brusilovsky, P., 2005. Social navigation support in e-learning: What are the real footprints? In: B. Mobasher & S.S. Anand (Eds.), Proceedings of the 19th International Joint Conference on Artificial Intelligence (pp. 49-56). Farzan, R., & Brusilovsky, P. (2006). Social navigation support in a course recommendation system. Computer Science, 4018, 91–100. Felfernig, A., Friedrich, G., Gula, B., Hitz, M., Kruggel, T., Leitner, G., et al. (2007). Persuasive recommendation: Serial position effects in knowledge-based recommender systems. Lecture Notes in Computer Science, 4744, 283–294. doi:10.1007/978-3-540-77006-0_34. Fernandez-Luque, L., Karlsen, R., & Vognild, L. K. (2009). Challenges and opportunities of using recommender systems for personalized health education. In K. P. Adlassnig, B. Blobel, J. Mantas, & I. Masic (Eds.), Medical informatics in a united and healthy Europe (pp. 903–907). Stockholm, Sweden: IOS Press. Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7, 117–140. doi:10.1177/001872675400700202. Fisher, R. J. (1993). Social desirability bias and the validity of indirect questioning. Journal of Consumer Research, 20, 303–315. Fogg, B. J. (2003). Persuasive technology: Using computers to change what we think and do. San Francisco, CA: Morgan Kaufman Publishers. Geyer-Schulz, A., Hahsler, M., & Jahn, M. (2001). Educational and scientific recommender systems: Designing the information channels of the virtual university. International Journal of Engineering Education, 17, 153–163. Geyer-Schulz, A., Hahsler, M., Neumann, A., & Thede, A. (2003). An integration strategy for distributed recommender services in legacy library systems. In M. Schader, W. Gaul, & M. Vichi (Eds.), Between data science and applied data analysis (pp. 412–420). Berlin, Germany: Springer. Glance, N. S., Arregui, D., & Dardenne, M. (1999). Making recommender systems work for organizations. Proceedings of the Fourth International Conference on the Practical Application of Intelligent Agents and Multi-agent Technology, 57–76. Harper, F. M., Li, S. X., Chen, Y., & Konstan, J. A. (2007). Social comparisons to motivate contributions to an online community. In Y. de Kort, W. Ijsselsteijn, C. Midden, B. Eggen, & B. J. Fogg (Eds.). Persuasive Technology (Vol. 4744, pp. 148–159). Berlin, Germany: Springer. doi:10.1007/978-3-540-77006-0_20. Herlocker, J. L., Konstan, J. A., Borchers, A., & Riedl, J. (1999). An algorithmic framework for performing collaborative filtering. In F. Gey, M. Hearst, & R. Tong (Eds.), Proceedings of the 22nd ACM IR Conference on Research and Development in Information Retrieval (pp. 203–237). Berkeley, CA: ACM Press. doi:10.1145/ 312624.312682. Herlocker, J. L., Konstan, J. A., & Riedl, J. (2000). Explaining collaborative filtering recommendations. In W. Kellog & S. Whittaker (Eds.), Proceedings of the ACM
Conference on Computer Supported Cooperative Work (pp. 241–250). New York, NY: ACM Press. doi:10.1145/358916.358995. Herlocker, J. L., Konstan, J. A., Terveen, L. G., & Riedl, J. T. (2004). Evaluating collaborative filtering recommender systems. ACM Transactions on Information Systems, 22, 5–53. Hsu, M.-H. (2008). Proposing an ESL recommender teaching and learning system. Expert Systems with Applications, 34, 2102–2110. doi:10.1016/ j.eswa.2007.02.041. Jonas, E., Schulz-Hardt, S., Frey, D., & Thelen, N. (2001). Confirmation bias in sequential information search after preliminary decisions: An expansion of dissonance theoretical research on selective exposure to information. Journal of Personality and Social Psychology, 80, 557–571. doi:10.1037/0022- 3514.80.4.557. Karau, S. J., & Williams, K. D. (1993). Social loafing: A meta-analytic review and theoretical integration. Journal of Personality and Social Psychology, 65, 681–706. doi:10.1037/0022-3514.65.4.681. Knobloch-Westerwick, S., & Meng, J. (2009). Looking the other way: Selective exposure to attitude-consistent and counterattitudinal political information. Communication Research, 36, 426–448. doi:10.1177/0093650209333030. Kollar, I., Fischer, F., & Hesse, F. W. (2006). Collaboration scripts – A conceptual analysis. Educational Psychology Review, 18, 159–185. doi:10.1007/s10648-006- 9007-2. Konstan, J. A., & Riedl, J. (2003). Collaborative filtering: Supporting social navigation in large, crowded infospaces. In K. Höök, D. Benyon, & A. J. Munro (Eds.), Designing information spaces: The social navigation approach (pp. 43–82). London, England: Springer. Kramer, T. (2007). The effect of measurement task transparency on preference construction and evaluations of personalized recommendations. Journal of Marketing Research, 44, 224–233. doi:10.1509/jmkr.44.2.224. Lemire, D., Boley, H., McGrath, S., & Ball, M. (2005). Collaborative filtering and inference rules for context-aware learning object recommendation. Journal of Interactive Technology and Smart Education, 2, 179–188. doi:10.1108/ 17415650580000043. Ling, K., Beenen, G., Ludford, P. J., Wang, X., Chang, K., Li, K. X., et al. (2005). Using social psychology to motivate contributions to online communities. Journal of Computer-Mediated Communication, 10. Linton, F., & Schaefer, H.-P. (2000). Recommender systems for learning: Building user and expert models through long-term observation of application use. User Modeling and User-Adapted Interaction, 10, 181–208. doi:10.1023/ A:1026521931194. Ludford, P. J., Cosley, D., Frankowski, D., & Terveen, L. (2004). Think different: Increasing online community participation using uniqueness and group dissimilarity. In E. Dykstra-Erickson & M. Tscheligi (Eds.), Proceedings of the ACM CHI Conference on Human Factors in Computing Systems (pp. 631–638). New York, NY: ACM Press. doi:10.1145/985692.985772. Malone, T. W., Laubacher, R., & Dellarocas, C. (2009). Harnessing crowds: Mapping the genome of collective intelligence (Report No. 4732-09). Cambridge, MA: MIT Sloan School of Management. Manouselis, N., Drachsler, H., Vuorikari, R., Hummel, H. G. K., & Koper, R. (2011). Recommender systems in technology enhanced learning. In F. Ricci, L. Rokach, B. Shapira, & P. B. Kantor (Eds.), Recommender systems handbook: A complete guide for research scientists and practitioners (pp. 387–415). New York, NY: Springer. doi:10.1007/978-0-387-85820-3_12. McGuire, W. J. (1969). The nature of attitudes and attitude change. In G. Lindzey & E. Aronson (Eds.). 
The handbook of social psychology (Vol. 3, pp. 163–314). Reading, MA: Addison-Wesley. McNee, S. M., Lam, S. K., Konstan, J. A., & Riedl, J. (2003). Interfaces for eliciting new user preferences in recommender systems. In P. Brusilovsky, F. de Rosis, & A. Corbett (Eds.), Proceedings of the Ninth International Conference on User Modeling (pp. 178–188). Berlin, Germany: Springer. doi:10.1007/3-540- 44963-9-24. McNee, S. M., Riedl, J., & Konstan, J. A. (2006). Making recommendations better: An analytic model for human-recommender interaction. In R. Grinter, T. Rodden, P. Aoki, E. Cuttrell, R. Jeffries, & G. Olson (Eds.), Proceedings of the ACM CHI Conference on Human Factors in Computing Systems (pp. 1103–1108). New York, NY: ACM Press. doi:10.1145/1125451.1125660. Nadolski, R. J., van den Berg, B., Berlanga, A. J., Drachsler, H., Hummel, H. G. K., & Koper, R. (2009). Simulating light-weight personalised recommender systems in learning networks: A case for pedagogy-oriented and rating-based hybrid recommendation strategies. Journal of Artificial Societies and Social Simulation, 12. Nass, C., Moon, Y., Morkes, J., Kim, E.-Y., & Fogg, B. J. (1997). Computers are social actors: A review of current research. In B. Friedman (Ed.), Human values and the design of computer technology (pp. 137–162). Stanford, CA: CSLI Press. Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Boston, MA: Addison-Wesley. Palincsar, A. S., & Brown, A. L. (1984). Reciprocal teaching of comprehensionfostering and comprehension-monitoring activities. Cognition and Instruction, 1, 117–175. doi:10.1207/s1532690xci0102_1. Petty, R. E., & Cacioppo, J. T. (1986). The elaboration likelihood model of persuasion. In L. Berkowitz (Ed.). Advances in experimental social psychology (Vol. 19, pp. 123–205). New York, NY: Academic Press. Rafaeli, S., Barak, M., Dan-Gur, Y., & Toch, E. (2004). QSIA – A Web-based environment for learning assessing and knowledge sharing in communities. Computers & Education, 43, 273–289. doi:10.1016/j.compedu.2003.10.008. Rashid, A. M., Ling, K., Tassone, R. D., Resnick, P., Kraut, R., & Riedl, J. (2006). Motivating participation by displaying the value of contribution. In R. Grinter, T. Rodden, P. Aoki, E. Cutrell, R. Jeffries, & G. Olson (Eds.), Proceedings of the ACM CHI Conference on Human Factors in Computing Systems (pp. 955–958). New York, NY: ACM Press. doi:10.1145/1124772.1124915. Recker, M. M., Walker, A., & Lawless, K. (2003). What do you recommend? Implementation and analyses of collaborative information filtering of web resources for education. Instructional Science, 31, 299–316. doi:10.1023/ A:1024686010318. Reicher, S., Spears, R., & Postmes, T. (1995). A social identity model of deindividuation phenomena. European Review of Social Psychology, 6, 161–198. doi:10.1080/14792779443000049. Resnick, P., & Varian, H. R. (1997). Recommender systems. Communications of the ACM, 40(3), 56–58. doi:10.1145/245108.245121. Sarwar, B. M., Karypis, G., Konstan, J. A., & Riedl, J. (2001). Item-based collaborative filtering recommendation algorithms. In V. Y. Shen, N. Saito, M. R. Lyu, & M. E. Zurko (Eds.), Proceedings of the 10th International Conference on World Wide Web (pp. 285–296). New York, NY: ACM Press. doi:10.1145/371920.372071. Scardamalia, M. (2002). Collective cognitive responsibility for the advancement of knowledge. In B. Smith (Ed.), Liberal education in a knowledge society (pp. 67–98). Chicago, IL: Open Court. 
Schafer, J.B., Konstan, J., & Riedl, J. (1999). Recommender systems in e-commerce. Proceedings of the First ACM conference on Electronic Commerce (pp. 158–166). New York, NY: ACM Press. doi:10.1145/336992.337035. Schein, A.I., Popescul, A., Ungar, L.H., & Pennock, D.M. (2002). Methods and metrics for cold-start recommendations. In K. Järvelin, M. Beaulieu, R. Baeza-Yates, & S. Hyon-Myaeng (Eds.), Proceedings of the 25nd ACM IR Conference on Research and Development in Information Retrieval (pp. 253–260). New York, NY: ACM Press. doi:10.1145/564376.564421. Schwind, C., Buder, J., & Hesse, F.W. (2011a). I will do it, but I don’t like it: User reactions to preference-inconsistent recommendations. In D. Tan, S. Amershi, B. Begole,W. A. Kellog, & M. Tungare (Eds.), Proceedings of the ACM CHI Conference on Human Factors in Computing Systems (pp. 349–352). New York, NY: ACM Press. doi:10.1145/1978942.1978992. Schwind, C., Buder, J., & Hesse, F. W. (2011b). Fostering social navigation and elaboration of controversial topics with preference-inconsistent recommendations. In H. Spada, G. Stahl, N. Miyake, & N. Law (Eds.), Proceedings of the Ninth International Conference on Computer-Supported Collaborative Learning (Vol. 2, pp. 374–381). Hong Kong, China: International Society of the Learning Sciences. Simon, H. (1959). Theories of decision making in economics and behavioural science. American Economic Review, 49, 253–283. Spiro, R. J., & Jehng, J. C. (1990). Cognitive flexibility and hypertext: Theory and technology for the nonlinear and multidimensional traversal of complex subject matter. In D. Nix & R. Spiro (Eds.), Cognition, education, and multimedia: Exploring ideas in high technology (pp. 163–205). Hillsdale, NJ: Erlbaum. Stahl, G. (2006). Group cognition: Computer support for building collaborative knowledge. Cambridge, MA: MIT Press. doi:10.1080/10447310802209414. Stanovich, K. E., & West, R. F. (1997). Reasoning independently of prior belief and individual differences in actively open-minded thinking. Journal of Educational Psychology, 89, 342–357. doi:10.1037/0022-0663.89.2.342. Swearingen, K., & Sinha, R. (2002). Interaction design for recommender systems. Proceedings of the Conference on Designing Interactive Systems. Tajfel, H., & Turner, J. C. (1986). The social identity theory of intergroup behavior. In S. Worchel & W. G. Austin (Eds.), Psychology of intergroup relations (pp. 7–24). Chicago, IL: Nelson Hall. Tang, T. Y., & McCalla, G. (2005). Smart recommendation for an evolving e-learning system. International Journal on e-Learning, 4, 105–129. Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press. Wang, P. Y. (2007). The analysis and design of educational recommender systems. In R. Carlsen, K. McFerrin, J. Price, R. Weber, & D. A. Willis (Eds.), Proceedings of Conference of Society for Information and Technology and Teacher Education (pp. 2134-2140). Chesapeake, VA: AACE. Weinstein, C. E., & Mayer, R. E. (1986). The teaching of learning strategies. In M. Wittrock (Ed.), Handbook of research on teaching (pp. 315–327). New York, NY: Macmillan. Xiao, B., & Benbasat, I. (2007). E-commerce product recommendation agents: Use, characteristics, and impact. Management Information Systems Quarterly, 31, 137–209. Yoo, K.-H., & Gretzel, U. (2011). Creating more credible and persuasive recommender systems: The influence of source characteristics on recommender systems evaluations. In F. Ricci, L. Rokach, B. 
Shapira, & P. B. Kantor (Eds.), Recommender systems handbook: A complete guide for research scientists and practitioners (pp. 455–477). New York, NY: Springer. doi:10.1007/ 978-0-387-85820-3. Ziegler, C., McNee, S. M., Konstan, J. A., & Lausen, G. (2005). Improving recommendation lists through topic diversification. In A. Ellis & T. Hagino (Eds.), Proceedings of the 14th International World Wide Web Conference (pp. 22–32). New York, NY: ACM Press. doi:10.1145/1060745.1060754.