How do we manage to deliver the most accurate recommendations? This paper, however, takes a somewhat different approach: It explores the psychological aspects of personalized recommender systems, with the ultimate question being: How do people react to and act upon recommender systems? This question will be addressed with particular emphasis on the use of recommender systems in educational contexts.

Knowing the psychological impact of recommendations on users can be helpful for practitioners and researchers alike. If we have a better idea of how people react to recommender systems, we can improve algorithms and interfaces in ways that make using the system more efficient and satisfactory. Understanding how users contribute data to recommender systems is important for practitioners, as problems like low participation can impede system performance. From a research perspective, a better understanding of the psychological impacts of recommender systems can inform various fields, such as educational psychology (instructional design, educational technology), social psychology (persuasion, trust building), business administration (marketing), or computer science (machine learning, HCI).

The paper is structured as follows: Section 2 explores how the key characteristics of personalized recommender systems fit into current thought in the learning sciences. Section 3 discusses specific requirements that recommender systems must fulfill in order to support learning processes, with regard to two learner roles and two types of adaptation. This discussion leads to four conjectures about how recommender systems should be adapted for educational contexts. Section 4 integrates the findings and provides an outlook on future research.

2. Recommender systems and the learning sciences

Designing and implementing workable recommender systems can be quite burdensome.
Apart from the technological infrastructure, which needs to store data about every possible combination of item and user and thereby generates substantial server load, a critical mass of users is one of the main roadblocks towards successful implementation (Glance, Arregui, & Dardenne, 1999). If the community of people who generate data is too small, recommendations become less precise. This raises the question of whether it is useful to implement personalized recommender systems in educational contexts. In order to answer this question, an example of a fictitious educational recommender system is introduced. This example will be used to illuminate the main principles of recommender system design, and these principles will be compared to principles in the learning sciences.

In our example, a psychology student is trying to find good research literature for her Master's thesis. She logs into a digital library website which operates a recommender system on academic publications. Let's assume that she has never used the system before. The recommender might provide her with a list of the most popular publications on her thesis topic. This list would be similar to a bestseller list. Adjacent to each entry is a slider where she can rate each publication on a range from 1 (uninteresting) to 5 (highly interesting). She reads through the list and selects some publications that she knows. Interestingly, she dislikes some of the popular publications, and expresses this through low ratings. Though our student does not interact with other users of the recommender system, she is part of a larger community of others who have also selected and rated thousands of publications. As shown in Fig. 1, selecting entries and rating them constitutes the activity of individuals within the community. The recommender system then aggregates all the ratings from the community's rating database, and filters this information according to specified algorithms.
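To make the aggregation step concrete, the following minimal sketch shows how individual ratings might be pooled into a popularity ("bestseller") list of the kind our student first sees. The data, user names, and function name are invented for illustration; they do not come from the paper or any particular system.

```python
from collections import defaultdict

# Hypothetical community rating database: user -> {publication: rating (1-5)}
ratings = {
    "alice": {"pub_a": 5, "pub_b": 2, "pub_c": 4},
    "bob":   {"pub_a": 4, "pub_b": 1},
    "carol": {"pub_b": 5, "pub_c": 3},
}

def popularity_list(ratings):
    """Aggregate all users' ratings into (publication, mean rating) pairs,
    sorted from most to least popular -- a simple bestseller-style list."""
    pooled = defaultdict(list)
    for user_ratings in ratings.values():
        for pub, score in user_ratings.items():
            pooled[pub].append(score)
    means = {pub: sum(s) / len(s) for pub, s in pooled.items()}
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)

print(popularity_list(ratings))
```

Note that this popularity list is not yet personalized: every user would see the same ranking, which is exactly what the collaborative filtering step described next changes.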
For instance, if the recommender system employs user-based collaborative filtering algorithms, one step is to define a so-called neighborhood for our student, i.e. the set of users whose ratings are most similar to hers. Once this set of neighbors is established, the system goes through all publications that our student has not rated yet, and identifies those publications that received the highest average ratings from the student's neighborhood. In the recommender interface, the system provides an output of the top 10 publications; these items constitute the recommendations. As the student (and, by the use of similarity metrics, her neighborhood) has a non-standard taste, this list might differ strongly from the original, bestseller-like list. If a publication is recommended that the student does not know, she might order it. If she likes it (gives a high rating), she will become more similar to her neighbors; if the recommendation was bad and she gives a low rating, a new neighborhood might emerge, resulting in slightly different, adjusted recommendations.

This ongoing cycle between individual activities (selecting, rating) and system activities (aggregating, filtering) rests on five principles of recommender system design (see Fig. 1).

First, recommender systems rely on collective responsibility. In our digital library example, the data on which book recommendations are based were generated by a community of peers (Resnick & Varian, 1997). This is in contrast to offline contexts where recommendations often come from dedicated individuals like teachers, mentors, or reviewers. Recommender systems do not hand particular power to dedicated individuals, but shift responsibility and accountability towards the user collective.
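The user-based collaborative filtering steps described above (find a neighborhood of similar raters, then average the neighbors' ratings on unrated items) can be sketched as follows. This is an illustrative toy implementation under invented data, using cosine similarity over co-rated items as one possible similarity metric; it is not the algorithm of any specific system discussed in the paper.

```python
from math import sqrt

# Hypothetical ratings: "student" is our Masters student; u1-u3 are peers.
ratings = {
    "student": {"pub_a": 5, "pub_b": 1},
    "u1":      {"pub_a": 5, "pub_b": 1, "pub_c": 4, "pub_d": 2},
    "u2":      {"pub_a": 4, "pub_b": 2, "pub_c": 5},
    "u3":      {"pub_a": 1, "pub_b": 5, "pub_d": 5},
}

def cosine(r1, r2):
    """Cosine similarity computed over the items both users have rated."""
    common = set(r1) & set(r2)
    if not common:
        return 0.0
    dot = sum(r1[i] * r2[i] for i in common)
    n1 = sqrt(sum(r1[i] ** 2 for i in common))
    n2 = sqrt(sum(r2[i] ** 2 for i in common))
    return dot / (n1 * n2)

def recommend(user, ratings, k=2, top_n=10):
    # Step 1: the neighborhood = the k users with the most similar ratings.
    others = [u for u in ratings if u != user]
    neighbors = sorted(others, reverse=True,
                       key=lambda u: cosine(ratings[user], ratings[u]))[:k]
    # Step 2: average the neighbors' ratings on items the user has not rated.
    pooled = {}
    for n in neighbors:
        for item, score in ratings[n].items():
            if item not in ratings[user]:
                pooled.setdefault(item, []).append(score)
    averaged = {item: sum(s) / len(s) for item, s in pooled.items()}
    # Step 3: the top-N highest-averaged items constitute the recommendations.
    return sorted(averaged, key=averaged.get, reverse=True)[:top_n]

print(recommend("student", ratings))
```

Because the neighborhood is recomputed from the current ratings, a new low rating from the student can shift which users count as her neighbors, which is the adjustment cycle described above.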
A similar principle of collective responsibility can be found in the learning sciences (Scardamalia, 2002), where many scholars have suggested moving from a traditional, teacher-centered education towards a peer-centered education (Brown et al., 1993). In peer-centered education as well as in recommender systems, a power structure with flat hierarchies emerges. Moreover, in both fields it is assumed that peer efforts will lead to high-quality output: The learning gains from peer-centered education should be at least as high as those from teacher-centered education; similarly, recommendations derived from a community should be at least as good as those from dedicated experts.

Second, recommender systems exhibit collective intelligence. For instance, if a particular book is recommended to our student, this recommendation cannot be traced back to the behavior of any individual user. Rather, it is the behavior of the user collective (or, in the case of user-based collaborative filtering, the neighborhood) that is responsible for the recommendation. As it has been shown empirically that computed recommendations correlate sufficiently with the actual ratings of a user (Herlocker, Konstan, Borchers, & Riedl, 1999), it can be argued that these systems exhibit collective intelligence (Malone, Laubacher, & Dellarocas, 2009). This idea resonates with the notion of ‘‘group cognition’’ in the learning sciences, particularly in research on computer-supported collaborative learning (Stahl, 2006). According to this view, the output of a collaborative learning group, e.g. its discussions or the constructed artifacts, cannot be meaningfully or completely traced back to individual group members, but rather arises through complex interactions among the constituents (group members). These emergent properties of groups can also be found in the way that recommender systems operate.

Third, recommender systems are based on user control.
A book that is suggested by a recommender system differs from a book that is a mandatory part of a course syllabus. Our student has the choice to follow the recommendation or not. Recommender systems preserve user autonomy, and they do not prescribe courses of action to be taken by a person. They typically support information search and retrieval, i.e. tasks of a self-directed, exploratory and often open-ended nature. In this regard, they cater to modern constructivist epistemologies in the learning sciences

208 J. Buder, C. Schwind / Computers in Human Behavior 28 (2012) 207–216