Emerging E-Learning Technologies

A Multidimensional Paper Recommender: Experiments and Evaluations

Tiffany Y. Tang, Konkuk University
Gordon McCalla, University of Saskatchewan

Paper recommender systems in the e-learning domain must consider pedagogical factors, such as a paper's overall popularity and learner background knowledge, factors that are less important in commercial book or movie recommender systems. This article reports evaluations of a 6D paper recommender. Experimental results from a human subject study of learner preferences suggest that pedagogical factors help to overcome a serious cold-start problem (not having enough papers or learners to start the recommender system) and help the system more appropriately support users as they learn.

When information overload intensifies, users are overwhelmed by the information pouring from various sources, including the Internet. Often, they're confused about which information they should consume; that is, they find it difficult to pick the most appropriate information when the number of choices increases. Recommender systems offer a feasible solution to this problem. For example, if a user explicitly indicates a preference for action movies starring Sean Penn, a recommender system might suggest The Interpreter. In this case, the system matches user preferences to movies' content features using a content-based filtering approach. Systems using the other major recommendation approach, collaborative filtering, construct a group of like-minded users with whom the target user shares similar interests and make recommendations based on an analysis of that group.

Making recommendations to learners differs from making recommendations in many commercial domains, in which only user likes and dislikes matter. A learner's overall impression of each paper depends not only on how interesting the paper is but also on the degree to which it helps the learner meet their cognitive goals. This is consistent with Soo Young Rieh's observations of human users' information-seeking behaviors: "people make judgment of information quality and cognitive authority."1
Existing research on recommending papers for learners2 doesn't consider pedagogical factors when making recommendations; that is, it ignores whether a recommended paper will enhance learning (see the "Related Work in Recommender Systems" sidebar). To deal with this issue, we proposed in earlier work the notion of recommending pedagogically appropriate papers.3,4 Here, we extend our previous work by determining the extra value of making pedagogically relevant recommendations. Learner satisfaction is a complex function of learner characteristics and, as such, requires more than simply matching a paper's topic against a learner's interests. To study this claim, we look at two factors: the significance of the pedagogical factors in recommending a paper and the associations among these factors that might indicate their interactive relationships. We study these factors in a human-subject experiment and propose a recommendation algorithm appropriate to our findings.

Empirical Study
We performed an empirical study aimed at exploring the multidimensionality of paper recommendation. In particular, we wanted to examine the effect of considering pedagogical elements in making recommendations. We addressed the following questions:

• Will learners be happy with papers that can expand their knowledge (that is, teach them something new)? E-learning's ultimate goal is fulfilling the user's knowledge needs.
• How important is learner interest in the recommendation? For instance, how much will a learner be willing to tolerate a paper that isn't interesting?
• Will a learner be comfortable with a highly technical paper, even if it matches the learner's interest or is required reading (for instance, a seminal paper on a topic)?

Due to space limitations, we report only our key findings relevant to the first two questions.

Data Collection
Our study subjects were students enrolled in a master's program at the Hong Kong Polytechnic University. All were registered in the "Software Engineering Concepts" course, whose curriculum was designed primarily for mature or working students from various backgrounds. A total of 40 part-time students attended the course. The course curriculum included 22 papers as reading assignments, with a varying number of papers assigned each week. Twenty-four students agreed to participate in our experiment.

Learner Profiles and Feedback
We drew learner profiles from a questionnaire consisting of four categories: interest, knowledge background, work experience, and learning expectations. Students represented a pool of learners with work experience related to information technology, but not necessarily with computer science backgrounds. We used participants' learning expectations to determine what specific areas they wanted to pursue in their work, which the system could then use to make recommendations. After the students read each of the 22 assigned papers, we asked them to answer several questions, shown in Table 1.

Table 1. Learner feedback form.

Question | 4 | 3 | 2 | 1
1. How difficult is the paper to understand? | Very difficult | Difficult | Easy | Very easy
2. Is the paper's content related to your job? | Very | Somewhat | Not really | Not at all
3. Is the paper interesting? | Very | Somewhat | Not really | Not at all
4. Does the paper aid your understanding of the software engineering concepts and techniques learned in class? | Very much | Somewhat | Not really | Not at all
5. Do you learn something new after reading this paper? | Absolutely | A little | Not really | Not at all
6. What is your overall rating of this paper? | Very good | Good | Mediocre | Bad
7. Will you recommend this paper to your classmates? | Absolutely | Maybe | No | (no fourth option)

We used a 4-point Likert scale for the participants' answers to all but the last question. Several questions relate to a paper's pedagogical value: the degree of difficulty to understand (question 1), relatedness to the user's job (question 2), the amount of value added (question 5), and the degree of peer recommendation (question 7). Because the candidate papers are mainly from popular technical magazines, even learners with no deep mathematical knowledge have little difficulty understanding them. We expected the most difference among learners for the papers' value added, degree of peer recommendation, and overall rating.
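For the analyses that follow, we treat each answer as a numeric rating on the scales in Table 1. As a minimal sketch of how such feedback could be organized for analysis (the column names and layout are our own illustration, not the study's actual data files):

```python
import pandas as pd

# One row per (learner, paper) pair; each column holds the coded answer to
# one Table 1 question, e.g., overall: 4 = "Very good" ... 1 = "Bad".
feedback = pd.DataFrame(
    columns=["learner", "paper", "difficulty", "job_related", "interest",
             "aid_learning", "value_added", "overall", "peer_rec"]
)

# Hypothetical entry: learner 7 rates paper 12.
feedback.loc[len(feedback)] = [7, 12, 2, 3, 4, 3, 4, 4, 3]

# A learner-by-paper matrix for one dimension, as used later when computing
# Pearson correlations between learners:
overall_matrix = feedback.pivot(index="learner", columns="paper",
                                values="overall")
```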
Correlation Analysis
Our major goal in analyzing the data collected through the study is to determine the factors that rank a paper high in terms of its pedagogical benefits. We're looking for important interactions among the pedagogical variables that might help accurately rank a paper for a learner. To achieve this, we don't use traditional evaluation approaches such as mean absolute error, which examines how close the recommender system's predicted ratings are to the true user ratings, or receiver operating characteristics, which measures how well users receive the recommended items. Instead, we use statistical methods to sort out alternative explanations for relations between variables and therefore to further our understanding of the relationship of various pedagogical factors to the ranking of papers appropriate for particular learners. We make the following hypotheses:

• A paper's overall rating might depend not only on how interesting it is but also on the richness of knowledge that the learner gains from reading it and its usefulness in helping the learner understand the course subject matter.
• A learner's intent in recommending a paper to others might depend not only on how interesting the paper is but also on the richness of knowledge that the learner has gained from reading it and its usefulness in helping the learner understand the course material.
• The closeness of a learner's work experience to a paper's topic might also affect the learner's overall rating of that paper and the likelihood of recommending it to others.

To validate our hypotheses, we must show that ratings on the value-added (question 5) or aid-learning (question 4) factors affect ratings on overall or peer recommendation independently from how interesting the paper is (question 3). We performed four analyses: partial correlation, structural equation modeling, principal components analysis (PCA), and partial least squares regression (PLS). These statistical analysis methods are particularly valuable to social scientists for probing the interactivity among variables. Due to space limitations, we report only our main findings from partial correlation, PCA, and PLS.

Partial Correlation Results
Partial correlation is often used for modeling causality using three or four variables. Let r_AB.C be the Pearson correlation of variables A and B, controlling for variable C, and let r_AB be the Pearson correlation of variables A and B. If r_AB.C = r_AB, we infer that the control variable C has no effect. If r_AB.C approaches 0, the correlation r_AB is spurious; that is, no direct causal link exists between A and B (see Figure 1a). Rather, either C affects A and B (anteceding) or A affects C, which affects B (intervening). If r_AB > r_AB.C > 0, we have a partial explanation (see Figure 1b). In this case, A partially affects B regardless of whether it affects (or is affected by) C. A sketch of the computation follows the figure.

Figure 1. Causal inference with partial correlation. When (a) r_AB.C = 0, the control variable C has no effect, which we call a spurious explanation. When (b) r_AB > r_AB.C > 0, A partially affects B and might affect C. We call this a partial explanation.
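As a concrete illustration of the statistic used throughout this section, here is a minimal sketch in Python using only NumPy. The rating vectors are assumed to be the flattened (learner, paper) columns encoded earlier; the names are our own.

```python
import numpy as np

def pearson(x, y):
    """Plain Pearson correlation between two rating vectors."""
    return np.corrcoef(x, y)[0, 1]

def partial_corr(a, b, c):
    """First-order partial correlation r_AB.C: the correlation of A and B
    after removing the linear effect of the control variable C."""
    r_ab, r_ac, r_bc = pearson(a, b), pearson(a, c), pearson(b, c)
    return (r_ab - r_ac * r_bc) / np.sqrt((1 - r_ac**2) * (1 - r_bc**2))

# Hypothetical usage (Group I): does value_added relate to overall once
# interest is controlled for?
# r_plain   = pearson(feedback["value_added"], feedback["overall"])
# r_partial = partial_corr(feedback["value_added"], feedback["overall"],
#                          feedback["interest"])
# A spurious link shows r_partial near 0; a partial explanation keeps
# 0 < r_partial < r_plain.
```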
Table 2 shows our partial correlation computations for the data collected in the study. We divided the analysis into three groups (I, II, and III), based on different assumptions about what we're correlating.

In Group I, we check r_AB and r_AB.C for A = value_added, B = overall or peer_rec, and C = interest. The results of r_AB are 0.4798 and 0.4335 for B = overall and B = peer_rec, respectively. After introducing interest as a control, the correlations decrease to 0.3539 and 0.3017, respectively; hence, for this group, r_AB > r_AB.C > 0. In Group II, we check r_AB and r_AB.C for A = aid_learning. In Group III, we check r_AB and r_AB.C for the reverse causality; that is, whether interest is affected by value_added or aid_learning. In fact, r_AB > r_AB.C > 0 for all groups, so the results favor a partial explanation model. In other words, to some degree, value_added and aid_learning affect overall and peer_rec ratings independently from interest. Whether the model is still valid in the presence of other variables (such as difficulty or job_related) is unclear.

Table 2. Results from the partial correlation analysis (Pearson correlations r_AB and partial correlations r_AB.C).

Group | C (control) | A | B (overall) | B (peer_rec)
I | N/A | Value_added | 0.4798 | 0.4335
I | Interest | Value_added | 0.3539 | 0.3017
II | N/A | Aid_learning | 0.4242 | 0.3740
II | Interest | Aid_learning | 0.3038 | 0.2469
III | N/A | Interest | 0.6046 | 0.5574
III | Value_added | Interest | 0.5282 | 0.4780
III | Aid_learning | Interest | 0.5456 | 0.4975

Principal Components Regression and Partial Least Squares Regression Results
Principal components regression (PCR) combines PCA and linear regression. PCA transforms observations from a p-dimensional space to a q-dimensional space (q ≤ p) while conserving as much information as possible from the original dimensions. The resulting dimensions are noncorrelated weighted components that are linear combinations of the original variables. The weights are usually represented by eigenvalues produced during transformation. High-eigenvalue components contain the most information from the original data,5 allowing researchers to analyze the associations between the original variables. Meanwhile, by removing low-eigenvalue components, we can also simplify the regression model. If an explanatory variable is redundant (for example, collinear with other variables), it will vanish during dimensional reduction by PCR. Our test checks whether value_added or aid_learning impacts the overall rating, peer_rec, or both.

PLS also uses PCA in building noncorrelated components but differs from PCR in that it considers regression accuracy when selecting the components used in regression. The selected components aren't necessarily those with the highest eigenvalues, but those that best explain the dependent variables. As such, PLS simultaneously decomposes the explanatory and dependent variables, with the constraint that the components explain as much as possible of the covariance between them.

We set the stopping criterion for both PCR and PLS as the point at which they find at most two components; the regression therefore excludes other components. We used Addinsoft's XLSTAT 2007 to perform both PCR and PLS, with difficulty, job_related, interest, aid_learning, and value_added as explanatory variables and overall and peer_rec as dependent variables. The PCR model uses 61.1 percent of the variability of the original explanatory data. This value is low (the suggested variability in PCR is at least 80 percent in XLSTAT); we restrict our model to a low variability to verify the survivability of value_added and aid_learning as explanatory variables in the model. The results support these variables' survivability in explaining overall and peer_rec ratings. In addition, the variable-importance-in-the-projection indexes of both variables from PLS are higher than the critical value 0.8, leading us to strongly believe that they contribute significantly to the model,6 as Figure 2 demonstrates. In other words, a paper's value-added and aid-learning factors are important in making recommendations.

Figure 2. The variable-importance-in-the-projection (VIP) index of components 1 (left bar: overall) and 2 (right bar: peer_rec) from partial least squares regression (PLS). Any independent variable with a small VIP (that is, less than 0.8) could be considered unimportant and hence can be dropped from the model.6
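The article's numbers come from XLSTAT 2007. For readers who want to reproduce a comparable VIP analysis with open tooling, here is a rough sketch using scikit-learn's PLSRegression together with the standard VIP formula; this is our own illustration, and the data layout is an assumption.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip(pls):
    """Variable importance in the projection for a fitted PLSRegression:
    VIP_j = sqrt(p * sum_a SS_a * (w_ja / ||w_a||)^2 / sum_a SS_a),
    where SS_a is the response variance explained by component a."""
    t = pls.x_scores_    # (n_samples, n_components)
    w = pls.x_weights_   # (n_features, n_components)
    q = pls.y_loadings_  # (n_targets, n_components)
    p = w.shape[0]
    ss = np.sum(t**2, axis=0) * np.sum(q**2, axis=0)  # per-component SS
    w_norm = w / np.linalg.norm(w, axis=0)
    return np.sqrt(p * (w_norm**2 @ ss) / ss.sum())

# Hypothetical usage: X holds the five explanatory ratings (difficulty,
# job_related, interest, aid_learning, value_added); y holds overall.
# pls = PLSRegression(n_components=2).fit(X, y)
# for name, v in zip(X.columns, vip(pls)):
#     print(name, round(v, 3))  # keep variables with VIP >= 0.8
```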
Proposed Paper Recommender
Our paper recommender uses collaborative filtering to find papers for a learner that have proven to be pedagogically relevant for similar learners. Several factors contribute to determining such similarity. First, we consider three factors as a basis for measuring a pair of learners' closeness: overall rating, value added, and peer recommendation. Because each paper has three such ratings, we can obtain three different Pearson correlations for each pair of learners. Suppose P_d(a, b) is the Pearson correlation between learners a and b based on the ratings r_d on dimension d. We combine the three correlations into a weighted-sum Pearson correlation:

P_3D(a, b) = w_overall P_overall(a, b) + w_value_added P_value_added(a, b) + w_peer_rec P_peer_rec(a, b),

where w_overall + w_value_added + w_peer_rec = 1 (w denotes a weight). Through this computation, we find a group of learners similar to a given target learner.

Next, we compute the 2D Pearson correlation between learners based on their student models. That is, we compute the aggregated Pearson correlation between student interest and background knowledge:

P_2DStdModel(a, b) = w_interest P_interest(a, b) + w_bkgrKnowledge P_bkgrKnowledge(a, b).

Because we have various weights for combining Pearson correlations, we can tune them to study each factor's relative importance. We then combine this result with the 3D Pearson correlation from corated papers:

P_5D(a, b) = P_3D(a, b) + w_2D P_2DStdModel(a, b).

From P_5D(a, b), we can identify the best n neighbors B for a target user a. After that, we calculate each paper k's aggregate rating as

r_5D(k) = Σ_{b ∈ B} P_5D(a, b) r_{b,k},

where r_{b,k} is neighbor b's rating of paper k. Lastly, we combine this rating with each paper's average rating r̄_k (that is, the paper's popularity) to obtain a 6D collaborative-filtering-based rating:

r_6D(k) = r_5D(k) + w_r̄ n r̄_k,

where n = |B| is the number of neighbors and w_r̄ is the weight of a paper's popularity r̄. Using the ranking induced by r_6D(k), we can find the best papers to recommend.

Our paper recommendation system therefore uses six elements: overall rating, value added, peer recommendation, popularity r̄, learner interest, and learner background knowledge. The 6D collaborative-filtering computation is more complex than other collaborative-filtering-based techniques; however, we believe that under certain circumstances, it's necessary and beneficial. (In addition to 6D collaborative filtering, we've studied 3D and 4D computations and compared their performances.) The sketch that follows illustrates the full computation.
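Here is a minimal sketch in Python. It is our own reconstruction from the formulas above, not the authors' implementation; the data layout (R, M) and the choice of the overall rating as r_{b,k} are assumptions.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation between two aligned rating vectors."""
    return np.corrcoef(x, y)[0, 1]

def p3d(a, b, R, w3):
    """Weighted-sum similarity over the three rating dimensions of corated
    papers. w3 = (w_overall, w_value_added, w_peer_rec), summing to 1.
    R[dim][learner] is assumed to be that learner's rating vector, indexed
    by paper and pre-aligned across learners (a simplification)."""
    dims = ("overall", "value_added", "peer_rec")
    return sum(w * pearson(R[d][a], R[d][b]) for w, d in zip(w3, dims))

def p5d(a, b, R, M, w3, w2d, w_int, w_bkgr):
    """Adds the weighted 2D student-model similarity (interest and
    background knowledge, M[dim][learner]) to the 3D rating similarity."""
    p2d = (w_int * pearson(M["interest"][a], M["interest"][b])
           + w_bkgr * pearson(M["bkgr_knowledge"][a], M["bkgr_knowledge"][b]))
    return p3d(a, b, R, w3) + w2d * p2d

def recommend(a, learners, papers, R, M, w3, w2d, w_int, w_bkgr,
              n_neighbors=5, w_pop=1.0, top=1):
    """Rank papers for target learner a by
    r_6D(k) = sum_{b in B} P_5D(a, b) * r_{b,k} + w_pop * |B| * mean_rating(k)."""
    sims = {b: p5d(a, b, R, M, w3, w2d, w_int, w_bkgr)
            for b in learners if b != a}
    B = sorted(sims, key=sims.get, reverse=True)[:n_neighbors]  # best neighbors
    scores = {}
    for k in papers:
        r5d = sum(sims[b] * R["overall"][b][k] for b in B)
        popularity = np.mean([R["overall"][b][k] for b in learners])
        scores[k] = r5d + w_pop * len(B) * popularity
    return sorted(scores, key=scores.get, reverse=True)[:top]
```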
Evaluation
To test our 6D recommender system, we used it to make recommendations for the students in the empirical study described earlier. We explored a vast number of weight combinations and neighborhood sizes in the 6D collaborative-filtering technique to find the factors that would lead to recommendations that best matched a student's actual experience.

We found that the weighting combination of overall, value_added, and peer_rec was most important. However, the latter two factors benefit the recommendation only when the sum of w_value_added and w_peer_rec is less than 0.1; the overall rating is still the most important factor. We set the number of neighbors to five and 10, although we also tested other values. We were also eager to see which factors better boost recommendation performance. Table 3 summarizes the sets of variables in 6D collaborative filtering; each weight reflects the corresponding variable's importance in the computation.

Table 3. Experimental variables.

Variable | Values tested
(w_overall, w_value_added, w_peer_rec) | {(0.8, 0.1, 0.1), (0.9, 0.05, 0.05), (0.92, 0.04, 0.04), (0.94, 0.03, 0.03), (0.96, 0.02, 0.02), (1, 0, 0)}
Popularity weight w_r̄ | {1, 5}
(w_interest, w_bkgrKnowledge) | {(1, 0), (1, 0.5), (1, 1)}
w_2D (on P_2DStdModel) | {0.5, 1, 3, 5, 10}
Number of neighbors | {5, 10}

Evaluation Protocols
For each target learner, we randomly assigned 30 combinations of corated papers to find neighbors, who then recommend one or five papers. Next, we recorded the target learner's average ratings of the recommended papers. Therefore, for each treatment, we collected 24 learners × 30 combinations = 720 ratings. In each treatment, we tuned the pedagogical elements' weights to identify their importance to the overall recommendation. A sketch of this protocol follows.
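This sketch reuses the hypothetical recommend function from the earlier listing, and it glosses over one detail: the real study computes learner similarity only over each sampled set of corated papers, while this sketch computes it over all papers.

```python
import random
import numpy as np

def run_treatment(learners, papers, R, M, treatment, n_corated, top, trials=30):
    """Average the target learners' actual overall ratings of the papers
    the system recommends under one weight setting (a 'treatment')."""
    observed = []
    for a in learners:
        for _ in range(trials):                         # 30 combinations each
            corated = random.sample(papers, n_corated)  # papers used to find neighbors
            candidates = [k for k in papers if k not in corated]
            recs = recommend(a, learners, candidates, R, M, top=top, **treatment)
            observed += [R["overall"][a][k] for k in recs]
    return np.mean(observed)  # e.g., the averages reported in Table 4

# Hypothetical usage, mirroring one cell of the experimental grid:
# treatment = dict(w3=(0.9, 0.05, 0.05), w2d=0.5, w_int=1.0, w_bkgr=0.0,
#                  n_neighbors=5, w_pop=1.0)
# print(run_treatment(learners, papers, R, M, treatment, n_corated=4, top=1))
```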
Table 4 reports the average ratings, and Figure 3 shows partial experimental results. We based our analysis on the pattern of those average ratings along each dimension of our control variables.

Table 4. Average overall ratings with various weights.

(w_overall, w_value_added, w_peer_rec) | (1, 0, 0) | (0.8, 0.1, 0.1) | (0.9, 0.05, 0.05) | (0.96, 0.02, 0.02)
Average overall rating | 3.055 | 3.05 | 3.0538 | 3.06

Analysis of Results
We used three key findings to interpret the pedagogical factors' significance by comparing the recommendation approaches.

Effect of ratings on value added and peer recommendation. Table 4 reports the average ratings of target users before and after we include value-added and peer-recommendation ratings in the computation. The results suggest that incorporating ratings from value added and peer recommendation improves the performance of collaborative-filtering-based recommender systems.

Effects of background knowledge. We also look at whether incorporating background knowledge negatively or positively affects the system. Figure 3 captures the experimental results when (w_interest, w_bkgrKnowledge) = {(1, 0), (1, 0.5), (1, 1)}, w_2D = 0.5, (w_overall, w_value_added, w_peer_rec) = (1, 0, 0), and the number of corated papers is two or four. We obtained similar patterns using eight and 15 corated papers. We observed that when the weight on background knowledge (w_bkgrKnowledge) increases from 0 to 1 (that is, when we begin to consider background knowledge in computing the Pearson correlation between two learners), the recommender's performance decreases. In fact, when we look at other treatments (that is, other weights and numbers of corated papers), we obtain similar results, which suggests that including background knowledge doesn't improve the recommendation process. That is, learners' understanding of papers doesn't strongly depend on their background knowledge. These results aren't surprising because almost all of the papers in the study were from popular magazines such as Communications of the ACM and IEEE Software, which target general readers. Hence, the papers are more understandable than more technical papers from, say, IEEE Transactions on Software Engineering.

Figure 3. Partial experimental results for 6D collaborative filtering using (a) two and (b) four corated papers when recommending the best paper. The y-axis denotes the average overall ratings, and the x-axis denotes various weights of background knowledge when the weight of interest remains constant.

Importance of learner interest in the recommendation. Would adding learner interest to the system increase a recommendation's quality? Here, we examine this question when (w_interest, w_bkgrKnowledge) = (1, 0); the effect of this setting on P_2DStdModel in the 6D computation is to factor in only the effects of learner interest.

When w_r̄ = 1, introducing learner interest (increasing w_2D from 0 to 0.5) has a small positive impact when we have a relatively large number of corated papers (15 in this case). However, the benefit isn't persistent: when we increase w_2D beyond 1, the benefit drops.

When w_r̄ = 5, the performance is steady with respect to w_2D, showing that recommendations are independent of the weights on learner interest (as w_2D increases from 0 to 10), except when w_2D equals 10. Recommendations are more satisfying when w_r̄ = 5. In other words, learner interest's effect on the recommendation outcome is less important than that of a paper's popularity. The primary reason is that we can't accurately identify similar neighbors using a small number of corated papers. We observe similar results with other combinations of (w_overall, w_value_added, w_peer_rec).
The recommender's performance differs when choosing the top five papers and fails to improve even when we increase popularity's weight (from w_r̄ = 1 to w_r̄ = 5); the overall performance shows a downward trend. Performance is even worse when we increase the value of w_r̄ with a small number of corated papers, which is just the opposite of what happens when the recommender makes a single best recommendation. For other treatments, we obtain similar results. Thus, we should take care when requiring the recommender to pick the top five papers, because it needs more information to maintain performance stability.

To meet our goal of satisfying learners pedagogically, our analysis also aimed to provide insights about learner satisfaction with the recommended items.7 We also conducted exit interviews with some students to get a sense of how they perceived the papers they read.8 Overall, student feedback on the course was overwhelmingly positive. Even those among them who were software engineers claimed to have learned some new terms and practices used in their field.

Sidebar: Related Work in Recommender Systems

Most recommender systems, such as Firefly1 and GroupLens,2 recommend items based purely on item-to-item, user-to-user, or item-to-user correlations without considering the broader context of the decision as to what to recommend. Gediminas Adomavicius and his colleagues argue that contextual information's dimensions can include when, how, and with whom users will consume the recommended items.3 These factors, therefore, directly affect users' satisfaction with the system's performance. To handle such multidimensional factors, they use data warehousing and online analytic processing to reduce the number of dimensions. Michael Pazzani studies the aggregation of users' demographic information (gender, age, education, address, and so on) in a collaborative filtering system.4

The most recent effort in incorporating contextual information considers users' lifestyles when making recommendations.5 To obtain this information, such as living and spending patterns, the system exposes users to advertisements from seven categories. It then makes recommendations based on similarities among users' lifestyles. Like George Lekakos and George Giaglis5 and Pazzani,4 our approach makes recommendations based on user pedagogical similarities. However, unlike existing approaches3–5 that consider only users' contextual information, we also consider paper features such as a paper's popularity and peer recommendations.

Several systems track and recommend technical papers. Kurt Bollacker, Steve Lawrence, and C. Lee Giles refine CiteSeer, NEC's digital library of scientific literature, using an automatic personalized paper-tracking module that retrieves user interests from heterogeneous user profiles.6
Allison Woodruff and her colleagues discuss an enhanced digital book with a spreading-activation-geared mechanism to make customized recommendations for readers with different backgrounds and knowledge.7 Pazzani adopts collaborative filtering to recommend papers for researchers; however, he focuses on recommending additional references for a target research paper.4 Mimi Recker, Andrew Walker, and Kimberly Lawless study the pedagogical characteristics of a Web-based resource in which teachers and learners can submit and review learner comments.8 However, although they emphasize the importance of these educational resources' pedagogical features, they don't consider pedagogical features in making recommendations. Our system recommends papers not only according to learners' interests, but also based on what is pedagogically suitable for them.

Sidebar References
1. S. Upendra and M. Pattie, "Social Information Filtering: Algorithms for Automating 'Word of Mouth,'" Proc. ACM Human Factors in Computing Systems Conf. (CHI), ACM Press, 1995, pp. 210–217.
2. J. Konstan et al., "GroupLens: Applying Collaborative Filtering to Usenet News," Comm. ACM, vol. 40, no. 3, 1997, pp. 77–87.
3. G. Adomavicius et al., "Incorporating Contextual Information in Recommender Systems Using a Multidimensional Approach," ACM Trans. Information Systems (TOIS), vol. 23, no. 1, 2005, pp. 103–145.
4. M. Pazzani, "A Framework for Collaborative, Content-Based, and Demographic Filtering," AI Rev., Dec. 1999, pp. 393–408.
5. G. Lekakos and G. Giaglis, "Improving the Prediction Accuracy of Recommendation Algorithms: Approaches Anchored on Human Factors," Interacting with Computers, vol. 18, no. 3, 2006, pp. 410–431.
6. K. Bollacker, S. Lawrence, and C. Lee Giles, "A System for Automatic Personalized Tracking of Scientific Literature on the Web," Proc. Int'l Conf. Digital Libraries (JCDL), ACM Press, 1999, pp. 105–113.
7. A. Woodruff et al., "Enhancing a Digital Book with a Reading Recommender," Proc. Conf. Human Factors in Computing Systems (CHI), ACM Press, pp. 153–160.
8. M. Recker, A. Walker, and K. Lawless, "What Do You Recommend? Implementation and Analyses of Collaborative Information Filtering of Web Resources for Education," Instructional Science, vol. 31, nos. 4–5, 2003, pp. 299–316.
As our study shows, user interest isn't the key factor in boosting recommendation performance. Instead, some learners accept recommendations that they don't perceive to directly match their interests.9 Thus, the notion of pedagogically relevant recommendations is interesting and nontrivial and deserves further exploration in a wide variety of e-learning contexts.

Finding a "good" paper isn't a trivial task. It's a multiple-step process that typically entails users navigating the paper collections, understanding the recommended items, seeing what other users like and dislike, and making decisions. In addition, the prerequisite structure underlying most courses, if combined with pedagogical paper recommendation approaches, might better help in predicting which papers would be suitable to the current topic. We hope that the studies initiated here can open up opportunities for researchers to probe the use of automated social tools to support active learning and teaching in future networked learning environments.

Acknowledgments
We thank two anonymous reviewers and our editors for their insightful comments on the manuscript.

References
1. S.Y. Rieh, "Judgment of Information Quality and Cognitive Authority in the Web," J. Amer. Soc. Information Science and Technology (JASIST), vol. 53, no. 2, 2002, pp. 145–161.
2. S. McNee et al., "On the Recommending of Citations for Research Papers," Proc. Computer Supported Cooperative Work (CSCW), ACM Press, pp. 116–125.
3. T.Y. Tang and G.I. McCalla, "Utilizing Artificial Learners to Help Overcome the Cold-Start Problem in a Pedagogically Oriented Paper Recommendation System," Adaptive Hypermedia and Adaptive Web-Based Systems, LNCS 3137, Springer, 2004, pp. 245–254.
4. T.Y. Tang and G.I. McCalla, "Paper Annotations with Learner Models," Proc. Int'l Conf. Artificial Intelligence in Education (AIED), IOS Press, 2005, pp. 654–661.
5. I.T. Jolliffe, Principal Component Analysis, 2nd ed., Springer, 2002.
6. S. Wold, "PLS for Multivariate Linear Modelling," QSAR: Chemometric Methods in Molecular Design, vol. 2, H. van de Waterbeemd, ed., Wiley-VCH, 1994, pp. 195–221.
7. S. McNee, J. Riedl, and J.A. Konstan, "Being Accurate Is Not Enough: How Accuracy Metrics Have Hurt Recommender Systems," Extended Abstracts Conf. Human Factors in Computing Systems (CHI), ACM Press, 2006, pp. 1097–1101.
8. T.Y. Tang, The Design and Study of Pedagogical Paper Recommendation, PhD thesis, Dept. of Computer Science, Univ. of Saskatchewan, 2008.
9. T.Y. Tang and G. McCalla, "On the Pedagogically Guided Paper Recommendation for an Evolving Web-Based Learning System," Proc. 17th Int'l Florida Artificial Intelligence Research Soc. (FLAIRS) Conf., AAAI Press, 2004, p. 19.

Tiffany Y. Tang is an assistant professor in the Department of Computer Science at Konkuk University, Korea. Her research interests include Web personalization and recommender systems, distributed software development, and artificial intelligence in education. Tang has a PhD from the Department of Computer Science at the University of Saskatchewan. Contact her at tiffany@kku.ac.kr.

Gordon McCalla is a professor in the Department of Computer Science and director of the Laboratory for Advanced Research in Intelligent Educational Systems at the University of Saskatchewan. His research interests include applied artificial intelligence, particularly user modeling and artificial intelligence in education.
McCalla has a PhD in computer science from the University of British Columbia. He's past president of the Canadian Association for Computer Science. Contact him at mccalla@cs.usask.ca.