2009 Fourth International Conference on Innovative Computing, Information and Control
978-0-7695-3873-0/09 $29.00 © 2009 IEEE
An Evidential Reasoning Approach for Learning Object Recommendation with Uncertainty

Noppamas Pukkhem and Wiwat Vatanawood
Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, Thailand
noppamas.p@student.chula.ac.th, wiwat@chula.ac.th

Abstract

Selecting the most suitable learning object in a SCORM-compliant learning object recommendation system is a complex decision process. We exploit the techniques of collaborative concept map design, ontology explanation, evidential reasoning for handling uncertain decision making, an evaluation analysis model, and the evidence combination rule of the Dempster-Shafer theory to support the system. Two combination algorithms have been developed in this approach for combining multiple uncertain subjective judgments. Based on this approach and a traditional multiple attribute decision making method, a recommendation procedure is proposed to rank the most suitable learning objects over the preferences of a specific learner. A learning object ranking example is discussed to demonstrate the method implementation based on a multi-agent framework.

Keywords: evidential reasoning, recommendation system, multi-agent, multiple attribute decision making

1. Introduction

A SCORM-compliant learning object is a digital learning resource that facilitates a single learning objective and may be reused in a different context [1]. Nowadays, many learning objects are distributed across various learning object repositories, which makes it difficult for learners to select the learning objects for their learning path. Although adaptive learning systems provide large numbers of learning objects, their support for personalized learning object selection is limited.
This research aims at producing a guideline for learning object filtering by using the master concept map [2], which is a description of how propositions are organized and is designed by various experts; the designed ontological model makes it possible to personalize learning objects to specific learners. To solve the problem of learning object recommendation, we adopt multiple attribute decision making based on the evidence combination rule of the Dempster-Shafer theory. It supports a learner preference modeling approach, comprising an evidential reasoning framework for evaluating the suitability of qualitative IEEE LOM based learning object features [3]. The local and global combination algorithms have been developed within this framework for combining multiple uncertain subjective judgments. The aim of the design is then to recommend, from the filtered learning objects in the same concept, the best compromise learning object whose performance matches the learner preference as closely as possible.

The remainder of this paper is organized as follows. Section 2 presents our proposed model and strategies. Section 3 presents the method demonstration, and finally we give the conclusion of this work.

2. Our Proposed Model and Strategies

This system (Figure 1) includes an off-line concept modeling process, four intelligent agents and related databases. The four intelligent agents are the concept map management agent [4], the learner interface agent, the feedback agent, and the learning object recommendation agent.

Figure 1. System architecture of recommendation

Based on the system architecture presented in Figure 1, we will discuss the problem of uncertainty in recommendation based on learner preference. A hybrid multi-feature decision making problem in learning object recommendation may be expressed using the following equation:

    Optimize_{f ∈ L_F} f(LO) = [f_1(LO), ..., f_k(LO), ..., f_{k1+k2}(LO)]    (1)
In (1), L_F is the discrete set of all learning object features and is denoted by

    L_F = { f_i | f_i ∈ LOM, ∀ f_i ≠ f_j }    (2)
where f_ij is a value of f_j at LO_i (i = 1, ..., l; j = 1, ..., k1).

Our applied model is shown in Figure 2. In the feature level of the model, the state of a feature (such as educational) at each learning object LO is required to be evaluated.

Figure 2. A suitable grade analysis model

S_n is called a suitable grade. Suppose the number of grades is N = 7; for example, S and V may be defined as follows:

    S = {S_1, S_2, S_3, S_4, S_5, S_6, S_7}    (3)
      = {the most unsuitable, very unsuitable, unsuitable, indifferent, suitable, very suitable, the most suitable}.

    V_k = {v_k^1, v_k^2, ..., v_k^{L_k}},  k = k1+1, ..., k1+k2    (4)

where v_k^i (i = 1, ..., L_k) are factors (such as difficulty and semantic density) influencing the evaluation of f_k(LO).

In the recommendation model, the learner (L_i) has to provide his preference information about his learning style or the characteristics of any feature. We use a ten-point scale to estimate the relative importance [5]. The relative weights of the features are thus estimated as follows:

    λ̂ = [λ̂_1, λ̂_2, ..., λ̂_k]^T    (5)

λ̂ is then normalized by λ_k^i = λ̂_i / Σ λ̂_i, where λ_k^i is the normalized relative weight of v_k^i in V_k and λ_k = [λ_k^1, ..., λ_k^{L_k}]^T.

2.1. Local Combination Algorithm

A basic suitability assignment satisfies the following conditions [6]:

    Σ_{μ⊆θ} m(μ) = 1,  m(∅) = 0,  0 ≤ m(μ) ≤ 1 for all μ ⊆ θ

m(μ) indicates the portion of the total belief exactly committed to hypothesis μ given a piece of evidence. Suppose there exist two pieces of evidence on θ that provide two basic suitability assignments to a subset μ of θ, i.e., m_1(μ) and m_2(μ). Then m_12(μ) = m_1(μ) ⊕ m_2(μ).
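The combination operator ⊕ can be sketched concretely. The following is a minimal sketch of Dempster's rule over a small frame of discernment; the frozenset representation and the example masses are our own assumptions, not taken from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule: mass for each non-empty intersection,
    normalized by 1 - K, where K is the conflict mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # K: mass falling on the empty intersection
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Frame theta = {Sn, Sn1}; each piece of evidence assigns mass to the
# singletons and to theta itself (ignorance), as in the paper's matrices.
theta = frozenset({"Sn", "Sn1"})
m1 = {frozenset({"Sn"}): 0.6, frozenset({"Sn1"}): 0.1, theta: 0.3}
m2 = {frozenset({"Sn"}): 0.5, frozenset({"Sn1"}): 0.2, theta: 0.3}
m12 = dempster_combine(m1, m2)
print({tuple(sorted(h)): round(v, 3) for h, v in m12.items()})
```

The combined masses again sum to one, and mass committed to θ shrinks as the two pieces of evidence reinforce each other.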
The Dempster-Shafer theory provides an evidence combination rule defined below [7]:

    m_12(μ) = Σ_{A∩B=μ} m_1(A) m_2(B) / (1 − K),   m_12(∅) = 0    (6)

    K = Σ_{A∩B=∅} m_1(A) m_2(B)    (7)

All these basic suitability assignments to S_n, S_{n+1}, θ with respect to v_{n,n+1}^i (i = 1, ..., R_n; n = 1, ..., N−1) may then be expressed by the following basic suitability assignment matrices SM_n:

    SM_n = [ m_n^{n,1}     m_{n+1}^{n,1}     m_θ^{n,1}   ]  v_{n,n+1}^1
           [ m_n^{n,2}     m_{n+1}^{n,2}     m_θ^{n,2}   ]  v_{n,n+1}^2
           [ ...           ...               ...         ]  ...
           [ m_n^{n,R_n}   m_{n+1}^{n,R_n}   m_θ^{n,R_n} ]  v_{n,n+1}^{R_n}    (8)

From the Dempster-Shafer theory, we have

    {S_n}:     m_n^{I(2)} = K^{I(2)} (m_n^{n,1} m_n^{n,2} + m_n^{n,1} m_θ^{n,2} + m_θ^{n,1} m_n^{n,2})
    {S_{n+1}}: m_{n+1}^{I(2)} = K^{I(2)} (m_{n+1}^{n,1} m_{n+1}^{n,2} + m_{n+1}^{n,1} m_θ^{n,2} + m_θ^{n,1} m_{n+1}^{n,2})
    {θ}:       m_θ^{I(2)} = K^{I(2)} m_θ^{n,1} m_θ^{n,2}

where K^{I(2)} = [1 − (m_n^{n,1} m_{n+1}^{n,2} + m_{n+1}^{n,1} m_n^{n,2})]^{-1}.

As to v_{n,n+1}^{I(2)}, the partially combined suitability assignments to the other hypotheses in θ are all zero. To represent the results of the partial combination of all subsets of factors, the following matrix is suggested, called the local suitability assignment matrix:

    SM^I = [ m_1^{I(R_1)}          m_2^{I(R_1)}      m_θ^{I(R_1)}     ]  v_{1,2}^{I(R_1)}
           [ ...                   ...               ...              ]  ...
           [ m_{N-1}^{I(R_{N-1})}  m_N^{I(R_{N-1})}  m_θ^{I(R_{N-1})} ]  v_{N-1,N}^{I(R_{N-1})}    (9)

2.2. Global Combination Algorithm

After the local combination, the subset of factors H_n may be regarded as an aggregated factor, and m_n^{I(R_n)} as a new basic suitability assignment to the hypothesis S_n, confirmed by H_n. The problem is then to combine all these integrated factors in order to obtain the overall suitability assignment to all subsets μ of θ, including the singletons S_n (n = 1, ..., N). First of all, combine v_{1,3}^{c(2)} = {v_{1,2}^{I(R_1)}, v_{2,3}^{I(R_2)}}.
In a similar way, we obtain the following recursive algorithm:

    {S_1}:     b_1^{c(j+1)} = K^{c(j+1)} b_1^{c(j)} m_θ^{I(R_{j+1})}
    ...
    {S_j}:     b_j^{c(j+1)} = K^{c(j+1)} b_j^{c(j)} m_θ^{I(R_{j+1})}
    {S_{j+1}}: b_{j+1}^{c(j+1)} = K^{c(j+1)} (b_{j+1}^{c(j)} m_{j+1}^{I(R_{j+1})} + b_{j+1}^{c(j)} m_θ^{I(R_{j+1})} + b_θ^{c(j)} m_{j+1}^{I(R_{j+1})})
    {S_{j+2}}: b_{j+2}^{c(j+1)} = K^{c(j+1)} b_θ^{c(j)} m_{j+2}^{I(R_{j+1})}
    {θ}:       b_θ^{c(j+1)} = K^{c(j+1)} b_θ^{c(j)} m_θ^{I(R_{j+1})}

where
    K^{c(j+1)} = [1 − Σ_{i=1}^{j} b_i^{c(j)} (m_{j+1}^{I(R_{j+1})} + m_{j+2}^{I(R_{j+1})}) − b_{j+1}^{c(j)} m_{j+2}^{I(R_{j+1})}]^{-1},  j = 1, ..., N−2

When j = N−2, the global suitability assignments are generated and can be expressed by the following vector, called the global suitability (GS) assignment vector:

    GS = [b_1^{c(N-1)}, ..., b_n^{c(N-1)}, ..., b_N^{c(N-1)}, b_θ^{c(N-1)}]^T    (10)

Consider that GS is obtained by combining v_{1,N}^{c(N-1)}, while

    v_{1,N}^{c(N-1)} = {v_{1,2}^{I(R_1)}, ..., v_{n,n+1}^{I(R_n)}, ..., v_{N-1,N}^{I(R_{N-1})}} = {v_k^1, v_k^2, ..., v_k^{L_k}} = V_k

In other words, b_n^{c(N-1)} is the global suitability assignment to which S_n is confirmed by all factors. It is obvious that the global suitability assignments are all zero for the other hypotheses in θ except for the singletons S_n (n = 1, ..., N) and θ. So, it can be proved that the following equation is true:

    Σ_{n=1}^{N} b_n^{c(N-1)} + b_θ^{c(N-1)} = 1    (11)

Since m(S_n / V_k(LO)) = b_n^{c(N-1)}, the preference degree p_rk can then be calculated by

    p_rk = Σ_{n=1}^{N} m(S_n / V_k(LO_r)) p(S_n) + m(θ / V_k(LO_r)) p(θ)
         = Σ_{n=1}^{N} b_n^{c(N-1)} p(S_n) + b_θ^{c(N-1)} p(θ)    (12)

The value of a quantitative feature (such as size) may also be transformed into the preference degree space using the following equation:

    p_rk = p(f_rk) = 2 (f_rk − f_k^min) / (f_k^max − f_k^min) − 1,  k = 1, ..., k1; r = 1, ..., l    (13)

2.3. Learning Object Ranking

We have chosen the CODASID method [8], which is based on a complete concordance and discordance analysis for information aggregation and the decision rule of the TOPSIS method, for learning object ranking. The computational steps are summarized below.

Step 1: Generate the weighted normalized evaluation matrix Z from the learner preference as follows:

    Z = (z_ij)_{l×(k1+k2)} = (r_ij)_{l×(k1+k2)} × diag{λ_1, ..., λ_{k1+k2}}    (14)

where λ_k is the preference weight from the learner for each factor.
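The quantitative transform (13) can be sketched directly. Here we apply it to the size values that appear later in the demonstration (5, 8 and 2 Mb from Table 1); the function name is our own.

```python
def quantitative_preference(f, f_min, f_max):
    """Map a quantitative feature value linearly onto the
    preference-degree scale [-1, 1], as in equation (13)."""
    return 2.0 * (f - f_min) / (f_max - f_min) - 1.0

sizes = [5, 8, 2]  # Mb, from Table 1
low, high = min(sizes), max(sizes)
print([quantitative_preference(s, low, high) for s in sizes])  # → [0.0, 1.0, -1.0]
```

The minimum value maps to -1, the maximum to 1, and the midpoint to 0, so quantitative features land on the same scale as the preference degrees of the qualitative ones.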
Step 2: Calculate the preference-evaluation discordance index d_ij, the preference concordance index pc_ij, and the evaluation concordance index ec_ij. These are defined by:

    d_ij = max_{k∈D_ij} |z_ik − z_jk| / max_k |z_ik − z_jk|
    pc_ij = Σ_{k∈C_ij} λ_k / Σ_{k=1}^{k1+k2} λ_k
    ec_ij = max_{k∈C_ij} |r_ik − r_jk| / max_k |r_ik − r_jk|    (15)

where C_ij = {k | p_ik ≥ p_jk, k = 1, ..., k1+k2} and D_ij = {k | p_ik < p_jk, k = 1, ..., k1+k2}.

Step 3: For all learning objects, calculate the net preference concordance dominance index pc(LO_i), the net evaluation concordance dominance index ec(LO_i), and the net preference-evaluation discordance dominance index d(LO_i). Then, normalize the three indexes as follows:

    pc(LO_i) = pc(LO_i) / √(Σ_{j=1}^{l} pc(LO_j)²);
    ec(LO_i) = ec(LO_i) / √(Σ_{j=1}^{l} ec(LO_j)²);
    d(LO_i) = d(LO_i) / √(Σ_{j=1}^{l} d(LO_j)²),  i = 1, ..., l    (16)

Step 4: The relative closeness index of LO_i, represented as u(LO_i), is finally defined as

    u(LO_i) = (pc(LO_i) + ec(LO_i) − d(LO_i)) / Σ_{i=1}^{l} (pc(LO_i) + ec(LO_i) − d(LO_i)),
    0 ≤ u(LO_i) ≤ 1,  i = 1, ..., l    (17)

The learning object LO_i that has the largest value of u(LO_i) will be recommended to the specific learner.

3. Methods Demonstration

The learner tries to find the related learning objects by using keywords via the learner interface agent. The agent receives all information about the learner and negotiates with the other agents following the method described in the previous sections. Finally, in this step we have a set of different learning objects in the same concept, so the learner needs a recommendation from the system of the most suitable learning object for him.
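The ranking procedure of Section 2.3 (Steps 1-4) can be sketched end to end. This is a minimal sketch on toy data: the function name and the example matrix are ours, and because the paper leaves the exact definition of the net dominance indexes implicit, we assume here that they are the row sums of the pairwise index matrices.

```python
import numpy as np

def codasid_rank(p, weights):
    """Sketch of ranking Steps 1-4 (eqs. 14-17). p: l x K matrix of
    preference degrees; weights: learner factor weights. Net dominance
    indexes are assumed to be row sums of the pairwise matrices."""
    l, _ = p.shape
    z = p * weights                                  # Step 1 (eq. 14)
    pc = np.zeros((l, l)); ec = np.zeros((l, l)); d = np.zeros((l, l))
    for i in range(l):
        for j in range(l):
            if i == j:
                continue
            C = p[i] >= p[j]                         # concordance set C_ij
            D = ~C                                   # discordance set D_ij
            pc[i, j] = weights[C].sum() / weights.sum()
            dz, dr = np.abs(z[i] - z[j]), np.abs(p[i] - p[j])
            if D.any() and dz.max() > 0:
                d[i, j] = dz[D].max() / dz.max()     # Step 2 (eq. 15)
            if C.any() and dr.max() > 0:
                ec[i, j] = dr[C].max() / dr.max()

    def net(m):                                      # Step 3 (eq. 16)
        v = m.sum(axis=1)                            # assumed net index
        return v / np.sqrt((v ** 2).sum())

    s = net(pc) + net(ec) - net(d)                   # Step 4 (eq. 17)
    return s / s.sum()

# Three learning objects where LO1 dominates LO2, which dominates LO3.
p = np.array([[0.9, 0.8, 0.7], [0.5, 0.6, 0.4], [0.1, 0.2, 0.3]])
w = np.array([0.5, 0.3, 0.2])
u = codasid_rank(p, w)
print(u.argmax())  # index of the recommended learning object
```

With this dominance structure the closeness indexes reproduce the expected order LO1 > LO2 > LO3, whatever convention is chosen for the net indexes.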
The three features are described in Table 1, in which the uncertain subjective judgments for the evaluation of the qualitative features are given. The two sets of factors for evaluating the two qualitative features (general and educational) are defined by

    V_2 = {v_2^1, v_2^2} = {Structure, Aggregation_Level}
    V_3 = {v_3^1, v_3^2, v_3^3, v_3^4, v_3^5} = {Interactivity_Type, Interactivity_Level, Semantic_Density, Difficulty, Learning_Resource_Type}

The example preference weights of each feature defined by the learner (L_i) are obtained:

    λ_2 = [λ_2^1, λ_2^2]^T = [0.5, 0.8]^T
    λ_3 = [λ_3^1, λ_3^2, λ_3^3, λ_3^4, λ_3^5]^T = [0.9, 0.8, 0.5, 0.5, 0.4]^T
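These weights enter the model through the normalization step of (5). A minimal sketch follows; the paper does not state whether its example weights are already normalized, so we treat them here as raw ten-point-scale ratings.

```python
def normalize(raw):
    """Normalize raw importance ratings so the factor weights sum to 1,
    as in the normalization step of equation (5)."""
    total = sum(raw)
    return [w / total for w in raw]

lam3 = [0.9, 0.8, 0.5, 0.5, 0.4]  # learner weights for the educational factors
print([round(w, 3) for w in normalize(lam3)])
```

After normalization the relative ordering of the factors is preserved while the weights become directly usable as a probability-like weighting.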
Table 1. Example of an extended decision matrix for evaluation of three different learning objects

    Feature / IEEE LOM path   Unit or factor                    LO_1            LO_2            LO_3
    Size (f_1)                Mb                                5               8               2
    LOM/General/ (f_2)        Structure (v_2^1)                 VS(0.9)         S(1.0)          A(1.0)
                              Aggregation Level (v_2^2)         A(0.5) S(0.5)   S(1.0)          S(0.8) VS(0.1)
    LOM/Educational/ (f_3)    Interactivity Type (v_3^1)        A(1.0)          VS(0.9)         U(1.0)
                              Interactivity Level (v_3^2)       VS(1.0)         S(1.0)          S(1.0)
                              Semantic Density (v_3^3)          VS(1.0)         S(1.0)          VU(0.5) U(0.5)
                              Difficulty (v_3^4)                S(1.0)          S(0.5) VS(0.5)  S(0.6)
                              Learning Resource Type (v_3^5)    S(1.0)          A(1.0)          VS(1.0)

    VU(ω) - very unsuitable, U(ω) - unsuitable, A(ω) - average, S(ω) - suitable, VS(ω) - very suitable

From (3), the evaluation grades are defined by

    S = {S_1, S_2, S_3, S_4, S_5} = {very unsuitable, unsuitable, average, suitable, very suitable}

S is transformed into the preference degree space using the following scale:

    p(S) = [p(S_1) p(S_2) p(S_3) p(S_4) p(S_5)]^T = [−1 −0.4 0 0.4 1]^T

Each of the preference degrees for quantifying the states of the qualitative features at all learning objects is generated following the same process demonstrated in the previous section. The global suitability assignments are generated using the global combination algorithm. An example of the results is listed in Table 2.

Table 2. Suitability assignments for f_2(LO_1)

    Basic suitability (β × λ):  v_2^1: 0.9, 0.72;  v_2^2: 0.45, 0.45
    Global suitability b_n^{c(2)}:  VU(β) 0.000, U(β) 0.072, A(β) 0.870, S(β) 0.000, VS(β) 0.041

Then, the preference degrees of the two qualitative features at the three learning objects are calculated using (12). For example,

    p_12 = b_{S_1}^{c(2)} p(S_1) + b_{S_2}^{c(2)} p(S_2) + b_{S_3}^{c(2)} p(S_3) + b_{S_4}^{c(2)} p(S_4) + b_{S_5}^{c(2)} p(S_5) + b_θ^{c(2)} p(θ)
         = 0.0 × (−1) + 0.072 × (−0.4) + 0.87 × 0 + 0.0 × 0.4 + 0.041 × 1 + 0.017 × 0
         = 0.012.
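The worked computation of p_12 above can be checked numerically (a minimal sketch; the list layout is our own):

```python
# Global suitability assignments for f_2(LO_1) over the five grades plus theta,
# and the preference-degree scale from the demonstration (p(theta) = 0).
b = [0.0, 0.072, 0.87, 0.0, 0.041, 0.017]   # [VU, U, A, S, VS, theta]
p_scale = [-1.0, -0.4, 0.0, 0.4, 1.0, 0.0]
p12 = sum(bi * pi for bi, pi in zip(b, p_scale))
print(round(p12, 3))  # → 0.012
```

The dot product of the global assignment with the grade scale reproduces the paper's value of 0.012 for p_12.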
Using equations (15)-(16), we can obtain the following judgment and evaluation matrix (Table 3).

Table 3. The J-E matrix

            pc(LO)   pc(LO)   ec(LO)   ec(LO)   d(LO)   d(LO)
    LO_1    0.769    0.876    1.006    0.703    0.728   0.464
    LO_2    0.436    0.498    0.566    0.396    0.867   0.553
    LO_3    0.308    0.352    0.313    0.219    0.536   0.342

Finally, the relative closeness indexes of the three learning objects are generated by equation (17):

    [u(LO_1) u(LO_2) u(LO_3)]^T = [0.606 0.238 0.156]^T

So the recommendation order over the specific learner preference is LO_1 > LO_2 > LO_3. In this case, learning object LO_1 is ranked the most suitable for the specific learner.

4. Conclusion

The evidential reasoning approach proposed in this work provides an alternative way to treat uncertain decision making in the learning object recommendation problem. The presented decision making procedure, composed of this approach and the CODASID method, can be used to deal with multiple feature decision making problems with uncertainty. The demonstrated examples have presented the implementation process of treating uncertainty in multiple feature decision making problems. This system provides personalized guidance: it filters out unsuitable course concepts to reduce cognitive load and recommends the most suitable learning object to the learner by the ranking method.

5. References

[1] P. Dodds, Advanced Distributed Learning Sharable Content Object Reference Model Version 1.2, [online], available at: http://www.adlnet.org (2001).
[2] J. Novak and A. Cañas, "The Theory Underlying Concept Maps and How to Construct Them", Technical Report IHMC CmapTools 2006-01, Florida Institute for Human and Machine Cognition, Revised December 12, 2006 (2006).
[3] IEEE, IEEE Standard for Learning Object Metadata, 1484.12.1-2002 (2002).
[4] N. Pukkhem, M.W. Evens, and W. 
Vatanawood, "The Concept Path Combination Model for Supporting a Personalized Learning Path in Adaptive Educational Systems", In Proceedings of the 2006 International Conference on e-Learning, e-Business, Enterprise Information Systems, e-Government, and Outsourcing (EEE'06), Las Vegas, USA (2006).
[5] R. Roberts and P. Goodwin, "Weight Approximations in Multi-Attribute Decision Models", Journal of Multi-Criteria Decision Analysis, vol. 11, pp. 291-303 (2002).
[6] R. López de Mántaras, P. Perner and P. Cunningham, "Emergent Case-Based Reasoning Applications", Knowledge Engineering Review, 20(3), pp. 325-328 (2006).
[7] K. Sentz and S. Ferson, "Combination of Evidence in Dempster-Shafer Theory", SAND Report, SAND2002-0835 (2002).
[8] D. Dubois, H. Fargier, P. Perny and H. Prade, "A Characterization of Generalized Concordance Rules in Multi-Criteria Decision Making", International Journal of Intelligent Systems, vol. 18, pp. 751-774 (1998).