… stage, suggesting reference cases in each stage. This refined information is useful for decision makers in revising their solutions. The third characteristic is that its performance is superior to that of the traditional CBR algorithm because it employs a genetic algorithm (GA) to keep the convergence rate stable, thereby increasing the efficiency of the solution process.

3.1. GCBR as a generalized problem-solving model

The traditional CBR algorithm's 4R steps are that CBR retrieves the feasible cases so that decision makers may either reuse the solution of these retrieved cases directly or revise the solution according to real applications; CBR then retains the successful case or cases and the solution in the case base for further reference. This process is similar to that of ordinary human problem solving, and many have applied CBR successfully to a variety of contexts during the past few decades.

Its core algorithm evaluates the similarity of the target case (T) with the cases in the case base. The retrieval sub-algorithm evaluates the similarity between the target and each case in the case base by summarizing the gap of each feature that describes the case in detail. CBR judges the similarity here by calculating the difference between each case and the target, the similarity increasing as the difference decreases. Each feature has a fet-check-function to evaluate the feature's similarity, as illustrated in Fig. 3. The fet-check-function is set to either total or partial similarity according to the feature's characteristics. For partial similarity, the check function returns a real number between 0, indicating that the two values are identical, and 1, indicating that they are totally different. The total-similarity fet-check-function, however, returns either 1 or 0. For example, if the target's gender feature is female and case_i's is male, the gap between the target and case_i would be 1. The difference between case_i and the target is therefore the gap as calculated using Eq. (1). GCBR then selects the case with the smallest gap as the most feasible solution and provides it to the decision makers for reference.

difference(\vec{C}_i, \vec{T}) = cosine(\overrightarrow{fet^{C_i}_m}, \overrightarrow{fet^{T}_m}) = \frac{\overrightarrow{fet^{C_i}_m} \cdot \overrightarrow{fet^{T}_m}}{\lVert \overrightarrow{fet^{C_i}_m} \rVert_2 \, \lVert \overrightarrow{fet^{T}_m} \rVert_2} \quad (1)
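To make Eq. (1) and the two kinds of fet-check-function concrete, the Python sketch below computes a total-similarity gap for a categorical feature, a partial-similarity gap, and the cosine expression of Eq. (1) for two m-dimensional feature vectors. It is only an illustration under stated assumptions; the feature names, the range-based normalisation in the partial check, and the numeric encoding of features are not taken from the paper.

```python
import math

# Illustrative sketch of Eq. (1) and of total vs. partial fet-check-functions.
# Feature names, values, and the normalisation used here are assumptions.

def total_check(target_value, case_value):
    """Total similarity: the gap is 0 when identical, 1 otherwise (e.g. gender)."""
    return 0.0 if target_value == case_value else 1.0

def partial_check(target_value, case_value, value_range):
    """Partial similarity: a gap in [0, 1], here a normalised absolute distance."""
    return abs(target_value - case_value) / value_range

def cosine_difference(case_features, target_features):
    """Eq. (1): cosine of the case and target feature vectors."""
    dot = sum(c * t for c, t in zip(case_features, target_features))
    norm_c = math.sqrt(sum(c * c for c in case_features))
    norm_t = math.sqrt(sum(t * t for t in target_features))
    return dot / (norm_c * norm_t)

if __name__ == "__main__":
    print(total_check("female", "male"))        # 1.0, as in the gender example
    print(partial_check(3, 5, value_range=10))  # 0.2
    print(cosine_difference([0.2, 0.5, 0.3], [0.1, 0.6, 0.3]))
```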
The case-retrieval sub-algorithm needs to be revised, however, because the problems that GCBR addresses involve hierarchical levels of criteria. This study therefore proposes the recursive sub-algorithm fet-check-rewrite, as shown in Fig. 3. This algorithm rewrites the fet-check-function recursively so that GCBR can manage the HCA problem: when the HCA feature level exceeds 1, that is, when level(fet) > 1, the fet-check-function is replaced by the weighted sum over the next-level features.

For example, according to the IT certification example illustrated in Fig. 2, the similarity evaluation function should be altered, as shown in Fig. 4, to consider three levels recursively, so its consideration of level 3, the dashed-block area, precedes that of level 2, which in turn precedes that of level 1. The fet-check-rewrite mechanism is a function that returns the revised feature-check-function to GCBR's core algorithm in order to evaluate the case's similarity with the target.

Fig. 5 compares the gap between the features of specific cases and the target using the fet-check-function, as shown in Eq. (2). Eq. (3) then summarizes the feature gaps to evaluate the similarity between the target and each case in the case base. Fig. 6 presents Fig. 5's Reuse algorithm. Following Yang and Wang's (2009a) procedure, GCBR then analyzes the retrieved case or cases further using the knowledge discovery (KDD) mechanism, which includes association mining techniques and statistical analyses, to produce potential knowledge rules and then provide decision makers with revised case information upon which they can take action. Yang and Wang (2008) claimed that simply presenting the retrieved case or cases to decision makers is useless because the case filtering performs poorly under loose target conditions. The system should therefore employ data mining analysis to identify …

[Fig. 2. An IT certification recommender problem described by HCA. The figure depicts the target as a three-level hierarchy of weighted features, with labels including demographic data, capability, learning path, Degree, Gender, Age, Achieved certification, Working experience, Unit time, unit, and serial, each branch annotated with an importance weight (wgt = 0.2–0.6).]

Table 1. GCBR variables, definitions, and descriptions.
  n — the number of cases in the case base.
  C_i — the ith case of the case base, i = 1, 2, ..., n.
  fet — features used to describe a case.
  m — the number of features each case employs.
  T — the target case inputted by the decision makers; the recommender mechanism provides them with feasible reference cases according to the target case's condition.
  fet_j — the jth feature; fet_j^{C_i} is the jth feature of case i and fet_j^{T} is the jth feature of the target, i = 1, 2, ..., n; j = 1, 2, ..., m.
  wgt_{fet_j} — the importance weighting that decision makers can assign to the jth feature to denote its importance weight; j = 1, 2, ..., m.
  difference(C_i) — an array that records the degree of difference between the ith case and the target case (T), evaluated by Eq. (5); i = 1, 2, ..., n.
  gap(fet_j^{C_i}, fet_j^{T}) — evaluates the gap between the jth feature of the ith case and that of T by Eq. (4); i = 1, 2, ..., n; j = 1, 2, ..., m.
  level(fet_j) — returns the level condition that decision makers set on the jth feature.
  threshold — the gap threshold that decision makers can set; if a case's difference is less than the threshold, the case is provided as a reference.
  next_level(fet_j) — retrieves the next-level features of the jth feature.
  fet_check_function — evaluates the gap between the jth feature and T.
  k — the number of stages that decision makers expect the recommendation system to require for providing a feasible suggestion.
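To make the variables of Table 1 and the HCA structure of Fig. 2 more concrete, the sketch below shows one possible in-memory representation of a hierarchical, weighted feature tree, with analogues of level(fet) and next_level(fet). The class name, field names, and the example weights and values are assumptions made for illustration, not the paper's data model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# A possible representation of an HCA feature tree, mirroring Table 1:
# each node carries an importance weight (wgt), and next_level() returns
# its children. Names and example weights are illustrative assumptions.

@dataclass
class Feature:
    name: str
    wgt: float                       # importance weight assigned by decision makers
    value: Optional[object] = None   # leaf features carry a concrete value
    children: List["Feature"] = field(default_factory=list)

    def level(self) -> int:
        """Depth of the sub-hierarchy rooted at this feature (1 for a leaf)."""
        return 1 if not self.children else 1 + max(c.level() for c in self.children)

    def next_level(self) -> List["Feature"]:
        """The next-level features of this feature, as in Table 1."""
        return self.children

# A target resembling the IT certification example of Fig. 2 (weights illustrative).
target = Feature("target", 1.0, children=[
    Feature("demographic data", 0.2, children=[
        Feature("Degree", 0.5, "master"),
        Feature("Gender", 0.2, "female"),
        Feature("Age", 0.3, 28),
    ]),
    Feature("capability", 0.5, children=[
        Feature("Achieved certification", 0.6, "CCNA"),
        Feature("Working experience", 0.4, 3),
    ]),
    Feature("learning path", 0.3, children=[
        Feature("Unit time", 1.0, 6),
    ]),
])

print(target.level())  # 3: the example hierarchy has three levels
```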
Function fet_check_function = fet-check-rewrite(fet)
    if (level(fet) > 1)
        fet_check_function = Σ wgt_fet × fet-check-rewrite(next_level(fet))
    else
        fet_check_function = wgt_fet × fet_check_function
    end if;
end

Fig. 3. Fet-check-rewrite algorithm.
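A runnable reading of the Fig. 3 recursion might look like the sketch below: for a hierarchical feature the gaps of the next-level features are summed recursively, for a leaf the feature's own check function is applied, and in either case the result is scaled by the feature's weight. The nested-dict representation, field names, and the leaf check function are illustrative assumptions, not the paper's code, and the figure's weighted sum is interpreted as applying each weight once per node.

```python
# A sketch of one reading of the fet-check-rewrite recursion in Fig. 3, using
# nested dicts for an HCA feature hierarchy. The data structure, field names,
# and leaf check function are illustrative assumptions, not the paper's code.

def leaf_check(target_value, case_value):
    """Total-similarity check for leaf features: gap 0 if identical, 1 otherwise."""
    return 0.0 if target_value == case_value else 1.0

def fet_check_rewrite(target_fet, case_fet):
    """Weighted gap of a feature: recurse over next-level features, or check a leaf."""
    children = target_fet.get("children")
    if children:  # level(fet) > 1: weighted sum over the next level
        inner = sum(
            fet_check_rewrite(child, case_child)
            for child, case_child in zip(children, case_fet["children"])
        )
    else:         # level(fet) == 1: apply the feature's own check function
        inner = leaf_check(target_fet["value"], case_fet["value"])
    return target_fet["wgt"] * inner

if __name__ == "__main__":
    target = {"wgt": 1.0, "children": [
        {"name": "Gender", "wgt": 0.5, "value": "female"},
        {"name": "capability", "wgt": 0.5, "children": [
            {"name": "Achieved certification", "wgt": 0.6, "value": "CCNA"},
            {"name": "Working experience", "wgt": 0.4, "value": 3},
        ]},
    ]}
    case = {"wgt": 1.0, "children": [
        {"name": "Gender", "wgt": 0.5, "value": "male"},
        {"name": "capability", "wgt": 0.5, "children": [
            {"name": "Achieved certification", "wgt": 0.6, "value": "CCNA"},
            {"name": "Working experience", "wgt": 0.4, "value": 5},
        ]},
    ]}
    print(fet_check_rewrite(target, case))  # 0.7: smaller values mean greater similarity
```

A case whose overall weighted gap falls below the threshold of Table 1 would then be offered to the decision makers as a reference, in line with the retrieval procedure described above.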