
Pattern Recognition course teaching resource (books and literature): Data Clustering - A Review (A.K. JAIN, M.N. MURTY, P.J. FLYNN)



Data Clustering: A Review

A.K. JAIN, Michigan State University
M.N. MURTY, Indian Institute of Science
P.J. FLYNN, The Ohio State University

Clustering is the unsupervised classification of patterns (observations, data items, or feature vectors) into groups (clusters). The clustering problem has been addressed in many contexts and by researchers in many disciplines; this reflects its broad appeal and usefulness as one of the steps in exploratory data analysis. However, clustering is a difficult problem combinatorially, and differences in assumptions and contexts in different communities have made the transfer of useful generic concepts and methodologies slow to occur. This paper presents an overview of pattern clustering methods from a statistical pattern recognition perspective, with a goal of providing useful advice and references to fundamental concepts accessible to the broad community of clustering practitioners. We present a taxonomy of clustering techniques, and identify cross-cutting themes and recent advances. We also describe some important applications of clustering algorithms such as image segmentation, object recognition, and information retrieval.

Categories and Subject Descriptors: I.5.1 [Pattern Recognition]: Models; I.5.3 [Pattern Recognition]: Clustering; I.5.4 [Pattern Recognition]: Applications—Computer vision; H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval—Clustering; I.2.6 [Artificial Intelligence]: Learning—Knowledge acquisition

General Terms: Algorithms

Additional Key Words and Phrases: Cluster analysis, clustering applications, exploratory data analysis, incremental clustering, similarity indices, unsupervised learning

Section 6.1 is based on the chapter "Image Segmentation Using Clustering" by A.K. Jain and P.J. Flynn, Advances in Image Understanding: A Festschrift for Azriel Rosenfeld (K. Bowyer and N. Ahuja, Eds.), 1996 IEEE Computer Society Press, and is used by permission of the IEEE Computer Society.

Authors' addresses: A. Jain, Department of Computer Science, Michigan State University, A714 Wells Hall, East Lansing, MI 48824; M. Murty, Department of Computer Science and Automation, Indian Institute of Science, Bangalore, 560 012, India; P. Flynn, Department of Electrical Engineering, The Ohio State University, Columbus, OH 43210.

Permission to make digital/hard copy of part or all of this work for personal or classroom use is granted without fee provided that the copies are not made or distributed for profit or commercial advantage, the copyright notice, the title of the publication, and its date appear, and notice is given that copying is by permission of the ACM, Inc. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee.

© 2000 ACM 0360-0300/99/0900-0001 $5.00

ACM Computing Surveys, Vol. 31, No. 3, September 1999

Data Clustering 265 CONTENTS Intuitively,patterns within a valid clus- ter are more similar to each other than 1.Introduction 1.1 Motivation they are to a pattern belonging to a 1.2 Components of a Clustering Task different cluster.An example of cluster- 1.3 The User's Dilemma and the Role of Expertise ing is depicted in Figure 1.The input 1.4 History patterns are shown in Figure 1(a),and 1.5 Outline 2.Definitions and Notation the desired clusters are shown in Figure 3.Pattern Representation,Feature Selection and 1(b).Here,points belonging to the same Extraction cluster are given the same label.The 4.Similarity Measures variety of techniques for representing 5.Clustering Techniques data,measuring proximity (similarity) 5.1 Hierarchical Clustering Algorithms 5.2 Partitional Algorithms between data elements,and grouping 5.3 Mixture-Resolving and Mode-Seeking data elements has produced a rich and Algorithms often confusing assortment of clustering 5.4 Nearest Neighbor Clustering methods. 5.5 Fuzzy Clustering 5.6 Representation of Clusters It is important to understand the dif- 5.7 Artificial Neural Networks for Clustering ference between clustering (unsuper- 5.8 Evolutionary Approaches for Clustering vised classification)and discriminant 5.9 Search-Based Approaches analysis (supervised classification).In 5.10 A Comparison of Techniques supervised classification,we are pro- 5.11 Incorporating Domain Constraints in Clustering vided with a collection of labeled (pre- 5.12 Clustering Large Data Sets classified)patterns;the problem is to 6.Applications label a newly encountered,yet unla- 6.1 Image Segmentation Using Clustering beled,pattern.Typically,the given la- 6.2 Object and Character Recognition 6.3 Information Retrieval beled (training)patterns are used to 6.4 Data Mining learn the descriptions of classes which 7.Summary in turn are used to label a new pattern. In the case of clustering,the problem is to group a given collection of unlabeled patterns into meaningful clusters.In a 1.INTRODUCTION sense,labels are associated with clus- ters also,but these category labels are 1.1 Motivation data driven;that is,they are obtained solely from the data. Data analysis underlies many comput- Clustering is useful in several explor- ing applications,either in a design atory pattern-analysis,grouping,deci- phase or as part of their on-line opera- sion-making,and machine-learning sit- tions.Data analysis procedures can be uations, including data mining, dichotomized as either exploratory or document retrieval,image segmenta- confirmatory,based on the availability tion,and pattern classification.How- of appropriate models for the data ever,in many such problems,there is source,but a key element in both types little prior information (e.g.,statistical of procedures (whether for hypothesis models)available about the data,and formation or decision-making)is thethe decision-maker must make as few grouping,or classification of measure-assumptions about the data as possible. ments based on either(i)goodness-of-fit It is under these restrictions that clus- to a postulated model,or (ii)natural tering methodology is particularly ap- groupings(clustering)revealed through propriate for the exploration of interre- analysis.Cluster analysis is the organi- lationships among the data points to zation of a collection of patterns (usual-make an assessment (perhaps prelimi- ly represented as a vector of measure-nary)of their structure. 
ments,or a point in a multidimensional The term“clustering'”is used in sev- space)into clusters based on similarity.eral research communities to describe ACM Computing Surveys,Vol.31,No.3,September 1999

CONTENTS

1. Introduction
   1.1 Motivation
   1.2 Components of a Clustering Task
   1.3 The User's Dilemma and the Role of Expertise
   1.4 History
   1.5 Outline
2. Definitions and Notation
3. Pattern Representation, Feature Selection and Extraction
4. Similarity Measures
5. Clustering Techniques
   5.1 Hierarchical Clustering Algorithms
   5.2 Partitional Algorithms
   5.3 Mixture-Resolving and Mode-Seeking Algorithms
   5.4 Nearest Neighbor Clustering
   5.5 Fuzzy Clustering
   5.6 Representation of Clusters
   5.7 Artificial Neural Networks for Clustering
   5.8 Evolutionary Approaches for Clustering
   5.9 Search-Based Approaches
   5.10 A Comparison of Techniques
   5.11 Incorporating Domain Constraints in Clustering
   5.12 Clustering Large Data Sets
6. Applications
   6.1 Image Segmentation Using Clustering
   6.2 Object and Character Recognition
   6.3 Information Retrieval
   6.4 Data Mining
7. Summary

1. INTRODUCTION

1.1 Motivation

Data analysis underlies many computing applications, either in a design phase or as part of their on-line operations. Data analysis procedures can be dichotomized as either exploratory or confirmatory, based on the availability of appropriate models for the data source, but a key element in both types of procedures (whether for hypothesis formation or decision-making) is the grouping, or classification of measurements based on either (i) goodness-of-fit to a postulated model, or (ii) natural groupings (clustering) revealed through analysis. Cluster analysis is the organization of a collection of patterns (usually represented as a vector of measurements, or a point in a multidimensional space) into clusters based on similarity. Intuitively, patterns within a valid cluster are more similar to each other than they are to a pattern belonging to a different cluster. An example of clustering is depicted in Figure 1. The input patterns are shown in Figure 1(a), and the desired clusters are shown in Figure 1(b). Here, points belonging to the same cluster are given the same label. The variety of techniques for representing data, measuring proximity (similarity) between data elements, and grouping data elements has produced a rich and often confusing assortment of clustering methods.

Figure 1. Data clustering.

It is important to understand the difference between clustering (unsupervised classification) and discriminant analysis (supervised classification). In supervised classification, we are provided with a collection of labeled (preclassified) patterns; the problem is to label a newly encountered, yet unlabeled, pattern. Typically, the given labeled (training) patterns are used to learn the descriptions of classes which in turn are used to label a new pattern. In the case of clustering, the problem is to group a given collection of unlabeled patterns into meaningful clusters. In a sense, labels are associated with clusters also, but these category labels are data driven; that is, they are obtained solely from the data.

Clustering is useful in several exploratory pattern-analysis, grouping, decision-making, and machine-learning situations, including data mining, document retrieval, image segmentation, and pattern classification. However, in many such problems, there is little prior information (e.g., statistical models) available about the data, and the decision-maker must make as few assumptions about the data as possible. It is under these restrictions that clustering methodology is particularly appropriate for the exploration of interrelationships among the data points to make an assessment (perhaps preliminary) of their structure.


The term "clustering" is used in several research communities to describe methods for grouping of unlabeled data. These communities have different terminologies and assumptions for the components of the clustering process and the contexts in which clustering is used. Thus, we face a dilemma regarding the scope of this survey. The production of a truly comprehensive survey would be a monumental task given the sheer mass of literature in this area. The accessibility of the survey might also be questionable given the need to reconcile very different vocabularies and assumptions regarding clustering in the various communities.

The goal of this paper is to survey the core concepts and techniques in the large subset of cluster analysis with its roots in statistics and decision theory. Where appropriate, references will be made to key concepts and techniques arising from clustering methodology in the machine-learning and other communities.

The audience for this paper includes practitioners in the pattern recognition and image analysis communities (who should view it as a summarization of current practice), practitioners in the machine-learning communities (who should view it as a snapshot of a closely related field with a rich history of well-understood techniques), and the broader audience of scientific professionals (who should view it as an accessible introduction to a mature field that is making important contributions to computing application areas).

1.2 Components of a Clustering Task

Typical pattern clustering activity involves the following steps [Jain and Dubes 1988]:

(1) pattern representation (optionally including feature extraction and/or selection),
(2) definition of a pattern proximity measure appropriate to the data domain,
(3) clustering or grouping,
(4) data abstraction (if needed), and
(5) assessment of output (if needed).

Figure 2 depicts a typical sequencing of the first three of these steps, including a feedback path where the grouping process output could affect subsequent feature extraction and similarity computations.

Figure 2. Stages in clustering.
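To make these stages concrete, the following self-contained Python sketch (illustrative only, not from the paper) walks through steps (1)-(3) on synthetic data, assuming NumPy is available. The grouping rule used here, connecting patterns whose pairwise distance falls below a threshold and taking connected components, is just one simple graph-theoretic choice; the function names, the synthetic blobs, and the threshold value of 1.0 are assumptions made for the example.

```python
import numpy as np

def euclidean_proximity(X):
    """Step 2: pairwise Euclidean distance matrix for an n x d pattern matrix."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def threshold_grouping(D, threshold):
    """Step 3: a simple grouping rule, label the connected components of the
    graph whose edges join pairs of patterns closer than `threshold`."""
    n = D.shape[0]
    labels = np.full(n, -1)
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:
            i = stack.pop()
            for j in np.where((D[i] < threshold) & (labels == -1))[0]:
                labels[j] = current
                stack.append(j)
        current += 1
    return labels

# Step 1: pattern representation, an n x d matrix of feature vectors
# (here, two well-separated 2-D blobs of synthetic data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, size=(20, 2)),
               rng.normal(3.0, 0.1, size=(20, 2))])

D = euclidean_proximity(X)           # step 2: interpattern similarity
labels = threshold_grouping(D, 1.0)  # step 3: grouping
print(np.unique(labels))             # two clusters expected: [0 1]
```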


Pattern representation refers to the number of classes, the number of available patterns, and the number, type, and scale of the features available to the clustering algorithm. Some of this information may not be controllable by the practitioner. Feature selection is the process of identifying the most effective subset of the original features to use in clustering. Feature extraction is the use of one or more transformations of the input features to produce new salient features. Either or both of these techniques can be used to obtain an appropriate set of features to use in clustering.

Pattern proximity is usually measured by a distance function defined on pairs of patterns. A variety of distance measures are in use in the various communities [Anderberg 1973; Jain and Dubes 1988; Diday and Simon 1976]. A simple distance measure like Euclidean distance can often be used to reflect dissimilarity between two patterns, whereas other similarity measures can be used to characterize the conceptual similarity between patterns [Michalski and Stepp 1983]. Distance measures are discussed in Section 4.

The grouping step can be performed in a number of ways. The output clustering (or clusterings) can be hard (a partition of the data into groups) or fuzzy (where each pattern has a variable degree of membership in each of the output clusters). Hierarchical clustering algorithms produce a nested series of partitions based on a criterion for merging or splitting clusters based on similarity. Partitional clustering algorithms identify the partition that optimizes (usually locally) a clustering criterion. Additional techniques for the grouping operation include probabilistic [Brailovski 1991] and graph-theoretic [Zahn 1971] clustering methods. The variety of techniques for cluster formation is described in Section 5.

Data abstraction is the process of extracting a simple and compact representation of a data set. Here, simplicity is either from the perspective of automatic analysis (so that a machine can perform further processing efficiently) or it is human-oriented (so that the representation obtained is easy to comprehend and intuitively appealing). In the clustering context, a typical data abstraction is a compact description of each cluster, usually in terms of cluster prototypes or representative patterns such as the centroid [Diday and Simon 1976].
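As a small illustration of data abstraction by prototypes, the sketch below (the toy patterns and labels are invented for the example) reduces each hard cluster to its centroid, the kind of compact description mentioned above.

```python
import numpy as np

def cluster_centroids(X, labels):
    """Represent each cluster by its centroid (mean pattern), a compact
    data abstraction of the clustering output."""
    return {int(k): X[labels == k].mean(axis=0) for k in np.unique(labels)}

# Toy example: four 2-D patterns, hard-labeled into two clusters.
X = np.array([[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
print(cluster_centroids(X, labels))   # {0: array([0.05, 0.05]), 1: array([5.05, 5.05])}
```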


How is the output of a clustering algorithm evaluated? What characterizes a 'good' clustering result and a 'poor' one? All clustering algorithms will, when presented with data, produce clusters, regardless of whether the data contain clusters or not. If the data does contain clusters, some clustering algorithms may obtain 'better' clusters than others. The assessment of a clustering procedure's output, then, has several facets. One is actually an assessment of the data domain rather than the clustering algorithm itself: data which do not contain clusters should not be processed by a clustering algorithm. The study of cluster tendency, wherein the input data are examined to see if there is any merit to a cluster analysis prior to one being performed, is a relatively inactive research area, and will not be considered further in this survey. The interested reader is referred to Dubes [1987] and Cheng [1995] for information.

Cluster validity analysis, by contrast, is the assessment of a clustering procedure's output. Often this analysis uses a specific criterion of optimality; however, these criteria are usually arrived at subjectively. Hence, little in the way of 'gold standards' exists in clustering except in well-prescribed subdomains. Validity assessments are objective [Dubes 1993] and are performed to determine whether the output is meaningful. A clustering structure is valid if it cannot reasonably have occurred by chance or as an artifact of a clustering algorithm. When statistical approaches to clustering are used, validation is accomplished by carefully applying statistical methods and testing hypotheses. There are three types of validation studies. An external assessment of validity compares the recovered structure to an a priori structure. An internal examination of validity tries to determine if the structure is intrinsically appropriate for the data. A relative test compares two structures and measures their relative merit. Indices used for this comparison are discussed in detail in Jain and Dubes [1988] and Dubes [1993], and are not discussed further in this paper.

1.3 The User's Dilemma and the Role of Expertise

The availability of such a vast collection of clustering algorithms in the literature can easily confound a user attempting to select an algorithm suitable for the problem at hand. In Dubes and Jain [1976], a set of admissibility criteria defined by Fisher and Van Ness [1971] are used to compare clustering algorithms. These admissibility criteria are based on: (1) the manner in which clusters are formed, (2) the structure of the data, and (3) sensitivity of the clustering technique to changes that do not affect the structure of the data. However, there is no critical analysis of clustering algorithms dealing with important questions such as

—How should the data be normalized?
—Which similarity measure is appropriate to use in a given situation?
—How should domain knowledge be utilized in a particular clustering problem?
—How can a very large data set (say, a million patterns) be clustered efficiently?

These issues have motivated this survey, and its aim is to provide a perspective on the state of the art in clustering methodology and algorithms. With such a perspective, an informed practitioner should be able to confidently assess the tradeoffs of different techniques, and ultimately make a competent decision on a technique or suite of techniques to employ in a particular application.

There is no clustering technique that is universally applicable in uncovering the variety of structures present in multidimensional data sets. For example, consider the two-dimensional data set shown in Figure 1(a). Not all clustering techniques can uncover all the clusters present here with equal facility, because clustering algorithms often contain implicit assumptions about cluster shape or multiple-cluster configurations based on the similarity measures and grouping criteria used.

Humans perform competitively with automatic clustering procedures in two dimensions, but most real problems involve clustering in higher dimensions. It is difficult for humans to obtain an intuitive interpretation of data embedded in a high-dimensional space. In addition, data hardly follow the "ideal" structures (e.g., hyperspherical, linear) shown in Figure 1. This explains the large number of clustering algorithms which continue to appear in the literature; each new clustering algorithm performs slightly better than the existing ones on a specific distribution of patterns.

It is essential for the user of a clustering algorithm to not only have a thorough understanding of the particular technique being utilized, but also to know the details of the data gathering process and to have some domain expertise; the more information the user has about the data at hand, the more likely the user would be able to succeed in assessing its true class structure [Jain and Dubes 1988].

This domain information can also be used to improve the quality of feature extraction, similarity computation, grouping, and cluster representation [Murty and Jain 1995].

Appropriate constraints on the data source can be incorporated into a clustering procedure. One example of this is mixture resolving [Titterington et al. 1985], wherein it is assumed that the data are drawn from a mixture of an unknown number of densities (often assumed to be multivariate Gaussian). The clustering problem here is to identify the number of mixture components and the parameters of each component. The concept of density clustering and a methodology for decomposition of feature spaces [Bajcsy 1997] have also been incorporated into traditional clustering methodology, yielding a technique for extracting overlapping clusters.

1.4 History

Even though there is an increasing interest in the use of clustering methods in pattern recognition [Anderberg 1973], image processing [Jain and Flynn 1996] and information retrieval [Rasmussen 1992; Salton 1991], clustering has a rich history in other disciplines [Jain and Dubes 1988] such as biology, psychiatry, psychology, archaeology, geology, geography, and marketing. Other terms more or less synonymous with clustering include unsupervised learning [Jain and Dubes 1988], numerical taxonomy [Sneath and Sokal 1973], vector quantization [Oehler and Gray 1995], and learning by observation [Michalski and Stepp 1983]. The field of spatial analysis of point patterns [Ripley 1988] is also related to cluster analysis. The importance and interdisciplinary nature of clustering is evident through its vast literature.

A number of books on clustering have been published [Jain and Dubes 1988; Anderberg 1973; Hartigan 1975; Spath 1980; Duran and Odell 1974; Everitt 1993; Backer 1995], in addition to some useful and influential review papers. A survey of the state of the art in clustering circa 1978 was reported in Dubes and Jain [1980]. A comparison of various clustering algorithms for constructing the minimal spanning tree and the short spanning path was given in Lee [1981]. Cluster analysis was also surveyed in Jain et al. [1986]. A review of image segmentation by clustering was reported in Jain and Flynn [1996]. Comparisons of various combinatorial optimization schemes, based on experiments, have been reported in Mishra and Raghavan [1994] and Al-Sultan and Khan [1996].

1.5 Outline

This paper is organized as follows. Section 2 presents definitions of terms to be used throughout the paper. Section 3 summarizes pattern representation, feature extraction, and feature selection. Various approaches to the computation of proximity between patterns are discussed in Section 4. Section 5 presents a taxonomy of clustering approaches, describes the major techniques in use, and discusses emerging techniques for clustering incorporating non-numeric constraints and the clustering of large sets of patterns. Section 6 discusses applications of clustering methods to image analysis and data mining problems. Finally, Section 7 presents some concluding remarks.

2. DEFINITIONS AND NOTATION

The following terms and notation are used throughout this paper.

—A pattern (or feature vector, observation, or datum) $x$ is a single data item used by the clustering algorithm. It typically consists of a vector of $d$ measurements: $x = (x_1, \ldots, x_d)$.

—The individual scalar components $x_i$ of a pattern $x$ are called features (or attributes).

—$d$ is the dimensionality of the pattern or of the pattern space.

—A pattern set is denoted $\mathcal{X} = \{x_1, \ldots, x_n\}$. The $i$th pattern in $\mathcal{X}$ is denoted $x_i = (x_{i,1}, \ldots, x_{i,d})$. In many cases a pattern set to be clustered is viewed as an $n \times d$ pattern matrix.

—A class, in the abstract, refers to a state of nature that governs the pattern generation process in some cases. More concretely, a class can be viewed as a source of patterns whose distribution in feature space is governed by a probability density specific to the class. Clustering techniques attempt to group patterns so that the classes thereby obtained reflect the different pattern generation processes represented in the pattern set.

—Hard clustering techniques assign a class label $l_i$ to each pattern $x_i$, identifying its class. The set of all labels for a pattern set $\mathcal{X}$ is $\mathcal{L} = \{l_1, \ldots, l_n\}$, with $l_i \in \{1, \ldots, k\}$, where $k$ is the number of clusters.

—Fuzzy clustering procedures assign to each input pattern $x_i$ a fractional degree of membership $f_{ij}$ in each output cluster $j$.

—A distance measure (a specialization of a proximity measure) is a metric (or quasi-metric) on the feature space used to quantify the similarity of patterns.
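The notation above maps naturally onto array types. The following sketch is an illustrative assumption about representation, not something prescribed by the paper: it stores a pattern set as an $n \times d$ matrix, hard labels as an integer vector, and fuzzy memberships as an $n \times k$ matrix whose rows sum to one. NumPy is assumed to be available.

```python
import numpy as np

n, d, k = 5, 3, 2          # n patterns, d features, k clusters

# A pattern set as an n x d pattern matrix; each row x_i is one pattern.
X = np.random.default_rng(1).normal(size=(n, d))

# Hard clustering: one class label l_i in {1, ..., k} per pattern.
hard_labels = np.array([1, 1, 2, 2, 1])

# Fuzzy clustering: a fractional membership f_ij of pattern i in cluster j,
# with each row summing to one.
F = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.3, 0.7],
              [0.1, 0.9],
              [0.6, 0.4]])
assert np.allclose(F.sum(axis=1), 1.0)
```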
3. PATTERN REPRESENTATION, FEATURE SELECTION AND EXTRACTION

There are no theoretical guidelines that suggest the appropriate patterns and features to use in a specific situation. Indeed, the pattern generation process is often not directly controllable; the user's role in the pattern representation process is to gather facts and conjectures about the data, optionally perform feature selection and extraction, and design the subsequent elements of the clustering system. Because of the difficulties surrounding pattern representation, it is conveniently assumed that the pattern representation is available prior to clustering. Nonetheless, a careful investigation of the available features and any available transformations (even simple ones) can yield significantly improved clustering results. A good pattern representation can often yield a simple and easily understood clustering; a poor pattern representation may yield a complex clustering whose true structure is difficult or impossible to discern.

Figure 3 shows a simple example. The points in this 2D feature space are arranged in a curvilinear cluster of approximately constant distance from the origin. If one chooses Cartesian coordinates to represent the patterns, many clustering algorithms would be likely to fragment the cluster into two or more clusters, since it is not compact. If, however, one uses a polar coordinate representation for the clusters, the radius coordinate exhibits tight clustering and a one-cluster solution is likely to be easily obtained.

Figure 3. A curvilinear cluster whose points are approximately equidistant from the origin. Different pattern representations (coordinate systems) would cause clustering algorithms to yield different results for this data (see text).

A pattern can measure either a physical object (e.g., a chair) or an abstract notion (e.g., a style of writing). As noted above, patterns are represented conventionally as multidimensional vectors, where each dimension is a single feature [Duda and Hart 1973]. These features can be either quantitative or qualitative. For example, if weight and color are the two features used, then (20, black) is the representation of a black object with 20 units of weight. The features can be subdivided into the following types [Gowda and Diday 1992]:

(1) Quantitative features, e.g.:
   (a) continuous values (e.g., weight);
   (b) discrete values (e.g., the number of computers);
   (c) interval values (e.g., the duration of an event).

(2) Qualitative features:
   (a) nominal or unordered (e.g., color);
   (b) ordinal (e.g., military rank or qualitative evaluations of temperature ("cool" or "hot") or sound intensity ("quiet" or "loud")).

Quantitative features can be measured on a ratio scale (with a meaningful reference value, such as temperature), or on nominal or ordinal scales.

One can also use structured features [Michalski and Stepp 1983] which are represented as trees, where the parent node represents a generalization of its child nodes. For example, a parent node "vehicle" may be a generalization of children labeled "cars," "buses," "trucks," and "motorcycles." Further, the node "cars" could be a generalization of cars of the type "Toyota," "Ford," "Benz," etc. A generalized representation of patterns, called symbolic objects, was proposed in Diday [1988]. Symbolic objects are defined by a logical conjunction of events. These events link values and features in which the features can take one or more values and all the objects need not be defined on the same set of features.

It is often valuable to isolate only the most descriptive and discriminatory features in the input set, and utilize those features exclusively in subsequent analysis. Feature selection techniques identify a subset of the existing features for subsequent use, while feature extraction techniques compute new features from the original set. In either case, the goal is to improve classification performance and/or computational efficiency. Feature selection is a well-explored topic in statistical pattern recognition [Duda and Hart 1973]; however, in a clustering context (i.e., lacking class labels for patterns), the feature selection process is of necessity ad hoc, and might involve a trial-and-error process where various subsets of features are selected, the resulting patterns clustered, and the output evaluated using a validity index. In contrast, some of the popular feature extraction processes (e.g., principal components analysis [Fukunaga 1990]) do not depend on labeled data and can be used directly. Reduction of the number of features has an additional benefit, namely the ability to produce output that can be visually inspected by a human.
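Feature extraction can be as simple as a change of coordinates. Recalling the Figure 3 example above, the sketch below (synthetic data and parameter choices are assumptions of this example, not taken from the paper) generates a curvilinear cluster and shows that the extracted radius feature is tightly clustered even though neither Cartesian feature is; NumPy is assumed to be available.

```python
import numpy as np

rng = np.random.default_rng(2)

# A curvilinear "cluster": points at a roughly constant radius from the origin,
# spread over a wide range of angles (similar in spirit to Figure 3).
theta = rng.uniform(0.0, 1.5 * np.pi, size=200)
radius = 5.0 + rng.normal(0.0, 0.05, size=200)
X_cartesian = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])

# Feature extraction: re-express each pattern in polar coordinates.
r = np.hypot(X_cartesian[:, 0], X_cartesian[:, 1])
phi = np.arctan2(X_cartesian[:, 1], X_cartesian[:, 0])  # angle carries the remaining variation

# In Cartesian coordinates the cluster is far from compact; in the polar
# representation the radius coordinate is tightly clustered around 5.
print(X_cartesian.std(axis=0))   # large spread in both x and y
print(r.std())                   # small spread (about 0.05)
```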

Computation of distances between patterns with some or all features being noncontinuous is problematic, since the different types of features are not comparable and (as an extreme example) the notion of proximity is effectively binary-valued for nominal-scaled features. Nonetheless, practitioners (especially those in machine learning, where mixed-type patterns are common) have developed proximity measures for heterogeneous type patterns. A recent example is Wilson and Martinez [1997], which proposes a combination of a modified Minkowski metric for continuous features and a distance based on counts (population) for nominal attributes. A variety of other metrics have been reported in Diday and Simon [1976] and Ichino and Yaguchi [1994] for computing the similarity between patterns represented using quantitative as well as qualitative features.
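To make the treatment of mixed-type patterns concrete, the following sketch combines a Minkowski-type term on the continuous features with a simple mismatch count on the nominal features. This is only an illustration of the general idea, not a reproduction of Wilson and Martinez [1997] or any other published measure; the feature split, weight, and data values are hypothetical.

def mixed_distance(x_i, x_j, continuous_idx, nominal_idx, p=2, nominal_weight=1.0):
    # Continuous part: Minkowski distance over the (ideally range-normalized) continuous features.
    cont = sum(abs(x_i[k] - x_j[k]) ** p for k in continuous_idx) ** (1.0 / p)
    # Nominal part: number of nominal attributes on which the two patterns disagree.
    mismatches = sum(1 for k in nominal_idx if x_i[k] != x_j[k])
    return cont + nominal_weight * mismatches

# Hypothetical mixed-type patterns: features 0 and 1 are continuous (already scaled to [0, 1]),
# feature 2 is a nominal attribute.
a = (0.20, 0.70, "red")
b = (0.35, 0.10, "blue")
print(mixed_distance(a, b, continuous_idx=(0, 1), nominal_idx=(2,)))

Published heterogeneous measures typically normalize each continuous attribute and use value-difference statistics rather than a raw mismatch count for the nominal part; the point here is only that the two feature types require different treatment.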
Patterns can also be represented using string or tree structures [Knuth 1973]. Strings are used in syntactic clustering [Fu and Lu 1977]. Several measures of similarity between strings are described in Baeza-Yates [1992]. A good summary of similarity measures between trees is given by Zhang [1995]. A comparison of syntactic and statistical approaches for pattern recognition using several criteria was presented in Tanaka [1995] and the conclusion was that syntactic methods are inferior in every aspect. Therefore, we do not consider syntactic methods further in this paper.

There are some distance measures reported in the literature [Gowda and Krishna 1977; Jarvis and Patrick 1973] that take into account the effect of surrounding or neighboring points. These surrounding points are called context in Michalski and Stepp [1983]. The similarity between two points x_i and x_j, given this context, is given by


s(x_i, x_j) = f(x_i, x_j, \mathcal{E}) ,

where \mathcal{E} is the context (the set of surrounding points). One metric defined using context is the mutual neighbor distance (MND), proposed in Gowda and Krishna [1977], which is given by

MND(x_i, x_j) = NN(x_i, x_j) + NN(x_j, x_i) ,

where NN(x_i, x_j) is the neighbor number of x_j with respect to x_i. Figures 4 and 5 give an example. In Figure 4, the nearest neighbor of A is B, and B's nearest neighbor is A. So, NN(A, B) = NN(B, A) = 1 and the MND between A and B is 2. However, NN(B, C) = 1 but NN(C, B) = 2, and therefore MND(B, C) = 3. Figure 5 was obtained from Figure 4 by adding three new points D, E, and F. Now MND(B, C) = 3 (as before), but MND(A, B) = 5. The MND between A and B has increased by introducing additional points, even though A and B have not moved. The MND is not a metric (it does not satisfy the triangle inequality [Zhang 1995]). In spite of this, MND has been successfully applied in several clustering applications [Gowda and Diday 1992]. This observation supports the viewpoint that the dissimilarity does not need to be a metric.

Figure 4. A and B are more similar than A and C.
Figure 5. After a change in context, B and C are more similar than B and A.

Watanabe's theorem of the ugly duckling [Watanabe 1985] states:

"Insofar as we use a finite set of predicates that are capable of distinguishing any two objects considered, the number of predicates shared by any two such objects is constant, independent of the choice of objects."

This implies that it is possible to make any two arbitrary patterns equally similar by encoding them with a sufficiently large number of features. As a consequence, any two arbitrary patterns are equally similar, unless we use some additional domain information. For example, in the case of conceptual clustering [Michalski and Stepp 1983], the similarity between x_i and x_j is defined as

s(x_i, x_j) = f(x_i, x_j, \mathcal{C}, \mathcal{E}) ,

where \mathcal{C} is a set of pre-defined concepts. This notion is illustrated with the help of Figure 6. Here, the Euclidean distance between points A and B is less than that between B and C. However, B and C can be viewed as "more similar" than A and B because B and C belong to the same concept (ellipse) and A belongs to a different concept (rectangle). The conceptual similarity measure is the most general similarity measure.
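The mutual neighbor distance defined above is easy to compute. The sketch below (assuming NumPy; the coordinates are our own and only loosely mimic the configurations of Figures 4 and 5) reproduces the behavior described in the text: adding the three points D, E, and F raises MND(A, B) from 2 to 5 while MND(B, C) remains 3.

import numpy as np

def neighbor_number(data, i, j):
    # NN(x_i, x_j): the neighbor number of x_j with respect to x_i, i.e. the rank of x_j
    # when all points are sorted by Euclidean distance from x_i (nearest neighbor = 1).
    dists = np.linalg.norm(data - data[i], axis=1)
    order = np.argsort(dists)                # order[0] is x_i itself (distance 0)
    return int(np.where(order == j)[0][0])

def mnd(data, i, j):
    # Mutual neighbor distance: MND(x_i, x_j) = NN(x_i, x_j) + NN(x_j, x_i).
    return neighbor_number(data, i, j) + neighbor_number(data, j, i)

# Hypothetical coordinates in the spirit of Figure 4: A and B are close, C lies farther away.
A, B, C = 0, 1, 2
fig4_like = np.array([[0.0, 0.0], [1.0, 0.0], [2.5, 0.8]])
print(mnd(fig4_like, A, B), mnd(fig4_like, B, C))   # 2 and 3

# Adding three points D, E, F near A changes the context: A and B have not moved,
# yet MND(A, B) grows from 2 to 5 while MND(B, C) stays 3.
fig5_like = np.vstack([fig4_like, [[-0.9, 0.3], [-0.8, -0.4], [-0.95, 0.1]]])
print(mnd(fig5_like, A, B), mnd(fig5_like, B, C))   # 5 and 3

Because the neighbor ranks depend on every point present, the measure is context sensitive by construction.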
