
Tongji University, Big Data Analysis and Mining: course teaching resources (PPT lecture slides), K-means & EM

Resource type: document library; Format: PPTX; Pages: 34; File size: 2.51 MB


Partitioning Algorithms
◼ Partitioning method: construct a partition of n data points into a set of K clusters.
◼ Given: a set of data points and the number K.
◼ Find: a partition into K clusters that optimizes the chosen partitioning criterion.
  ◆ Globally optimal: would require exhaustively enumerating all partitions, which is intractable for many objective functions (see the sketch below).
  ◆ Effective heuristic methods: the K-means and K-medoids algorithms.
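To see why exhaustive enumeration is hopeless: the number of ways to partition n items into exactly K non-empty clusters is the Stirling number of the second kind S(n, K), which grows roughly like K^n / K!. A minimal sketch (the function name `stirling2` is ours, not from the slides):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n: int, k: int) -> int:
    """Count partitions of n items into exactly k non-empty clusters.
    Recurrence: S(n, k) = k * S(n-1, k) + S(n-1, k-1)."""
    if k == 0:
        return 1 if n == 0 else 0
    if n == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

# Even a tiny dataset has an astronomical number of partitions:
print(stirling2(50, 3))   # ~1.2e23 partitions of 50 points into 3 clusters
```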


K-Means
◼ Assumes data points are real-valued vectors.
◼ Clusters based on centroids (aka the center of gravity or mean) of the points in a cluster c:

  $\mu(c) = \frac{1}{|c|} \sum_{x \in c} x$

◼ Reassignment of instances to clusters is based on distance to the current cluster centroids.
  ◆ (Or one can equivalently phrase it in terms of similarities.)


K-Means Algorithm
◼ Select K random data points {c1, c2, …, cK} from the data {x1, x2, …, xN} as seeds.
◼ Until clustering converges (or another stopping criterion is met):
    // Partitioning step
    For each point xi:
        Assign xi to the cluster cj such that dist(xi, cj) is minimal.
    // Next, update the centroid of each cluster
    For each cluster cj:
        cj = μ(cj)
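As a concrete illustration of the loop above, here is a minimal NumPy sketch. It fills in details the slides leave open, so treat these as our assumptions: Euclidean distance, random seeding without replacement, and "partition unchanged" as the stopping test; the name `kmeans` is ours.

```python
import numpy as np

def kmeans(X: np.ndarray, K: int, max_iter: int = 100, seed: int = 0):
    """Lloyd's K-means on an (N, D) array of real-valued vectors."""
    rng = np.random.default_rng(seed)
    # Select K random data points as seeds.
    centroids = X[rng.choice(len(X), size=K, replace=False)].copy()
    assign = np.full(len(X), -1)
    for _ in range(max_iter):
        # Partitioning step: assign each point to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)  # (N, K)
        new_assign = dists.argmin(axis=1)
        if np.array_equal(new_assign, assign):   # partition unchanged -> converged
            break
        assign = new_assign
        # Update step: move each centroid to the mean of its members.
        for j in range(K):
            members = X[assign == j]
            if len(members):                     # guard against empty clusters
                centroids[j] = members.mean(axis=0)
    return centroids, assign

# Usage: two well-separated blobs should be recovered cleanly.
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
centroids, labels = kmeans(X, K=2)
```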


K-Means Example (K = 2)
[Figure: a 2-D point set traced through one run: pick seeds → reassign clusters → compute centroids → reassign clusters → compute centroids → reassign clusters → converged!]


Example
[Figure: worked example with K = 2 on a 10×10 grid: arbitrarily choose K objects as initial cluster centers → assign each object to the most similar center → update the cluster means → reassign → update the cluster means → reassign.]


Termination Conditions
◼ Several possibilities, e.g.,
  ◆ A fixed number of iterations.
  ◆ The data partition is unchanged.
  ◆ Centroid positions don't change. Does this mean that the data in a cluster are unchanged?


Convergence
◼ Why should the K-means algorithm ever reach a fixed point?
  ◆ A state in which clusters don't change.
◼ K-means is a special case of a general procedure known as the Expectation Maximization (EM) algorithm.
  ◆ EM is known to converge.
  ◆ Number of iterations could be large.
    ➢ But in practice it usually isn't.
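To make the K-means/EM connection concrete, here is a minimal EM sketch for a mixture of spherical Gaussians with a shared, fixed variance. This is our own illustration, not the slides' derivation: only the means and mixing weights are learned, and every name in it (`em_gmm`, `sigma2`) is ours.

```python
import numpy as np

def em_gmm(X, K, sigma2=1.0, n_iter=50, seed=0):
    """EM for K spherical Gaussians with shared, fixed variance sigma2.
    As sigma2 -> 0, the E-step's soft responsibilities harden into
    K-means' nearest-centroid assignment."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=K, replace=False)].copy()   # (K, D) means
    pi = np.full(K, 1.0 / K)                                   # mixing weights
    for _ in range(n_iter):
        # E-step: responsibility of component j for point i.
        sq = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)   # (N, K)
        log_r = np.log(pi)[None, :] - sq / (2.0 * sigma2)
        log_r -= log_r.max(axis=1, keepdims=True)                  # numerical stability
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate means and weights from responsibilities.
        Nk = r.sum(axis=0)
        mu = (r.T @ X) / Nk[:, None]
        pi = Nk / len(X)
    return mu, pi, r
```

With hard 0/1 responsibilities, the M-step's weighted mean reduces exactly to K-means' centroid update, which is the sense in which K-means is a special (hard-assignment) case of EM.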


Time Complexity
◼ Computing the distance between two data points is O(D), where D is the dimensionality of the vectors.
◼ Reassigning clusters: O(KN) distance computations, i.e., O(KND).
◼ Computing centroids: each point gets added once to some centroid: O(ND).
◼ Assuming these two steps are each done once per iteration, for I iterations: O(IKND).
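As a back-of-envelope check on the dominant O(IKND) term (the numbers here are illustrative, not from the slides):

```python
# Rough scalar-operation count for the reassignment step over a whole run.
N, K, D, I = 1_000_000, 100, 128, 50       # points, clusters, dims, iterations
ops = I * K * N * D
print(f"{ops:.2e} scalar ops")             # ~6.4e14: large, but tractable when vectorized
```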


Strengths of K-Means Clustering
◼ Relatively scalable in processing large data sets.
◼ Relatively efficient: O(tkn), where n is the number of objects, k the number of clusters, and t the number of iterations. Normally, k, t << n.
◼ Often terminates at a local optimum; the global optimum may be found using techniques such as genetic algorithms.


Weaknesses of K-Means Clustering
◼ Applicable only when the mean of the objects is defined.
◼ The number of clusters, k, must be specified in advance.
◼ Unable to handle noisy data and outliers.
◼ Not suitable for discovering clusters with non-convex shapes or of very different sizes (see the sketch below).
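A quick illustration of the non-convexity weakness, using scikit-learn's two-moons toy data (our example, not from the slides): K-means with K = 2 splits each crescent roughly in half instead of recovering the two moons.

```python
from sklearn.datasets import make_moons
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

# Two interleaving, non-convex crescents.
X, y_true = make_moons(n_samples=400, noise=0.05, random_state=0)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# A score well below 1 shows the K-means partition disagrees with the true crescents.
print("Adjusted Rand index:", adjusted_rand_score(y_true, labels))
```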
