
Clustering [assessment] algorithm with distance matrix as an input


Can anyone suggest some clustering algorithm which can work with a distance matrix as input? Or an algorithm which can assess the "goodness" of the clustering, also based on the distance matrix?

At the moment I'm using a modification of Kruskal's algorithm (http://en.wikipedia.org/wiki/Kruskal%27s_algorithm) to split the data into two clusters. It has a problem, though: when the data has no distinct clusters, the algorithm will still create two clusters, with one cluster containing a single element and the other containing all the rest. In that case I would rather have one cluster containing all the elements and another one which is empty.
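For concreteness, here is a minimal sketch of this kind of Kruskal/MST-style split (not necessarily the exact modification from the question, just the same idea, assuming scipy): cut the heaviest edge of the minimum spanning tree and take the two resulting components. It reproduces the failure mode described, since with no real structure the heaviest edge often just isolates one outlying point:

import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def kruskal_split(D):
    # D: square distance matrix. Cut the heaviest edge of the minimum
    # spanning tree and return the two resulting component labels.
    # (csgraph treats exact zeros as missing edges, so zero distances
    # between distinct points would need an epsilon in real use.)
    mst = minimum_spanning_tree(D).tocoo()        # the N-1 MST edges
    keep = np.ones(len(mst.data), dtype=bool)
    keep[np.argmax(mst.data)] = False             # drop the heaviest edge
    pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])),
                        shape=D.shape)
    _, labels = connected_components(pruned, directed=False)
    return labels                                 # 0/1 cluster labels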

Are there any algorithms which are capable of doing this type of clustering?

Are there any algorithms which can estimate how well the clustering was done, or better yet, how many clusters there are in the data?

The algorithms should work only with distance (similarity) matrices as input.


Or the algorithm which can assess the "goodness" of the clustering also based on the distance matrix?

KNN should be useful in assessing the “goodness” of a clustering assignment. Here's how:

Given a distance matrix with each point labeled according to the cluster it belongs to (its “cluster label”):

  1. Test the cluster label of each point against the cluster label implied by k-nearest-neighbors classification
  2. If the k nearest neighbors imply an alternative cluster, that misclassified point lowers the overall "goodness" rating of the cluster
  3. Sum up the "goodness rating" contributions from each of your points to get a total "goodness rating" for the whole clustering (see the sketch after this list)
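A minimal sketch of this measure, working directly from a square distance matrix D and an integer cluster label per point (knn_goodness is a hypothetical name; assumes numpy):

import numpy as np

def knn_goodness(D, labels, k=5):
    # Fraction of points whose k nearest neighbours vote for the point's
    # own cluster label; 1.0 means a perfectly self-consistent labelling.
    D, labels = np.asarray(D, float), np.asarray(labels)
    agree = 0
    for i in range(len(D)):
        order = np.argsort(D[i])                     # nearest first
        neighbours = order[order != i][:k]           # drop the point itself
        votes = np.bincount(labels[neighbours])
        agree += int(np.argmax(votes) == labels[i])  # majority-vote test
    return agree / len(D)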

Unlike k-means cluster analysis, your algorithm will return information about poorly categorized points. You can use that information to reassign certain points to a new cluster, thereby improving the overall "goodness" of your clustering.

Since the algorithm knows nothing about the placement of the cluster centroids, and hence nothing about the global cluster density, the only way to ensure clusters that are both locally and globally dense is to run the algorithm for a range of k values and find an assignment that maximizes the goodness across that range.
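Continuing the sketch above, the sweep over k is then just (hypothetical usage):

scores = {k: knn_goodness(D, labels, k) for k in range(3, 12)}
# a labelling that scores well at small k is locally dense, and one that
# holds up at large k is globally dense; look for consistently high scores
print(min(scores.values()), scores)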

For a significant number of points, you'll probably need to optimize this algorithm, possibly with a hash table to keep track of the nearest points relative to each point. Otherwise this algorithm will take quite a while to compute.


Some approaches that can be used to estimate the number of clusters are:

  • Minimum Description Length
  • Bayesian Information Criterion
  • The gap statistic (sketched below)
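Of these, the gap statistic (Tibshirani et al.) is perhaps the easiest to sketch. Note that the within-cluster dispersion itself can be computed from a distance matrix alone, but the uniform reference distribution needs raw coordinates, so this sketch assumes you have the points and uses scipy's k-means (function names here are illustrative):

import numpy as np
from scipy.cluster.vq import kmeans2

def log_wk(pts, k):
    # log of the within-cluster dispersion: total squared distance
    # from each point to its cluster centroid
    centroids, labels = kmeans2(pts, k, minit="++")
    return np.log(sum(((pts[labels == j] - centroids[j]) ** 2).sum()
                      for j in range(k)))

def gap_estimate(pts, kmax=10, n_ref=10):
    # gap(k) = E[log W_k under a uniform reference] - log W_k of the data;
    # the k with the largest gap is the estimated number of clusters
    rng = np.random.default_rng(0)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    gaps = [np.mean([log_wk(rng.uniform(lo, hi, pts.shape), k)
                     for _ in range(n_ref)]) - log_wk(pts, k)
            for k in range(1, kmax + 1)]
    return int(np.argmax(gaps)) + 1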


scipy.cluster.hierarchy runs 3 steps, just like Matlab(TM) clusterdata:

import scipy.spatial.distance as dist
import scipy.cluster.hierarchy as hier

Y = dist.pdist( pts )  # condensed distance matrix -- you have this already
Z = hier.linkage( Y, method )  # the N-1 merges of the tree, e.g. method="single"
T = hier.fcluster( Z, ncluster, criterion=criterion )  # flat labels, e.g. criterion="maxclust"
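Since the question starts from a distance matrix, the pdist step can be skipped: linkage accepts the condensed form, which squareform produces from a square matrix D:

Y = dist.squareform( D, checks=False )  # square matrix -> condensed vector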

Here linkage with method="single" is essentially Kruskal's algorithm: it grows a minimum spanning tree over the distance graph. This SO answer (ahem) uses the above.
As a measure of clustering quality, radius = rms distance to the cluster centre is fast and reasonable for 2d/3d points.
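For instance, a minimal sketch of that radius measure (assuming numpy and raw points):

import numpy as np

def cluster_radius(pts, labels):
    # rms distance from each cluster's points to that cluster's centroid
    radii = {}
    for c in np.unique(labels):
        members = pts[labels == c]
        centre = members.mean(axis=0)
        radii[c] = np.sqrt(((members - centre) ** 2).sum(axis=1).mean())
    return radii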

Tell us about your Npt, ndim, ncluster, hier/flat? Clustering is a largish area; one size does not fit all.
