Clustering on a large dataset

I'm trying to cluster a large (gigabyte-scale) dataset. In order to cluster, you need the distance from every point to every other point, so you end up with an N^2-sized distance matrix, which in the case of my dataset would be on the order of exabytes. Pdist in Matlab blows up instantly, of course ;)

Is there a way to cluster subsets of the large data first, and then maybe do some merging of similar clusters?

I don't know if this helps any, but the data are fixed-length binary strings, so I'm calculating their distances using the Hamming distance (distance = number of set bits in string1 XOR string2).
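For concreteness, a minimal sketch of that distance, assuming each string is packed into a Python int (the names here are illustrative, not from the question):

    # Hamming distance between two fixed-length binary strings,
    # i.e. the number of set bits in their XOR.
    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    # 0b10110100 and 0b10011100 differ in 2 bit positions.
    assert hamming(0b10110100, 0b10011100) == 2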


A simplified version of the nice method from Tabei et al., "Single versus Multiple Sorting in All Pairs Similarity Search", say for pairs with Hammingdist = 1:

  • sort all the bit strings on the first 32 bits
  • look at blocks of strings where the first 32 bits are all the same; these blocks will be relatively small
  • pdist each of these blocks: within a block Hammingdist( left 32 ) = 0, so you only need pairs with Hammingdist( the rest ) <= 1.

This misses the pairs (a fraction of e.g. 32/128 of the nearby ones) which have Hammingdist( left 32 ) = 1 and Hammingdist( the rest ) = 0. If you really want these, repeat the above with "first 32" -> "last 32".

The method can be extended. Take for example Hammingdist <= 2 on four 32-bit words; the mismatches must split across the words like one of 2000 0200 0020 0002 1100 1010 1001 0110 0101 0011, so at least 2 of the words must match exactly: sort on those the same way.
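A rough sketch of the first-32-bits blocking step (not the authors' code; it assumes 128-bit strings held as Python ints and finds the distance-1 pairs whose differing bit lies outside the prefix):

    from collections import defaultdict

    def pairs_within_dist1(strings, total_bits=128, prefix_bits=32):
        """Pairs with total Hamming distance <= 1 whose differing bit
        (if any) lies outside the first `prefix_bits` bits."""
        rest_bits = total_bits - prefix_bits
        blocks = defaultdict(list)
        # Group ("sort") the strings by their first 32 bits.
        for i, s in enumerate(strings):
            blocks[s >> rest_bits].append((i, s))
        # Within a block the prefix distance is 0, so only the
        # remaining bits need to be compared pairwise.
        result = []
        for block in blocks.values():
            for a in range(len(block)):
                for b in range(a + 1, len(block)):
                    i, s = block[a]
                    j, t = block[b]
                    if bin(s ^ t).count("1") <= 1:
                        result.append((i, j))
        return result

As the answer notes, repeating this with the prefix taken from the other end of the string catches the pairs whose single differing bit falls in the first 32 bits.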

(Btw, the sketchsort-0.0.7.tar distribution is 99% src/boost/, build/, and .svn/.)


How about sorting them first? Maybe something like a modified merge sort? You could start with chunks of the dataset that fit in memory and perform a normal sort on each.

Once you have the sorted data, clustering could be done iteratively. Maybe keep a rolling centroid of the N-1 points seen so far in the current cluster and compare it against the Nth point being read in. Then, depending on your cluster distance threshold, you either pool it into the current cluster or start a new one.
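A minimal sketch of that single-pass idea, assuming the points arrive already sorted, a Hamming distance threshold, and a per-bit majority vote as the "rolling centroid" (all names and choices here are illustrative):

    def stream_cluster(points, n_bits, threshold):
        """Single-pass clustering of bit strings held as Python ints.
        The current cluster is summarized by per-bit counts of set bits;
        its centroid is the majority bit in each position."""

        def centroid(bit_counts, size):
            c = 0
            for pos, ones in enumerate(bit_counts):
                if 2 * ones > size:        # majority vote per bit position
                    c |= 1 << pos
            return c

        clusters = []   # list of (bit_counts, size)
        labels = []
        for p in points:
            if clusters:
                counts, size = clusters[-1]
                if bin(p ^ centroid(counts, size)).count("1") <= threshold:
                    for pos in range(n_bits):   # pool p into the current cluster
                        counts[pos] += (p >> pos) & 1
                    clusters[-1] = (counts, size + 1)
                    labels.append(len(clusters) - 1)
                    continue
            # too far from the current centroid (or no cluster yet): start a new one
            clusters.append(([(p >> pos) & 1 for pos in range(n_bits)], 1))
            labels.append(len(clusters) - 1)
        return labels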


The EM-tree and K-tree algorithms in the LMW-tree project can cluster problems this big and larger. Our most recent result is clustering 733 million web pages into 600,000 clusters. There is also a streaming variant of the EM-tree where the dataset is streamed from disk for each iteration.

Additionally, these algorithms can cluster bit strings directly: cluster representatives and data points are both bit strings, and Hamming distance is the similarity measure, which is minimized within each cluster found.
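This is not the LMW-tree API, but for intuition: the bit-string representative that minimizes the summed Hamming distance to a cluster's members is the per-bit majority vote, e.g.:

    def majority_representative(members, n_bits):
        """Bit string (as an int) minimizing the total Hamming distance
        to `members`: take the majority value of each bit position."""
        rep = 0
        for pos in range(n_bits):
            ones = sum((m >> pos) & 1 for m in members)
            if 2 * ones > len(members):
                rep |= 1 << pos
        return rep

    # e.g. members 0b1100, 0b1101, 0b0100 -> representative 0b1100
    assert majority_representative([0b1100, 0b1101, 0b0100], 4) == 0b1100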
