If we have a set of M words and know the similarity in meaning of each pair of words in advance (i.e. we have an M x M matrix of similarities), which algorithm can we use to produce one k-dimensional bit vector for each word, so that each pair of words can be compared just by comparing their vectors (e.g. taking the absolute difference of the vectors)?
I don't know what this particular problem is called. If I knew, it would be much easier to pick it out from the many algorithms with similar descriptions that actually do something else.
Additional observation:
I think this algorithm would have to produce one side effect, which in this case is desirable. If, according to the matrix, word A is similar to word B and B is similar to C, but the [A, C] similarity is low, then the difference of the calculated result vectors should indicate high [A, C] similarity as well. So we would fill in the previous gaps in the matrix, smoothing the similarities somehow. But besides this smoothing, the goal is to have results as close as possible to the original numbers in the matrix.
You could do a truncated singular value decomposition (SVD) to find the best rank-k approximation to the matrix. The idea is to decompose the matrix into three matrices U, sigma, and V such that U and V are orthonormal and sigma is diagonal. By truncating off the unimportant singular values, you can get down to O(k*m) storage space.
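A minimal sketch with NumPy of what that could look like (assuming `S` is your M x M similarity matrix and is symmetric; the function and variable names here are just illustrative, not from any particular library beyond NumPy):

```python
import numpy as np

def embed_from_similarity(S, k):
    """Embed each of the M words as a k-dimensional vector
    from an M x M similarity matrix S via a truncated SVD."""
    # S = U * diag(sigma) * V^T; for a symmetric S, U and V
    # coincide up to signs.
    U, sigma, Vt = np.linalg.svd(S)
    # Keep only the k largest singular values / vectors.
    Uk = U[:, :k]
    sigma_k = sigma[:k]
    # Scale coordinates by sqrt(sigma) so that dot products of the
    # embeddings approximate the original similarities:
    # X @ X.T  ~=  Uk diag(sigma_k) Uk.T  ~=  S
    X = Uk * np.sqrt(sigma_k)
    return X  # row i is the k-dimensional vector for word i

# Tiny made-up example:
S = np.array([[1.0, 0.8, 0.1],
              [0.8, 1.0, 0.7],
              [0.1, 0.7, 1.0]])
vectors = embed_from_similarity(S, k=2)
print(vectors)
print(vectors @ vectors.T)  # rank-2 "smoothed" approximation of S
```

Note that the rank-k reconstruction `vectors @ vectors.T` is exactly the kind of smoothing you describe: entries that were inconsistent with the dominant structure of the matrix get pulled toward values implied by the rest.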
If you are only interested in the first eigenvector and eigenvalue, power iteration will probably be useful. I once used it to extract keywords from text documents (based on inter-word distance within sentences, but similarity would probably work too).
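Roughly, power iteration looks like this (a sketch, again assuming a square similarity matrix `S`; names are illustrative):

```python
import numpy as np

def power_iteration(S, num_iters=1000, tol=1e-10):
    """Approximate the dominant eigenvalue/eigenvector of a square matrix S."""
    n = S.shape[0]
    v = np.random.rand(n)          # random starting vector
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        w = S @ v                  # multiply by the matrix
        norm = np.linalg.norm(w)
        if norm == 0:
            break
        w /= norm                  # re-normalize
        if np.linalg.norm(w - v) < tol:
            v = w
            break                  # converged
        v = w
    eigenvalue = v @ S @ v         # Rayleigh quotient
    return eigenvalue, v
```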