How is R able to find eigenvectors for the following matrix? The eigenvalues are 2, 2, so finding the eigenvectors seems to require solve(matrix(c(0,1,0,0),2,2)), which is a singular matrix with no solution.
> eigen(matrix(c(2,1,0,2),2,2))
$values
[1] 2 2

$vectors
     [,1]          [,2]
[1,]    0  4.440892e-16
[2,]    1 -1.000000e+00
> solve(matrix(c(0,1,0,0),2,2))
Error in solve.default(matrix(c(0, 1, 0, 0), 2, 2)) :
Lapack routine dgesv: system is exactly singular
Both routines essentially do the same thing: they find x such that (A - lambda*I)x = 0 without computing the inverse of A - lambda*I. Clearly (0, 1) is a solution, but I can't understand why solve did not come up with it, and how I would go about solving it manually.
You asked for an eigen decomposition, you got an eigen decomposition. Had you asked for rcond(), the reciprocal condition number of the matrix, or for kappa(), its condition number, you would have gotten the appropriate response.
For your second example:
> mat <- matrix(c(0,1,0,0), 2, 2)
> kappa(mat)
[1] Inf
>
> rcond(mat)
[1] 0
>
For your first example, there is actually no problem:
> mat <- matrix(c(2,1,0,2), 2, 2)
> kappa(mat)
[1] 1.772727
>
> rcond(mat)
[1] 0.5714286
>
>
See e.g. this previous question on SO for more.
Maybe it's using one of the algorithms listed here: http://en.wikipedia.org/wiki/List_of_numerical_analysis_topics#Eigenvalue_algorithms ?
According to http://stat.ethz.ch/R-manual/R-devel/library/base/html/eigen.html, eigen
seems to use the LAPACK routine at http://netlib.org/lapack/double/dgeev.f (if you have a square matrix which is not symmetric).
Note: you're right that A - lambda * I is singular if lambda is an eigenvalue, but in order to find eigenvectors one does not need to invert A - lambda * I or solve an equation y = (A - lambda * I) * x (with y not being the null vector). It is sufficient to find non-zero vectors x which satisfy
(A - lambda * I) * x = 0
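For the matrix from the question this is easy to check directly in base R (nothing beyond the question's own example is assumed here):

A <- matrix(c(2, 1, 0, 2), 2, 2)
lambda <- 2
x <- c(0, 1)                   # candidate eigenvector from the question
(A - lambda * diag(2)) %*% x   # c(0, 0): x is in the null space of A - lambda*I
A %*% x                        # equals lambda * x, i.e. c(0, 2)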
One strategy is to find a non-singular transformation matrix T such that (A - lambda * I) * T is an upper triangular matrix (i.e. all elements below the diagonal are zero). Because A - lambda * I is singular, T can be constructed such that the last element on the diagonal (or even more diagonal elements if the multiplicity of the eigenvalue is larger than one) is zero. A vector z whose only non-zero element is the last one (i.e. z = (0,...,0,1)) will then give the zero vector when multiplied with (A - lambda * I) * T. So one has:
0 = ((A - lambda * I) * T) * z
or in other words, T * z is an eigenvector of A.
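One way to realise this idea in base R is a QR decomposition of t(A - lambda * I): the trailing column(s) of the full Q factor are orthogonal to the rows of A - lambda * I, so they lie in its null space (essentially the same construction, with a lower instead of an upper triangular factor). This is only an illustrative sketch, not what LAPACK's dgeev actually does:

A <- matrix(c(2, 1, 0, 2), 2, 2)
lambda <- 2
M <- A - lambda * diag(nrow(A))   # singular, as discussed above

qrM <- qr(t(M))                   # QR decomposition of the transpose
Q <- qr.Q(qrM, complete = TRUE)   # full orthogonal factor
v <- Q[, ncol(Q)]                 # last column: spans the null space of M

M %*% v                           # (numerically) zero, so v is an eigenvector
v                                 # compare, up to sign, with eigen(A)$vectors[, 1]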
They are not doing the same thing. Just take a look at the Wikipedia link, and look at the code of dgeev.f: you will see that a simple solve of (A - lambda*I)x = 0 is not performed.