Determining Eigenvectors
A Few Conventions of Linear Algebra
1. Matrix multiplication follows the row-by-column convention. A mnemonic for remembering it is to think of RC Cola. For a vector (i.e., a matrix with a single column) in 2-space, x_{0}, the product is

Ax_{0} =
[a_{11} a_{12}] [x_{1}]   =   [a_{11}x_{1} + a_{12}x_{2}]
[a_{21} a_{22}] [x_{2}]       [a_{21}x_{1} + a_{22}x_{2}]

2. Notice that an n x m matrix multiplied by a vector in m-space yields a vector in n-space.
3. An n x 1 vector multiplied by its transpose yields an n x n matrix.
4. Normalizing a vector divides every element by the square root of the sum of the squares of the elements. For example, to normalize the vector x = [3 4 5 3 4 5], we compute x = 1/(9 + 16 + 25 + 9 + 16 + 25)^{1/2}[3 4 5 3 4 5] = 1/(100)^{1/2}[3 4 5 3 4 5] = [.3 .4 .5 .3 .4 .5].
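These conventions translate directly into array code. A quick illustration in Python with NumPy (the language and the sample arrays are our choice, purely for illustration):

```python
import numpy as np

A = np.arange(6.0).reshape(3, 2)    # a 3 x 2 matrix (n = 3, m = 2)
v = np.array([1.0, 2.0])            # a vector in 2-space
print(A @ v)                        # row-by-column product: a vector in 3-space

x = np.array([3.0, 4.0, 5.0, 3.0, 4.0, 5.0])
print(np.outer(x, x).shape)         # (6, 6): an n x 1 vector times its transpose

print(x / np.sqrt(np.sum(x**2)))    # normalized: [0.3 0.4 0.5 0.3 0.4 0.5]
```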
Determining Eigenvalues and Eigenvectors
For an n x n matrix, A = [a_{ij}], determine the characteristic equation by setting the determinant of A - λI equal to zero, denoted det(A - λI) = 0. (λI multiplies every element of I by λ, i.e., it is the matrix with λ's along the diagonal and 0's everywhere else. Hence, A - λI is simply A with λ subtracted from each entry along the diagonal.) To find the determinant of a matrix, consider every possible permutation of the columns (not the rows), add the product along the diagonal of every permuted matrix with an even number of inversions, and subtract the product along the diagonal of every permuted matrix with an odd number of inversions. For a 3 x 3 matrix, there are 3! = 6 possible permutations of the columns. An easy method for writing down every possible diagonal product for matrices with 2 or 3 columns is best illustrated as follows.
The determinant is a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}. For A - λI, the characteristic equation is

(a_{11} - λ)(a_{22} - λ)(a_{33} - λ) + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}(a_{22} - λ)a_{31} - (a_{11} - λ)a_{23}a_{32} - a_{12}a_{21}(a_{33} - λ) = 0.

Unfortunately, this representation does not work for larger matrices, such as a 4 x 4 matrix with 4! = 24 permutations of the columns. The best way to find eigenvalues of larger matrices is the power method (or iteration method) described below.
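To make the expansion concrete, here is a short Python sketch (the function name det3_sarrus and the sample matrix are our own, not part of the original presentation) that computes a 3 x 3 determinant from the six diagonal products and checks a root of the characteristic equation:

```python
import numpy as np

def det3_sarrus(a):
    """3 x 3 determinant from the six diagonal products:
    even column permutations are added, odd ones subtracted."""
    return (a[0,0]*a[1,1]*a[2,2] + a[0,1]*a[1,2]*a[2,0] + a[0,2]*a[1,0]*a[2,1]
          - a[0,2]*a[1,1]*a[2,0] - a[0,0]*a[1,2]*a[2,1] - a[0,1]*a[1,0]*a[2,2])

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(det3_sarrus(A))                  # 8.0
print(np.linalg.det(A))                # 8.0, agrees

# Eigenvalues are the roots of det(A - lambda*I) = 0; lambda = 1 is one of them:
print(det3_sarrus(A - 1.0*np.eye(3)))  # 0.0
```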
For an n x n matrix, B, the power method proceeds as follows:

1. Choose a nonzero starting vector, x_{0}.
2. Compute Bx_{i - 1}.
3. Estimate the eigenvalue λ with the Rayleigh quotient, (x_{i - 1} · Bx_{i - 1})/(x_{i - 1} · x_{i - 1}).
4. Scale Bx_{i - 1} down (divide it by its largest entry) to obtain the next vector, x_{i}.
5. Repeat steps 2 through 4 until the estimated relative error, (λ_{i} - λ_{i - 1})/λ_{i}, is close enough to 0.
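These steps translate directly into a short program. The following Python sketch (the function name, tolerance, and iteration cap are our own choices) implements the loop just described:

```python
import numpy as np

def power_method(B, x0, tol=1e-10, max_iter=100):
    """Estimate the dominant eigenpair of B by iteration: multiply by B,
    estimate the eigenvalue with the Rayleigh quotient, scale down by the
    entry of largest magnitude, and repeat until the estimate settles."""
    x = np.asarray(x0, dtype=float)
    lam_old = np.inf
    for _ in range(max_iter):
        Bx = B @ x
        lam = (x @ Bx) / (x @ x)                 # Rayleigh quotient
        if abs((lam - lam_old) / lam) < tol:     # estimated relative error
            break
        x = Bx / Bx[np.argmax(np.abs(Bx))]       # scale down
        lam_old = lam
    return lam, x / np.linalg.norm(x)            # eigenvalue, normalized vector
```

In general the iteration converges to the eigenvalue of largest magnitude; which one you reach can depend on the starting vector, as the example below illustrates.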
As an example, use the 4 x 4 matrix of ESP data for suits (see MDS for the B matrix used in this example) and begin with a nonzero vector. Notice that a vector of all 1's would force Bx_{0} = 0. Intuitively, we should let the diagonal of B influence our choice. We could let x_{0} = [10 1 10 9]. However, the concept of scaling down encourages us to let x_{0} = (1/10)[10 1 10 9] = [1 .1 1 .9]. Now we have

Bx_{0} = [-.6 -.6 -.6 1.8].
The Rayleigh quotient is [(1)(-.6) + (.1)(-.6) + (1)(-.6) + (.9)(1.8)]/[1^{2} + .1^{2} + 1^{2} + .9^{2}] = (1.62 - 1.26)/2.82 = .36/2.82 = 0.1277, which is the current estimate for λ. Scaling down, we have
x_{1} = (1/1.8)Bx_{0} = [-.333 -.333 -.333 1] and Bx_{1} = [-3.999 -3.999 -3.999 11.997].
The Rayleigh quotient is [(-.333)(-3.999)*3 + (1)(11.997)]/[3*(.333)^{2} + (1)^{2}] = 15.992/1.333 = 12. The estimated relative error is (12 - 0.1277)/12 = .9894, which is 98.94%. Clearly, this is much too high. So, we scale down and repeat the process with
x_{2} = (1/11.997)Bx_{1} = [-.333 -.333 -.333 1] and Bx_{2} = [-3.999 -3.999 -3.999 11.997].
Because x_{1} = x_{2} and Bx_{1} = Bx_{2}, the Rayleigh quotient is the same as in the last iteration, 15.992/1.333 = 12. Hence, the estimated relative error is (12 - 12)/12 = 0. Rounding our results and normalizing our first eigenvector, x_{e1}, we have λ_{1} = 12 and x_{e1} = (1/(16 + 16 + 16 + 144)^{1/2})[-4 -4 -4 12] = [-.289 -.289 -.289 .866].
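The whole computation can be replayed in a few lines. The B matrix below is not given in this section; it is reconstructed from the numbers in the example above (the original refers the reader to the MDS write-up for it), so treat it as an assumption:

```python
import numpy as np

# ESP-suits matrix as reconstructed from the computations above (assumed).
B = np.array([[10.0,  1.0, -8.0, -3.0],
              [ 1.0,  1.0,  1.0, -3.0],
              [-8.0,  1.0, 10.0, -3.0],
              [-3.0, -3.0, -3.0,  9.0]])

x0 = np.array([1.0, 0.1, 1.0, 0.9])
print(B @ x0)                 # [-0.6 -0.6 -0.6  1.8]
print(B @ np.ones(4))         # [0. 0. 0. 0.]: a vector of all 1's kills B

lam, v = power_method(B, x0)  # power_method from the sketch above
print(lam, v)                 # ~12.0 and [-0.289 -0.289 -0.289 0.866]
```

Note that this x_{0} happens to have no component along the eigenvector [-1 0 1 0] found below, which is why the iteration settles on 12 rather than on the larger eigenvalue 18.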
Now that we have an eigenvalue and a corresponding eigenvector, we can factor them out of B, known as "deflating" B, before finding the next eigenvalue. With x = x_{e1}, the deflated matrix is B - 12*x*x^{T}:
12*x*x^{T} =
[ 1  1  1 -3]
[ 1  1  1 -3]
[ 1  1  1 -3]
[-3 -3 -3  9]

B - 12*x*x^{T} =
[ 9  0 -9  0]
[ 0  0  0  0]
[-9  0  9  0]
[ 0  0  0  0]
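In code, deflation is a single outer-product subtraction (continuing from the snippet above, where the reconstructed B was defined):

```python
import numpy as np

lam1 = 12.0
x_e1 = np.array([-1.0, -1.0, -1.0, 3.0]) / np.sqrt(12.0)  # normalized eigenvector

B2 = B - lam1 * np.outer(x_e1, x_e1)   # deflate: remove the found eigenpair
print(B2)   # rows [9 0 -9 0], [0 0 0 0], [-9 0 9 0], [0 0 0 0]
```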
Using the matrix B - 12*x*x^{T}, we can run the power method in a spreadsheet to find the next eigenvalue and eigenvector. Note: the initial x_{0} is chosen as [0.25 0.5 0.75 1].
x_{i}                Bx_{i}               Rayleigh quotient     Error
[.25 .5 .75 1]       [-4.5  0  4.5  0]    2.25/1.875 = 1.2      16.074
[-1  0  1  0]        [-18   0  18   0]    36/2 = 18             0.933
[-1  0  1  0]        [-18   0  18   0]    36/2 = 18             0.000

(Each new x_{i} is the previous Bx_{i - 1} scaled down by its largest entry, first 4.5 and then 18; the error compares successive Rayleigh quotients, e.g. (18 - 1.2)/18 = 0.933 in the second row.)
We have λ_{2} = 18 with a normalized eigenvector of x_{e2} = [-.707 0 .707 0].
Try writing a small program that uses this method in metric MDS; one possible shape for it is sketched below. Note: the first sample matrix is the same as in Table 8 of the metric MDS example.
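This sketch builds on power_method and the reconstructed B above; the helper top_eigenpairs and the choice of starting vectors are our own, and the final step uses the standard metric-MDS recipe of scaling each eigenvector by the square root of its eigenvalue:

```python
import numpy as np

def top_eigenpairs(B, starts):
    """Leading eigenpairs by the power method with deflation.
    `starts` holds one starting vector per desired eigenpair."""
    B = B.copy()
    pairs = []
    for x0 in starts:
        lam, v = power_method(B, x0)      # from the sketch above
        pairs.append((lam, v))
        B = B - lam * np.outer(v, v)      # deflate before the next pass
    return pairs

# Metric MDS: coordinates are eigenvectors scaled by sqrt(eigenvalue)
# (this assumes the retained eigenvalues are nonnegative).
pairs = top_eigenpairs(B, [np.array([1.0, 0.1, 1.0, 0.9]),
                           np.array([0.25, 0.5, 0.75, 1.0])])
coords = np.column_stack([np.sqrt(lam) * v for lam, v in pairs])
print(coords)   # one 2-D point per row, here one per ESP suit
```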