Determining Eigenvectors

 

 

A Few Conventions of Linear Algebra

    1. "I" represents the identity matrix with 1's down the diagonal and 0's everywhere else.
    2. Scalar multiplication multiplies every entry of a matrix by the scalar. For example, 2A multiplies every entry of A by 2.
    3. Matrix products are calculated by summing products of rows by columns: the (i, j) entry of AB is the sum of the products of the entries in row i of A with the corresponding entries in column j of B. For example, if A is a 3 x 2 (3 rows, 2 columns) matrix and B is a 2 x 2 matrix, then AB is a 3 x 2 matrix with entries (AB)ij = ai1b1j + ai2b2j. A mnemonic for remembering the row-by-column convention is to think of RC Cola. For a vector (i.e. a matrix with a single column) in 2-space, x0, the product Ax0 is a vector in 3-space whose ith entry is ai1x1 + ai2x2. In general, an n x m matrix multiplied by a vector in m-space yields a vector in n-space, and an n x 1 vector multiplied by its transpose (a 1 x n row vector) yields an n x n matrix.

    4. Normalizing a vector divides every element by the square root of the sum of the squares of the elements. For example, to normalize x = [3 4 5 3 4 5], we would have x = (1/(9 + 16 + 25 + 9 + 16 + 25)^(1/2))[3 4 5 3 4 5] = (1/(100)^(1/2))[3 4 5 3 4 5] = [.3 .4 .5 .3 .4 .5]. (These conventions are illustrated in the sketch below.)
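A minimal Python/NumPy sketch of these conventions; the matrices A and B and the vector x are made-up values used only for illustration:

    import numpy as np

    A = np.array([[1., 2.],
                  [3., 4.],
                  [5., 6.]])          # hypothetical 3 x 2 matrix
    B = np.array([[1., 0.],
                  [2., 1.]])          # hypothetical 2 x 2 matrix

    I = np.eye(2)                     # identity matrix: 1's on the diagonal, 0's elsewhere
    print(2 * A)                      # scalar multiplication: every entry of A doubled
    print(A @ B)                      # row-by-column product, a 3 x 2 result
    print(A @ np.array([1., 2.]))     # 3 x 2 matrix times a vector in 2-space -> vector in 3-space

    x = np.array([3., 4., 5., 3., 4., 5.])
    print(np.outer(x, x).shape)       # n x 1 vector times its transpose -> (6, 6)
    print(x / np.sqrt(np.sum(x**2)))  # normalization -> [.3 .4 .5 .3 .4 .5]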

 

Determining Eigenvalues and Eigenvectors

 

For an n x n matrix, A = [aij], determine the characteristic equation by setting the determinant of A - λI equal to zero, denoted det(A - λI) = 0. (λI multiplies every element of I by λ, i.e. it is the matrix with λ's along the diagonal and 0's everywhere else. Hence, A - λI is simply A with λ subtracted from each entry along the diagonal.) To find the determinant of a matrix, consider every possible permutation of the columns; for each permutation, form the product of the diagonal entries of the rearranged matrix, add the products from permutations with an even number of inversions, and subtract the products from permutations with an odd number of inversions. For a 3 x 3 matrix, there are 3! = 6 possible permutations of the columns. An easy way to write down every such diagonal product for a 3 x 3 matrix is to copy the first two columns to the right of the matrix, add the products along the three downward diagonals, and subtract the products along the three upward diagonals (for a 2 x 2 matrix the rule reduces to the product of the main diagonal minus the product of the off-diagonal).

The determinant is a11a22a33 + a12a23a31 + a13a21a32 - a13a22a31 - a11a23a32 - a12a21a33. For A - λI, the characteristic equation is (a11 - λ)(a22 - λ)(a33 - λ) + a12a23a31 + a13a21a32 - a13(a22 - λ)a31 - (a11 - λ)a23a32 - a12a21(a33 - λ) = 0. Unfortunately, this diagonal shortcut does not work for larger matrices, such as a 4 x 4 matrix with 4! = 24 permutations of the columns. A practical way to find eigenvalues of larger matrices is the power method, or iteration method, described below.
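As a sketch of the permutation definition of the determinant described above (the 3 x 3 matrix at the end is a made-up example):

    from itertools import permutations

    def det(A):
        """Determinant by permutation expansion: for each permutation of the columns,
        multiply the diagonal entries of the rearranged matrix, adding the product when
        the permutation has an even number of inversions and subtracting it when odd."""
        n = len(A)
        total = 0
        for perm in permutations(range(n)):
            inversions = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
            sign = 1 if inversions % 2 == 0 else -1
            prod = 1
            for row in range(n):
                prod *= A[row][perm[row]]
            total += sign * prod
        return total

    # 2*3*1 + 0*0*0 + 1*1*1 - 1*3*0 - 2*0*1 - 0*1*1 = 7
    print(det([[2, 0, 1],
               [1, 3, 0],
               [0, 1, 1]]))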

For an n x n matrix, B:

    1. Pick any nonzero vector, x0. (That is, i = 0.)
    2. Multiply to get Bxi.
    3. Calculate the Rayleigh quotient, the current estimate of the eigenvalue, using λi = (xi · Bxi)/(xi · xi), i.e. the sum of the element-by-element products of xi and Bxi divided by the sum of the squares of the elements of xi.
    4. Unless i = 0, calculate the estimated relative error, |λi - λi-1|/λi, and the estimated percentage error, which is the relative error multiplied by 100%.
    5. If i = 0 or the estimated error is not within a predetermined acceptable range, repeat steps 2 through 4 using the scaled-down vector, xi+1, which is Bxi multiplied by the multiplicative inverse of the element of Bxi with the largest absolute value. (A sketch of this loop appears after the list.)
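A minimal Python/NumPy sketch of these steps; the helper name power_method is just for illustration, and the matrix B, the starting vector, and the tolerance are left for the reader to supply:

    import numpy as np

    def power_method(B, x0, tol=1e-6, max_iter=100):
        """Power method with Rayleigh-quotient estimates: multiply, estimate the
        eigenvalue, check the relative error, scale down, and repeat."""
        x = np.asarray(x0, dtype=float)
        lam_prev = None
        for _ in range(max_iter):
            Bx = B @ x                              # step 2: multiply
            lam = (x @ Bx) / (x @ x)                # step 3: Rayleigh quotient
            if lam_prev is not None and abs(lam - lam_prev) / abs(lam) < tol:
                break                               # step 4: estimated relative error is acceptable
            lam_prev = lam
            x = Bx / Bx[np.argmax(np.abs(Bx))]      # step 5: scale down by the largest-magnitude element
        return lam, x / np.linalg.norm(x)           # eigenvalue estimate and normalized eigenvector

Applied to the ESP matrix B of the example below with x0 = [1, .1, 1, .9], this loop should reproduce the eigenvalue 12 and the normalized eigenvector [-.289 -.289 -.289 .866] worked out by hand.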

As an example, use the 4 x 4 matrix of ESP data for suits (see MDS for the B matrix used in this example) and begin with a nonzero vector. Notice that a vector of all 1's would force Bx0 = 0. Intuitively, we should let the diagonal of B influence our choice. We could let x0 = [10 1 10 9]. However, the concept of scaling down encourages us to let x0 = (1/10)[10 1 10 9] = [1 .1 1 .9]. Now we have

Bx0 = [-.6 -.6 -.6 1.8].

The Rayleigh quotient is [(1)(-.6) + (.1)(-.6) + (1)(-.6) + (.9)(1.8)]/[1^2 + .1^2 + 1^2 + .9^2] = (1.62 - 1.26)/2.82 = .36/2.82 = 0.128, which is the current estimate for λ. Scaling down (dividing Bx0 by 1.8, its element of largest absolute value), we have

x1 = Bx0/1.8 = [-.333 -.333 -.333 1] and Bx1 = [-3.999 -3.999 -3.999 11.997].

The Rayleigh quotient is [3(-.333)(-3.999) + (1)(11.997)]/[3(-.333)^2 + 1^2] = 15.992/1.333 = 12. The estimated relative error is (12 - 0.128)/12 = .989, which is about 98.9%. Clearly, this is much too high. So, we scale down and repeat the process with

x2 = Bx1/11.997 = [-.333 -.333 -.333 1] and Bx2 = [-3.999 -3.999 -3.999 11.997].

Because x1 = x2 and Bx1 = Bx2, the Rayleigh quotient is the same as in the last iteration, 15.992/1.333 = 12. Hence, the estimated relative error is (12 - 12)/12 = 0. Rounding our results and normalizing our first eigenvector, xe1, we have λ1 = 12 and xe1 = (1/(16 + 16 + 16 + 144)^(1/2))[-4 -4 -4 12] = [-.289 -.289 -.289 .866].
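A quick numerical check of this hand calculation, using the (rounded) vectors from the example above:

    import numpy as np

    x1  = np.array([-1/3, -1/3, -1/3, 1.0])     # scaled-down vector from the example
    Bx1 = np.array([-4.0, -4.0, -4.0, 12.0])    # Bx1, rounded from [-3.999 -3.999 -3.999 11.997]
    print((x1 @ Bx1) / (x1 @ x1))               # Rayleigh quotient: 12.0

    xe1 = np.array([-4.0, -4.0, -4.0, 12.0])
    print(xe1 / np.linalg.norm(xe1))            # normalized eigenvector: [-.289 -.289 -.289 .866]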

Now that we have an eigenvalue and corresponding eigenvector, we can remove their contribution from B, known as "deflating" B, before finding the next eigenvalue. With x the normalized eigenvector xe1, the deflated matrix is B - 12*x*xT:

12*x*xT =

     1.000   1.000   1.000  -3.000
     1.000   1.000   1.000  -3.000
     1.000   1.000   1.000  -3.000
    -3.000  -3.000  -3.000   9.000

B - 12*x*xT =

     9.000   0.000  -9.000   0.000
     0.000   0.000   0.000   0.000
    -9.000   0.000   9.000   0.000
     0.000   0.000   0.000   0.000
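The left matrix can be checked directly from the normalized eigenvector found above (the small differences come from using the three-decimal vector):

    import numpy as np

    x = np.array([-0.289, -0.289, -0.289, 0.866])   # normalized eigenvector xe1
    print(12 * np.outer(x, x))                      # approximately the 12*x*xT table above
    # Subtracting this from B (see the MDS page for B) gives the deflated matrix B - 12*x*xT.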

Using the matrix B - 12*x*xT, we can run the power method in a spreadsheet to find the next eigenvalue and eigenvector. Note: the initial x0 is chosen as [0.25 0.5 0.75 1].

In each four-row block below (one block per iteration), the Rayleigh Quotient column lists the numerator xi · Bxi, the denominator xi · xi, and the quotient itself; the Error entry in the first block has no previous eigenvalue estimate to compare against and can be ignored.

    Bxi-1       xi        Bxi      Rayleigh Quotient      Error
    -------   ------   --------   -------------------   --------
       -       0.250     -4.500     2.250  (xi · Bxi)    -16.074
       -       0.500      0.000     1.875  (xi · xi)
       -       0.750      4.500     1.200  (quotient)
       -       1.000      0.000

     -4.500   -1.000    -18.000    36.000  (xi · Bxi)      0.933
      0.000    0.000      0.000     2.000  (xi · xi)
      4.500    1.000     18.000    18.000  (quotient)
      0.000    0.000      0.000

    -18.000   -1.000    -18.000    36.000  (xi · Bxi)      0.000
      0.000    0.000      0.000     2.000  (xi · xi)
     18.000    1.000     18.000    18.000  (quotient)
      0.000    0.000      0.000

We have λ2 = 18 with a normalized eigenvector of xe2 = [-.707 0 .707 0].
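A short sketch that reproduces this second run, using the deflated matrix B - 12*x*xT copied from the table above:

    import numpy as np

    # Deflated matrix B - 12*x*xT, copied from the table above.
    D = np.array([[ 9.0, 0.0, -9.0, 0.0],
                  [ 0.0, 0.0,  0.0, 0.0],
                  [-9.0, 0.0,  9.0, 0.0],
                  [ 0.0, 0.0,  0.0, 0.0]])

    x = np.array([0.25, 0.5, 0.75, 1.0])      # initial x0 from the spreadsheet
    for _ in range(3):                        # three iterations, as in the table
        Dx = D @ x
        lam = (x @ Dx) / (x @ x)              # Rayleigh quotient
        x = Dx / np.max(np.abs(Dx))           # scale down by the largest absolute value,
                                              # matching the scaling used in the table
    print(lam)                                # 18.0
    print(x / np.linalg.norm(x))              # approximately [-.707 0 .707 0]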

Try a small program to use this method in metric MDS. Note: The first sample matrix is the same as in Table 8 of the metric MDS example.
