ON THE INTERMEDIATE EIGENVALUES OF SYMMETRIC SPARSE MATRICES

Let A be a symmetric matrix of order n whose eigenvalues satisfy

    c_1 ≥ λ_1 ≥ λ_2 ≥ ... ≥ λ_{p-1} ≥ b ≥ λ_p ≥ λ_{p+1} ≥ ... ≥ λ_{p+q-1} ≥ a ≥ λ_{p+q} ≥ ... ≥ λ_n ≥ c_2 > 0.

We seek the q eigenvalues λ_p, λ_{p+1}, ..., λ_{p+q-1} lying in the interval [a, b] and the corresponding eigenvectors.

The Algorithm

Similar to the method of "Simultaneous Iterations" [2], we propose the following iterative scheme:

(i)   Choose an n×q matrix X_0 such that X_0^T X_0 = I_q.

(ii)  Form
          X_{k+1} = T_m(B) X_k,     k = 0, 1, 2, ...,                                (1)
      where B is a polynomial in A, to be defined later, and T_m(B) is the Chebyshev polynomial of B of degree m.

(iii) Using the Gram-Schmidt orthonormalization process, decompose X_j as
          X_j = U_j R_j,                                                             (2)
      where U_j^T U_j = I_q and R_j is an upper triangular matrix of order q.

(iv)  Let
          Z_j = A U_j.                                                               (3)
      Then form the positive-definite q×q matrix
          G_j = Z_j^T Z_j                                                            (4)
      and solve for its eigenvector matrix Q_j.

(v)   Set
          X_j = U_j Q_j.                                                             (5)

Go back to (ii), and so on, until X_j^T A X_j approaches a diagonal matrix whose elements are the eigenvalues of A in [a, b]; this occurs when Q_j approaches the identity matrix.

The matrix B in (1) may be obtained as follows. Let

    Ā = [2A - (a+b)I]/(b-a);                                                         (6)

then an eigenvalue of Ā is given by

    λ̄ = [2λ - (a+b)]/(b-a),                                                          (7)

and those eigenvalues of Ā in [-1, 1] correspond to the eigenvalues of A in [a, b]. The interval [-c, c] that contains all the eigenvalues of Ā is given by

    c = max{d_1, d_2},                                                               (8)

where

    d_1 = |2c_1 - (a+b)|/(b-a),     d_2 = |2c_2 - (a+b)|/(b-a).                      (9)

If some mapping μ = f(λ̄) can be found such that f(λ̄) lies outside the interval [-1, 1] for λ̄ in [-1, 1], and |f(λ̄)| ≤ 1 for λ̄ outside [-1, 1], then the algorithm (i)-(v) described above will yield the eigenvalues of A in [a, b] and the corresponding eigenvectors. Let us therefore map the interval [-1, 1] onto the subintervals [-c, -1] and [1, c]; one such mapping may be taken as

    λ̄ = ±sqrt(αμ + β),     or     μ = (λ̄² - β)/α.                                   (10)

Here μ = -1 corresponds to λ̄ = ±1, thus 1 = -α + β; and μ = +1 corresponds to λ̄ = ±c, thus c² = α + β. Therefore

    α = (c² - 1)/2,     β = (c² + 1)/2.                                              (11)

Hence the matrix B is taken as

    B = [2Ā² - (c²+1)I]/(c²-1),                                                      (12)

and an eigenvalue of B is given by

    μ = [2λ̄² - (c²+1)]/(c²-1),                                                       (13)

i.e., -(c²+1)/(c²-1) ≤ μ ≤ -1 for the eigenvalues of A in [a, b], while |μ| ≤ 1 for the remaining eigenvalues.
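To make the scheme concrete, the following is a minimal dense-matrix sketch of steps (i)-(v) with the filter (12). It is written in Python with NumPy rather than in the Fortran of the implementation described later; the function name, the fixed polynomial degree m, the random choice of X_0, and the use of library QR and symmetric eigensolvers are illustrative assumptions, not details taken from this report.

import numpy as np

def chebyshev_subspace_iteration(A, a, b, c1, c2, q, m=10, iters=20, X0=None):
    """Dense sketch of steps (i)-(v): eigenvalues of symmetric A in [a, b]."""
    n = A.shape[0]
    X = X0 if X0 is not None else np.random.default_rng(0).standard_normal((n, q))
    X, _ = np.linalg.qr(X)                                  # (i)  orthonormal X_0

    # Interval transformation (6)-(9): the wanted eigenvalues of Abar lie in [-1, 1],
    # and [-c, c] contains the whole transformed spectrum.
    eye = np.eye(n)
    Abar = (2.0 * A - (a + b) * eye) / (b - a)
    c = max(abs(2.0 * c1 - (a + b)), abs(2.0 * c2 - (a + b))) / (b - a)

    # Filter matrix (12): the wanted eigenvalues are mapped outside [-1, 1].
    B = (2.0 * Abar @ Abar - (c**2 + 1.0) * eye) / (c**2 - 1.0)

    for _ in range(iters):
        # (ii) X <- T_m(B) X via the recurrence T_k(B)X = 2 B T_{k-1}(B)X - T_{k-2}(B)X.
        T_prev, T_curr = X, B @ X
        for _ in range(m - 1):
            T_prev, T_curr = T_curr, 2.0 * B @ T_curr - T_prev
        X = T_curr

        X, _ = np.linalg.qr(X)                              # (iii) X_j = U_j R_j
        Z = A @ X                                           # (iv)  Z_j = A U_j
        G = Z.T @ Z                                         #       G_j = Z_j^T Z_j
        _, Q = np.linalg.eigh(G)                            #       eigenvectors Q_j
        X = X @ Q                                           # (v)   X_j = U_j Q_j

    return np.diag(X.T @ A @ X), X                          # Ritz values in [a, b], vectors

For a sparse A one would not form Ā and B explicitly; a matrix-free way of applying B, using only products with A, is sketched at the end of the main text.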
We now introduce the following theorems.

Theorem 1: Let E_0 be the linear space spanned by the columns of X_0. In case of stable convergence (no reordering of the eigenvalues if the LR-Cholesky decomposition is applied to G_j), the angle φ_{i,j} between the i-th eigenvector u_i and the linear space E_j = {x | x = T_m(B)y, y ∈ E_{j-1}}, spanned by the columns of X_j, is asymptotically for j→∞ of order O(q_i^j), in which

    q_i = max_{k∉P} { |T_m(μ_k)| / |T_m(μ_i)| },     i ∈ P,                          (14)

where P = {p, p+1, ..., p+q-1}. The proof is quite similar to that of Theorem 2 in [2] and hence is only sketched here (see Appendix I).

Theorem 2: The columns of the matrices X_j generated by (i)-(v) are such that

    ||u_i - x_i^{(j)}|| = O(q_i^j),                                                  (15)

where x_i^{(j)} denotes the column of X_j corresponding to u_i, and q_i is as given by (14). The proof is again similar to that of Theorem 3 in [2] (see Appendix I).

A proper order m of the Chebyshev polynomial can be obtained, as in [2], by stipulating that the parallelization of the columns of X_{k+1} should not go so far that more than one decimal digit is cancelled out when these columns are orthonormalized, i.e.,

    T_{m-1}(|μ̄|) < 10,     where |μ̄| = max_{i∈P} |μ_i|.                              (16)

An approximation of μ̄ is obtained from (13) by replacing λ̄ by [2 λ_max^{1/2}(G_j) - (a+b)]/(b-a), where λ_max(G_j) is the maximum eigenvalue of G_j. Thus cosh[(m-1) cosh⁻¹|μ̄|] < 10, and

    m - 1 ≤ cosh⁻¹(10)/cosh⁻¹|μ̄| ≈ 3/cosh⁻¹|μ̄|.                                      (17)

As soon as one of the eigenvalues of G_j stagnates, the corresponding eigenvalue of A can be found to within computer accuracy. Once this happens we can test for acceptance of the corresponding eigenvector. The acceptance test is that of [3], except that the discounting rule is given by

    f_i := f_i / T_m(σ_i),     i = p+h, ..., p+q-1,                                  (18)

where σ_i is an approximation to μ_i obtained as explained above, and h is the number of eigenvalues already accepted.
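A minimal sketch of the degree rule (16)-(17), with μ̄ estimated from the largest eigenvalue of G_j as just described, is given below; it assumes NumPy, and the function name, the argument list, and the small safeguard that keeps the estimate of |μ̄| above 1 are illustrative assumptions rather than details of the Fortran code.

import numpy as np

def chebyshev_degree(G, a, b, c, limit=10.0):
    """Largest admissible degree m with T_{m-1}(|mu_bar|) < limit, cf. (16)-(17)."""
    lam = np.sqrt(np.max(np.linalg.eigvalsh(G)))                  # lambda_max(G_j)^(1/2)
    lam_bar = (2.0 * lam - (a + b)) / (b - a)                     # transformation (7)
    mu_bar = (2.0 * lam_bar**2 - (c**2 + 1.0)) / (c**2 - 1.0)     # mapping (13)
    mu_abs = max(abs(mu_bar), 1.0 + 1.0e-12)                      # T_{m-1} grows only for |mu| > 1
    return 1 + int(np.arccosh(limit) / np.arccosh(mu_abs))        # (17)

As the estimate of |μ̄| decreases toward 1, the admissible degree grows, which is the pattern of the degree sequences reported in Table II below.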
Some Numerical Results

The algorithm described above has been implemented in Fortran on UCLA's IBM 360/91 computer. To demonstrate its numerical behavior we tested three symmetric matrices A_1, A_2, and A_3 (shown in Appendix II). In all three examples we obtained the required eigenvalues correct to 14 decimal places and the eigenvectors correct to at least 7 decimal places.

Example 1

The Gerschgorin disks of the 64×64 matrix A_1 show that there are 16 eigenvalues in the interval (1.4, 3.6), 8 eigenvalues in the interval (5.4, 6.6), and 40 eigenvalues in the interval (8.4, 16.0). We seek the 8 intermediate eigenvalues and the corresponding eigenvectors. Three different intervals [a, b] have been tested, [3.7, 8.3], [4.0, 8.0], and [5.3, 6.7], with X_0 = [e_41, e_42, ..., e_48].

TABLE I

    [a, b]        m (max. degree of Chebyshev polynomial)    # of iterations
    [3.7, 8.3]    16                                         15
    [4.0, 8.0]    20                                         20
    [5.3, 6.7]    failed to converge after 50 iterations

Table I shows that, for the same acceptance test, the number of iterations required is smaller when a and b are as far as possible from the eigenvalues to be evaluated (λ_p, λ_{p+1}, ..., λ_{p+q-1}), which is clear from relation (14) and the preceding interval transformation. If, however, we choose a different initial matrix X_0 and [a, b] = [5.3, 6.7], the process converges to the required 8 eigenvalues and eigenvectors in only 17 iterations, but with m = 58.

Example 2

The 6×6 positive-definite matrix A_2 has the simple roots 1, 5, 25, and a triple root at 15. If we seek the triple root 15 and take the interval [a, b] as [7, 24], we require 9 iterations, and the maximum degree m of the Chebyshev polynomials is 6. Here we only obtain an invariant subspace corresponding to λ(A_2) = 15. If, however, we seek the four eigenvalues in the interval [2, 24], we require only 5 iterations, with the degree of the Chebyshev polynomials not exceeding 2. Again we obtain an invariant subspace corresponding to λ(A_2) = 15, together with the correct eigenvector corresponding to λ(A_2) = 5.

Example 3

The positive-definite matrix A_3 has the eigenvalues

    λ_k = 16 sin⁴[kπ/(2(n+1))],     k = 1, 2, ..., n,

where n = 64. There are 26 eigenvalues in (0, 2), 6 in (2, 4), and 32 in (4, 16). Here we would like to evaluate the eigenvalues in the interval [2, 4] and the corresponding eigenvectors. Table II shows the effect of our assumption regarding the distribution of the eigenvalues on the number of iterations, and on the degrees of the Chebyshev polynomials, required for convergence to the eigenvalues and eigenvectors in the interval [2, 4].

TABLE II

          X_0                  Degrees of Chebyshev polynomials            # of iterations
    (i)   [e_33, ..., e_38]    2, 4, 8, 16, 32, 52, 52, ...                9
    (ii)  [e_31, ..., e_40]    2, 4, 8, 16, 32, 52, 52, 52                 8
    (iii) [e_35, ..., e_37]    2, 2, 4, 8, 16, 32, 50, 50, 52, 52, ...     20
    (iv)  [e_21, ..., e_28]    2, 4, 8, 16, 32, 52, 52, 52                 8

Experiment (iii) indicates that using fewer columns in X_0 than the actual number of eigenvalues in [2, 4] led to a large number of iterations (20) to converge to the assumed 3 eigenvalues, with higher degrees of the corresponding Chebyshev polynomials. Experiment (iv), on the other hand, shows that if we use more columns in X_0 than the actual number of eigenvalues, even if we completely misjudge the distribution of the eigenvalues over the intervals (0, 2), (2, 4), and (4, 16), the number of iterations and the degrees of the Chebyshev polynomials required for convergence to the 6 eigenvalues and eigenvectors in [2, 4] are no more than those in experiment (i), where we used the true distribution of the eigenvalues. Since the number of iterations required appears to be largely independent of the choice of the columns of the initial matrix X_0, a random number generator could be used to generate them.

We notice that the above algorithm is quite suitable for parallel computation, since the major operation is multiplying a matrix by a vector, which can be handled rather efficiently on a parallel computer such as the ILLIAC IV.
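Since the whole iteration rests on products of A with blocks of vectors, the matrix B of (12) never needs to be formed explicitly. The sketch below (Python/NumPy register again; matvec is an assumed callable returning the product of the sparse matrix A with a block of vectors) applies B, and hence T_m(B), using such products only.

def apply_B(matvec, V, a, b, c):
    """B V computed from (6) and (12), using only products with A."""
    def abar(W):                        # Abar W = [2 A W - (a+b) W]/(b-a), from (6)
        return (2.0 * matvec(W) - (a + b) * W) / (b - a)
    return (2.0 * abar(abar(V)) - (c**2 + 1.0) * V) / (c**2 - 1.0)   # from (12)

def apply_filter(matvec, X, a, b, c, m):
    """T_m(B) X via the three-term Chebyshev recurrence, again with A-products only."""
    if m == 0:
        return X
    T_prev, T_curr = X, apply_B(matvec, X, a, b, c)
    for _ in range(m - 1):
        T_prev, T_curr = T_curr, 2.0 * apply_B(matvec, T_curr, a, b, c) - T_prev
    return T_curr

# Example use with a sparse A (SciPy assumed):
#   import scipy.sparse as sp
#   Y = apply_filter(lambda V: A @ V, X, a, b, c, m=8)

Each application of B costs two products with A, so one step of (1) with degree m costs about 2m such block products; the q columns can be processed in parallel, which is what makes the scheme attractive for a machine like the ILLIAC IV.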
REFERENCES

[1] Bauer, F. L., "Das Verfahren der Treppeniteration und verwandte Verfahren zur Lösung algebraischer Eigenwertprobleme," ZAMP, 8 (1957), pp. 214-235.

[2] Rutishauser, H., "Computational Aspects of F. L. Bauer's Simultaneous Iteration Method," Num. Math., 13 (1969), pp. 4-13.

[3] Rutishauser, H., "Simultaneous Iteration Method for Symmetric Matrices," Num. Math., 16 (1970), pp. 205-223.

Appendix I

Proof of Theorem 1

The iterations (i)-(v) are orthogonally invariant, i.e., replacing A by H^T A H = Λ (where H^T H = I and Λ = diag(λ_i)) and X_0 by H^T X_0 has the effect that all the X_j are replaced by H^T X_j, while the G_j and R_j are unchanged. Therefore we can assume, without loss of generality, that A = diag(λ_1, λ_2, ..., λ_n). In case of stable convergence, E_0 can be spanned by q vectors forming the n×q matrix

        | x_{1,1}      x_{1,2}      ...  x_{1,q}      |
        |    .            .                 .         |
        | x_{p-1,1}    x_{p-1,2}    ...  x_{p-1,q}    |
        | 1            0            ...  0            |   (row p)
        | 0            1            ...  0            |
        |    .            .          .      .         |
        | 0            0            ...  1            |   (row p+q-1)
        | x_{p+q,1}    x_{p+q,2}    ...  x_{p+q,q}    |
        |    .            .                 .         |
        | x_{n,1}      x_{n,2}      ...  x_{n,q}      |                              (A.1)

According to (1), E_j is spanned by the columns of the matrix (A.2) obtained from (A.1) by multiplying its k-th row by T_m^j(μ_k), k = 1, 2, ..., n, where T_m^j(μ) denotes [T_m(μ)]^j; in particular, rows p through p+q-1 of (A.2) form diag(T_m^j(μ_p), ..., T_m^j(μ_{p+q-1})).

For example, the angle φ between e_p and the first column of (A.2) satisfies

    cos²φ = T_m^{2j}(μ_p) / [ T_m^{2j}(μ_p) + Σ_{k∉P} T_m^{2j}(μ_k) x_{k,1}² ],

so that φ is of order O(q_p^j), where

    q_p = max_{k∉P} { |T_m(μ_k)| / |T_m(μ_p)| }.

Therefore φ_{i,j} is at most of order O(q_i^j).

Proof of Theorem 2

Taking the q columns of (A.2), each divided by T_m^j(μ_ℓ), ℓ = p, p+1, ..., p+q-1, as coordinate vectors W = [w_p, w_{p+1}, ..., w_{p+q-1}] in E_j, the eigendirections of the projected operator A² are the columns of the n×q matrix Y = WS, in which

    Y^T A² Y = D_j²,     D_j = diag(d_p^{(j)}, d_{p+1}^{(j)}, ..., d_{p+q-1}^{(j)}),     Y^T Y = I_q,        (A.3)

and S is a q×q orthogonal matrix. Thus S^T (W^T A² W) S = D_j², or

    (W^T A² W) S = S D_j²;                                                           (A.4)

i.e., S is the eigenvector matrix of W^T A² W, which can be written as

    W^T A² W = Λ_P² + K,                                                             (A.5)

where Λ_P = diag(λ_p, ..., λ_{p+q-1}) and the elements of K are of order O(τ), with τ = max { |T_m^j(μ_k)| / |T_m^j(μ_ℓ)| : k ∉ P, ℓ ∈ P }.

Assume now that λ_i = λ_{i+1} = ... = λ_h is an (h-i+1)-fold eigenvalue of A; then as j→∞ there exist h-i+1 independent eigensolutions of (A.4) with d → λ_i. Every one of these is described by q values s_p, s_{p+1}, ..., s_{p+q-1}, and we assume that these solutions are normalized such that

    s_i² + s_{i+1}² + ... + s_h² = 1.

Then the s_ℓ with ℓ ≠ i, i+1, ..., h are of order O(τ). This means that the angle between Σ_{ℓ=p}^{p+q-1} s_ℓ w_ℓ and Σ_{ℓ=i}^{h} s_ℓ w_ℓ is also of order O(τ), while according to Theorem 1 the angle between Σ_{ℓ=i}^{h} s_ℓ w_ℓ and the eigenspace of λ_i, λ_{i+1}, ..., λ_h of A is of order O(q_i^j). This establishes the theorem.

Appendix II

The test matrix A_1 is a 64×64 symmetric matrix built up from the 8×8 blocks B and C, where B is the tridiagonal matrix

        | a_1   0.1                          |
        | 0.1   a_2   0.1                    |
    B = |        .     .     .               | ,        C = diag(0.4, ..., 0.4),
        |             0.1   a_7   0.1        |
        |                   0.1   a_8        |

and the centres of the Gerschgorin discs {a_k} are given by (2, 3, 6, 9, 10, 11, 12, 13).

            | 16.778   -4.889   -4.889   -4.889   -1.556   -0.222 |
            | -4.889   13.444   -1.556   -1.556    1.778    3.111 |
    A_2  =  | -4.889   -1.556   13.444   -1.556    1.778    3.111 |
            | -4.889   -1.556   -1.556   13.444    1.778    3.111 |
            | -1.556    1.778    1.778    1.778   10.111    6.444 |
            | -0.222    3.111    3.111    3.111    6.444    8.778 |

            |  5   -4    1                     |
            | -4    6   -4    1                |
    A_3  =  |  1   -4    6   -4    1           | ,        n = 64.
            |        .    .    .    .    .     |
            |             1   -4    6   -4     |
            |                  1   -4    5     |
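As a quick numerical check of Example 2, the matrix A_2 listed above (entries rounded to three decimals) can be handed to a symmetric eigensolver; the following sketch assumes NumPy.

import numpy as np

A2 = np.array([
    [16.778, -4.889, -4.889, -4.889, -1.556, -0.222],
    [-4.889, 13.444, -1.556, -1.556,  1.778,  3.111],
    [-4.889, -1.556, 13.444, -1.556,  1.778,  3.111],
    [-4.889, -1.556, -1.556, 13.444,  1.778,  3.111],
    [-1.556,  1.778,  1.778,  1.778, 10.111,  6.444],
    [-0.222,  3.111,  3.111,  3.111,  6.444,  8.778],
])
print(np.round(np.linalg.eigvalsh(A2), 2))
# Close to 1, 5, 15, 15, 15, 25; small deviations come from the rounded entries.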