REPORT NO. 440    April 30, 1971

A SYMMETRIC FACTORIZATION PROCEDURE FOR THE SOLUTION OF ELLIPTIC BOUNDARY VALUE PROBLEMS*

by

Amnon Bracha

Department of Computer Science
University of Illinois at Urbana-Champaign
Urbana, Illinois 61801

* This work was supported in part by the National Science Foundation under Grant No. US NSF-GJ-328 and was submitted in partial fulfillment for the degree of Doctor of Philosophy in Computer Science, June, 1971.

A SYMMETRIC FACTORIZATION PROCEDURE FOR THE SOLUTION OF ELLIPTIC BOUNDARY VALUE PROBLEMS

Amnon Bracha, Ph.D.
Department of Computer Science
University of Illinois at Urbana-Champaign, 1971

The convergence properties of a symmetric factorization procedure for solving elliptic difference equations, due to H. L. Stone, are studied. Convergence depends on bounds on a parameter, and computable values of these bounds are obtained. The procedure and the results are generalized to an arbitrary number of dimensions.

ACKNOWLEDGMENT

The author wishes to express his appreciation and gratitude to Professor Paul E. Saylor, who suggested the thesis topic and supervised its development. The support of the Department of Computer Science, University of Illinois, is gratefully acknowledged.
Finally, the author wishes to thank his family for their constant encouragement and moral support.

TABLE OF CONTENTS

1. INTRODUCTION 1
2. PRELIMINARIES 5
3. THE SYMMETRIC FACTORIZATION OF STONE 9
4. ANALYSIS AND CONVERGENCE CONDITIONS 18
5. ESTIMATES FOR THE EXTREME EIGENVALUES OF (A + B)^{-1} A 22
6. ACCELERATION OF THE CONVERGENCE BY A SEQUENCE OF PARAMETERS 31
7. GENERALIZATION TO MORE THAN TWO DIMENSIONS 33
8. COMPUTATIONAL NOTES 48
LIST OF REFERENCES 50
VITA 51

LIST OF FIGURES

1. Coefficients and grid points for x = (jh, kh) 8
2. Grid points and elements of L 9
3. Grid points and elements of U 9
4. Grid points and elements of A + B 10

1. INTRODUCTION

Let Au = q be a linear system of algebraic equations resulting from the discretization of an elliptic boundary value problem. Direct methods for computing the solution based on some variation of Gaussian elimination are equivalent to factoring A as a product, A = LU, of a lower triangular matrix L by an upper triangular matrix U (the LU-decomposition of A). Such methods are computationally inefficient because the factors L and U are not sparse matrices. For this reason, an iterative procedure is generally used rather than a direct method. The ingenious direct methods of Hockney [13] are an exception. For a small but important class of problems these methods are clearly superior to iterative procedures [12, 13]. Unfortunately, they depend on special features and symmetries that are not present in many scientific and engineering problems. For such problems, an iterative procedure remains the only practical technique.

Two standard iterations are successive over-relaxation (SOR) and alternating-direction-implicit (ADI). There are disappointments in the applications of both procedures. Unless the set of equations is small, SOR converges rather slowly. Nevertheless, because its properties are well understood, it is reliable and popular.
The difficulty with ADI is that, although the method is faster for an important class of problems, there are no explicit mathematical principles to guide the application of ADI to general problems. It is typically the user's experimental skill and judgement that makes it successful.

The inadequacy of SOR and the difficulties of ADI have motivated other techniques for the solution of Au = q. One such technique, based on "factorization" methods, is the subject of this thesis. The idea of a factorization method is to replace A by a matrix, A + B, that factors by the LU-decomposition into sparse triangular matrices L and U, i.e., A + B = LU. The efficient solution of (A + B)u = q is then used to compute u by truncating a sequence defined by some iterative procedure such as

(1.1)    (A + B)u_{i+1} = (A + B)u_i - τ(Au_i - q),    i = 0, 1, ...,

τ a parameter, provided of course that the convergence is sufficiently rapid to be practical. Indeed, the essential difficulty is to make the iterative procedure converge rapidly. There is no difficulty in finding a matrix B such that the factors are sparse. For, infinitely many such matrices can be found for which A + B can be factored into sparse matrices: multiply any two sparse upper and lower triangular matrices, subtract A, and let B equal the result.

In this thesis we study an algorithm due to H. L. Stone [8] for constructing factors L and U for which LU is symmetric. We now describe the principal results obtained. The algorithm is stated in section 3, where it is proved that it defines factors L and U. Useful estimates on the elements of L and U are also derived. In section 4, we state some standard facts from linear algebra that show that the choice of τ for which (1.1) converges requires knowing the extreme eigenvalues of (A + B)^{-1} A. We then derive a posteriori estimates for these eigenvalues that are easily computed during the computation of the factors L and U.
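For concreteness, the truncated iteration (1.1) can be sketched in a few lines. The 2×2 matrices below, and the particular choice of A + B, are purely illustrative stand-ins and do not come from the text; the point is only the update u_{i+1} = u_i - τ(A + B)^{-1}(Au_i - q).

```python
# Illustrative sketch of iteration (1.1); the matrices are hypothetical.
def matvec(M, v):
    return [sum(M[r][c] * v[c] for c in range(len(v))) for r in range(len(M))]

def solve2(M, r):
    # Direct solution of a 2x2 system by Cramer's rule (stands in for the
    # cheap solve with the sparse factors of A + B).
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(r[0] * M[1][1] - r[1] * M[0][1]) / det,
            (M[0][0] * r[1] - M[1][0] * r[0]) / det]

def factored_iteration(A, A_plus_B, q, tau, steps=60):
    u = [0.0] * len(q)
    for _ in range(steps):
        residual = [x - y for x, y in zip(matvec(A, u), q)]   # Au_i - q
        delta = solve2(A_plus_B, residual)                    # (A+B)^{-1}(Au_i - q)
        u = [ui - tau * di for ui, di in zip(u, delta)]
    return u

A = [[4.0, -1.0], [-1.0, 4.0]]
A_plus_B = [[4.0, -1.0], [-1.5, 4.0]]   # A perturbed by a small hypothetical B
q = [1.0, 2.0]
u = factored_iteration(A, A_plus_B, q, tau=1.0)   # exact solution is (0.4, 0.6)
```

Convergence of this sketch is governed by the spectral radius of T = I - τ(A + B)^{-1}A, exactly the quantity analyzed in section 4.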
The general approach of Dupont, Kendall and Rachford in [4] is used in the derivation, although the details are quite different. In section 7 we generalize Stone's algorithm and the results of sections 2 and 3 to an arbitrary number of dimensions.

The idea of a factorization method first appeared in the papers of Buleev [2] and Oliphant [7]. Of these papers the approach of Buleev has been the most fruitful. In his paper, he described a factorization algorithm and suggested ways to make the iterative procedure converge. Convergence, however, is erratic. To improve it, H. L. Stone has made a series of modifications of Buleev's method, ultimately obtaining an efficient algorithm for solving Au = q. Stone's method is described in [9]. We note that Stone employs the iteration (1.1) for τ = 1.

The fundamental idea for improving convergence, due to Buleev and developed by Stone, is to design the factorization in such a way as to diminish the magnitude of ‖Bu‖ by making the terms in each component of Bu cancel. For, when Bu_i and Bu are small, it is intuitively plausible for τ = 1 that one step of the iteration (1.1) is nearly the same as solving Au = q, and therefore one can hope for convergence to be rapid. In fact, the iteration converges in one step when the components of the solution, u, are the values of a first degree polynomial in the variables x and y. The reason convergence occurs in one step is that for such vectors Bu vanishes. In this sense, Bu may be said to be of second order. Unfortunately, for more practical problems, when the solution is not so trivial, the iteration diverges, which shows that the intuitive idea of cancellation is not sound without additional modification. Stone's technique for obtaining convergence is to vary the matrix B at each step of the iteration. The change is made in B by varying a factorization parameter present in the algorithm defining the factors of A + B.
When the factorization parameter has the value 1, B is second order. For other values, there is no cancellation, but the iteration converges, although somewhat slowly for a fixed value of the parameter. In addition to varying B, Stone also recommends reversing the order of the components of the vectors at alternate steps. Stone's techniques yield a rapidly convergent iteration, superior to that obtained from standard methods for a variety of difficult problems. However, these techniques are derived from intuitive insight and experimental skill. No theoretical analysis is available to guide their application or to make them more powerful.

There are two difficult barriers blocking any theoretical analysis. First, the matrix B, which Stone produces, is nonsymmetric. Second, changing B and the ordering at each step of the iteration makes succeeding steps algebraically unrelated to each other. This is a serious difficulty, since some algebraic relationship is essential to any study of the iteration. Attempts to avoid these barriers have led to different factorizations and iterations, described as follows.

T. Dupont, R. Kendall and H. Rachford defined a factorization in [4] for which B is symmetric, and analyzed convergence for the Dirichlet boundary value problem. In [3], Dupont extended the methods to the Neumann problem, and generalized them to higher dimensions. The application of their method, however, has been disappointing in practice.

Another approach to the problem places more emphasis on cancellation than Dupont, Kendall and Rachford. The idea of cancellation is sufficiently appealing that it is natural to try to design a symmetric factorization for which Bu is as small as possible. The constraint of symmetry excludes a second order method, but in attempting to optimize cancellation one is led to a symmetric factorization algorithm due to Stone [8], which is the subject of this thesis.
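As noted in the introduction, producing some matrix B with sparse factors is trivial: multiply any sparse triangular pair and subtract A. A tiny check of that remark follows; the tridiagonal A and the bidiagonal factors are arbitrary stand-ins chosen here, not Stone's construction.

```python
# Hypothetical sparse factors L, U; B is defined as LU - A, so A + B = LU
# holds by construction.  None of these matrices come from the thesis.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[r][t] * Y[t][c] for t in range(n)) for c in range(n)]
            for r in range(n)]

n = 4
A = [[2.0 if r == c else (-1.0 if abs(r - c) == 1 else 0.0)
      for c in range(n)] for r in range(n)]          # model tridiagonal matrix
L = [[1.5 if r == c else (-0.5 if r == c + 1 else 0.0)
      for c in range(n)] for r in range(n)]          # lower bidiagonal
U = [[1.0 if r == c else (-0.4 if c == r + 1 else 0.0)
      for c in range(n)] for r in range(n)]          # unit upper bidiagonal
LU = matmul(L, U)
B = [[LU[r][c] - A[r][c] for c in range(n)] for r in range(n)]
```

Since L and U are bidiagonal, LU (and hence B) is tridiagonal, so the factors of A + B are as sparse as A itself; the hard part, addressed by the factorization algorithms discussed above, is making B small in the sense of cancellation.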
To avoid the difficulties due to varying B, we study the factorization of Stone applied to (1.1) with B held fixed. For fixed τ, experimental studies [9] show that it is comparable or superior to alternating direction techniques for certain difficult boundary value problems. Also, the use of a sequence of parameters τ_i in (1.1) can be expected to reduce by a square root the number of operations required for a single parameter. Accordingly, the factorization studied here possesses some practical value, if a method for selecting a single parameter or sequence of parameters can be prescribed. The estimates obtained here are a first attempt to do so.

2. PRELIMINARIES

Consider the Dirichlet problem

(2.1)    -∂/∂x_1 (a_1(x) ∂u/∂x_1) - ∂/∂x_2 (a_2(x) ∂u/∂x_2) = Q(x),    x ∈ D,
         u(x) = g(x),    x ∈ ∂D,

where x = (x_1, x_2), D is the interior of a compact region with boundary ∂D, a_1(x), a_2(x) are strictly positive functions, and where Q(x), g(x), a_1(x) and a_2(x) are sufficiently smooth.

Suppose we wish to approximate the solution of (2.1) on a bounded plane region, D. We cover D with a rectangular grid system, replace the differential operator in (2.1) with a difference operator, and replace u with a vector whose components correspond to the grid points of the system. The result is a set of simultaneous linear equations with a one-to-one correspondence between the grid points and the equations. We shall study a method for the iterative solution of this set of equations.

In order to make the problem more specific, let D be the unit square, D = {(x_1, x_2) : 0 < x_1, x_2 < 1}, and let g(x) = 0. Let h = 1/(n+1), n a positive integer. Let D_h be the grid system over D, with uniform spacing h, defined by D_h = {(jh, kh) : 1 ≤ j, k ≤ n}. Denote the grid point (jh, kh) by (j, k). The boundary, ∂D_h, of D_h is defined by ∂D_h = {(jh, 0)} ∪ {(1, kh)} ∪ {(jh, 1)} ∪ {(0, kh)} for 0 ≤ j, k ≤ n+1. Order the points of D_h in a left-to-right, down-to-up fashion.
That is, if (j_1, k_1) and (j_2, k_2) are two grid points, then (j_1, k_1) precedes (j_2, k_2) if k_1 < k_2, or if k_1 = k_2 and j_1 < j_2.

Let u be an n²-vector whose components are ordered according to the order of D_h. Denote the components of u by u_{j,k}, (j, k) a grid point. Let the matrix A defined by

(2.2)    (Au)_{j,k} = B_{j,k} u_{j,k-1} + D_{j,k} u_{j-1,k} + E_{j,k} u_{j,k} + D_{j+1,k} u_{j+1,k} + B_{j,k+1} u_{j,k+1}

be a discrete analog of the differential operator defined in (2.1). The equation

(Au)_{j,k} = q_{j,k}

is then a discrete version of the boundary value problem in (2.1). The important properties required of A are that

(2.3)    (i) B_{j,k}, D_{j,k} ≤ 0;
         (ii) D_{j,k} = 0 for j = 1, k = 1, 2, ..., n;
         (iii) B_{j,k} = 0 for k = 1, j = 1, 2, ..., n;
         (iv) E_{j,k} = -(B_{j,k} + D_{j,k} + D_{j+1,k} + B_{j,k+1}) > 0, if D_{j,k} ≠ 0 and B_{j,k} ≠ 0;
         (v) E_{j,k} > -(B_{j,k} + D_{j,k} + D_{j+1,k} + B_{j,k+1}), if D_{j,k} = 0 or B_{j,k} = 0.

These properties result from any of the common discretizations of the quantity -∂/∂x_i (a_i(x) ∂u/∂x_i).

Fig. 1 Coefficients and grid points for x = (jh, kh)

3. THE SYMMETRIC FACTORIZATION OF STONE

The symmetric factorization of Stone constructs lower and upper triangular matrices, L and U, each with three non-zero diagonals in the same positions as the non-zero diagonals of A. That is,

(3.1)    (Lu)_{j,k} = b_{j,k} u_{j,k-1} + c_{j,k} u_{j-1,k} + d_{j,k} u_{j,k},
         (Uu)_{j,k} = u_{j,k} + e_{j,k} u_{j+1,k} + f_{j,k} u_{j,k+1}.

The grid points and elements of L and U that appear in (3.1) are displayed in Figure 2 and Figure 3.

Fig. 2 Grid points and elements of L

Fig. 3 Grid points and elements of U

As a preliminary to the formal statement of the algorithm for computing L and U, observe that the product LU is given by

(3.2)    [(A + B)u]_{j,k} = [LUu]_{j,k} = b_{j,k} u_{j,k-1} + b_{j,k} e_{j,k-1} u_{j+1,k-1} + c_{j,k} u_{j-1,k} + c_{j,k} f_{j-1,k} u_{j-1,k+1} + (d_{j,k} + b_{j,k} f_{j,k-1} + c_{j,k} e_{j-1,k}) u_{j,k} + d_{j,k} e_{j,k} u_{j+1,k} + d_{j,k} f_{j,k} u_{j,k+1}.

Let 0 ≤ α < 1. The elements of L and U are computed by the algorithm

(3.3)    b_{j,k} = B_{j,k} - α c_{j,k-1} f_{j-1,k-1},
         c_{j,k} = D_{j,k} - α b_{j-1,k} e_{j-1,k-1},
         d_{j,k} + b_{j,k} f_{j,k-1} + c_{j,k} e_{j-1,k} = E_{j,k} + (B_{j,k} - b_{j,k}) + (D_{j,k} - c_{j,k}),
         d_{j,k} e_{j,k} = D_{j+1,k} - α b_{j,k} e_{j,k-1},
         d_{j,k} f_{j,k} = B_{j,k+1} - α c_{j,k} f_{j-1,k},
         j, k = 1, 2, ..., n,

where an element with an index outside the range 1, ..., n is zero. It is not necessary to compute b_{j,k} and c_{j,k} separately, since

(3.4)    b_{j,k} = d_{j,k-1} f_{j,k-1}

and

(3.5)    c_{j,k} = d_{j-1,k} e_{j-1,k}.

Changing the subscripts in (3.5) gives

(3.6)    c_{j+1,k-1} = d_{j,k-1} e_{j,k-1}.

Multiplying the right hand side of (3.6) by the left side of (3.4) gives

b_{j,k} (d_{j,k-1} e_{j,k-1}) = c_{j+1,k-1} (d_{j,k-1} f_{j,k-1}),

or

(3.7)    b_{j,k} e_{j,k-1} = c_{j+1,k-1} f_{j,k-1}.

Equation (3.7), along with (3.4) and (3.5), establishes the symmetry of A + B as defined in (3.2).

We now prove a lemma which establishes a useful set of relations between the elements of A and the elements of L and U.

Lemma 1: Assume that the sum of the elements B_{j,k} and D_{j,k} of A is strictly negative. Let α satisfy 0 ≤ α < 1. It follows that the quantities defined in equation (3.3) exist and satisfy the following relations:

(3.8)    (a) b_{j,k} ≤ B_{j,k} ≤ 0,
         (b) c_{j,k} ≤ D_{j,k} ≤ 0,
         (c) d_{j,k} > 0,
         (d) -1 < e_{j,k} ≤ 0,
         (e) -1 < f_{j,k} ≤ 0,
         (f) e_{j,k} + f_{j,k} + 1 > 0.

Proof: The proof is by induction. For j = k = 1, we have

b_{1,1} = B_{1,1} = 0,  c_{1,1} = D_{1,1} = 0,  d_{1,1} = E_{1,1} > 0,

-1 < e_{1,1} = D_{2,1}/E_{1,1} ≤ 0,

-1 < f_{1,1} = B_{1,2}/E_{1,1} ≤ 0,

and

e_{1,1} + f_{1,1} + 1 = (D_{2,1} + B_{1,2} + E_{1,1})/E_{1,1} > 0.

Now assume that the relations (a)-(f) hold for all points preceding (j, k). To prove (3.8) (a) and (b), we have

c_{j,k-1} ≤ 0,  f_{j-1,k-1} ≤ 0,  b_{j-1,k} ≤ 0,  e_{j-1,k-1} ≤ 0,  and 0 ≤ α < 1.

It follows that

b_{j,k} = B_{j,k} - α c_{j,k-1} f_{j-1,k-1} ≤ B_{j,k}

and

c_{j,k} = D_{j,k} - α b_{j-1,k} e_{j-1,k-1} ≤ D_{j,k}.

Therefore,

d_{j,k} = E_{j,k} - b_{j,k} f_{j,k-1} - c_{j,k} e_{j-1,k} + α c_{j,k-1} f_{j-1,k-1} + α b_{j-1,k} e_{j-1,k-1}
        = E_{j,k} + B_{j,k} + D_{j,k} - b_{j,k}(1 + f_{j,k-1}) - c_{j,k}(1 + e_{j-1,k}) > 0.

Next we prove (3.8) (d) and (e). Certainly,

e_{j,k} = (D_{j+1,k} - α b_{j,k} e_{j,k-1}) / d_{j,k} ≤ 0,

f_{j,k} = (B_{j,k+1} - α c_{j,k} f_{j-1,k}) / d_{j,k} ≤ 0.

To prove that -1 is a lower bound for these quantities, observe that

e_{j,k} + 1 = (d_{j,k} e_{j,k} + d_{j,k}) / d_{j,k}.

Since (3.8) (d) and (f) are valid for (j, k-1), it follows from (3.3) that
d_{j,k} e_{j,k} + d_{j,k} = D_{j+1,k} - α b_{j,k} e_{j,k-1} + d_{j,k}
= E_{j,k} + D_{j+1,k} - α b_{j,k} e_{j,k-1} - b_{j,k} f_{j,k-1} - c_{j,k} e_{j-1,k} + α c_{j,k-1} f_{j-1,k-1} + α b_{j-1,k} e_{j-1,k-1}
= E_{j,k} + D_{j+1,k} + B_{j,k} + D_{j,k} - b_{j,k}(1 + α e_{j,k-1} + f_{j,k-1}) - c_{j,k}(1 + e_{j-1,k}) > 0.

Therefore, -1 < e_{j,k} ≤ 0. Similarly, -1 < f_{j,k} ≤ 0.

To prove the remaining assertion, we have from (3.3)

e_{j,k} + f_{j,k} + 1 = (d_{j,k} e_{j,k} + d_{j,k} f_{j,k} + d_{j,k}) / d_{j,k}
= (D_{j+1,k} - α b_{j,k} e_{j,k-1} + B_{j,k+1} - α c_{j,k} f_{j-1,k} + d_{j,k}) / d_{j,k}
= (E_{j,k} + B_{j,k} + D_{j,k} + B_{j,k+1} + D_{j+1,k} - b_{j,k}(1 + α e_{j,k-1} + f_{j,k-1}) - c_{j,k}(1 + α f_{j-1,k} + e_{j-1,k})) / d_{j,k}.

By the induction hypothesis, and from the facts that

1 + α e_{j,k-1} + f_{j,k-1} ≥ 1 + e_{j,k-1} + f_{j,k-1} > 0

and

1 + α f_{j-1,k} + e_{j-1,k} ≥ 1 + e_{j-1,k} + f_{j-1,k} > 0,

it follows that e_{j,k} + f_{j,k} + 1 > 0.

We next show that A + B is positive definite. We have

<(A + B)u, u> = <LUu, u> = u^T L U u,  u ≠ 0,

where u is an arbitrary n²-dimensional vector. Since

(u^T L)_{j,k} = d_{j,k} u_{j,k} + c_{j+1,k} u_{j+1,k} + b_{j,k+1} u_{j,k+1}

and

(Uu)_{j,k} = u_{j,k} + e_{j,k} u_{j+1,k} + f_{j,k} u_{j,k+1},

it follows that

<LUu, u> = Σ_{j,k} (d_{j,k} u_{j,k} + c_{j+1,k} u_{j+1,k} + b_{j,k+1} u_{j,k+1})(u_{j,k} + e_{j,k} u_{j+1,k} + f_{j,k} u_{j,k+1}).

By (3.4) and (3.5) we get

<LUu, u> = Σ_{j,k} d_{j,k} |u_{j,k} + e_{j,k} u_{j+1,k} + f_{j,k} u_{j,k+1}|² ≥ 0.

The inequality is strict, since A + B is nonsingular. For,

det(A + B) = det(L) = Π_{j,k} d_{j,k} ≠ 0.

4. ANALYSIS AND CONVERGENCE CONDITIONS

The iteration we consider is defined as follows:

(4.1)    (A + B)u_{i+1} = (A + B)u_i - τ(Au_i - q),    i = 0, 1, ... .
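For concreteness, the factorization (3.3) used in this iteration can be exercised numerically. The sketch below applies the recurrences to the five-point Dirichlet Laplacian on a small grid; the array layout and the value α = 0.9 are choices made here, not prescriptions from the text.

```python
# Sketch of the symmetric factorization (3.3) for the model problem.
# Arrays are (n+2)x(n+2) so that out-of-range neighbours are simply zero.
n, alpha = 4, 0.9

def grid():
    return [[0.0] * (n + 2) for _ in range(n + 2)]

# Five-point Dirichlet Laplacian: E = 4 on the diagonal, B (south) and
# D (west) equal to -1 wherever the neighbour is an interior grid point.
E, B, D = grid(), grid(), grid()
for k in range(1, n + 1):
    for j in range(1, n + 1):
        E[j][k] = 4.0
        if k > 1:
            B[j][k] = -1.0
        if j > 1:
            D[j][k] = -1.0

b, c, d, e, f = grid(), grid(), grid(), grid(), grid()
for k in range(1, n + 1):            # sweep in the ordering of D_h
    for j in range(1, n + 1):
        De = -1.0 if j + 1 <= n else 0.0    # D_{j+1,k}
        Bn = -1.0 if k + 1 <= n else 0.0    # B_{j,k+1}
        b[j][k] = B[j][k] - alpha * c[j][k - 1] * f[j - 1][k - 1]
        c[j][k] = D[j][k] - alpha * b[j - 1][k] * e[j - 1][k - 1]
        d[j][k] = (E[j][k] - b[j][k] * f[j][k - 1] - c[j][k] * e[j - 1][k]
                   + alpha * (c[j][k - 1] * f[j - 1][k - 1]
                              + b[j - 1][k] * e[j - 1][k - 1]))
        e[j][k] = (De - alpha * b[j][k] * e[j][k - 1]) / d[j][k]
        f[j][k] = (Bn - alpha * c[j][k] * f[j - 1][k]) / d[j][k]
```

On this example the computed elements satisfy d > 0, -1 < e, f ≤ 0 and 1 + e + f > 0, and the symmetry relations b_{j,k} = d_{j,k-1} f_{j,k-1} and c_{j,k} = d_{j-1,k} e_{j-1,k} hold to rounding error, as Lemma 1 and (3.4)-(3.5) assert.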
Our objectives in this section are to find necessary and sufficient conditions for the iterative procedure to converge, and to show that these conditions are satisfied for the factorization (3.3).

Let u satisfy (A + B)u = (A + B)u - τ(Au - q). We have

(A + B)e_{i+1} = (A + B - τA)e_i,

where e_i = u - u_i, or e_{i+1} = T e_i, where T is the error propagation matrix,

T = (A + B)^{-1}(A + B - τA) = I - τ(A + B)^{-1} A.

A necessary and sufficient condition for convergence is that the spectral radius of T be less than one,

(4.2)    ρ(T) < 1.
(T) | = X (T) 1 man ' max or | 1 - tM | = 1 - tm. Therefore, 2 T = M + m 21 For this choice of t the spectral radius of T is p( T ) = i . a^ = M-_m WK ' M + m M + m Lemma 2 gives necessary and sufficient conditions for the iteration (U.l) to converge, and Lemma 3 gives the best choice of T. Both lemmas assume the knowledge of M and m, which in most cases is difficult to find. This is the subject of the next section. 22 5- ESTIMATES FOR THE EXTREME EIGENVALUES OF (A + B) -1 A In this section we give bounds for the extreme eigenvalues, m and M, of (A + B)~ A, that are easily computed from the coefficients of L and U. First we prove a lemma, which, with minor changes, is found in [k] . Lemma k : For the matrix A, whose properties are listed in (2.3), and for B defined by B = L U - A, we have and = -^ lk ^ D a+i,klVi,k- u j,k' 2 n n 2 j=l k=l J ' k ^ k H ^' k J +1 > k H J;k+l j U j,k < Bu,u > = a £ £ c .^ ^ 1/k l^ k+1 - u.^| 2 n-1 n 2 + a Z Z b. . e. |u. , , - u. I' j=l k=2 J ' 3A-1 1 J +1 > k J; k n n-1 - Z Z c.,f. ,,|u._._-u..| j=2 k=l a ' k J_1 ' k J-^ k+1 ^ k n n-1 2 + (1 - a) Z Z c f u -j_2 k= 2_ J* 1 *- J J-?-"- J* 1 *- n-1 n 2 + (1 - a) Z Z b e u . 23 Proof : We only prove the second part of the assertion. The first part is a familiar fact. We have n n =-aE E c..-f..._u. , _ u . . j=2 k=2 J'*"" 1 3-1* k-1 j,k-l j,k n-1 n + Z E b . , e . . . u. _ , _ u. , j=l k=2 ' k J ' 3+ljk-l J> k n n - a E £ "b. . . e. , . _ u. . . u. , j=2 k=2 3-l*k o-l,k-l j-l,k j,k n n + a E E (c . . f +b. n . e. . . n ) u. . j=2 k=2 3*k-l 3-1, k-1 o-l, k j-l,k-l' o,k n-1 n _ ^ 1=1 k=2 J ' k ^ k-1 U J +1 ^ k U J> k n n-1 + E E c. , f . , . u. - , ., u. , 0=2 k=l J ' k J > 3-1A+1 0,k n n-1 1 3=2 k=l % k ^ -1 ' k U J> k+1 U ^ k ' In the expression on the right, change the second variable of summation in the first sum and combine with the last sum. Also, change the first variable of summation in the third sum and combine with the fifth sum. 
Finally, change both variables of summation in the second sum and use (3-M and (3-5) • We obtain n n-1 and < Bu,u> = - a Z g £ =. )k fj. 1;k (u. ;k u. ;k+1 + u. )k+1 u. ;k ) -1 n Z Z b . .. e . . , (u. _ u. . , + u. . _ u. .. ) =1 k=2 J > J» k -! J> k J +1 > k J +1 > k J> k n-1 n - a Z z b 0=1 k=2 n n-1 + z z G 0=2 k=l 1=2 k=l C,] ' ,k J ' _1 > k J> k U J-l>k+l U j-l,k+l U J,k' n n-1 2 + a Z Z c . f . u. j=2 k=l J ' k J-1 ' k J> k+1 n-1 n-1 2 + a Z Z b . , e . . _ u. _ , j=l k=l °' J' 3 *" 1 3 +1 * k From the identities — ■ P P p - u. , , u. , - u . . u. , _ = lu. , _ - u. , " - u. , _ - u. ... 0,k+l j,k j,k j,k+l ' j,k+l j,k' o,k+l j,k' _ pop u j+l,k U j,k " U j,k j+l,k " ' U j+l,k " U j,k' u j+l,k " U j,k u . n . _ u . . + u . , u . , , _=- |u . _ . , - U . , | " + U . .. , - + u . , j-l,k+l j,k j,k j-l,k+l ' j-l,k+l j,k' j-l,k+l j,k we have, IX 11 — J_ , < Bu,u > = a £ £ =. )k fj. 1>k l» d>k+1 - » Jfk |' n n-1 = a Z Z j=2 k=l n n-1 - a Z Z 0=2 k=l 2 ) Z Z c. . f . _ , (u. . ,. + u. . i=2 k=l 3 > 0-l>k v 0>k+l 0>k n-1 n 2 + a Z £ b. . e. . . \vl - u. I j=l k=2 J > St*-' 1 0+1, k 0>k' 25 but n_1 n - a Z Z b. . e. . _(u. . , + uT , ) 1=1 k=2 J ' k J^" 1 J +1 > k J; k - =1 J,k j-l,k' o-l, k+l j, 1 n n-1 z ; 0=2 fc n n-1 z : 0=2 fc n n-1 Z Z •_2 j- = i Jjk o"-l^k o^k+1 n-1 n 2 Z Z b. . e . u. . , , 0=1 k=2 J ' 3>k-l J +1 ; k / =1 C o",k j-l,k^ u j-l,k+l " U o, + a Z Z e. , f . , , u. n n-1 2 n-1 n 2 Z Ec.-f.-.u. -.,.= £ Zb.-e.-_u... 0=2 k=l J ' <] " ' J-!? k+1 ,- =1 k=2 J; k Jjk" 1 J> k The formula to be established now follows. The quadratic form of B consists of four positive sums and one negative sum. For the analysis that follows it is convenient to group these sums and write < Bu, u > = < Cu,u > + < Du, u > + < Eu,u > . where n n-1 (%1) < Cu,u > = - Z £ =. )k fj. 1)k luj. 1)k+1 - u. )k | , 26 (5-2) and (5.3) < Dtt,» > . a £ £ b. 
;k e^Ju^ - » 3>k l 2 n-1 n 2 < Eu,u > = (1 - a) Z E b e u n n-1 ■J + (1 " ^ 1=2 k=l ^ k fj '" 1 ' k ^> k ' Observe that < Cu, u > < 0; < Du, ti > > and < Eu, u > > 0. The following lemma of Dupont, Kendall and Rachford [3>^1 will be used in the proof of Theorem 2. Lemma j? ; n Let c. be nonnegative for i - 1,2, . ,.,n. Let E c. be positive. i=l X Let e and a., i = 1,2, ...,n be complex. Then n -1 -1 2 n 2 E c. E c.c.la. - a.l'" < E c. la. - el . L 1=1 J 1< ig - 1 + k where L is a positive constant. Proof : First let us find bounds for < Cu, u > in terms of < Au, u > and < Du,u >. We have n n-1 2 < Cu,u > . - £ £ = j)k f 3 . ljk l^. 1>k+1 - ^ Change the order of summation and use (3-5) to obtain k (5-10 n-1 n-1 d. . e. , d. , f. , o - _ Z V ,-) ? k j,k 3 ,k 3 ,k | _ ,2 Define p = L- , j,k = l, 2, ..., n, By Lemma 1 it follows that g . , > 1 • 28 If 3 = min , then it follows that (5-5) V 2P( -Vj.* _ W- Now, replace d. , in (5-^) by its lower bound (5-5) • By Lemma 5, 2 _ n-1 n-1 d. . e. , d. . f . , < Cu u > > i Z Z j ; k j,k j,k j,k I n-1 n-1 2 > ^- Z Z d. , e. . |u.., , - u. . I P j =1 k=1 J> k J^ k J +1 > k J> k n-1 n-1 + ■£ Z Z d. . f . , |u. , . - u. . | P j = l k=l 0,k J ' J> k+1 J* k ■ I £ £ ( W " a V e j,k-i> 'VIA " u J,kl 2 Let n n < Ku.u > = Z Z (E. , +B. n +D. n + D. _ _ +B. n , )u . , 0=1 k-1 J ' k J ' °> k • J+1,k J> k+1 J> k n-1 2 n-1 2 -Ed... |u. n -u. I - Z b , ,. lu , ,, ■ u , 0+1, n 1 j+l,n j,n' n,k+l' n,k+l n,k' J— .1 K— 1 29 n-1 + a Z b . e. , |u. , - u. | 2 j =1 J>n j, n-1 1 j+i,n j,n' n-1 Z k=l + a Z c . f _ . lu , _ - u , J 2 . n,k n-l,k' n, k+1 n,k' It follows from (5.2) and the definition of < Au,u > that > < Cu,u > > -g-(< Au,u > + < Du,u > - < Ku,u >) For the upper bound of —r-. =4 we have, <(A + B)u,u > ' M _ < Au,u > 1 M ~ maX <(A + B)u,u > ~ ~ <(A + C + D + E)u,u > ' mm — t — l < Au,u > ^___^ 1 ~ min r <(A + D)u ' U > (1 - l) + - < Ku ? U > + <-®h±> ] ' < Au,u > P P < Au,u > < Au,u > Since A, D, K and E are nonnegative, we have M < sr . 
For the lower bound, m, we have < Au, u > 1 m = mm — r- rr\ = Tk ^ ^ ^ <(A + B)u,u > <(A + C + D + E)u,u > < Au,u > 1 n <(C + D + E)u,u > ' 1 + max — — 2 - < Au,u > From the definition of < Cu, u > it follows that max < Cu,u > = 0. For the remaining quadratic forms define 30 <(D + E)u,u> < Au, u > - 1' where k is a positive constant. The result is then 1 m > 1 + k i Theorem 3 ' Let j.k -e . , - f . , ' j,k = 1, 2, .. .,n, and let P = min P . , . a,* a ' k Then for t < 2(1 - ^), the iteration (4.1) converges. Proof : We only have to show that for x < 2(1 - — ), the conditions of Lemma 2 are satisfied. From Theorem 2, we have, 1 M < _. ; 1 - - 3 therefore, M - and the result follows. 2 > i/d - i) IT " £(1 " I 5 ' 31 6. ACCELERATION OF THE CONVERGENCE BY SEQUENCE OF PARAMETERS A dramatic reduction of the error is possible through the use of a sequence of parameters selected according to classical Chebyshev analysis, The exposition given here is standard and may be found in [6] or [k] . Consider the iteration (6.1) (A + B)u = (A + B)u - t ± (Au ± - q). i _1 For e. = u - u. , where Au = q, we multiply (6.1) by A 2 (A + B) We have 1 -1 -1 A (6.2) A 2 e. = (I - t. A 2 (A + B) A 2 e. . i+l i l -1 -1 i 2 Note that I - t. A 2 (A + B) A 2 is Hermitian thus its L - norm is its i spectral radius . The L - norm of (6.2) is majorized by I (6.3) M t = max | n (1 - t. x) \, m< x 0. If we take x. to be given by (6.4) then the number of iterations required to reduce the norm of the error by a factor e is o - c o sn ~ (e" ) ^ . -1/M + nu cosh (— ) V M - m / Proof : The proof is well known and can be found in [3] or [6] 33 7- GENERALIZATION TO MORE THAN TWO DIMENSIONS In this section we develop our procedure for more than two dimensions. First we give in detail the three dimensional case, including a factorization algorithm and a statement of a lemma similar to Lemma 1 for bounds on the elements of L and U. The rest of this section is devoted to the m dimensional case. 
Consider the Dirichlet problem

(7.1)    -∂/∂x_1 (a_1(x) ∂u/∂x_1) - ∂/∂x_2 (a_2(x) ∂u/∂x_2) - ∂/∂x_3 (a_3(x) ∂u/∂x_3) = Q(x),    x ∈ D,
         u(x) = 0,    x ∈ ∂D,

where x = (x_1, x_2, x_3), D is the interior of a compact region with boundary ∂D, a_1(x), a_2(x), a_3(x) are strictly positive functions, and where Q(x), a_1(x), a_2(x) and a_3(x) are sufficiently smooth.

We cover the region D with a rectangular grid system, replace the differential operator in (7.1) with a difference operator, and replace u(x) with a vector whose components correspond to the grid points of the system. For illustration of the method we assume that D is a unit cube, D = {(x_1, x_2, x_3) : 0 < x_1, x_2, x_3 < 1}. Let h = 1/(n+1), n a positive integer. Let D_h be the grid system over D, with uniform spacing h, defined by D_h = {(ih, jh, kh) : 1 ≤ i, j, k ≤ n}. Denote the grid point (ih, jh, kh) by (i, j, k). We order the points of D_h with increasing values of i, then j, and then k. Let u(x) be a vector whose components are ordered according to the order of D_h. Denote the components of u(x) by u_{i,j,k}, (i, j, k) a grid point. Let the matrix M defined by

(7.2)    (Mu)_{i,j,k} = A_{i,j,k} u_{i,j,k-1} + B_{i,j,k} u_{i,j-1,k} + D_{i,j,k} u_{i-1,j,k} + E_{i,j,k} u_{i,j,k} + D_{i+1,j,k} u_{i+1,j,k} + B_{i,j+1,k} u_{i,j+1,k} + A_{i,j,k+1} u_{i,j,k+1}

be a discrete version of the boundary value problem in (7.1). It is a self-adjoint positive definite square matrix with 7 non-zero diagonals. Again, although M is sparse, it does not yield a sparse LU-decomposition. Therefore, we replace M with a matrix of the form M + N, where M + N can be written as the product of sparse lower and upper triangular matrices, M + N = LU, and use this in an iterative procedure to compute the solution of Mu = q. The L and U factors are constructed each with four non-zero diagonals in the same positions as the non-zero diagonals of M. That is,

(7.3)    (Lu)_{i,j,k} = a_{i,j,k} u_{i,j,k-1} + b_{i,j,k} u_{i,j-1,k} + c_{i,j,k} u_{i-1,j,k} + d_{i,j,k} u_{i,j,k}

and

         (Uu)_{i,j,k} = u_{i,j,k} + e_{i,j,k} u_{i+1,j,k} + f_{i,j,k} u_{i,j+1,k} + g_{i,j,k} u_{i,j,k+1}.

The product is given by

(7.4)    [(M + N)u]_{i,j,k} = a_{i,j,k} u_{i,j,k-1} + a_{i,j,k} e_{i,j,k-1} u_{i+1,j,k-1} + a_{i,j,k} f_{i,j,k-1} u_{i,j+1,k-1}
+ b_{i,j,k} u_{i,j-1,k} + b_{i,j,k} e_{i,j-1,k} u_{i+1,j-1,k} + b_{i,j,k} g_{i,j-1,k} u_{i,j-1,k+1}
+ c_{i,j,k} u_{i-1,j,k} + c_{i,j,k} f_{i-1,j,k} u_{i-1,j+1,k} + c_{i,j,k} g_{i-1,j,k} u_{i-1,j,k+1}
+ (d_{i,j,k} + a_{i,j,k} g_{i,j,k-1} + b_{i,j,k} f_{i,j-1,k} + c_{i,j,k} e_{i-1,j,k}) u_{i,j,k}
+ d_{i,j,k} e_{i,j,k} u_{i+1,j,k} + d_{i,j,k} f_{i,j,k} u_{i,j+1,k} + d_{i,j,k} g_{i,j,k} u_{i,j,k+1}.

Note that M + N has 13 non-zero diagonals.

Let 0 ≤ α < 1. The following is an algorithm for computing the elements of L and U:

(7.5)    a_{i,j,k} = A_{i,j,k} - α c_{i,j,k-1} g_{i-1,j,k-1} - α b_{i,j,k-1} g_{i,j-1,k-1},
         b_{i,j,k} = B_{i,j,k} - α c_{i,j-1,k} f_{i-1,j-1,k} - α a_{i,j-1,k} f_{i,j-1,k-1},
         c_{i,j,k} = D_{i,j,k} - α b_{i-1,j,k} e_{i-1,j-1,k} - α a_{i-1,j,k} e_{i-1,j,k-1},
         d_{i,j,k} + c_{i,j,k} e_{i-1,j,k} + b_{i,j,k} f_{i,j-1,k} + a_{i,j,k} g_{i,j,k-1}
             = E_{i,j,k} + (A_{i,j,k} - a_{i,j,k}) + (B_{i,j,k} - b_{i,j,k}) + (D_{i,j,k} - c_{i,j,k}),
         d_{i,j,k} e_{i,j,k} = D_{i+1,j,k} - α b_{i,j,k} e_{i,j-1,k} - α a_{i,j,k} e_{i,j,k-1},
         d_{i,j,k} f_{i,j,k} = B_{i,j+1,k} - α c_{i,j,k} f_{i-1,j,k} - α a_{i,j,k} f_{i,j,k-1},
         d_{i,j,k} g_{i,j,k} = A_{i,j,k+1} - α c_{i,j,k} g_{i-1,j,k} - α b_{i,j,k} g_{i,j-1,k},
         i, j, k = 1, 2, ..., n.

Observe that the left sides of (7.5) form the elements of the (i, j, k) row of LU. In the same way as for two dimensions, we may prove the symmetry of N.
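The symmetry just asserted can be checked numerically. The sketch below runs the recurrences (7.5) for the seven-point Dirichlet Laplacian on a tiny cube; the grid size, α = 0.9, and the array layout are choices made here, not prescriptions from the text.

```python
# Sketch of the three-dimensional recurrences (7.5) on the model problem.
# Arrays are (n+2)^3 so that out-of-range neighbours are simply zero.
n, alpha = 3, 0.9
N = n + 2

def cube():
    return [[[0.0] * N for _ in range(N)] for _ in range(N)]

def off(has_neighbour):
    # Off-diagonal coefficient: -1 toward an interior neighbour, else 0.
    return -1.0 if has_neighbour else 0.0

a, b, c, d, e, f, g = (cube() for _ in range(7))
for k in range(1, n + 1):
    for j in range(1, n + 1):
        for i in range(1, n + 1):
            A_, B_, D_ = off(k > 1), off(j > 1), off(i > 1)   # lower coefficients
            An, Bn, Dn = off(k < n), off(j < n), off(i < n)   # upper coefficients
            a[i][j][k] = A_ - alpha * (c[i][j][k-1] * g[i-1][j][k-1]
                                       + b[i][j][k-1] * g[i][j-1][k-1])
            b[i][j][k] = B_ - alpha * (c[i][j-1][k] * f[i-1][j-1][k]
                                       + a[i][j-1][k] * f[i][j-1][k-1])
            c[i][j][k] = D_ - alpha * (b[i-1][j][k] * e[i-1][j-1][k]
                                       + a[i-1][j][k] * e[i-1][j][k-1])
            d[i][j][k] = (6.0 + (A_ - a[i][j][k]) + (B_ - b[i][j][k])
                          + (D_ - c[i][j][k])
                          - c[i][j][k] * e[i-1][j][k]
                          - b[i][j][k] * f[i][j-1][k]
                          - a[i][j][k] * g[i][j][k-1])
            e[i][j][k] = (Dn - alpha * (b[i][j][k] * e[i][j-1][k]
                                        + a[i][j][k] * e[i][j][k-1])) / d[i][j][k]
            f[i][j][k] = (Bn - alpha * (c[i][j][k] * f[i-1][j][k]
                                        + a[i][j][k] * f[i][j][k-1])) / d[i][j][k]
            g[i][j][k] = (An - alpha * (c[i][j][k] * g[i-1][j][k]
                                        + b[i][j][k] * g[i][j-1][k])) / d[i][j][k]
```

On this example d > 0, -1 < e, f, g ≤ 0 and 1 + e + f + g > 0 hold, as Lemma 7 asserts, and the symmetry relations a_{i,j,k} = d_{i,j,k-1} g_{i,j,k-1}, b_{i,j,k} = d_{i,j-1,k} f_{i,j-1,k}, c_{i,j,k} = d_{i-1,j,k} e_{i-1,j,k} of (7.6)-(7.8) hold to rounding error.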
By adjusting subscripts in (7.5) it can be seen that it is not necessary to compute a_{i,j,k}, b_{i,j,k} and c_{i,j,k}, since

(7.6)    a_{i,j,k} = d_{i,j,k-1} g_{i,j,k-1},

(7.7)    b_{i,j,k} = d_{i,j-1,k} f_{i,j-1,k},

and

(7.8)    c_{i,j,k} = d_{i-1,j,k} e_{i-1,j,k}.

Changing the subscripts in (7.8) gives

(7.9)    c_{i+1,j-1,k} = d_{i,j-1,k} e_{i,j-1,k}.

Multiplying the right hand side of (7.9) by the left side of (7.7) gives

b_{i,j,k} (d_{i,j-1,k} e_{i,j-1,k}) = c_{i+1,j-1,k} (d_{i,j-1,k} f_{i,j-1,k}),

or

(7.10)    b_{i,j,k} e_{i,j-1,k} = c_{i+1,j-1,k} f_{i,j-1,k}.

From (7.7), we have

b_{i,j+1,k-1} = d_{i,j,k-1} f_{i,j,k-1}.

Therefore

a_{i,j,k} (d_{i,j,k-1} f_{i,j,k-1}) = b_{i,j+1,k-1} (d_{i,j,k-1} g_{i,j,k-1}),

or

(7.11)    a_{i,j,k} f_{i,j,k-1} = b_{i,j+1,k-1} g_{i,j,k-1}.

Finally,

c_{i+1,j,k-1} = d_{i,j,k-1} e_{i,j,k-1}.

Multiplying by (7.6),

(7.12)    c_{i+1,j,k-1} g_{i,j,k-1} = a_{i,j,k} e_{i,j,k-1}.

Equations (7.6)-(7.8), along with (7.10)-(7.12), establish the symmetry of M + N as defined in (7.4).

We now state the analog of Lemma 1. The proof is a special case of Lemma 8.

Lemma 7: For the Dirichlet boundary conditions with 0 ≤ α < 1, the quantities defined in (7.5) exist and satisfy the following relations:

(a) a_{i,j,k} ≤ A_{i,j,k} ≤ 0,
(b) b_{i,j,k} ≤ B_{i,j,k} ≤ 0,
(c) c_{i,j,k} ≤ D_{i,j,k} ≤ 0,
(d) d_{i,j,k} > 0,
(e) -1 < e_{i,j,k} ≤ 0,
(f) -1 < f_{i,j,k} ≤ 0,
(g) -1 < g_{i,j,k} ≤ 0,
(h) 1 + e_{i,j,k} + f_{i,j,k} + g_{i,j,k} > 0.
> 0 for i,j,k = 1,2,...,n, and the result follows.

Theorem 4: Let

ρ_{i,j,k} = min { e_{i,j,k}^{-1} + f_{i,j,k}^{-1}, e_{i,j,k}^{-1} + g_{i,j,k}^{-1}, f_{i,j,k}^{-1} + g_{i,j,k}^{-1} }

and let

ρ̄ = min_{i,j,k} ρ_{i,j,k}.

Then for τ < 2(1 - 2/ρ̄), the iteration (4.1) converges.

Proof: The proof is a special case of Theorem 5, below.

For the m dimensional case we have

(7.13)
- Σ_{l=1}^m ∂/∂x_l ( a_l(x) ∂u/∂x_l ) + Q(x) u(x) = q(x),  x ∈ D,

u(x) = 0,  x ∈ ∂D,

where x = (x_1, x_2, ..., x_m), D is the interior of a compact region with boundary ∂D, the a_l(x) are strictly positive functions, and Q(x), a_l(x) are sufficiently smooth. We assume

D = { (x_1, x_2, ..., x_m) : 0 < x_1, x_2, ..., x_m < 1 }.

Let h = 1/(n+1), n a positive integer. Let D_h be the grid system over D, with uniform spacing h, defined by

D_h = { (i_1 h, i_2 h, ..., i_m h) : 1 ≤ i_1, i_2, ..., i_m ≤ n }.

We shall adopt the notation of Dupont [3] with some minor changes. Let i = (i_1, i_2, ..., i_m) be a multi-index, i.e., an m-tuple with integer components. Denote the grid point (i_1 h, i_2 h, ..., i_m h) of D_h by i. Let e_k denote the unit vector in the direction of the positive k-th axis. Order the grid points of D_h in such a way that i precedes i + e_k, and i + e_k precedes i + e_{k'}, if k < k', 1 ≤ k, k' ≤ m.

In analogy with (7.5), the elements of L and U are computed from

(7.14)

L_{i,i-e_k} = A_{i,i-e_k} - α Σ_{j≠k} L_{i-e_k,i-e_k-e_j} U_{i-e_k-e_j,i-e_j},

L_{i,i} + Σ_{k=1}^m L_{i,i-e_k} U_{i-e_k,i} = A_{i,i} + Σ_{k=1}^m (A_{i,i-e_k} - L_{i,i-e_k}),

L_{i,i} U_{i,i+e_k} = A_{i,i+e_k} - α Σ_{j≠k} L_{i,i-e_j} U_{i-e_j,i+e_k-e_j},

k = 1,2,...,m.

Lemma 8: For the Dirichlet boundary conditions with 0 < α ≤ 1, the quantities defined in (7.14) exist and satisfy the following relations:

(a) L_{i,i-e_k} ≤ A_{i,i-e_k} < 0,
(b) L_{i,i} > 0,
(c) -1 < U_{i,i+e_k} ≤ 0,
(d) 1 + Σ_{k=1}^m U_{i,i+e_k} > 0.

Proof: The proofs of (a) and (b) are omitted. To prove (c) and (d), observe that they are valid for i = (1,1,...,1). Now assume that relations (a)-(d) hold for all indices up to i. We have

L_{i,i}(1 + U_{i,i+e_k}) = A_{i,i} + A_{i,i+e_k} + Σ_{j=1}^m A_{i,i-e_j} - Σ_{j=1}^m L_{i,i-e_j} - Σ_{j=1}^m L_{i,i-e_j} U_{i-e_j,i}
    - α Σ_{j=1}^{k-1} L_{i,i-e_j} U_{i-e_j,i+e_k-e_j} - α Σ_{j=k+1}^m L_{i,i-e_j} U_{i-e_j,i+e_k-e_j}

= A_{i,i} + A_{i,i+e_k} + Σ_{j=1}^m A_{i,i-e_j} - L_{i,i-e_k} (1 + U_{i-e_k,i})
    - Σ_{j=1}^{k-1} L_{i,i-e_j} (1 + U_{i-e_j,i} + α U_{i-e_j,i+e_k-e_j})
    - Σ_{j=k+1}^m L_{i,i-e_j} (1 + U_{i-e_j,i} + α U_{i-e_j,i+e_k-e_j}) > 0,

k = 1,2,...,m, since the A-terms have a non-negative sum by diagonal dominance, each L_{i,i-e_j} is negative by (a), and each parenthesized factor is positive by the induction hypothesis. This proves (c), by (d).
To prove the remaining assertion, we have from (7.14)

L_{i,i}(1 + Σ_{k=1}^m U_{i,i+e_k}) = A_{i,i} + Σ_{k=1}^m (A_{i,i-e_k} - L_{i,i-e_k} - L_{i,i-e_k} U_{i-e_k,i}) + Σ_{k=1}^m A_{i,i+e_k}
    - α Σ_{k=1}^m Σ_{j=1}^{k-1} L_{i,i-e_j} U_{i-e_j,i+e_k-e_j} - α Σ_{k=1}^m Σ_{j=k+1}^m L_{i,i-e_j} U_{i-e_j,i+e_k-e_j}

= A_{i,i} + Σ_{k=1}^m (A_{i,i-e_k} + A_{i,i+e_k})
    - Σ_{k=1}^m L_{i,i-e_k} (1 + U_{i-e_k,i} + α Σ_{j=1}^{k-1} U_{i-e_k,i+e_j-e_k} + α Σ_{j=k+1}^m U_{i-e_k,i+e_j-e_k})

≥ A_{i,i} + Σ_{k=1}^m (A_{i,i-e_k} + A_{i,i+e_k}) - Σ_{k=1}^m L_{i,i-e_k} (1 + α Σ_{j=1}^m U_{i-e_k,i+e_j-e_k}) > 0,

which proves (d).

We now give a proof of convergence for the m dimensional case.

Theorem 5: Let

ρ_i = min { U_{i,i+e_j}^{-1} + U_{i,i+e_k}^{-1} : j,k = 1,2,...,m, j ≠ k }

and let

ρ̄ = min_i { ρ_i }.

Then for τ < 2(1 - (m-1)/ρ̄), the iteration (4.1) converges.

Proof: The following formulas are a generalization of the result of Lemma 4. The proof is omitted.

<Au,u> = Σ_i ( A_{i,i} + Σ_{k=1}^m (A_{i,i-e_k} + A_{i,i+e_k}) ) u_i^2 - Σ_i Σ_{k=1}^m A_{i,i+e_k} |u_{i+e_k} - u_i|^2

and

<Bu,u> = Σ_i Σ_{k=1}^m (A_{i,i+e_k} - L_{i,i} U_{i,i+e_k}) |u_{i+e_k} - u_i|^2
    + Σ_i ((1-α)/α) Σ_{k=1}^m (A_{i,i+e_k} - L_{i,i} U_{i,i+e_k}) u_i^2
    - Σ_i Σ_{k=1}^{m-1} Σ_{j=k+1}^m L_{i,i} U_{i,i+e_k} U_{i,i+e_j} |u_{i+e_k} - u_{i+e_j}|^2.

Define

<Cu,u> = - Σ_i Σ_{k=1}^{m-1} Σ_{j=k+1}^m L_{i,i} U_{i,i+e_k} U_{i,i+e_j} |u_{i+e_k} - u_{i+e_j}|^2,

<Du,u> = Σ_i Σ_{k=1}^m (A_{i,i+e_k} - L_{i,i} U_{i,i+e_k}) |u_{i+e_k} - u_i|^2,

and

<Eu,u> = Σ_i ((1-α)/α) Σ_{k=1}^m (A_{i,i+e_k} - L_{i,i} U_{i,i+e_k}) u_i^2.

Then we get

<Cu,u> = - Σ_i Σ_{k=1}^{m-1} Σ_{j=k+1}^m L_{i,i} U_{i,i+e_k} U_{i,i+e_j} |u_{i+e_k} - u_{i+e_j}|^2

    ≥ Σ_i Σ_{k=1}^{m-1} Σ_{j=k+1}^m ( L_{i,i} U_{i,i+e_k} L_{i,i} U_{i,i+e_j} / (L_{i,i} U_{i,i+e_k} + L_{i,i} U_{i,i+e_j}) ) |u_{i+e_k} - u_{i+e_j}|^2.
Define

ρ_i = min { U_{i,i+e_j}^{-1} + U_{i,i+e_k}^{-1} : j,k = 1,2,...,m, j ≠ k },

ρ̄ = min_i { ρ_i },

and then use Lemma 5. We get

<Cu,u> ≥ Σ_i Σ_{k=1}^{m-1} Σ_{j=k+1}^m ( L_{i,i} U_{i,i+e_k} L_{i,i} U_{i,i+e_j} / (L_{i,i} U_{i,i+e_k} + L_{i,i} U_{i,i+e_j}) ) ( |u_{i+e_k} - u_i|^2 + |u_{i+e_j} - u_i|^2 )

    ≥ ((m-1)/ρ̄) Σ_i Σ_{k=1}^m L_{i,i} U_{i,i+e_k} |u_{i+e_k} - u_i|^2.

It can be seen that

<Cu,u> ≥ ((m-1)/ρ̄) ( <Au,u> + <Du,u> - <Ku,u> ),

where <Ku,u> is a residual non-negative definite quadratic form. The analysis of the two dimensional case can now be applied, and we get

μ ≤ (m-1)/ρ̄.

Therefore for τ < 2(1 - (m-1)/ρ̄), the iteration (4.1) converges.

8. COMPUTATIONAL NOTES

We give below a version of the iteration

(A + B)u_{n+1} = (A + B)u_n - τ(Au_n - q)

suitable for coding, and give a count of the number of arithmetic operations required for each step. Stone suggests, [9], letting

R_n = q - Au_n

and

δ_{n+1} = u_{n+1} - u_n.

Solve

L t = τR_n

and

U δ_{n+1} = t.

Set

u_{n+1} = u_n + δ_{n+1}.

Note that for τ = 1, if the initial approximation u_0 is u_0 = 0, then u_1 is the solution of (A + B)u = q. It is plausible that u_1 is a good approximation to the solution of Au = q.

If A is of order N, the factorization of A into the product of L and U, for the m dimensional case, requires O(2m(2m-1)N) arithmetic operations. For the iterative procedure defined above:

(a) To solve L t = τR_n requires O((m+1)N) arithmetic operations;
(b) To solve U δ_{n+1} = t requires O(mN) arithmetic operations;
(c) To determine R_n requires O((2m+1)N) arithmetic operations.

Therefore there are O((4m+2)N) arithmetic operations for each iteration.

LIST OF REFERENCES

[1] A. Bracha, "An Implicit Method for the Solution of Elliptic Partial Differential Equations," Department of Computer Science, University of Illinois at Urbana-Champaign, Report No. 392, April 1970.

[2] N. I. Buleev, "A Numerical Method for the Solution of Two-Dimensional and Three-Dimensional Equations of Diffusion," Math.
Sbornik, Vol. 51, No. 2, 1960.

[3] T. Dupont, "A Factorization Procedure for the Solution of Elliptic Difference Equations," SIAM Journal on Numerical Analysis, Vol. 5, No. 4, December 1968.

[4] T. Dupont, R. P. Kendall, and H. H. Rachford, Jr., "An Approximate Factorization Procedure for Solving Self-adjoint Elliptic Difference Equations," SIAM Journal on Numerical Analysis, Vol. 5, No. 3, 1968.

[5] E. G. D'Yakonov, "An Iteration Method for Solving Systems of Finite Difference Equations," Dokl. Akad. Nauk SSSR 138, 1961.

[6] G. E. Forsythe and W. R. Wasow, Finite-Difference Methods for Partial Differential Equations, John Wiley & Sons, New York, 1960.

[7] T. A. Oliphant, "An Extrapolation Procedure for Solving Linear Systems," Quarterly of Applied Math., Vol. 20, No. 3, 1962.

[8] H. L. Stone, Private Communication, April 1969.

[9] H. L. Stone, "Iterative Solution of Implicit Approximations of Multidimensional Partial Differential Equations," SIAM Journal on Numerical Analysis, Vol. 5, No. 3, September 1968.

[10] R. S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, New Jersey, 1962.

[11] E. L. Wachspress, Iterative Solution of Elliptic Systems, Prentice-Hall, Englewood Cliffs, New Jersey, 1966.

[12] B. L. Buzbee, G. H. Golub, and C. W. Nielson, "On Direct Methods for Solving Poisson's Equations," SIAM Journal on Numerical Analysis, Vol. 7, No. 4, December 1970.

[13] R. W. Hockney, "A Fast Direct Solution of Poisson's Equation Using Fourier Analysis," Journal of the Association for Computing Machinery, Vol. 12, No. 1, January 1965.

VITA

The author, Amnon Bracha, was born in Rishon-Lezion, Israel, on August 19, 1942. He received his Bachelor of Science degree in Mathematics from the Technion, Israel Institute of Technology, in June 1967, and his Master of Science degree in Computer Science in June 1970 from the University of Illinois.
Since September 1968 he has been a research assistant in the Department of Computer Science of the University of Illinois at Urbana-Champaign. He is a member of Sigma Xi and the Association for Computing Machinery.