UIUCDCS-R-76-820

Error Estimation and Iterative Improvement for the Numerical Solution of Operator Equations

by Bengt Lindberg

July 1976

Department of Computer Science
University of Illinois at Urbana-Champaign
Urbana, IL 61801

Supported in part by the United States Air Force Office of Scientific Research under contract AFOSR 75-2854.

Acknowledgment

This work was supported by the Air Force Office of Scientific Research under grant AFOSR 75-2854. The author wishes to thank Professor Robert Skeel, who carefully read the original manuscript and made valuable suggestions for its improvement; Professor C. W. Gear, who arranged my visit to the University of Illinois and suggested this line of research; Mr. John van Rosendale, who programmed parts of the numerical experiments; and Ms. Pamela Farr, who typed the manuscript.

Abstract

A method for estimation of the global discretization error of solutions of operator equations is presented. Further, an algorithm for iterative improvement of the approximate solution of such problems is given. The theoretical foundation for the algorithms is given as a number of theorems. Several classes of operator equations are examined, and numerical results for both the error estimation algorithm and the algorithm for iterative improvement are given for some classes of ordinary and partial differential equations and integral equations.

Table of Contents

1. Introduction ........ 1
2. General Theory ........ 5
   2.1 Preliminaries ........ 5
   2.2 Basic Theorems ........ 9
3. Approximation of Linear Functionals ........ 29
4. Applications ........ 32
   4.1 Initial Value Problems for Ordinary Differential Equations ........ 33
   4.2 Two-Point Boundary Value Problems for Ordinary Differential Equations ........ 44
   4.3 Two-Dimensional Elliptic Boundary Value Problems ........ 50
        4.3.1 Problems Nonlinear in u Only ........ 51
        4.3.2 The Minimal Surface Equation ........ 59
   4.4 Parabolic Partial Differential Equations ........ 62
        4.4.1 The Method of Lines with Euler's Method ........ 63
        4.4.2 The Method of Lines with the Backward Euler Method ........ 66
   4.5 Hyperbolic Partial Differential Equations ........ 70
   4.6 Integral Equations ........ 77
5. Concluding Remarks ........ 82
References ........ 85

1. Introduction

A simple technique for estimating the discretization error, or improving the accuracy, of the numerical solution of a functional equation F(y) = 0 is discussed. The error estimate is obtained as the difference between the solution of the discretized problem φ_h(η) = 0 and the solution of a slightly perturbed discrete problem φ_h(η^E) + φ_h^E(η) = 0; the improved solutions are obtained from a sequence of perturbed problems.
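As a concrete illustration of this two-solve recipe, the sketch below (Python with NumPy; it is not part of the original report, and all function names are illustrative) applies it to the scalar initial value problem y' = -y, y(0) = 1: Euler's method plays the role of the discretized problem, and the residual of the Euler solution in the trapezoidal rule serves as the perturbation, a choice that anticipates one of the operators examined in section 4.1.

```python
# A minimal sketch (not from the report) of the two-solve recipe described above,
# using forward Euler as phi_h and a trapezoidal-rule residual as phi_h^E
# for the scalar test problem y' = -y, y(0) = 1 on [0, 1].
import numpy as np

def f(y):
    return -y

def solve_euler(d, h, N, y0):
    # Solve phi_h(eta) + d = 0, i.e. Euler with an extra defect term d per equation.
    eta = np.empty(N + 1)
    eta[0] = y0 - d[0]
    for i in range(N):
        # (eta[i+1] - eta[i])/h - f(eta[i]) + d[i+1] = 0
        eta[i + 1] = eta[i] + h * (f(eta[i]) - d[i + 1])
    return eta

N = 50
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)

# Unperturbed solution: phi_h(eta) = 0.
eta = solve_euler(np.zeros(N + 1), h, N, 1.0)

# Perturbation phi_h^E(eta): residual of eta in a second-order (trapezoidal) scheme.
d = np.zeros(N + 1)
d[1:] = (eta[1:] - eta[:-1]) / h - 0.5 * (f(eta[:-1]) + f(eta[1:]))

# Perturbed solution: phi_h(eta_E) + phi_h^E(eta) = 0.
eta_E = solve_euler(d, h, N, 1.0)

estimate = eta - eta_E            # approximates the global error of eta
actual = eta - np.exp(-x)
print(np.max(np.abs(actual)), np.max(np.abs(estimate)))
```

The estimate costs one additional pass of the same recursion; no new evaluations of f beyond those already available are needed for this particular perturbation.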
The perturbation operator φ_h^E is a discrete approximation to the operator F that is of higher order of accuracy than φ_h (i.e., if φ_h is consistent of order p with F, then φ_h^E is consistent of order q > p with F; see section 2 for more precise statements). We can view -φ_h^E(η) as an approximation to the local discretization error of the operator φ_h. The operator φ_h^E corresponds to a more accurate discretization method than φ_h. The basic requirement on φ_h^E is the accuracy; the operator does not even need to be stable, as it is applied only to the known quantity η to produce a constant to be added to φ_h. The perturbations needed to iteratively improve the solution are obtained from operators that are increasingly accurate approximations to F, and the negatives of the perturbations can again be viewed as increasingly accurate approximations to the local discretization error of the operator φ_h. If the order of accuracy of the operator φ_h is p, we can gain p orders of accuracy per iteration if the perturbations are chosen judiciously.

One simple and direct way of constructing the perturbations is the following (other ways will be examined in section 4). For the error estimation algorithm, compute the residual (on a certain set of discrete points) which is obtained when the numerical solution η of the discrete problem φ_h(η) = 0 (a solution defined only on a set of discrete points) is substituted for the unknown function y in the functional equation. Any functionals (e.g. derivatives) in F(y) = 0 that had to be approximated to get the discrete problem φ_h(η) = 0 are approximated by linear combinations of the components of the vector η (the solution of φ_h(η) = 0) when the residual is computed. To improve the solution the same technique is used, but now the residuals are computed as the sum of the old residuals and a new residual computed from the most recent approximation to the solution. Further, increasingly accurate formulas are used to calculate the necessary approximations to the functionals that appear in F(y).

This technique is useful if there exists an expansion of the global discretization error (in the discretization parameter h) of the discretized problem φ_h(η) = 0 to sufficiently high order and with smooth error terms. If sufficient smoothness conditions are met (e.g. in those cases where iterated deferred corrections can be applied), several iterative improvements can be made; with each iteration the order of accuracy of the new approximate solution is increased, in the same way as when iterated deferred corrections are used. For the cases where one cannot perform iterative improvement, due to the existence of significant non-smooth terms in the global error expansion, one may still, in many cases, get realistic error estimates with the technique described.

Related techniques are difference correction, Fox [1947], Volkov [1957], and iterated deferred corrections according to Pereyra. Pereyra has worked extensively on boundary value problems for ordinary differential equations, Pereyra [1967b, 1973], and has produced very efficient computer codes, Lentini, Pereyra [1974, 1975a, 1975b], for such problems. He has also treated linear and mildly nonlinear elliptic boundary value problems, Pereyra [1967b], Pereyra [1970], and analyzed general nonlinear functional equations, Pereyra [1967a].
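A schematic version of the improvement loop described above, with the residuals accumulated exactly as stated, is sketched below for the linear model problem y'' = e^x, y(0) = y(1) = 0 (a Python illustration of mine, not taken from the report; the difference weights are generated from small Vandermonde-type systems of the kind discussed in section 3, and all names are illustrative). Each pass measures the residual of the latest iterate with difference formulas of increasing order, adds it to the accumulated right-hand side, and re-solves the original second-order discretization.

```python
# Hedged sketch (not the report's code) of the improvement loop described above:
# each pass adds to the accumulated right-hand side the residual of the latest
# iterate, measured with difference formulas of increasing order, and re-solves
# the ORIGINAL low-order discretization.  Model problem: y'' = e^x, y(0)=y(1)=0.
import numpy as np
from math import factorial

def fd_weights(xs, x0, der):
    # Weights w with sum_j w_j z(xs_j) ~ z^(der)(x0), from a local Vandermonde-type system.
    n = len(xs)
    M = np.array([[(xj - x0) ** k / factorial(k) for xj in xs] for k in range(n)])
    rhs = np.zeros(n)
    rhs[der] = 1.0
    return np.linalg.solve(M, rhs)

def second_derivative_matrix(x, npts):
    # Row i: an npts-point approximation to z''(x_i), stencil shifted near the ends.
    N = len(x) - 1
    D = np.zeros((N + 1, N + 1))
    for i in range(1, N):
        lo = min(max(i - npts // 2, 0), N + 1 - npts)
        D[i, lo:lo + npts] = fd_weights(x[lo:lo + npts], x[i], 2)
    return D

g = lambda x: np.exp(x)
exact = lambda x: np.exp(x) - 1.0 - x * (np.e - 1.0)

N = 20
x = np.linspace(0.0, 1.0, N + 1)
A = second_derivative_matrix(x, 3)        # basic 3-point (second-order) operator phi_h
A[0, 0] = A[N, N] = 1.0                   # boundary rows
b = g(x)
b[0] = b[N] = 0.0

eta = np.linalg.solve(A, b)               # phi_h(eta) = 0
print(0, np.max(np.abs(eta - exact(x))))

correction = np.zeros(N + 1)
for i, npts in enumerate([5, 7], start=1):           # wider, more accurate stencils
    B = second_derivative_matrix(x, npts)
    B[0, 0] = B[N, N] = 1.0
    correction += B @ eta - b                        # accumulate the new residual
    eta = np.linalg.solve(A, b - correction)         # re-solve the original discretization
    print(i, np.max(np.abs(eta - exact(x))))
```

Only the right-hand side changes from pass to pass; the matrix of the basic discretization, and hence its factorization, can be reused throughout.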
The basic computational tools for high order approximations to linear functionals, needed both in deferred corrections and in iterative improvement, can be found in Ballester, Pereyra [1966], Björck, Pereyra [1970], Galimberti, Pereyra [1970].

Whenever iterated deferred corrections can be used, iterative improvement can also easily be used. However, to be able to perform iterated deferred corrections one must know and be able to approximate the terms of the local error expansion for the discretization operator φ_h, while for iterative improvement that is not necessary. In some cases, e.g. for boundary value problems y'' = f(x, y); y(a) = α, y(b) = β, these local error expansions are easily found (by Taylor expansions) and involve no terms that are difficult to approximate. In other cases, e.g. for problems where F(y) is nonlinear in some derivative of y, the expansions are very cumbersome to find and even more cumbersome to approximate (see Pereyra [1970]), which makes iterated deferred corrections, although theoretically feasible, in practice impossible to perform.

Another related technique is iterated defect correction according to Stetter [1974], Frank [1975]. The main difference between the approach of the two latter papers and our approach is that they define a smooth global approximation to the solution of the operator equation F(y) = 0 from the discrete solution η of φ_h(η) = 0 and then compute the perturbation from that global approximation, while we use the discrete solution directly and need to be concerned only about the local properties of the perturbations. In section 3 of Stetter [1974] the author also introduces a formalism, similar to ours, in order to discuss how the leading term of the local discretization error for the numerical solution of ordinary differential equations is usually estimated. He does, however, not use this formalism any further.

The starting point for our investigation was Stetter [1974] and the many papers of Pereyra (1967), ..., (1975), and our primary goal was to find cheap ways to estimate the errors in the numerical solution of partial differential equations. Extensive discussions of the existence of asymptotic expansions of global truncation errors for the numerical solution of functional equations can be found in Stetter [1965], Pereyra [1967a], and Stetter [1973].

2. General theory

2.1 Preliminaries

The notation of this section is greatly influenced by Stetter [1965, 1973] and Pereyra [1967a]. The basis for the results discussed here is the existence of asymptotic error expansions for the global discretization error of the solution to the discretized problem. For a thorough discussion of this matter, see the references above.

We consider functional equations

(2.1)    F(y) = 0

where F: E → E^0 is a generally nonlinear operator from a Banach space E into a Banach space E^0. We will always assume that (2.1) has a unique solution y ∈ E. For the purpose of numerical solution the problem (2.1) is discretized in the following sense. We define families, depending on a real parameter h ∈ H, H = {h = 1/n; n an integer in a subset of {ν: ν ≥ n_0}, with n_0 > 0 fixed}, of Banach spaces E_h, E_h^0 and of bounded linear transformations A_h, A_h^0, which map E, E^0 into E_h, E_h^0, respectively. Then we choose a family of (nonlinear) operators φ_h: E_h → E_h^0 such that for z ∈ E and h ∈ H, or for z = y and h ∈ H,

(2.2)    φ_h(A_h z) = A_h^0{F(z) + Σ_{ν=p}^{M} h^ν f_ν(z)} + O(h^{M+1}),

where f_ν: E → E^0 does not depend upon h.
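A small numerical check, not part of the report, may help make (2.2) concrete: for the forward-difference operator used with Euler's method in section 4.1 the expansion holds with f_ν(z) = z^(ν+1)(x)/(ν+1)!, so subtracting the first correction term h z''(x)/2 from the local error should leave a remainder of order h².

```python
# A small numerical check (not from the report) of an expansion of the form (2.2):
# for the forward difference used with Euler's method in section 4.1 one has
# f_nu(z) = z^(nu+1)(x)/(nu+1)!, so removing the h-term should leave O(h^2).
import numpy as np

z, dz, d2z = np.sin, np.cos, lambda x: -np.sin(x)
x = 1.0
for h in [0.1, 0.05, 0.025, 0.0125]:
    fwd = (z(x + h) - z(x)) / h             # the difference operator applied to A_h z at x
    r1 = fwd - dz(x)                        # local error: O(h)
    r2 = fwd - dz(x) - 0.5 * h * d2z(x)     # after removing h * f_1(z): O(h^2)
    print(h, r1, r2)
```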
If for all z ∈ E

    ||φ_h(A_h z) - A_h^0 F(z)||_{E_h^0} = O(h^p),

the operator φ_h is said to be consistent of order p with F. The expression φ_h(A_h y), formed with the solution y of (2.1), is often called the local discretization error of φ_h. The original problem (2.1) is now replaced by the "algorithm"

(2.3)    φ_h(η) = 0,

which is supposed to have a unique solution η(h) ∈ E_h for h ∈ H. See the references above for a discussion of the unique solvability of φ_h(η) = 0.

The global discretization error of (2.3) is defined as

    e(h) = η(h) - A_h y ∈ E_h,

where y is again the solution of (2.1). (2.3) is convergent of order p (p ≥ 1) if

    ||e(h)||_{E_h} ≤ C h^p    for h ∈ H.

The global discretization error e(h) admits an asymptotic expansion to the order M (M ≥ p) if there are e_ν ∈ E, ν = p(1)M, e_ν independent of h, such that

(2.5)    ||e(h) - A_h Σ_{ν=p}^{M} h^ν e_ν||_{E_h} ≤ C h^{M+1}    for h ∈ H.

We will call φ_h stable at A_h z if there exist constants S and r > 0 such that uniformly for all h ∈ H

(2.6a)    ||ζ^1 - ζ^2||_{E_h} ≤ S ||φ_h(ζ^1) - φ_h(ζ^2)||_{E_h^0}

for all ζ^i, i = 1, 2, such that

(2.6b)    ||φ_h(ζ^i) - φ_h(A_h z)||_{E_h^0} ≤ r

(cf. def. 1.1.10 in Stetter [1973]).

To estimate the global truncation error, choose a family of (nonlinear) operators φ_h^E: E_h → E_h^0 such that for z ∈ E and h ∈ H

(2.7)    φ_h^E(A_h z) = A_h^0{F(z)} + O(h^q),    p < q ≤ 2p.

When the solution η of problem (2.3) is known, solve for η^E from

(2.8)    φ_h(η^E) + φ_h^E(η) = 0.

Under suitable assumptions

(2.9)    η - η^E = A_h[Σ_{ν=p}^{q-1} h^ν e_ν] + O(h^q).

In this report we are mainly interested in estimating the global truncation error for given algorithms; however, the idea behind (2.7), (2.8) can be extended to iteratively compute better approximations to the solution of (2.1).

To iteratively compute better approximations to the solution of (2.1), choose a family of (nonlinear) operators φ_{h,i}: E_h → E_h^0, i = 1, 2, ..., such that for z ∈ E and h ∈ H, or for z = y and h ∈ H,

(2.10)    φ_{h,i}(A_h z) = A_h^0{F(z) + Σ_{ν=(i+1)p}^{M} h^ν f_{ν,i}(z)} + O(h^{M+1}).

Put η^0 = η and compute η^i, i = 1, 2, ..., recursively from

(2.11)    φ_h(η^i) + Σ_{ν=1}^{i} φ_{h,ν}(η^{ν-1}) = 0,    i = 1, 2, ...

Under suitable assumptions

(2.12)    η^i - A_h y = A_h[Σ_{ν=(i+1)p}^{M_i} h^ν e_{ν,i}] + O(h^{M_i+1}).

A simple example of the use of the operator formalism of this section is given in note 5 after theorem 2. Readers who are unfamiliar with the notation of this section are advised to study that example before they read the theorems.

Note 1. φ_h^E(A_h y) = A_h^0{F(y)} + O(h^q), so φ_h^E is consistent with F of order q. Further, under suitable assumptions (see the theorems below)

    φ_h(A_h y) + φ_h^E(η) = A_h^0{F(y)} + O(h^q),

so φ_h + φ_h^E(η) is consistent with F of order q. We can view -φ_h^E(η) as an approximation, accurate to order q - 1 in h, to the local discretization error of the operator φ_h.

Note 2. φ_{h,i}(A_h y) = A_h^0{F(y)} + O(h^{(i+1)p}), so φ_{h,i} is consistent of order (i+1)·p with F. Further, under suitable assumptions (see theorem 3 below),

    φ_h(A_h y) + Σ_{ν=1}^{i} φ_{h,ν}(η^{ν-1}) = A_h^0{F(y)} + O(h^{(i+1)p}).

(This is not proved in the theorem, but can easily be proved with the technique used in the proof of the theorem.) Thus φ_h + Σ_{ν=1}^{i} φ_{h,ν}(η^{ν-1}) is consistent of order (i+1)·p with F. We can thus view -Σ_{ν=1}^{i} φ_{h,ν}(η^{ν-1}) as an approximation, accurate to order (i+1)·p - 1 in h, to the local discretization error of the operator φ_h.

Note 3. From the consistency of the operators above and the stability of φ_h (which implies the stability of φ_h + g_h for any constant g_h ∈ E_h^0)
one can derive results on the convergence and the existence of asymptotic error expansions for the algorithms (2.8) and (2.11). 2.2 Basic theorems Theorem 1 Let y be the unique solution of F(y) =^0 and let a) the global discretization error n - A, y of (2.3) have an asymptotic expansion n - A, y = a { E h J ' e.} + 6°(h) with M Q >. p + k and ||5 u (h)|| = 0(h ° ) b) the expansions (2.2) and (2.7) hold with M * p and p < q * 2p c) the operator h be stable at A, y in the sense of (2.6) d) there exist constants L and b such that uniformly for all h £ H IIV^-.V^II^Mly 1 -Al E for all y 1 £ E, i = 1, 2 such that ii i || , I |y - y| l E * b j = p, p+l, ..., N ] {U ] = min (M, M Q - k, q - 1) e) there exist constants C, C £ and d such that uniformly for all h € H 10 llOh^ 1 ) - ^ 2 )\\ o « c-IU 1 - c 2 || E .h" k E h h Ikfc 1 ) - * h V)|| s^llc 1 -c 2 || E .h" k E h h 1 2 for any g' s c € Eu such that C 1 - a. y | | p s d , i = l,2 n L h Then the solution n of 'h satisfies the inequality 4> h (n E ) + ^(n) = E V 1 n L = A h y + 0(h ' ) where N, = min [M, NL - k, q - 1]. Proof Note that = 4> h (n E ) + ^(n) = <^(n E ) - E (n) i.e. * h (n E ) - * h (A h yy = - E (n) N +1 We will show below that * h ( A h y) + ♦ kO 1 ) = 0( h ) so f° r sufficiently small h we have | | h (n E ) - h (A h y)| | < r Thus from the stability of <{>. at A, y we have | |n E - A h y| | $ S| |4> h (n E ) - 4> h (A h y)| | = S| |- h (A h y) - E (n)| | Here and in the sequel, whenever there is no danger of confusion, we omit the indices on the norms that refer to the actual Banach spaces. 11 Introduce the notation M z = z h J e h J e J and form h (A h y) + ^(n) = *h (A h y) " *h (n) + *h (n) = h (A h (y + z)) + *h (A h (y + z)) + 0(h ) N l . N l = A°{F(y) + z f.(y) h J " - F(y + z) - z f Ay + z) h j j=p J j=p J N,+l + F(y + z)} + 0(h ' ) ] i V a^{ £ If Ay) - fAy + z)] h J } + o(h ' ) j=p J J N,+] 0(h ] ) Hence E V 1 n - A h y = 0(h ' ) Corollary Let the conditions of theorem 1 be satisfied with M Q >, 2p + k - 1 , q = 2p, M * 2p - 1, then the solution n E of satisfies the inequality > h (n E ) + ^(n) = n E = A h y + 0(h 2p ) 12 Theorem 2 Let y be the unique solution of F(y) = and a) conditions a), c) and e) of theorem 1 hold b) F, c^ and . be twice continuously Frechet differentiable c) the inequalities below hold M * n (A h y) = A°{F(y) + z f (y) h J } + 0(h M+1 ) j=p J ^(A h y) = A°{F(y)} + 0(h q ) p < q * 2p ^(A h y) = A°{F'(y)} + 0(h q ' P ) ^'(A h y) = A°{F'(y)} + 0(h q "P) (2 ) E (2) with M ^ p. Further let K '(A, z) and 4>, (a, z) be uniformly bounded with respect to h for any z £ E. Then the solution n of * h U E ) + ^(n) = satisfies the inequality N.+l n h = A h y + 0(h ' ) N-j = min(M, M Q - k, q - 1) Proof Use the same notation as in the proof of theorem 1. This proof proceeds in the same way as that proof, except that 13 E F M n +1 " k 4> h (A h y) + $ h ( n ) = ^(A h y) - ^{\(y + z)) + V A h (y + z )) + °( h ) = * h (A h y) - * h U h y) " *h(\ y) A h z + * h E (A h y) E« N l +1 + ^ U h y) A h z + o(h ) M = a°{- F'(y) z - ( Z fi(y) h J ) z + F(y) + F'(y) z} N,+l + 0(h ' ) N.+l = 0(h ] ) , Note 1 These theorems are the basis for estimation of global discretization errors for algorithms where the error expansion contains only a few smooth terms. If k ^ 1 , and p $ M fl < p + k the error expansions contain at least one smooth term hr e but due to condition e) no estimate of the error can P be obtained. Numerical experiments (see section 4.5) indicate that it might be possible to relax this condition. 
No theoretical results along these lines have yet been obtained. Note 2 The condition on M in b) of theorem 1 is somewhat less restrictive than the condition on NL in a). However, to obtain the asymptotic expansion in a) one would need to have M $ p + k in the expansion (2.2). The lower limit on M for the expansion (2.7), though, may be of value in some cases. Note 3 The constant k of condition e) is in general the order of the highest derivative present in the functional equation F(y) . Note 4 These theorems differ only in the differentiability assumptions on the operators involved and the set of elements from E for which the inequalities (2.2) and (2.7) must be valid. 14 Due to the fact that (2.2) and (2.7) only are needed for the exact solution y to F(y) = in theorem 2 some very interesting types of perturba- tion operators . are allowed in that theorem, while they are not allowed in theorem 1. In essence, all operators . for which the local discretization error is of order greater than p can be used in theorem 2, while in theorem 1 the order of consistency of . with F must exceed p. Several examples in section 4 will clarify this distinction. Whether the more relaxed differentiability conditions of theorem 1 are of any practical importance I don't know. Future studies will hopefully provide some answers to this question. Note 5 The value of k of condition e) in theorem 1 depends on the norm for the space E.. For linear multistep methods for initial value problems for ordinary differential equations one can (at least for . ) get k = by choosing Spijker's norm (see Stetter (1973), section 2.2.4, p. 81-84) for eP. To get k = also for $. the operator must be chosen judiciously, cf. also note 2 on p. 41 of section 4.1. Note 6 As an exercise in the formalism of this section we analyze an algorithm for the two-point boundary value problem y" = f(x, y) y(a) = a ; y'(b)=e ; f £ C S ([a, b] * R) ; b=b+e The 2s-continuity of f is chosen for convenience of notation. 1 . Operator equation F: E+E° with E = C 2s [a, b] E° = R x C[a, b] x R and the following norms 15 M*M E = a r x **j> {v) WI/v! g|l E o= I9 1 ! ♦ |g 2 l *^s\9M for 9 = I g(x), a $ x $ b .2 F(z) = € E ( /z(a) - a z"(x) - f(x, z(x)) a * x s b \z'(b) - 3 2. Discretization Introduce the grid x. = a + i h, i =0, 1, ..., N + 1, h = 2(b - a)/(2N + 1) i.e. x x l x 2 X N-1 X N and discretize the problem according to k N+l ? i+l " 2 «i + *i-l - f(x r ^.) = i=l,2, K ~ a = 5 N+1 " ? N -3 = In the operator formalism this means define A h : E.E h ; E h =R N+2 A, z = [z(xj, z(x,), ..., z(x M+1 )] 'N+l h fc Li - v/s o A°: E° + E° ; E°=RxR N xR h h h A h 9 = for and use the norms for gUj) g = g 1 g(x) 16 \ i = 1, 2, .... N a s x $ b € E l Ell = max le MI E h 0*i*N+l 1*1 M ii - i 1 1 j. i 2i . max | IIyII o - Iy I + |y I + UUH \y i L h Y = Y i = l,2 N € E Then define u : E, •*■ E, K o - a ♦ h (c) E. , - 2E. + E, , 11 ,2 d -^t 1 l i = 1, 2 N ? N+1 " ? N 17 Note that, by Taylor expansions we get z(x i+1 ) - 22^.) + zU^.,) - f(x r z(x.)) = z"(x.) - f(x., z(x.)) s-1 + i (2j + 2)! _(2j+2),„ ^ u 2j Uj) h 1 + OCh 25 " 1 ) and Z(Vl) h " ZUN) - 3 - Z'(b) - 3 ♦ *1 T ^f T7TZ (^D (b)( l ) 2J + l h 2j + 0(h 2s_1 ) for z^E. If we define f : E -> E°, v = 2, 3, . . . , 2s - 2, f(z) = v v f (z) = v v ' v odd 2 Jv+2), x (v+ 2)! Z (x) 2 7 (v+l)/ h wlsV+l (v + 1)! z (b) ¥ a s x s b v even we have 2s-2 R b h 60h' R b (0 - "*N-3 + 5? N-2 " 9? N-1 " 17? N + 22? N + 1 24h Note that, by Taylor expansions we get and for z £ E. Now define D.(A h z) = z"(x.) 
+ 0(h 4 ) i = 1, 2, R b (A h z) ■ z'(b) + 0(h 4 ) C Q - a >fa) - 0^5) - f(x. s ?.) R b (5) " B then for z £ E ^(A h z) = A°{F(z)} + 0(h 4 ) i = 1, 2, .... N 4. As a further exercise we examine some of the conditions of theorem 1 19 I My 1 ) - f-(y 2 )l o , 2 (y KJ+2) . 2(j+2h (j + 2)! ^ y y ' Hj+1)/^ _ W 2(j + 1) /K vv ,lxj+l ^(y'^^bl-y^^b)).^ 1 U + 1)! = 2 . ( i)^i ly 1(J+1) (b).-. . Y 2W) (b) + 2 max. | y HJ^2) ix) . 2(j + 2) (x) (3+2)1 asxsb * 2-(l) j+1 My 1 -y 2 || E +2- II* 1 - y 2 \ | £ Q I I 1 2, | * 3«||y - y lie 1 .2 for any y , y £ E, j = 2, 4, . . . , 2s - 2. Further f.(y ] ) - f-(y 2 )ll n = for j = 3, 5, . . . , 2s - 3 so condition d) of theorem 1 is satisfied 20 > n U ] ) -* h (c 2 )|| L h ^0 ^0 «1+1 - ^l - 2( s] - 5 i } + *1-1 - ? i-l : (x,, e!) - f(x,, s 2 ) h E h C N+1 " ? N+1 " ^N " c rP k 1 - £ 2 - (e 1 - E 2 ) i^O " ^O 1 h + max i C i + 1 " g i+1 " 2U i " e i } + C i-1 " g i-1 f ( y F Up 1 r 2 ) hi^n ' h 2 ■ yv s- n s- M J S lis 1 - c 2 M E ♦£|| 5 W|| E ♦* f ||e , -« 2 1l E ♦ILIU 1 -« 2 II E L h n L h fi L h y L h < o||r - r|L -h c L h where C ■ h 2 (l + M ) + 2h Q + 4 ^ € IntU], C 2 ] M y = a .T,b IV X ' Z(X)I ; H^-A h yll Eh .d 1-1.2 |z - y(x) | $ d Analogously we get I Ufa 1 ) -♦fc z )ll «c E .|l« 1 -« Z ll Eh - h ' 2 where 21 C E ■ hg(l ♦ M y ) + |i h Q + ™ 1 2 and M , € , £ are as above. Thus condition e) is satisfied. Theorem 3 Let y be the unique solution of F(y) =0 and a) the global discretization error n - A, y of (2.3) have an asymptotic expansion M i n - a. y = a.{ e h J e.} + 6 u (h) J=P J with M * v(p + k) and | |6°| | = 0(h M+1 ) b) the expansions (2.2) and (2.10) hold with M >, y(p + k) c) the operator . be stable at A. y in the sense of (2.6) d) the operators F, f. and f. . have the following differentiability properties: F: [M/p] + 1 times Frechet differentiable f- : C(M - j)/p] + 1 times Frechet differentiable j = p, p + 1 , . . . , M f v -|- C(M - j)/(vp)] + 1 times Frechet differentiable j = (v + l)-p, (v + l).p + 1, ..., M y v = 1, 2, ..., y - 1 Here [expression] stands for the integer part of the expression within the square bracket. e) there exist constants d, C and C , v = 1 , 2, . . . , y - 1 such that uniformly for all h € H 22 l*h,v^)-*h,v^N ^N* 1 -* 2 M- h ' k for all c 1 € E h , i = 1 , 2 such that II? 1 - A h y|| E s d n L h f) [F'(y)]" 1 exist g) it be possible to define e.,j=(v+l)p, ...,M,v=l,2, ...,y-l,M=M-k«v according to F'(y) e . = q . where g . are defined in equation (2.18) below; then the global discretiza- tion error n - A. y of (2.11) has an asymptotic expansion of the form M. n 1 - A. y = A.{ E e,. h j } + 6 1 (h) h h H1+D-P 1J M.+l where | I6 1 ! | = 0(h ] ) ; M. = M - ik, i = 1 , 2, . . . , y - 1 Proof Introduce the family of operators <|>. . : E. -> E,, i = 1 , 2, . . . , y - 1 by (2.13) * h>i ( 5 ) - % U) * I ♦h.jfn^ 1 ) Then n 1 , i = 1, 2, ..., y - 1 are the solutions of (2.14) Vi (n1) = ° Note that as we have 23 = ^ hji . 1 (n i -b=^ i " 1 ) + ^/h, j ^" 1 ) j = * M (n 1 ) = ^(n 1 ) - 4> h (n i_1 ) + ♦ hf1 (n 1 " 1 ) i .e h (n ) = h (n ) - 4> h .(n " ) Further note that as iii. . and . only differ by an additive constant the h,i h stability of ^. . is implied by the stability of <\>.. We will prove the theorem by induction on i, so assume that M i-1 (2.15) n 1 " 1 - A y = A.{ E e. , . h j } + 6 ] '\h) j=i - p J 1-1. M i-1 +1 with M6 1 '|| = 0(h n ' ) and M i = M - i-k. s i (2.16) z s = 2 e . 
h J s = 0, 1, ..., y - 1 Introduce the notation M j=(s+l).p ~ SJ From (2.14), (2.13), (2.2) and (2.10) we get HVi (n1) " *h,i ( V y + zl})11 = 0(hP) so for sufficiently small h we have H*h.i (,|1) -*h,i ( V* + zi} 'H . . we get for h £ H (2.17) | I6 1 ! | = | In 1 - A h z 1 - A h y| | = | In 1 - A h {y + z 1 } * SlKjtn 1 ') -*h,iK { * + z1} > 24 i M.+l To prove that |5 = 0(h n ) if e. . are defined as in g) we note that i >J = 4> h (n 1_1 ) - ♦ hti (n 1 " 1 ) - h (V y + z1}) = * h (A h {y + z 1 ' 1 }) - * h)i (A h (y + z i_1 >) - * h (A h {y + z 1 }) + 0(h i ' 1 ) M. = A°{F(y + z i_1 ) + l f (y + z 1 " 1 ) h j - F(y + z 1 '" 1 ) J*=P J M. M i , , , 1 . M.+l z f. Ay + z 1 " 1 ) h J - F(y + z 1 ) - i f.(y + z 1 ) h J } + 0(h ] ) Hi+D-P 1,J j=p J M. = A°{- F(y + z 1 *) + z [f (y + z i_1 ) - f.(y + z 1 *)] h j j=p J J M i ■ , . M.+l z f. Ay + z 1 ' 1 ) h J } + 0(h 1 ) j=p(i+l) 1,J M. M. =-A°{F(y)+ ^ 7r F {r) (y)(z i ) r - l(l ^ f< r > (yJKz 1 " 1 ) r - (z 1 ) r ]) h' r=l r - j=p r=l r " J M i , /^ , , M.+l + z [ z l r f i (r) i (y)(z 1 " 1 ) r ] h j } + 0(h i ) j=p(i+l) r=0 r * 1,J j-p(1+l) The upper summation limits in the two sums over r above are chosen such that all relevant terms are included. Insert the expressions for z 1 and z 1- and collect terms of equal powers of h, then M. ^h i (n ') " *h i ( V y + z ' } ) = " V z (F ' (y) e i i " 9 i i } hJ} n,i n,i n n j = ( 1+1 \ p i J i J 25 where g. • is independent of e. . and h and can be constructed from M. M. M. (2.18) I g h j = - I i_ F (r >(y)( z e., h*) r j=(i+l)p 1J r=2 r! a-(i+l)p " M. M. 1 + e (i ^f m r) (y)[(r e._ u hV m=p r=l £=ip M. - e' e u hV])h m £=(i+l)p U M. M. , m-(l+l).p r=0 r! im s,=ip 1 U ♦ Oft"™) Now, if e. ., j = (i+l)p, ..., M. are the solutions of then F'(y)e. . - g. . = *h.i (T,1) - Vi ( v y + z1)) = 0(hMl+1) and thus from (2.17) « 1 || =o(h Mi+1 ) But from condition a) the assumption of induction is true for i = with n = n» thus the theorem is proved. Q.E.D. Theorem 4 Let y be the unique solution of F(y) = and let a) all the conditions, except b), of theorem 3 hold b) h , 4>. be [M/p] + 1 times continuously Frechet differentiable c) the inequalities below hold for v = 0, 1, ..., [M/p] 26 (u) n (\j) M-V *P (m\ i M+l-vp *h (A h y) = V F M + ^ f v) (y) h J } + 0(h ) j=P J ^ V ](A h y) = A ° {F < v >(y) + V f{ v ](y) h J } + 0(h M+1 - v P) M>1 h h J=(i+D-P 1,J for 1 « 1 , 2, ..... w - 1 then the global discretization error n 1 - A. y of (2.11) has an asymptotic expansion of the form M. n 1 - A. y = A,{ z e.. h J '} + 6 1 {h) h h J-(1+l)p 1J where M II6 1 ! | = 0(h 1 ') ; M. = M - i-k i = 1, 2, .... y - 1 The proof of this theorem is analogous to the proof of theorem 3, cf. the proofs of theorem 1 and 2. Note 1 These theorems are the basis for iterative improvement of the numerical solution of an operator equation. When the conditions of the theorem are satisfied at least y - 1 improvements can be made. At no extra expense an estimate of the global discretization error of the solution n can be obtained as n 1 - n 1 " . Note 2 The condition on M in the expansion (2.10) can be relaxed somewhat, and could be made dependent on i so M would decrease with i. Note 3 Other combinations of the conditions on . and . u of theorems Y h h,v 3 and 4 may be used. We can, e.g., use the conditions of b) and c) on . from theorem 4 and the conditions on , of d) from theorem 3. 27 Note 4 The main problem of applying this theorem is of course the construction of the operators . . Several examples of such operators will be given in section 4. 
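The operators φ_{h,i} are assembled from high-order difference approximations to derivatives; section 3 describes how the weights of such approximations are obtained from small Vandermonde-type systems. As a hedged illustration (mine, not from the report), the sketch below computes the weights of a one-sided approximation to z'' near a boundary with a plain dense solve (a careful implementation would use the Björck-Pereyra algorithm cited in section 3) and checks its order numerically.

```python
# Hedged sketch (not the report's code): difference weights for z''(x) from a
# Vandermonde-type system, here for a one-sided stencil near the left boundary.
import numpy as np
from math import factorial

def fd_weights(xs, x0, der):
    # sum_j w_j z(xs_j) ~ z^(der)(x0)
    n = len(xs)
    M = np.array([[(xj - x0) ** k / factorial(k) for xj in xs] for k in range(n)])
    e = np.zeros(n)
    e[der] = 1.0
    return np.linalg.solve(M, e)

z, d2z = np.sin, lambda t: -np.sin(t)
for h in [0.2, 0.1, 0.05]:
    grid = np.arange(8) * h             # gridpoints x_0 .. x_7 near the left boundary
    w = fd_weights(grid, grid[1], 2)    # one-sided stencil for z''(x_1)
    err = w @ z(grid) - d2z(grid[1])
    print(h, err)                       # error drops roughly like h^6 for this 8-point stencil
```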
To illustrate a major difficulty and resolve it we will again consider the two point boundary value problem y" = f(x, y) y(a) = a y'(b) = 8 of note 5 to theorem 2. We cannot use the operator . as $. -, because D 1 (A h z) = z"( Xl ) + c r z (6) (x 1 ).h 4 D i (A h z) = z"(x.) + c .z (6) ( Xi )-h 4 D N (A h z) = z"(x N ) + c 2 .z (6) (x N ).h 4 c, t c Q ji c 2 so the expansion (2.10) is not valid. This is caused by the use of approximation formulas for the two points closest to the boundary that are different from the formulas in the interior of the interval. However, if we fix the number of iterative improvements that we want to ma ke to (y - 1) we can construct operators D. : E. i = 1, 2, ..., N N+l D.U) = I o. E v=0 IV V (some a. may be zero) such that D i (A h z) = z"(x.) + 0(h y(p+k) ) Analogously construct operators R,: E. -> R such that R b (A h Z) = 2'(b) + 0(h y(P+k) ) Define 28 ho - a r = 1, 2, then ♦h,r (0 = - 1 o^d - f(x r ^) R b U) - e i = 1, 2, *h,r (A h z) = A h {F ( z )> + 0(h^ ( P +k) ) Hence all $. , r s 1,2, . .., y - 1 are identical and they are consistent n j r of order y(p + k) with F. From the formulas it is obvious that we do not need to use this trick for the approximations to z'(b) but can use operators R, : E, -► R such that d ,r h b,r v h in the definition of 4> R u (a. z) = z'(b) + M v=(r+l)*p h v c +0(h M+l ) rv h,r" 29 3. Approximation of linear functional s To construct the perturbation operators . and <|>. ., i = 1, 2, ... in the applications of section 4 we need in many cases formulas that approximate the value and the value of the derivatives of a function z for a given argument by linear combinations of the components of a, z. Such formulas have been examined in Ballester, Pereyra (1967), Bjorck, Pereyra (1970), Galimberti, Pereyra (1970), (1971), and Pereyra (1973), so we simply refer to those papers and introduce some notation that will be useful in section 4. Let E = C S (a, b) , E k = R P+1 ■h Define A.: E -> E. A h z = [z(x Q ), z(x ] ), ..., z(x p )] x. € [a, b] j = 0, 1, ..., P for z € E. Define the linear operators (depending on h) x € [a, b] D^' m : E h - R v = 0, 1, ..., P according to m = 1, 2, ..., P + 1 P D^ m U) = z a (x) 5, x r=0 (most of the a (x) may be zero) such that v ,m , r D^' m (A h z) = z (v) (x) + I g,(z)(x) h j + 0(h S+1 ) for z € E. x ^ h ' ji-J 30 Note : As P is finite the order of consistency m of the operator D v ' m with z^ ^(x) cannot exceed P + 1 - v. When v, m and x are fixed the constants a (x) ; r = 0, 1 , . . . , P can be v,m,r obtained as the solution of a Vandermonde system of linear equations. In Bjorck, Pereyra (1970) an efficient algorithm for the numerical solution of such linear systems is given. For a Vandermonde system with n unknowns this algorithm requires approximately 3n operations. To get D ' consistent of order m with A z^ '(x) we only need to have m + v of the coefficients a f 0, i.e. the Vandermonde system we have to solve has only m + v unknowns. Further most of the operators D ' needed have x = x., where x. is a gridpoint. A Once we have found the operator D v ' m for one gridpoint we can easily get A the operators DjJ' m for most of the other gridpoints (if the gridpoints are equidistant) without solving a system of linear equations. In conclusion, the amount of work needed to find these operators is in general negligible compared to the amount of work involved in solving the operator equations h (n) = and <}> h (n E ) + ^(n) = or i . , ^(n 1 ) + i h -(n 1-1 ) =0 , i = 1, 2, ... 
v=l ■ For some examples of operators of this type see note 6 after theorems 1 and 2. For functions of several variables the formulas above can be used for each of the variables or one can use more compact approximation formulas taking linear combinations of the function values at all the gridpoints. 31 To distinguish between the different partial derivatives we adjoin to D the variable that we differentiate with respect to, e.g. v Dx v '|J is consistent of order m with ( — -)(x, y) x ' y 9x v DT ' .is consistent of order m with ( )(x, y, z, t) x,y,z,t at v 32 4. Applications The main emphasis of these applications is the construction of the perturbation operators <$>, and . . , i = 1, 2, ... . The operators . all correspond to well known discretizations from the literature. We do not claim that the methods chosen are the best possible for the actual problems; rather we have tried to use methods that are fairly well known and in some cases widely used. Although the examples of this section are mainly given as illustrations of how to apply the general ideas of section 2 to different classes of problems, we believe that the algorithms for iterative improve- ment of elliptic partial differential equations and for iterative improve- ment of integral equations compare \jery favorably with existing algorithms for these classes of problems. The questions of the smoothness of the expansions of the global discretization errors for the basic discretizations have not been examined in detail. For the different applications, wherever possible, reference to known results on such expansions are given, while in other cases we discuss very briefly the possible existence of such expansions. These questions need further study. In the theorems of section 2 we make some assumptions on the perturbation operators . and . . , i = 1 , 2, . . . . The validity of these assumptions can easily be checked in all the applications. In future studies on each of the applications of interest to us a more detailed analysis will be given. Some numerical results are given in most of the subsections below. 33 4.1 Initial value problems for ordinary differential equations Consider the scalar differential equation y' ■ f(y) ; y(o) = a x£ [o, t] f £ c s (R) The use of an autonomous problem is only for convenience in notation, as is the restriction to a scalar differential equation. All the results can easily be generalized to non-autonomous systems of ordinary differential equations. The existence of smooth asymptotic error expansions for discretization methods for initial value problems for ordinary differential equations is discussed in Stetter (1965), (1973). In our operator formalism the problem is -0 F: E -> E v E = C 5 (0, T) E° = RxC(0, T) for g = F(z) = Consider Euler's method max OsjxsT S E v=0 z (v) M v! 1 Y , g(x) Z(0) - a z' - f(z) y ■ a 1 1 max Y I + 0«x*T \ = H 1 ^-^) i = , 1 , . . . , N - 1 Note that z(x i+1 ) - z(x.) S-l z (j+1) (x ) f(z(x.)) = z'(x.) - f(z(x.)) + Z (j + 1} ] h J + 0(h S ) for any z £ E so S-l * h (A h z) = a[J{F(z) + Z f\(z) h J } + 0(h b ) J 35 where f j( z) - z (J+1 »(x) \(j * li! £ X S T / Now construct . and d>. , v = 1, 2, ... . We will give some different n h,v operators , 5 . in order to illustrate some useful techniques for construction of these operators. 1 . /? - a ♦h<«> " •5 2 + 45, - 3 5q 2h 5 i+l " c i-l fU ) 2h - f( ?i ) 1 = 1, 2, N - 1 Note that -z(x 2 ) + 4z(x 1 ) - 3z(x Q ) 2h - f(z(x )) = z'(x ) - f(z(x Q )) 1 4 - 2 j+1 (j+1) and z(x. +1 ) - zU^) 2h f(z( Xi )) = z'(x.) 
- f(z(x.)) (S-D/2 + I 5.-1 ,(2A+1) 2£ (2TTT7T Z '< x i ) - h +0(h) for i = 1, 2, ..., N - 1 Both the expansions are valid for any z £ E. Hence, for any z £ E *h (A h z) = A h {F(z)} + 0(h2) but we do not have 36 S-l ♦j;U h z) = a°{F(z) + z f.(z) h j } + 0(h S ) because of the different expansions for x« and the rest of the gridpoints 2. By extending the operators so an approximation to the solution at x_, = -h is computed by the formula we can define ho- a = ho - a - !ii2_iSmi!i 2h - f( 5l ) i = , 1 , In both cases we have, for any z £ E N - 1 S-l *h (A h z) = A h {F(z) + l f j (z) hJ} + 0(hS) The operators f. will of course be different for the two different operators <$>.. By extending the numerical solution even further outside the interval of interest in this way one can easily construct operators . , 37 v = 1 , 2, . . . such that n S" 1 i s *h V K z > = V F ( Z ) + E f v i^ h } + °< h ) n,v h h j=( v +l) V ' J For these extended operators the spaces E, E , E, and E, must be modified in order that the theorems of section 2 be applicable. Slight modifications of the theorems may also be necessary. ? - a Vv (?) = \ d!' s U) - f(c,) x. 1 i = 0, 1, ..., N - 1 / v = 1, 2, ... Here D ' are operators of the type discussed in section 3. For any z € E *h,v (A h z) = A h {F(z)} + 0(hS) v = 1, 2, 4. ♦ h L U> = C Q - a ^^-jrCfte^l + f^)] Note that for y £ E such that y' = f(y) i = 0, 1, ..., N - 1 y(x. +1 ) - y(x.) , 11 h - j [ftyfy)) + f(y(x i+1 ))] = y'(x.) - f(y(x.)) + z _T_ h v -l 1 E f( r >( y (x.))( e — rr^hV v=2 v ' * r=l t=l + 0(h S ) = y'(x.) - f(y(x.)) + 2 g.(y(x.))-h J + 0(h b ) 1 i j=2 J i The absence of a term g-.(y(x.))h is due to the fact that for the elements y that we consider we have y" = f'(y) y 1 ! Note that the expansion above is not valid for arbitrary z £ E, but only for z € E such that z' = f(z). In this case we have to rely on theorem 2 while for the previous perturbation operators we can rely on both theorem 1 and theorem 2. Note that for y £ E 3 y' = f (y) , S-l 4>h (A h y) = A°{F(y) + i f*j(y) h j } + 0(h S ) J ^ Further note that any method of second order or more can be used to construct the perturbation operator $. . The main advantages with our perturbation operator (which is based on the trapezoidal method) is that 1) no new values of f(y) need to be computed to get the perturbation 2) the perturbed solution at x. can be computed directly after the calculation of the unperturbed solution at x. 3) the basic discretization formulas for . and <}>. are both one-step formulas. For general linear multistep methods 39 k k E a y . - h E 8 f ... = v= v n+v v=0 v n+v one can e.g. use the following basic discretization formula for the perturbation operator k k Z a* y . - h E 3* f . ., v=o v n+v v=0 v n+v (We assume that the global discretization error for the solution obtained by the linear multistep method has a smooth error expansion; this may not be true in many cases!) Remember that the maximal order of a stable linear multistep method of stepnumber k is k + 1 when k is odd, and k + 2 when k is even (see e.g. Lambert (1973)). By choosing a*, B*, v = 0, 1, . .., k judiciously one can get £ a* y(x . ) - e b* f(y(x . )) = y'(x ) - f(y(x )) + 0(h^ K ) v JK n+v' _ n v V,7V n+v" J v n' ^ v n" ' for y€ E such that y 1 = f(y). One can of course also use perturbation operators of the type discussed in point 3 above. The basic discretization operator for $. here, however, has the same stepnumber k as the basic discretization operator for . 
while the stepnumber for the basic discretization operators in point 3 may be considerably larger. No study of the practical implementation of these ideas for initial value problems for ordinary differential equations has been undertaken yet. 5. As a further illustration we choose a different operator 40 V E h " E h E° - R x R N A h g ■ for g = g(x i ) i = 1, 2, ..., N 1 1 max Iy .0 - Y si. is the same as above, i.e ♦ h (0 = C Q - a ^i+1 " *1 f( ?i ) i = 0, 1, ..., N - 1 Note that for i = , 1 , . . . , N - 1 and any z £ E z(x. ,) - z(x.) 1^4 L~- f(z(x.)) = z'(x. +1 ) - f(z(x. +1 )) s-1 Choose then ♦£(€) - C Q - a ? i+l " c i-l + 2 g,(z(x. +1 )) h J + 0(h a ) \ - f(5 i ) i = 1, 2» .... N y s-1 ♦fjU h Z) = A°{F(z) + Z fj(z) h J } + 0(h s ) 41 Note 1 Some of the results of this section carries ov^r to the case when x. , i = 0, 1 , . . . , N - 1 are not equidistant. E.g., the perturbations of point 3 and point 4 are useful in such cases. Note 2 Consider again the scalar differential equation y' - f(y) = y(0) = a ; x € [0, T] ; f € C S (R) Note that this problem is equivalent to the Vol terra integral equation y(x) - y(0) - / f(y(s)) ds = Euler's method for the original differential equation can be viewed as an approximation to the integral equation, namely y i+1 = y n - + h f(y.) = y i _ 1 + h f^-.-,) + h f(y.) = ... = a + h z f(y.) i = 0, 1, 2, ... j=0 J -0 -0 With appropriate definitions of the spaces E, E , E., E, , the operators >0 A, , A and the corresponding norms we have F(z) = (z - a - / f(z(s)) ds /«n ~ « ♦ h (5) i-1 E . - a - h E f(E.) 1 j=0 J <: x . is consistent of order 1 with F, i.e. with the integral equation. 42 Define ♦fc> = ? o " a 5 1 - a - h Z a.. f(?.) 1 j=0 1J J i = 1, 2, ..., N where a iO ~ a ii = ^ 2 ' a ii = 1 j = 1 , 2, . . . , i - 1 For the actual computations we would use the formula [*hU)] = ? - a C^U)],- = C*h (?)] i-i + K i " ? i-i ' I [fU i ] + f(c i-i )] i = ] - 2 ' 3> -•• N Note that . is consistent of order 2 with F. Other perturbation operators 4>. could be used! The advantage of this point of view is that for the maximum norms i i i i ill. max I | where Y = 1 '1 i = 1(1)N € E and where . max | ? l IE. " O^i^N l 5 i 5 = (E 1 . i ■ 0(1)N) € E, we have 43 i i V^ ' V " ' c ' ' M E L E h h E h h i.e. k = in condition e) of theorem 1, while for Euler's method directly we have k = 1 for the maximum norm. The same result, i.e. k = 0, may be obtained for Euler's method directly if we use Spijker's norm for E^ (see Stetter (1973), section 2.2.4, p. 81-84) which in our case reads I U, II i 1 1 . ^ max | „ | I|y|| f o - Iy I +h UvgN | I Yj| for Y = '€El Yi i = Kl )N - 1 Note the relation between Spijker's norm for E, corresponding to Euler's method and the maximum norm for E, corresponding to the integral equation formulation. The reformulation of the original problem as an integro-differential equation may be a useful trick for other methods for initial value problems for ODEs and initial boundary value problems for PDEs. The ideas presented in this note need further investigations and are included mainly as an illustration of a way to increase the range of problems and methods for which the theorems may be useful. 44 4.2 Two-point boundary value problems for ordinary differential equations Consider y" = f(x, y, y') ^(yCa), y'(a)) = a-e^xsb + e r 2 (y(b), y'(b)) = Assume that the functions f, r-, and r ? are such that the boundary value problem has a unique solution which is 2s times differentiable. 
Define F: E + E E = C 2s (a - e, b + e) -0 max 2 J lz (v) (x) E = R x C(a, b) x R 1 max x v=0 v! g(x) for Introduce 9 = ,1 g(x) 2 \ a-esxsb + e €E l r^zCa), z'(b)) F(z) z" - f(x, z, z') a - e $ x $ b + e r 2 (z(b), z'(b)) A h : E + E h A h Z = [z(Xq), Z(X 1 ) R N+2 z(x N+1 )] b - a where x. = a - y+ i«h, i =0, 1, . .., N + 1; h = N 45 and V E " E h E° = R x R N x R Define V E h" E h ♦ h (5) = r r 2 ' h ; g i>i - 2g i + g i-i f(x E gi+j - g 1-K i = 1, 2, ..., N r g N+1 * g N C N+1 " % r 2 K 2 , h ; Assume that f, r, and r 2 are such that . is stable. h is consistent of order 2 with F. The existence of smooth error expansion is discussed in Stetter (1965), Pereyra (1968). Construction of perturbation operators 1 . Direct approach. Define a E - F F° h' -h h ♦J(5) = r l (D a' 4 ^^ D^ 4 (c)) Q 2 >\t) - f(x., £., dJ» 4 (0) x. i i x. \r 2 (D^ 4 ( c)j D^ 4 U)) i = 1, 2, ..., N where D ' : E. -► R are operators of the kind discussed in section 3. 46 Define *h,i : E h+ E h /^(Dg^^JfcJ.Dl-WDfc)) i = 1, 2, .... y - 1 *h,i ( ^ = ,2,2m 1,2m d,;^(c) - f(x., i v d;;^(0) ' 2 (D° b ' 2(i+1) (0,D^ 2 ( i+1 )(0) i = 1, 2, ..., N Note that for h ^ we have used the trick described in note 4 of theorem 4 in order to satisfy (2.10). 2. Use of Cowell's method. If f is independent of y\ i.e. f(x, y t y f ) = g(x, y) define ^: E h ■* E° V D a' 4( ^' D a' 4( ^ } ? i + l - 2 ^i + ? i-l 1 <(0 - + 9(x i+1 , 5 i+1 )); i=l,2, ^0,4 1,4, r 2 (D b ^), D^ H (0) In this case we rely on theorem 2 because 4> n (A. z) = <}> h {F(z)} + 0(h) only for z £ E }z" = g(x, z). 3. Extending the solution outside the interval [a - e, b + e] (from an unpublished paper by H. Keller, V. Pereyra, communicated by V. Pereyra) . To be able to use symmetric and identical formulas to approximate z* v '(x. )» v = 1, 2 at all gridpoints x. (including the gridpoints close to the boundary) one can extend the numerical solution to "artificial" gridpoints outside the interval (a, b) by the basic formula 47 y n+1 - 2y + y , h^ n n Sufficiently many exterior points are introduced, so symmetrical and identical approximation formulas for the first and second derivative of any z € E can be used for all gridpoints needed. Now introduce h • , i = 1 , 2, . . . n > i 'r 1 (D°' 2(1+1) .'>]' 2(1+1) (e)) Vi (5 > = | D x: 2(i+1) < 5 > - f < x j- «j. D x: 2(1+1) <5)) j ■ i. 2 r.(D°' 2 < i+1 >(c),Dj' 2 < i+1 >(c)) 2 v "b Then 2s-2 2s-l *h iK z > = A h {F < z > + E W z ) h > + 0(h) with a proper definition of A. and A. . Numerical results The following special cases of (1) (A) y" - f(x, y, y') = y(a) = a y(b) = @ (B) y" - f(x. y, y') = y(a) = a y'(b) = e were discretized by formulas analogous to those above. As the boundary conditions for these cases are simpler than in the general case, somewhat different discretizations are used here: (A) Xi = a + i-h i = 0, 1, ..., N h = ~~ 48 C Q - a = 2 Ux., i r 2h J u i = 1, 2, .... N - 1 *N " 3 = and (B) x. = a + i • h i = , 1 N h = ? " a = ? i + l 1 l-l f / 1 + 1 l-h h 2 TU i s S"' 2h ^ ~ ^N-l IN ' - « = n 2N - 1 ) = The perturbation operators . we use are (A) ♦h«> ■ (B) ♦h<*> = C - a d 2 x a U) - f(x., C,, dJ' 4 (0) X. 1 1 x. C Q - a D^' 4 (0 - f(x., ?., dJ' 4 U)) x i n 1 x i i = 1, 2, ..., N - 1 i = 1, 2 N - 1 \dJ- 4 (c>-.-« The systems of non-linear equations obtained by the discretization were solved by Newton iteration and the initial approximation was used. 49 The iterations were terminated when the last correction in the iterations was less than e. 
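The computational pattern for case (A) can be sketched as follows (a Python illustration of mine, not the program used for the experiments): the boundary value problem y'' = (1/2)(y + x + 1)^3, y(0) = y(1) = 0 (test problem (A1) below) is discretized with the second differences above, the nonlinear system is solved by Newton's method from the zero initial approximation, and the perturbation is built from fourth-order difference approximations to the second derivative (the operators D^{2,4}; the stencils are simply shifted near the endpoints here). The perturbed problem is re-solved with a full Newton iteration for simplicity, rather than with the modified iteration described below.

```python
# A hedged sketch (not the report's code) of case (A) for test problem (A1):
# second differences for phi_h, Newton's method for the nonlinear system, and a
# fourth-order difference residual for the perturbation phi_h^E.
import numpy as np
from math import factorial

def f(x, y):  return 0.5 * (y + x + 1.0) ** 3
def fy(x, y): return 1.5 * (y + x + 1.0) ** 2

def fd_weights(xs, x0, der):
    n = len(xs)
    M = np.array([[(xj - x0) ** k / factorial(k) for xj in xs] for k in range(n)])
    e = np.zeros(n); e[der] = 1.0
    return np.linalg.solve(M, e)

N = 10
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]

def newton(shift):
    # Solve eta_0 = 0, (eta[i-1] - 2 eta[i] + eta[i+1])/h^2 - f(x_i, eta_i) + shift[i] = 0, eta_N = 0.
    eta = np.zeros(N + 1)
    for _ in range(20):
        F = np.zeros(N + 1); J = np.zeros((N + 1, N + 1))
        F[0], F[N] = eta[0], eta[N]
        J[0, 0] = J[N, N] = 1.0
        for i in range(1, N):
            F[i] = (eta[i-1] - 2*eta[i] + eta[i+1]) / h**2 - f(x[i], eta[i]) + shift[i]
            J[i, i-1] = J[i, i+1] = 1.0 / h**2
            J[i, i] = -2.0 / h**2 - fy(x[i], eta[i])
        step = np.linalg.solve(J, F)
        eta -= step
        if np.max(np.abs(step)) < 1e-10:
            break
    return eta

eta = newton(np.zeros(N + 1))                # phi_h(eta) = 0

# Perturbation phi_h^E(eta): fourth-order approximations to eta'' (stencils shifted near the ends).
d = np.zeros(N + 1)
for i in range(1, N):
    lo = min(max(i - 2, 0), N - 4)
    w = fd_weights(x[lo:lo + 5], x[i], 2)
    d[i] = w @ eta[lo:lo + 5] - f(x[i], eta[i])

eta_E = newton(d)                            # phi_h(eta_E) + phi_h^E(eta) = 0
exact = 2.0 / (2.0 - x) - x - 1.0
print(np.max(np.abs(eta - exact)), np.max(np.abs(eta - eta_E)))
```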
The perturbed problem was solved by a modified Newton iteration, where the LR-factorization of the final iteration matrix obtained in the solution of the unperturbed problem was used. As initial approximation the solution of the unperturbed problem was used and the iterations were carried on until an estimate of the iteration error was either less than one tenth of the estimated discretization error or less than e. All quantities were measured in the maximum norm. In the table below the entries of the column "Number of iterations' stand for the number of iterations for the unperturbed problem/number of iterations for the perturbed problem. The following differential equations were solved: (Al) y" = \ (y + x + I) 3 y(0) = y(l) = exact solution: y(x) = 2/(2 - x) - x - 1 (Ciarlet, Schultz and Varga (1967)) (A2) y" = y 3 - sin(x) * (1 + [sin(x)] 2 ) y(0) = yU) = exact solution: y(x) = sin(x) (Pereyra (1968)) (A3) y" = - 1 - 0.49(y') 2 y(0) = y(l) = exact solution: y(x) = ^ ln(^^^) (Bellman, Kalaba (1965)) (Bl) y" = \ (y + x + I) 3 y(0) = y'(l) = 1 exact solution: y(x) = 2/(2 - x) - x - 1 50 The problems were solved in single precision on an IBM 360/75 with -6 N = 10 and e = 10 Problem |Actual error| | Estimated error | Number of iterations | Sol ution | Al 7.3 • 10" 4 7.1 • 10" 4 4/2 1.7 • 10' 1 A2 2.38 • 10" 3 2.36 • 10" 3 6/2 1 A3 1.059 • 10' 4 1.060 • 10" 4 3/2 1.3 • 10" 1 Bl 1.12 • 10" 3 1.04 • 10" 3 4/2 1.7 • 10" 1 Obviously one can get wery good error estimates at a low cost for these problems. All the problems are very simple with fairly rapid convergence of the Newton iterations for the original discretization. In more difficult cases where it may be essential to have a good initial guess for the solution the quotient of the amount of work for the unperturbed problem to the amount of work for the perturbed problem may be much bigger. No experiments have been done with iterative improvement for this class of problems. 4.3 Two-dimensional elliptic boundary value problems We will discuss two classes of problems on rectangular regions. All problems discussed are assumed to have unique smooth solutions, however some numerical results for a problem with a non-smooth solution will be presented. The existence of smooth error expansions for discretizations of elliptic problems are discussed in Volkov (1957), Stetter (1965), Hofman (1967) and Pereyra (1970). 51 4.3.1 Problems non-linear in u only Consider v u = f(x, y, u) u = h(x, y) (x, y) £ a (x, y) € an where v 2 =^U 3 * 2 ' 2 ax ay Q s to $ x $-1, $ y * 1} an the boundary of n Assume that f and h are such that the solution u £ C"(n). In our operator formalism we have .0 • 2s, E = C 2s (q) F: E ■+ E' .0 C(fi) X C(3fi) with the norms max 2s , v -v / % s -| z | j z(x, y) e; (x,y)£« v V! y t Vay^ max max »IU-(x.^)€qI» 1x -* ) ^(x.~€mI» 1x '* ) where g ] (x, y) 9 (x, y) (x, y) € « (x, y) 6 9^ F(z) = v z - f(x, y, z) z - h(x, y) (x, y) € n (x, y) € an 52 Introduce and v E -* E h A h z= (z(x., yj ) x. = i«h y j = j * h E h =R( N+1 ) i = 0(1)N, j = 0(1)N) i = 0(1)N j = 0(1)N r r h = 1/N Ul l E = max l^ii L h OsisN 1J OsjsN E = R (N-1)% R N + 1 xR N+l xR N xR N A h 9 ■ g (x r yj) g°(x , y^ g°(x N , yj) g°(x i5 y ) g°(x i> y N ) for g € E as above. ,1 = 1(1)N - 1 { j = 1(1)N - 1 j = 0(1 )N j = 0(1)N i = 1(1 )N - 1 i = 1(1 )N - 1 J Ml = max[| Y ].|;i = 1(1)N - 1, j = 1(1)N - 1, 01 | . . 0(1)N, IvfhJ = 0(1 Hi. 
| Y 03 |;i = l(l)N - 1, |yf|;i = WW - 1] where 53 Y = Define 1 id i = 1(1 )N - 1 j = 1(1 )N - 1 01 j j = 0(1 )N 02 j j = Kl)N 03 i = 1(1 )N - 1 04 i i = 1(1 )N - 1 V E h" E h €E ♦ h (5) = *1+1J - *1J+1 - 4g ij + ? i-1j + *1J-1 _ f( , u^ I J I J i = ICON - 1, j = 1(1)N - 1 j = 0(1)N j = 0(1)N i = 1(1)N - 1 i = 1(1)N - 1 ? oj - h(x 0>yj ) 5 NJ - h(x N , y.) E 10 - h(x i5 y ) E 1N - h(x.,y N ) One can easily verify that 2s-2 ♦h(A h z) = A°{F(z) + " E f,(z) h J } + 0(h 2s - ] ) V"h j=2 Let ♦fc> = ♦ ld (ri *0J ~ h(x 0' y j } C Nj - h(x r y.) ^TO " h( V y } C iN - h( Xl .,y N ) 1 = 1(1)N - 1, j = KDN - 1 j = 0(1)N j = 0(1)N i = 1(1)N - 1 i = 1(1)N - 1 54 where \h. .: E, +R. ij h We will consider three different alternatives for \b..: r2,4 f v '. nv 2,4 ^ )= K^y^ + Krly^-^vyy^ i. where DX and DY are operators of the kind discussed in section 3. It is easy to verify that *h (A h z) = A h + f(x i . l .y j+l ,C 1 . liCJ+1 )+4f(x i ,y j . 1 ,5 ij . l ) -20f(x i>yj> 5ij ) ♦4f(x 1 .y J+r e 1iJ+ ,) + f(x i*i-'*j-r 5 i+i,m» + 4f(x i + r y j- h+Lj 1 + f(x i+1 ,y j+1 ,5 i+1>j+1 )] - f(x t . yj . e^J One can show that for the solution u of F(u) = ^ ij (A h u) = v 2 u,(x r y.) - f(x. s y.., u(x., y..)) + 0(h 4 ) so ♦h (A h u) = A h {F ( u ) } + °^ 4 ) for the solution u of F(u) = 0. In this case we have to rely on theorem 2, while for perturbations 1 and 3 we can rely on both theorem 1 and theorem 2. Further one can show that 2s-2 ^(A h u) = A°{F(u) + ' Z f E}j (u)-h j } + 0(h 2s " 1 ) J 55 We have used the ninepoint operator Vg to approximate the Laplace 2 operator v . For notation and the results below see e.g. p. 321 in Bjorck, Dahlquist (1973). We have v 2 u = v 2 u + !li^ + 0(h 4 ) 2 But v u = f(x, y, u) so v 2 u = f(x, y, u) + h 2 v 2 f(x, y, u)/12 + 0(h 4 ) 2 Thus if we use the ninepoint operator to approximate v f(x, y, u) we get 2 v 2 u - f(x, y, u) - ^v 2 f(x, y, u) = 0(h 4 ) which gives the result above. 3. * 1j U)=D^( 5 ) + DYy j { 5 )--f(x i .y j .e iJ ) then *h (A h z) = A h {F(z)} + ^ q ) • so if q ^ 2* (v + 1 ) we can take max Vv E *h ; v= 1, 2, ..., v max and make v iterative improvements. Numerical results The problem above with f = and h(x, y) = sin(x) sin h(y) 2 2 + cos h(x) cos y - (x - y )/2 was discretized as above (Kronsjo, Dahlquist (1972)). The system of linear equations obtained was solved iteratively with SOR with optimal relaxation parameter, and to get an initial approxi- mation we interpolated the boundary values linearly. The iterations were terminated when the residual was less than e. 56 The perturbation operators of both 1 and 2 above were used. The perturbed problem was solved with the same method but now we used the solution from the unperturbed problem as initial approximation and the iterations were terminated if an estimate of the iteration error was either less than y times the estimated discretization error or less than e. The errors were measured in the maximum norm. The calculations were performed in double precision on an IBM 360/75 with N = 10 and some different values of e and y. The entries in the column "Number of iterations" stand for the number of iterations for the unperturbed problem/ the number of iterations for the perturbed problem. 
Perturbation e Y Actual Error Estimated Error Number of Iterations 1 lO" 7 lO" 10 0.1 lO"* 1 .51 - 10" 4 1.51- TO' 4 1.40-10' 4 1.50.10" 4 32/7 44/21 2 io-* 10" 5 10" 6 lO" 7 lO" 7 lO" 10 lO"* lO"* lO"* lO"* 0.1 lO"* 1.27-10" 4 1.47-10" 4 1.51-10" 4 1.51- 10" 4 1.5M0" 4 1.51-10" 4 5.3-10' 5 1.36-10' 4 1.50-10" 4 1.50-10" 4 1.40-10" 4 1.50-10" 4 21/1 24/8 29/14 32/18 32/7 44/21 Note that there is no difference between the results for the two different perturbations. Further note that to get a reliable error estimate we must get the iteration error sufficiently small. The estimation algorithm can only estimate the discretization error, not the iteration error. In fact, there is a danger of amplification of the iteration errors when we apply 57 the operator , to the solution of the unperturbed problem. Note also that we can save some work by terminating the iterations for the perturbed problem early, i.e. by choosing a larger value for y. Analogous results were obtained for Laplace equation with derivative boundary conditions on some of the sides of the unit square. The same problem was solved with iterative improvement, using the perturbation operators . , v = 1 , 2, .. . according to point 3 above. We -14 -9 used e = 10 , y = 10 and some different values of N and q. The tables below give the maximal errors in the successive iterates for the iterative improvement. N = 10 q = 8 Iteration number 1 2 3 4 5 6 Error 1.5 10" 4 ' 1.2 10" 6 5.4 10" 8 -9 7.8 10 y -9 1.4 10 y 2.7 10^ 10 5.6 10" 11 Number of Iterations in SOR 52 40 37 31 26 24 22 N = 10 q = 4 Iteration number 1 2 3 4 5 6 Error 1.5 10" 4 1.2 10" 6 6.6 10" 8 6.9 10" 8 6.9 10" 8 6.9 10" 8 6.9 10' 8 Number of Iterations in SOR 52 40 37 30 24 20 19 N = 20 q = 12 11 teration number 1 2 3 4 5 6 Error 3.8 10" 5 7.6 10* 8 4.2 10" 9 9.0 10" 10 2.7 10" 10 9.9 10" 11 4.0 10" 11 Number of Iterations in SOR 102 80 62 51 48 45 43 58 N = 20 q = 8 11 :eration number 1 2 3 4 5 6 Error 3.8 TO" 5 7.6 10" 8 3.4 10" 9 5.2 10" 10 9.8 10" 11 2.1 10' 11 -12 4.4 10 u Number of Iterations in SOR 102 80 62 51 47 43 41 Note 1 In all cases but one (q = 12) we have made more iterative improvements than suggested by theorems 3 and 4. In the theorems it is implied that q should be chosen such that q :> (maximal number of iterations + 1)«2. Further note that we improve the accuracy of our solutions even after the suggested maximal number of iterations has been exceeded. For q = 4 e.g., only one improvement is reasonable according to the theorems, but the second iterate has a considerably much smaller error than the first. This astonishing result is a lucky coincidence in the construction of the perturbation operators , , v = 1, 2, ... . For almost all the gridpoints (all but the points closest to the boundary points) the formulas we use are of order 6 (rather 2 than of order 4) for all functions u such that v u = 0. A similar result holds for q = 8. If <|>. (a. u) = 0(h q ) we cannot increase the order of accuracy of our solutions by making extra iterations, however the "error constant" can in some cases be decreased in this way. Note 2 In the cases q = 8 and q = 12 the maximal errors in the last three iterations occur close to the boundary. At the interior points the errors are much smaller. The explanation for this is that for the points close to the boundary we have to use non-centered approximations to the derivatives, while at the interior points we can use symmetric approximations. 
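The improvement experiments above can be reproduced in outline as follows (a hedged Python sketch of mine, not the program used for the tables): Laplace's equation with the boundary values given earlier is discretized with the five-point formula, the linear systems are solved directly rather than by SOR, and each sweep adds the residual of the current iterate, measured with higher-order difference approximations applied in each coordinate direction (cf. the operators of point 3), to the right-hand side before re-solving. The stencils are simply shifted near the boundary, which, as in Note 2, limits the accuracy attainable there.

```python
# Hedged sketch (not the report's program) of the elliptic iterative improvement:
# Laplace's equation on the unit square, boundary values as in the experiments,
# five-point scheme as phi_h, corrections from higher-order differences applied
# along each coordinate direction, and a dense direct solve in place of SOR.
import numpy as np
from math import factorial

def fd_weights(xs, x0, der):
    n = len(xs)
    M = np.array([[(xj - x0) ** k / factorial(k) for xj in xs] for k in range(n)])
    e = np.zeros(n); e[der] = 1.0
    return np.linalg.solve(M, e)

g = lambda x, y: np.sin(x) * np.sinh(y) + np.cosh(x) * np.cos(y) - (x**2 - y**2) / 2.0

N = 10; h = 1.0 / N
t = np.linspace(0.0, 1.0, N + 1)
X, Y = np.meshgrid(t, t, indexing="ij")
exact = g(X, Y)                                # harmonic, so it is also the solution
idx = {(i, j): k for k, (i, j) in enumerate((i, j) for i in range(1, N) for j in range(1, N))}

def solve(defect):
    # Interior equations: five-point Laplacian of u plus defect equal to zero; u = g on the boundary.
    A = np.zeros((len(idx), len(idx)))
    b = -defect[1:N, 1:N].ravel()
    u = exact.copy()                           # supplies the boundary values of g
    for (i, j), k in idx.items():
        for (p, q) in [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]:
            if (p, q) in idx:
                A[k, idx[(p, q)]] = 1.0 / h**2
            else:
                b[k] -= u[p, q] / h**2
        A[k, k] = -4.0 / h**2
    sol = np.linalg.solve(A, b)
    for (i, j), k in idx.items():
        u[i, j] = sol[k]
    return u

def second_diff(v, i, npts):                   # npts-point approximation to v'' at gridpoint i
    lo = min(max(i - npts // 2, 0), N + 1 - npts)
    return fd_weights(t[lo:lo + npts], t[i], 2) @ v[lo:lo + npts]

u = solve(np.zeros((N + 1, N + 1)))            # phi_h(eta) = 0
print(0, np.max(np.abs(u - exact)))
defect = np.zeros((N + 1, N + 1))
for sweep, npts in enumerate([5, 7], start=1):
    for i in range(1, N):
        for j in range(1, N):
            defect[i, j] += second_diff(u[:, j], i, npts) + second_diff(u[i, :], j, npts)
    u = solve(defect)
    print(sweep, np.max(np.abs(u - exact)))
```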
Note 3 There is a certain amplification of the iteration errors when h is applied to the solution. The larger q is the larger is this 59 amplification. Further the larger q is the larger are the "error constants" for the formulas employed close to the boundaries. These facts may explain why we get better results with q = 8 than with q = 12 for N = 20. 4.3.2 The minimal surface equation As an exercise in how to proceed in a more difficult case we consider |r E I : |£] + |T7 [ I = |y 3 = (X, y) ( Q ax 2 2 ° x °j /, 2 2 °y A + \ + Uy /] + u x + u y u = (cos h 2 (y) - x 2 ) 1/2 (x, y) € 3 ^ where fi = {0 ? x ^ 1, ? y $ 1} In vector operator notation the equation reads o V- [y ( I VU | )vu] = where y(|vu| 2 ) = (1 + |vu| 2 )' 1/2 Note that the exact solution is u(x, y) = (cos h 2 (y) - x 2 ) 1/2 and that u has a singularity at x = 1, y = 0. The problem is discretized and solved according to Concus (1967) in the following way: Introduce the gridpoints x. = i-h i = 0(1)N Yj = j-h j = 0(1 )N where h = 1/N. 60 Approximate the differential equation by f u ■ nr< 2u ij - "1-i.j - u i,j-i» + mi,j< 2u ij - u i + i,j - u i,o-i» + ^T,T + i (2u ij " u i-i,j " u i.j+i> + T+i ,J+i (2U 1 i - Vi,j " u i,jti» =0 , i = 1(1)N - 1, j = 1(1)N - 1 where yjj = y(|vu|j^-) denotes y for the mesh cell with center (i - 1/2, 3 - 1/2), which is evaluated by use of IVulfr = -T- C(U.. - U. , -) 2 + (U- • " U. . J 2 + (u. • , - u. , . ,) 2 + (u- -, • - u. , • ,) ]. v i,J-l i-l, j-r v l-l, j i-I, j-r J * This approximation is consistent of order 2 with the differential equation. The boundary conditions are represented by u id = [cos h 2 ( yj ) - x 2 ) 1/2 (x i5 y.) C3 Q The system of nonlinear equations above are solved iteratively by computing k+1 u. . , the (k + l)th approximation to u.. from f Tu k+1 u k+1 u k u k 1 u k+l = u k 'ij Lu 11 * ••" u i-l,j' u 1jV •••' U N,N-1 J lj ^ ° ^S [u k + l u k+l u k u k ] ' 3u.. LU 11 ' ••'• u i-l,j' u ij' "" U N,N-1 J where u is the relaxation parameter. The initial approximation was obtained by linear interpolation of the boundary values and the iterations were terminated when the last correction was less than e. The following basic discretization formula was used for the perturbation operator . : Introduce the notation € = U-jj i = 0(1)N, j = 0(1)N) and DX px(c) = 61 X i' y j A + (dxJ' 4 U)) 2 + (dyJ' 4 U)) 2 x i ,y- x. ,y .. i = 0(1 )N j = 0(1 )N pyU) = DY 1,4 U) i J J /l + (DXJ' 4 (d) 2 + (DY^' 4 U)) 2 x i ,y .. x. ,y j i = 0(1)N j = 0(1)N Here DX and DY are operators of the type described in section 3. Further *..(£) = DxJ' 4 (px(5)) + DyJ' 4 (py(?)) i = 1(1)N - 1, j = 1(1)N - 1 1J X i ' y j X i' y j Note that with proper definition of a., *ij (i h z) ■ [ t* l j77T? « 9") + _L (-= •» /I ♦ u 2 + u 2 r^^)](Xi.y,) + o^) «|>.. is the basic discretization operator for the perturbations. The perturbed problem was solved with the same iterative method as the unperturbed problem, but now we used the solution of the unperturbed problem as initial approximation and terminated the iterations when an estimate of the iteration error was either less than y times the estimated discretization error or less than e. All quantities were measured in the maximum norm. The problem was solved in double precision on an IBM 360/75 and we -8 -4 used e = 10 , y = 10 and some different values of N and u. The entries in the column, "Number of Iterations," refer to the number of iterations for unperturbed problem/number of iterations for perturbed problem. 
The problem was solved in double precision on an IBM 360/75, and we used ε = 10^-8, γ = 10^-4 and some different values of N and ω. The entries in the column "Number of Iterations" refer to the number of iterations for the unperturbed problem / number of iterations for the perturbed problem.

    N     ω      Actual Error   Estimated Error   Number of Iterations
    10    1.7    4.3·10^-3      8.9·10^-3         55/28
    40    1.87   2.3·10^-3      5.0·10^-3         147/71

The maximum error was obtained in the vicinity of x = 1, y = 0. Due to the singularity of u at that point we cannot expect to get very good results (or good error estimates) in that region. However, for the rest of the unit square much better results and error estimates were obtained. Below, the results at some representative points are tabulated for N = 10 and ω = 1.7.

    x     y     Solution   Actual Error   Estimated Error
    0.2   0.6   1.17       1.10·10^-4     1.16·10^-4
    0.5   0.2   0.89       1.13·10^-4     1.90·10^-4
    0.5   0.6   1.08       4.64·10^-4     4.50·10^-4
    0.8   0.2   0.63       1.24·10^-3     1.71·10^-3
    0.8   0.6   0.88       1.48·10^-3     1.38·10^-3
    0.9   0.3   0.54       4.1·10^-3      8.9·10^-3

Analogous results were obtained for the same problem with ∂u/∂x = 0 at x = 0 and unchanged boundary conditions on the remaining sides of the unit square.

4.4 Parabolic Partial Differential Equations

Consider

    u_t = f(t, x, u, u_x, u_xx),   (t, x) ∈ Ω,
    u(a, t) = f_1(t), u(b, t) = f_2(t), t > 0;   u(x, 0) = h(x), a ≤ x ≤ b,

where Ω = {a < x < b, t > 0}. Assume that f, f_1, f_2 and h are such that u ∈ C^s(Ω̄). The definition of spaces, norms and operators for our operator formalism is left as an exercise for the reader.

We will use the method of lines to discretize the problem in space and then solve the system of ordinary differential equations so obtained with two simple methods: the explicit Euler method and the implicit backward Euler method. It is plausible that for sufficiently small h (discretization parameter) there exists a smooth expansion of the global discretization error for both these methods (Stetter (1965), Keller (1970)). However, the main advantage of implicit methods is that one can use large time steps and still obtain accurate results, and for realistic stepsizes in time there are, to my knowledge, no general results on the existence of smooth expansions of the global discretization error. The system of differential equations obtained by the method of lines is stiff, and hence results on error expansions for numerical methods for stiff systems of ordinary differential equations may be useful for our methods. For such problems a discussion of error expansions for large values of the stepsize h (not only asymptotically) for implicit one-step methods can be found in Dahlquist, Lindberg (1973). These questions warrant further study that is outside the scope of this report. Here we assume the existence of sufficiently smooth error expansions for the methods and stepsizes we use, and proceed under that assumption to estimate the global discretization error with our algorithm. Numerical results indicate that the assumption is reasonable.

4.4.1 The Method of Lines with Euler's Method

Discretize the problem according to

    x_i = a + i·h,  i = 0, 1, ..., N;   h = (b − a)/N,
    t_j = j·k,  j = 0, 1, ...;   k = c·h²,  c ≤ 0.5,
    u_{i,j} ≈ u(x_i, t_j),

    (u_{i,j+1} − u_{i,j})/k − f(t_j, x_i, u_{i,j}, (u_{i+1,j} − u_{i-1,j})/(2h), (u_{i+1,j} − 2u_{i,j} + u_{i-1,j})/h²) = 0,
        i = 1(1)N−1, j = 0, 1, ...,
    u_{i,0} − h(x_i) = 0,  i = 0(1)N,
    u_{0,j} − f_1(t_j) = 0,  u_{N,j} − f_2(t_j) = 0,  j = 1, 2, ...

The perturbed problem is solved in parallel with the unperturbed problem; several alternative perturbations exist, cf. section 4.1. The problem below was solved on an IBM 360/75 using double precision arithmetic. We used N = 10 and c = 2.5/π², i.e. k = 0.025.
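As a concrete illustration of this discretization (not of the perturbation machinery), the following minimal sketch advances u_t = u_xx on (0, π) with homogeneous Dirichlet data by the scheme above. The name heat_euler is ad hoc, and the commented line only marks where the perturbation term of the parallel, perturbed problem would enter.

    import numpy as np

    def heat_euler(N, c, nsteps, init):
        # Method of lines in space + explicit Euler in time for u_t = u_xx
        # on (0, pi) with u(0, t) = u(pi, t) = 0;  k = c*h**2, c <= 0.5.
        h = np.pi / N
        k = c * h * h
        x = h * np.arange(N + 1)
        u = init(x)
        for _ in range(nsteps):
            lap = (u[2:] - 2.0 * u[1:N] + u[:N-1]) / h**2
            # For the perturbed problem, the perturbation evaluated at the
            # unperturbed solution would be subtracted from `lap` here.
            u[1:N] = u[1:N] + k * lap
            u[0] = 0.0
            u[N] = 0.0
        return x, u, nsteps * k

    # Initial value function (1) below: h(x) = sin(x), exact solution e^{-t} sin(x).
    x, u, t = heat_euler(N=10, c=2.5 / np.pi**2, nsteps=20)   # reaches t = 0.5
    print("max error at t = 0.5:", np.abs(u - np.exp(-t) * np.sin(x)).max())

Since the parameters match those of the tables that follow, the printed error should come out close to the 1.30·10^-3 entry of the Problem #1 table at t = 0.50. Note the missing keyword in the call above: heat_euler must of course be called with init=np.sin for this test.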
The test problem was

    u_t = u_xx,   0 < x < π, t > 0,
    u(0, t) = u(π, t) = 0,

with the following initial value functions h(x):

(1)  h(x) = sin(x);
     exact solution: u(x, t) = e^{-t}·sin(x).

(2)  h(x) = x(π − x);
     exact solution: u(x, t) = (8/π)·Σ_{n=1}^{∞} (2n − 1)^{-3}·e^{-(2n-1)² t}·sin((2n − 1)x).

(3)  h(x) = x for 0 ≤ x ≤ π/2, h(x) = π − x for π/2 < x ≤ π;
     exact solution: u(x, t) = (4/π)·Σ_{m=1,3,5,...} (−1)^{(m-1)/2}·m^{-2}·sin(m x)·e^{-m² t}.

The tables below give the maximal errors at selected time levels for the different problems.

    Problem #1, N = 10, k = 0.025
    t      Actual Error   Estimated Error
    0.05   2.05·10^-4     2.13·10^-4
    0.20   7.05·10^-4     7.46·10^-4
    0.50   1.30·10^-3     1.35·10^-3
    0.75   1.52·10^-3     1.56·10^-3

    Problem #2, N = 10, k = 0.025
    t      Actual Error   Estimated Error
    0.05   2.6·10^-3      8.4·10^-3
    0.20   2.2·10^-3      3.4·10^-3
    0.50   3.2·10^-3      3.6·10^-3
    0.75   3.9·10^-3      4.3·10^-3

    Problem #3, N = 10, k = 0.025
    t      Actual Error   Estimated Error
    0.05   3.7·10^-2      1.2·10^-2
    0.20   1.6·10^-2      4.9·10^-3
    0.50   9.8·10^-3      3.1·10^-3
    0.75   8.3·10^-3      3.1·10^-3

For these problems the amount of work needed to get the error estimate is approximately equal to the amount of work needed to solve the unperturbed problem. From the numerical results it is obvious that the estimates are not very reliable for test problem 3, where the initial value function is not smooth; for the other two problems the error estimates are quite good.

4.4.2 The Method of Lines with the Backward Euler Method

Discretize the problem according to

    x_i = a + i·h,  i = 0(1)N;   h = (b − a)/N,
    t_j = j·k,  j = 0, 1, ...;   k = c·h,
    u_{i,j} ≈ u(x_i, t_j),

    (u_{i,j+1} − u_{i,j})/k − f(t_{j+1}, x_i, u_{i,j+1}, (u_{i+1,j+1} − u_{i-1,j+1})/(2h), (u_{i+1,j+1} − 2u_{i,j+1} + u_{i-1,j+1})/h²) = 0,
        i = 1(1)N−1, j = 0, 1, ...,
    u_{i,0} − h(x_i) = 0,  i = 0(1)N,
    u_{0,j} − f_1(t_j) = 0,  u_{N,j} − f_2(t_j) = 0,  j = 1, 2, ...

The system of nonlinear equations obtained in each time step is solved by Newton iteration. As initial guess we use the solution at the previous time level, and the iterations are terminated when the last correction is less than ε. The maximum norm is used to measure all quantities.

Introduce the notation ξ = (ξ_{i,j}, i = 0(1)N, j = 0, 1, ...). The basic discretization for the perturbation operator is

    φ_ij(ξ) = (ξ_{i,j+1} − ξ_{i,j})/k − f(t_{j+1}, x_i, ξ_{i,j+1}, DX^{1}_{x_i}(ξ_{·,j+1}), DX^{2}_{x_i}(ξ_{·,j+1})),   i = 1(1)N−1, j = 1, 2, ...,

where DX^{1} and DX^{2} are first- and second-derivative difference operators of the type discussed in section 3.

For the perturbed problem we use modified Newton iteration to solve the system of nonlinear equations obtained in each time step. As initial approximation we use the solution of the unperturbed problem, and we terminate the iterations when an estimate of the iteration error is either less than γ times the estimated discretization error at the current time level, or less than ε. The problems below were solved in this fashion.

(1)  u_t = (2 + sin(u_xx))·u_xx + (1 − sin(u))·u,   0 < x < π/2, t > 0,
     u(0, t) = 0;  u(π/2, t) = e^{-t},  t > 0,
     u(x, 0) = sin(x),  0 ≤ x ≤ π/2;
     exact solution: u(x, t) = e^{-t}·sin(x)   (Kolar (1972)).

(2, 3, 4)  Burgers' equation,

     u_t = −u·u_x + ν·u_xx,   0 < x < π, t > 0,
     u(0, t) = u(π, t) = 0,
     u(x, 0) = sin(x);

     exact solution:

     u(x, t) = 4ν · [ Σ_{n=1}^{∞} exp(−ν n² t)·n·I_n(1/(2ν))·sin(n x) ] / [ I_0(1/(2ν)) + 2·Σ_{n=1}^{∞} exp(−ν n² t)·I_n(1/(2ν))·cos(n x) ].

The functions I_n, n = 0, 1, ..., are the modified Bessel functions (Cole (1951)). We have used the following values of ν: (2) ν = 1, (3) ν = 1/8, (4) ν = 1/16.

In the calculations for the tables below we have used k = 0.1, a small Newton tolerance ε, and some different values of N. The computations were done in double precision on an IBM 360/75.
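Before the tables, here is a minimal sketch of one time step of this scheme applied to problem (1) above. It is an illustration only: backward_euler_step and f1 are ad hoc names, the Jacobian is approximated by dense finite differences rather than the analytic banded Jacobian one would use in practice, and the modified-Newton variant used for the perturbed problems is not shown.

    import numpy as np

    def backward_euler_step(u_old, t_new, k, h, f, bc_left, bc_right,
                            tol=1.0e-10, maxit=20):
        # One backward Euler step for the method-of-lines discretization above,
        # solved by Newton iteration with a finite-difference Jacobian.
        N = len(u_old) - 1
        x = h * np.arange(N + 1)

        def residual(v):
            vx  = (v[2:] - v[:-2]) / (2.0 * h)
            vxx = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / h**2
            r = np.empty(N + 1)
            r[1:-1] = v[1:-1] - u_old[1:-1] - k * f(t_new, x[1:-1], v[1:-1], vx, vxx)
            r[0]  = v[0] - bc_left
            r[-1] = v[-1] - bc_right
            return r

        u = u_old.copy()
        for _ in range(maxit):
            r = residual(u)
            J = np.empty((N + 1, N + 1))
            d = 1.0e-7
            for m in range(N + 1):
                e = np.zeros(N + 1); e[m] = d
                J[:, m] = (residual(u + e) - r) / d
            du = np.linalg.solve(J, -r)
            u += du
            if np.abs(du).max() < tol:
                break
        return u

    def f1(t, x, u, ux, uxx):                # right-hand side of problem (1)
        return (2.0 + np.sin(uxx)) * uxx + (1.0 - np.sin(u)) * u

    N, k = 10, 0.1
    h = (np.pi / 2.0) / N
    x = h * np.arange(N + 1)
    u = np.sin(x)                            # u(x, 0)
    t = 0.0
    for _ in range(5):                       # advance to t = 0.5
        t += k
        u = backward_euler_step(u, t, k, h, f1, 0.0, np.exp(-t))
    print("max error at t = 0.5:", np.abs(u - np.exp(-t) * np.sin(x)).max())

With N = 10 and k = 0.1 the printed error should be comparable to the t = 0.5 entry of the Problem #1 table that follows.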
The maximum errors at some different time levels and the number of iterations for the unperturbed/perturbed problems are recorded in the tables.

    Problem #1, N = 10, k = 0.1
    t     Actual Error   Estimated Error   Number of Iterations
    0.1   3.2·10^-3      2.6·10^-3         5/2
    0.5   6.6·10^-3      5.8·10^-3         6/2
    1.0   4.0·10^-3      4.2·10^-3         6/2
    1.5   2.22·10^-3     2.19·10^-3        5/2
    2.0   1.22·10^-3     1.21·10^-3        5/2

    Problem #2, N = 20, k = 0.1
    t     Actual Error   Estimated Error   Number of Iterations
    0.1   1.1·10^-2      7.5·10^-3         3/2
    0.5   1.9·10^-2      1.6·10^-2         3/2
    1.0   2.0·10^-2      1.8·10^-2         3/2
    2.0   1.4·10^-2      1.3·10^-2         3/2
    4.0   3.9·10^-3      3.8·10^-3         2/2
    6.0   8.3·10^-4      8.5·10^-4         2/2
    8.0   1.6·10^-4      1.7·10^-4         2/2

    Problem #3, N = 20, k = 0.1
    t     Actual Error   Estimated Error   Number of Iterations
    0.1   4.3·10^-3      3.8·10^-3         3/2
    0.5   1.5·10^-2      1.4·10^-2         3/2
    1.0   1.8·10^-2      1.5·10^-2         3/2
    2.0   1.7·10^-2      1.6·10^-2         3/2
    4.0   2.5·10^-2      2.6·10^-2         3/2
    6.0   1.5·10^-2      1.6·10^-2         3/2
    8.0   9.7·10^-3      9.9·10^-3         3/2

    Problem #4, N = 20, k = 0.1
    t     Actual Error   Estimated Error   Number of Iterations
    0.1   4.5·10^-3      4.1·10^-3         3/2
    0.5   1.8·10^-2      1.7·10^-2         3/2
    1.0   2.6·10^-2      2.3·10^-2         3/2
    2.0   1.9·10^-1      3.5·10^-1         3/2
    4.0   1.1·10^-1      1.4·10^-1         3/2
    6.0   3.9·10^-2      3.8·10^-2         3/2
    8.0   2.1·10^-2      2.2·10^-2         3/2

All the estimates above are very good, except for problem #4, where around t = 2.0 we get estimates that are approximately twice the actual error. This problem develops a fairly sharp front with large derivatives with respect to x; when the front disappears the estimates become quite reliable again. Note that the amount of work needed to find the error estimate is quite small compared to the amount of work needed to find the approximate solution: for large values of N we need approximately N³/3·(number of iterations for the unperturbed problem) operations to find the approximate solution, and approximately N²·(number of iterations for the perturbed problem) operations to get the error estimate.

4.5 Hyperbolic Partial Differential Equations

For the commonly used discretizations of initial-boundary value problems for hyperbolic partial differential equations, I know of no extensive discussion of the existence of smooth expansions of the global discretization error. Some very interesting results are given in Gourlay, Morris (1968) and Skollermo (1975a, 1975b). In essence, the main difficulty seems to be how to represent the numerical boundary conditions (i.e. the extra boundary conditions imposed by the numerical scheme) so that the errors due to that representation (these errors are not smooth) are sufficiently small. For the method of characteristics some authors have used extrapolation to increase the accuracy, see Lister (1960), Werner (1968), Smith (1970). The results of this section are of an experimental nature, and no rigorous mathematical analysis is attempted. The numerical results indicate that by careful choice of the perturbation operators one can obtain good error estimates for problems with smooth solutions. A further study of this class of problems may prove very fruitful.

Consider the initial-boundary value problem

    u_t = a·u_x,   t ≥ 0, 0 ≤ x ≤ 1, a > 0,
    u(x, 0) = f(x),  0 ≤ x ≤ 1,
    u(1, t) = h(t),  t ≥ 0,

and use the discretization

    x_i = i·h,  i = 0, 1, ..., N;   h = 1/N,
    t_j = j·k,  j = 0, 1, ...;   k = c·h,
    u_{i,j} ≈ u(x_i, t_j),

    (u_{i,j+1} − u_{i,j-1})/(2k) − a·(u_{i+1,j} − u_{i-1,j})/(2h) = 0,   i = 1(1)N−1, j = 1, 2, ...,
    u_{i,0} = f(x_i),  i = 0(1)N,
    u_{N,j} = h(t_j),  j = 0, 1, ...

This is the leap-frog method. Note that to be able to compute the approximate solution with the formula above we must, in some way, find u_{i,1}, i = 0(1)N (extra starting values), and u_{0,j}, j = 1, 2, ... (numerical boundary values).
If the extra starting values are computed with a sufficiently accurate method we have an expansion of the form

    u_{i,j} − u(x_i, t_j) = e(x_i, t_j)·h² + E_{i,j}·h^μ + R_{i,j},

where e is a smooth function, |R_{i,j}| = O(h^m) with m = min(4, μ + 1), μ depends on the method we use to find the numerical boundary values, and

    E_{i,j} = (−1)^i·C(x_i, t_j),

where C is a smooth function (adapted from Skollermo (1975a)).

We have made some numerical experiments with our estimation algorithm for this discretization. To construct the perturbation operator we proceed in the following way. Define

    ξ = (ξ_{i,j}, i = 0(1)N, j = 0, 1, ...),
    E = (E_{i,j}, i = 0(1)N, j = 0, 1, ...),
    δ(ξ)_{i,j} = (ξ_{i+1,j} − ξ_{i-1,j})/(2h),   i = 1(1)N−1, j = 0, 1, ...

Note that when we apply δ to the non-smooth error term E·h^μ of the expansion above we get

    δ_{i,j}(E·h^μ) = (−1)^{i+1}·C_x(x_i, t_j)·h^μ + O(h^{2+μ}).

As the basic discretization for the perturbation operator we now use

    φ_ij(ξ) = DT^{1,4}_{t_j}(ξ_{i,·}) − a·DX_{x_i,t_j}(δ(ξ)),

where

    DX_{x_i,t_j}(δ(ξ)) = Σ_s a_{is}·δ_{s,j}(ξ)

is a linear combination of some of the components of δ(ξ) such that DX_{x_i,t_j}(δ(Δ_h z)) is consistent of order 4 with z_x(x_i, t_j). The coefficients a_{is} are independent of h, so

    φ_ij(E·h^μ) = (−1)^{i+1}·(C_t(x_i, t_j) − a·C_x(x_i, t_j))·h^μ + O(h^{μ+4}) = O(h^μ).

This choice of the perturbation operator therefore does not amplify the non-smooth part of the error expansion.

The problem

    u_t = u_x,   0 ≤ x ≤ 1, t ≥ 0,
    u(x, 0) = sin(2πx),  0 ≤ x ≤ 1,
    u(1, t) = sin(2πt),  t ≥ 0,

with the exact solution u(x, t) = sin(2π(x + t)), was solved with the leap-frog method. First we extended the solution to the half plane t ≥ 0 and used the periodicity of the exact solution to find the numerical boundary values, i.e.

    u_{0,j} = u_{N,j},   j = 1, 2, ...

We also used the periodicity to simplify the formulas for the perturbation operator. This problem was included to see if our algorithm would work in the case where the numerical boundary condition did not introduce a non-smooth error term.

Two ways of finding the extra starting values u_{i,1}, i = 0(1)N, were tested:

I.  u_{i,1} = sin(2π(x_i + k)), i = 0(1)N, i.e. values from the exact solution;

II. u_{i,1} = u_{i,0} + (λ/2)(u_{i+1,0} − u_{i-1,0}) + (λ²/2)(u_{i+1,0} − 2u_{i,0} + u_{i-1,0}), λ = k/h, i.e. the Lax-Wendroff scheme.

In tables 1 and 2 the maximal errors for some time levels are given for these cases.

In the next experiment we did not use the periodicity of the exact solution, but used the following two formulas to find the numerical boundary values:

A.  u_{0,j+1} = u_{0,j} + λ·(u_{1,j} − u_{0,j}),  λ = k/h, i.e. the explicit Euler scheme;

B.  u_{0,j+1} determined from the Box scheme,

        (u_{0,j+1} + u_{1,j+1} − u_{0,j} − u_{1,j})/(2k) − a·(u_{1,j+1} + u_{1,j} − u_{0,j+1} − u_{0,j})/(2h) = 0,   λ = k/h.

The Lax-Wendroff scheme was used to get the extra starting values u_{i,1}, i = 0(1)N. In tables 3 and 4 the maximal errors for some time levels are recorded. In all the calculations for the tables we used h = 0.025 and k = 0.01875, and the computations were carried out in double precision on an IBM 360/75.

    Table 1, starting procedure I
    t      Actual Error   Estimated Error
    0.15   1.38·10^-3     1.41·10^-3
    0.75   8.31·10^-3     8.27·10^-3
    1.5    1.70·10^-2     1.71·10^-2
    3.0    3.41·10^-2     3.42·10^-2

    Table 2, starting procedure II
    t      Actual Error   Estimated Error
    0.15   1.71·10^-3     2.18·10^-3
    0.75   8.52·10^-3     9.29·10^-3
    1.5    1.70·10^-2     1.71·10^-2
    3.0    3.41·10^-2     3.42·10^-2

    Table 3, boundary scheme A
    t      Actual Error   Estimated Error
    0.15   1.71·10^-3     1.72·10^-3
    0.75   8.71·10^-3     1.15·10^-2
    1.5    2.28·10^-2     2.63·10^-2
    3.0    3.43·10^-2     4.69·10^-2

    Table 4, boundary scheme B
    t      Actual Error   Estimated Error
    0.15   1.71·10^-3     1.72·10^-3
    0.75   8.37·10^-3     9.09·10^-3
    1.5    1.69·10^-2     1.73·10^-2
    3.0    3.37·10^-2     3.45·10^-2
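The linear experiment above is easy to reproduce in outline. The sketch below advances the leap-frog scheme for u_t = u_x with the Lax-Wendroff starting step and the explicit-Euler numerical boundary values (scheme A). The name leapfrog_A is ad hoc, the treatment of the two boundary points at the first time level is an assumption of this sketch, and the perturbation operator (which needs the one-sided fourth-order formulas of section 3) is not included.

    import numpy as np

    def leapfrog_A(N, c, nsteps, f, g):
        # Leap-frog for u_t = u_x on [0, 1]:  u(x, 0) = f(x), u(1, t) = g(t).
        # Extra starting values from a Lax-Wendroff step; numerical boundary
        # values at x = 0 from the explicit Euler scheme (scheme A above).
        h = 1.0 / N
        k = c * h
        lam = k / h
        x = h * np.arange(N + 1)
        u_old = f(x)                                      # level j = 0
        u = u_old.copy()                                  # level j = 1 (Lax-Wendroff)
        u[1:N] = (u_old[1:N] + 0.5 * lam * (u_old[2:] - u_old[:N-1])
                  + 0.5 * lam**2 * (u_old[2:] - 2.0 * u_old[1:N] + u_old[:N-1]))
        u[0] = u_old[0] + lam * (u_old[1] - u_old[0])     # scheme A (assumed at level 1)
        u[N] = g(k)
        for j in range(1, nsteps):
            t_new = (j + 1) * k
            u_new = np.empty(N + 1)
            u_new[1:N] = u_old[1:N] + lam * (u[2:] - u[:N-1])   # leap-frog
            u_new[0] = u[0] + lam * (u[1] - u[0])               # scheme A
            u_new[N] = g(t_new)
            u_old, u = u, u_new
        return x, u, nsteps * k

    f = lambda x: np.sin(2.0 * np.pi * x)
    g = lambda t: np.sin(2.0 * np.pi * t)
    x, u, t = leapfrog_A(N=40, c=0.75, nsteps=80)         # h = 0.025, k = 0.01875, t = 1.5
    print("max error:", np.abs(u - np.sin(2.0 * np.pi * (x + t))).max())

The printed error should be comparable to the t = 1.5 entry of Table 3 above.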
The nonlinear problem

    u_t = u·u_x,   0 < x < 1, t > 0,
    u(x, 0) = 1 − x,  0 ≤ x ≤ 1,
    u(1, t) = 0,  t ≥ 0,

with the exact solution u(x, t) = (1 − x)/(1 + t) (Gourlay, Morris (1968)), was solved with the leap-frog method. We used the Lax-Wendroff starting procedure and the Box scheme for calculation of the numerical boundary values, and the discretization error was estimated as above. For h = 0.05 and k = 0.015 the maximal errors for some time levels are recorded in the table below.

    Table 5
    t      Actual Error   Estimated Error
    0.15   3.1·10^-5      3.3·10^-5
    0.75   3.1·10^-5      3.6·10^-5
    1.5    2.2·10^-5      3.2·10^-5
    3.0    1.9·10^-5      3.2·10^-5
    4.5    2.0·10^-5      3.9·10^-5
    6.0    1.9·10^-5      4.6·10^-5
    7.5    2.0·10^-5      6.0·10^-5

4.6 Integral Equations

First consider Fredholm's integral equation of the second kind,

    y(x) − λ·∫_a^b K(x, t)·y(t) dt − f(x) = 0,

with the following discretization:

    t_i = x_i = a + i·h,  i = 0(1)N;   h = (b − a)/N,

    φ_h(ξ) = ( ξ_i − h·λ·Σ_{j=0}^{N} α_j·K(x_i, t_j)·ξ_j − f(x_i);  i = 0(1)N ),

where α_0 = α_N = 1/2 and α_j = 1, j = 1(1)N−1; i.e. we have used the trapezoidal rule to approximate the integral in the equation above. Construct the perturbation operators

    φ_{h,ν}(ξ) = ( ξ_i − h·λ·Σ_{j=0}^{N} α_{jν}·K(x_i, t_j)·ξ_j − f(x_i);  i = 0(1)N ),   ν = 1, 2, ...

Here the coefficients α_{jν} are such that

    h·Σ_{j=0}^{N} α_{jν}·ψ(t_j) = ∫_a^b ψ(t) dt + Σ_{j=2(ν+1)}^{s} f_j(ψ)·h^j + O(h^{s+1}).

The system of linear equations obtained for the unperturbed problem is solved by Gaussian elimination. The LU factorization of the coefficient matrix is saved for use in the sequence of perturbed problems. The unperturbed problem takes approximately (N + 1)³/3 operations (ignoring the number of operations needed to compute K(x_i, t_j), i = 0(1)N, j = 0(1)N, and f(x_i), i = 0(1)N), while each of the iterations in the iterative improvement takes approximately 2(N + 1)² operations (ignoring the number of operations needed to calculate the coefficients α_{jν} of the perturbation operators, roughly [2(ν + 1)]³/3).

The α_{jν} are best computed as the weights of a composite quadrature formula of order at least 2(ν + 1). The length of the intervals for the basic quadrature formulas is M·h, where M has to be a factor of N (M may be equal to N). The coefficients of the basic formulas can be obtained as the solution of Vandermonde systems of linear equations, in the same way we obtained the coefficients of the differentiation formulas of section 3. In Van der Sluis (1972) a discussion of asymptotic error expansions for quadrature formulas of this kind can be found, and in Laurent (1964) and Stetter (1965) asymptotic expansions for the global discretization error of our method are discussed.

The problems below were solved on an IBM 360/75 in double precision. All problems are taken from Netravali, de Figueiredo (1974).

(1)  y(x) − ∫_0^1 x·t·y(t) dt − (e^x − x) = 0;
     exact solution: y(x) = e^x.

(2)  y(x) − ∫_0^1 x·t·y(t) dt − (sin(πx) − x/π) = 0;
     exact solution: y(x) = sin(πx).

(3)  y(x) − ∫_0^1 x⁴·e^{xt}·y(t) dt − (x − x³·[e^x·(1 − 1/x) + 1/x]) = 0;
     exact solution: y(x) = x.

(4)  y(x) − ∫_0^1 x⁴·e^{xt}·y(t) dt − (sin(πx) − π·x⁴·(e^x + 1)/(x² + π²)) = 0;
     exact solution: y(x) = sin(πx).

We have used N = 16 and record below the maximum errors in the successive iterates (iterate 0 is the unperturbed solution).

    Problem   Error in iterate number
              0          1          2          3           4           5           6
    1         2.2·10^-3  2.1·10^-6  2.1·10^-9  2.0·10^-12  1.1·10^-14  1.6·10^-15  7.1·10^-15
    2         1.5·10^-3  1.4·10^-6  1.4·10^-9  2.7·10^-11  2.7·10^-14  4.8·10^-15  3.2·10^-16
    3         2.3·10^-3  2.3·10^-5  2.0·10^-7  1.8·10^-9   1.6·10^-11  1.6·10^-13  7.5·10^-15
    4         6.6·10^-3  5.7·10^-5  5.1·10^-7  4.8·10^-9   4.3·10^-11  3.7·10^-13  7.5·10^-15
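The linear case is compact enough to sketch in full. The code below sets up the trapezoidal-rule system for problem (2) above, saves the LU factorization, and performs three improvement sweeps in which only the quadrature weight vector changes. The update is written in the defect-correction form φ_h(η^ν) = φ_h(η^{ν-1}) − φ_{h,ν}(η^{ν-1}); this is my reading of the improvement step, not a verbatim transcription of the report's formulas. The composite weights are built here from closed Newton-Cotes rules obtained by a small Vandermonde solve, and the names nc_weights, composite_weights are ad hoc.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def nc_weights(M):
        # Closed Newton-Cotes weights on the nodes 0, 1, ..., M (unit spacing):
        # sum_s w[s]*p(s) = integral_0^M p(s) ds for polynomials up to degree M.
        s = np.arange(M + 1, dtype=float)
        V = np.vander(s, M + 1, increasing=True).T        # V[k, s] = s**k
        mom = np.array([M**(k + 1) / (k + 1) for k in range(M + 1)])
        return np.linalg.solve(V, mom)

    def composite_weights(N, M):
        # Weights alpha with h*sum_j alpha_j*psi(t_j) ~ integral_a^b psi(t) dt,
        # order M + 2 for even M; M must divide N.
        w = np.zeros(N + 1)
        base = nc_weights(M)
        for p in range(0, N, M):
            w[p:p + M + 1] += base
        return w

    a, b, N = 0.0, 1.0, 16
    h = (b - a) / N
    x = a + h * np.arange(N + 1)
    K = np.outer(x, x)                                    # kernel K(x, t) = x*t
    f = np.sin(np.pi * x) - x / np.pi                     # problem (2), lambda = 1
    y_exact = np.sin(np.pi * x)

    alpha0 = np.ones(N + 1); alpha0[0] = alpha0[-1] = 0.5 # trapezoidal weights
    A = np.eye(N + 1) - h * K * alpha0                    # A[i, j] = delta_ij - h*alpha_j*K(x_i, t_j)
    lu = lu_factor(A)                                     # saved LU factorization
    eta = lu_solve(lu, f)                                 # unperturbed solution
    print(0, np.abs(eta - y_exact).max())

    for nu, M in [(1, 2), (2, 4), (3, 8)]:                # Simpson, Boole, 8-panel rule
        alpha = composite_weights(N, M)
        defect = eta - h * (K * alpha) @ eta - f          # phi_{h,nu}(eta)
        eta = eta - lu_solve(lu, defect)                  # one improvement sweep
        print(nu, np.abs(eta - y_exact).max())

Each sweep costs one matrix-vector product and one back-substitution with the saved factorization, and the printed errors should drop by several orders of magnitude per sweep, as in the table above.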
Note 1. The coefficients α_{jν}, j = 0, 1, ..., N, ν = 1, 2, ..., ν_max, can also be made independent of ν if they are chosen as α_{jν} = w_j, j = 0, ..., N, where

    h·Σ_{j=0}^{N} w_j·ψ(t_j) = ∫_a^b ψ(t) dt + O(h^m)   and   m ≥ (ν_max + 1)·2.

Note 2. The technique described above can be used for other discretizations of the integral equation, using e.g. non-uniform grid points x_i, i = 0, 1, ..., N. We can also use the same technique for other types of integral equations, e.g. nonlinear ones, as long as the assumptions of theorem 3 or 4 are satisfied. Essentially we require smoothness of the exact solution and a smooth error expansion for the solution of the discretized problem.

Consider nonlinear integral equations of the type

    y(x) − ∫_a^b K(x, t, y(x), y(t)) dt − f(x) = 0

with the following discretization:

    t_i = x_i = a + i·h,  i = 0(1)N;   h = (b − a)/N,

    φ_h(ξ) = ( ξ_i − h·Σ_{j=0}^{N} α_j·K(x_i, t_j, ξ_i, ξ_j) − f(x_i);  i = 0(1)N ),

where α_0 = α_N = 1/2 and α_j = 1, j = 1(1)N−1. The system of nonlinear equations obtained is solved by Newton iteration. The perturbations are constructed analogously to the perturbations for the linear problems above, and the system of nonlinear equations obtained for each perturbed problem is solved by Newton iteration. The iterations are terminated when the iteration error becomes less than ε. The problems below were solved in this fashion on an IBM 360/75 in double precision.

(1)  (Moore (1968))
     y(x) − ∫_0^1 x·t·y(t)² dt − 3x/4 = 0;
     exact solutions: y(x) = x and y(x) = 3x.

(2)  y(x) − ∫_0^1 y(x)²·y(t)²/(2(1 + x)) dt − (e^x − e^{2x}·(e² − 1)/(4(1 + x))) = 0;
     exact solution: y(x) = e^x (artificial).

We have used N = 10 and ε = 10^-14, and record below the maximum errors in the successive iterates (iterate 0 is the unperturbed solution).

    Problem   Error in iterate number
              0          1          2          3          4           5           6
    1         0.5·10^-2  0.5·10^-4  0.5·10^-6  0.5·10^-8  0.5·10^-10  0.5·10^-12  0.5·10^-14
    2         1.2·10^-2  1.9·10^-4  3.0·10^-6  4.3·10^-8  6.4·10^-10  9.3·10^-11  8.7·10^-11

Note 1. With N = 10 one cannot expect to get the error smaller than O(h^12), which is obtained after five iterative improvements. For problem 1, however, the sixth iterate improves the accuracy as much as the previous iterates did. This astonishing result is due to the fact that in the error expansion

    φ_{h,ν}(Δ_h y) = Σ_{j=(ν+1)·2}^{M} ψ_j(y)·h^j + O(h^{M+1})

we have ψ_j(y) = 0 for the exact solution y of the integral equation. This happens because the quadrature rules used for ν = 1, 2, ... are exact for polynomials of degree three or less, and for the exact solution the integrand is t³; cf. note 1 after the numerical results of section 4.3.1.

5. Concluding Remarks

In the final stage of the work on this report we received a preliminary version of the paper "Iterated defect corrections based on estimates of the local discretization error" by R. Frank, J. Hertling and C. W. Ueberhuber, report number 18/76, Institut für Numerische Mathematik, Technical University of Vienna, Vienna, Austria. There the authors consider an algorithm very similar to our algorithm for iterative improvement; however, their perturbation operators (or approximations to the local discretization error) are computed differently. For the two-point boundary value problem

    y'' = f(t, y),   y(0) = A, y(1) = B,

with the discretization

    φ_h(ξ) = ( ξ_0 − A;  (ξ_{i+1} − 2ξ_i + ξ_{i-1})/h² − f(t_i, ξ_i), i = 1, 2, ..., N−1;  ξ_N − B ),

they define locally smooth functions P_k(t, η), k = 1, 2, ..., N−1, that interpolate the solution η⁰ of φ_h(η⁰) = 0 at some points surrounding x = x_k.
Then they compute the local discretization error as

    (P_k(t_{k-1}, η⁰) − 2·P_k(t_k, η⁰) + P_k(t_{k+1}, η⁰))/h² − P_k''(t_k, η⁰),   k = 1, 2, ..., N−1.

The succeeding approximations to the local discretization error (corresponding to our approximations φ_{h,ν}(η^{ν-1}), ν = 2, 3, ...) are computed by the same technique, but now higher and higher order interpolation is used to define the functions P_k(t, η). The polynomials P_k''(t, η^ν) do not need to be computed explicitly; the second derivative can be computed by forming linear combinations of the components of η^ν, as we do when we approximate linear functionals. In our notation (cf. section 4.2) they define families of perturbation operators φ_{h,ν}, ν = 1, 2, ..., according to

    φ_{h,ν}(ξ) = ( ξ_0 − A;  (ξ_{k-1} − 2ξ_k + ξ_{k+1})/h² − D^{2,2ν+2}_{t_k}(ξ), k = 1, 2, ..., N−1;  ξ_N − B ).

Note that for any z ∈ E,

    φ_h(Δ_h z) = Δ_h⁰{ F(z) + Σ_{j=1}^{M} f_j(z)·h^{2j} } + O(h^{M+1}),

where

    f_j(z) = (2/(2j + 2)!)·z^{(2j+2)},

and

    φ_{h,ν}(Δ_h z) = Δ_h⁰{ Σ_{j=1}^{ν} f_j(z)·h^{2j} + Σ_{j=ν+1}^{M} f_{νj}(z)·h^{2j} } + O(h^{M+1})

(if D^{2,2ν+2} uses points symmetrically distributed around t_k), with proper definitions of Δ_h and Δ_h⁰. The improved solutions are computed according to

    φ_h(η^ν) = φ_h(η^{ν-1}) − φ_{h,ν}(η^{ν-1}),   ν = 1, 2, ...

Our theory does not cover this type of perturbations, but theorems similar to ours could be proved with our technique for this algorithm.

References

[1]  Ballester, C., Pereyra, V. (1967) "On the Construction of Discrete Approximations to Linear Differential Expressions," Math. Comp. 21, 297-302.
[2]  Bellman, R. E., Kalaba, R. E. (1965) "Quasilinearization and Nonlinear Boundary Value Problems," American Elsevier Publishing Company, New York.
[3]  Bjorck, A., Dahlquist, G. (1973) "Numerical Methods," Prentice-Hall, Englewood Cliffs, New Jersey.
[4]  Bjorck, A., Pereyra, V. (1970) "Solution of Vandermonde Systems of Equations," Math. Comp. 24, 893-903.
[5]  Ciarlet, P. G., Schultz, M. H., Varga, R. S. (1967) "Numerical Methods of High-Order Accuracy for Nonlinear Boundary Value Problems I," Num. Math. 9, 394-430.
[6]  Cole, J. D. (1951) "On a Quasilinear Parabolic Equation Occurring in Aerodynamics," Quarterly of Applied Mathematics, Vol. IX, 225-236.
[7]  Concus, P. (1967) "Numerical Solution of the Minimal Surface Equation," Math. Comp. 21, 340-350.
[8]  Dahlquist, G., Lindberg, B. (1973) "On Some Implicit One-Step Methods for Stiff Differential Equations," TRITA-NA-7302, Dept. of Information Processing, The Royal Institute of Technology, Stockholm, Sweden.
[9]  Fox, L. (1947) "Some Improvements in the Use of Relaxation Methods for the Solution of Ordinary and Partial Differential Equations," Proc. Roy. Soc. London Ser. A 190, 31-59.
[10] Frank, R. (1975) "The Method of Iterated Defect-Correction and Its Application for Two-Point Boundary Value Problems I," Report 8/75, Institut für Numerische Mathematik, Technische Hochschule, Vienna.
[11] Galimberti, G., Pereyra, V. (1970) "Numerical Differentiation and the Solution of Multidimensional Vandermonde Systems," Math. Comp. 24, 357-364.
[12] Galimberti, G., Pereyra, V. (1971) "Solving Confluent Vandermonde Systems of Hermite Type," Num. Math. 18, 44-60.
[13] Gourlay, A. R., Morris, J. L. (1968) "Deferred Approach to the Limit in Nonlinear Hyperbolic Systems," Comp. J. 11, 95-101.
[14] Hofmann, P. (1967) "Asymptotic Expansions of the Discretization Error of Boundary Value Problems of the Laplace Equation in Rectangular Domains," Num. Math. 9, 302-322.
[15] Joyce, D. C. (1971) "Survey of Extrapolation Processes in Numerical Analysis," SIAM Review 13, 435-490.
[16] Keller, H. B. (1970) "A New Difference Scheme for Parabolic Problems," in Numerical Solution of Partial Differential Equations II (SYNSPADE 1970), B. Hubbard (ed.), pp. 327-350.
(1970) "A New Difference Scheme for Parabolic Problems, in Numerical Solution of Partial Differential Equations II," SYNSPADE 1970 (ed. Hubbard B.), p. 327-350. [17] Kolar, W. (1972) "Uber Differenzenverfahren Von Monotoner Art fur Nichtlineare Parabolische Randwert Probleme, in Numerische Losung Nichtlinearer Partieller Differential und Integro - Differential Gleichungen," R. Ansorge, W. Tornig (eds.), Springer Verlag. [18] Kronsjo, L., Dahlquist, G. (1972) "On the Design of Nested Iterations for Elliptic Difference Equations," BIT 12, 63-71. [19] Lambert, J. D. (1973) "Computational Methods in ODE's," J. Wiley & Sons, London. [20] Laurent, P. J. (1964) "Etudies des Procedes d'Extrapolation en Analyse Numerique," Thesis, Univ. of Grenoble, Grenoble, France. [21] Lentini, M., Pereyra, V. (1974) "A Variable Order Finite Difference Method for Nonlinear Multipoint Boundary Value Problems," Math. Comp. 28, 981-1003. [22] Lentini, M., Pereyra, V. (1975a) "Boundary Problem Solvers for First Order Systems Based on Deferred Corrections, in Numerical Solution of Boundary Value Problems for Ordinary Differential Equations," Academic Press, New York. [23] Lentini, M., Pereyra, V. (1975b) "An Adaptive Finite Difference Solver for Nonlinear Two-Point Boundary Problems with Mild Boundary Layers," Rpoert STAN-CS-75-530, Computer Science Dept., Stanford University. [24] Lister (1960) "The Numerical Solution of Hyperbolic Partial Differential Equations by the Method of Characteristics, in Mathematical Methods for Digital Computers," A. Ralston and H. S. Wilf (eds.), J. Wiley, New York. [25] Moore, R. H. (1968) "Approximation to Nonlinear Operator Equations and Newton's Method," Num. Math. 12, 23-34. [26] Netravali, A. N., de Figueiredo, R. J. P. (1974) "Spline Approximation to the Solution of the Linear Fredholm Equation of the Second Kind," SIAM J. Numer. Anal. 11, 538-549. [27] Pereyra, V. (1967a) "Iterated Deferred Corrections for Nonlinear Operator Equations," Num. Math. 10, 316-323. [28] Pereyra, V. (1967b) "Accelerating the Convergence of Discretization Algorithms," SIAM J. Numer. Anal. 4, 508-533. 87 Pereyra, V. (1968) "Iterated Deferred Corrections for Nonlinear Boundary Value Problems," Num. Math. 11, 111-125. Pereyra, V. (1970) "Highly Accurate Numerical Solution of Casilinear Elliptic Boundary Value Problems in n Dimensions," Math. Comp. 24, 771-783. Pereyra, V. (1973) "High Order Finite Difference Solution of Differential Equations," Report STAN-CS-73-348, Computer Science Dept., Stanford University. Rail, L. B. (1969) "Computational Solution of Nonlinear Operator Equations," J. Wiley & Sons, New York. Skollermo, G. (1975a) "How the Boundary Conditions Affect the Stability and Accuracy of Some Implicit Methods for Hyperbolic Equations," Report No. 62, Dept. of Computer Science, Uppsala University, Uppsala, Sweden. Skollermo, G. (1975b) "Error Analysis for the Mixed Initial Boundary Value Problem for Hyperbolic Problems," Report No. 63, Dept. of Computer Science, Uppsala University, Uppsala, Sweden. Smith, R. R. (1970) "Extrapolation Applied to the Numerical Solution of Hyperbolic Partial Differential Equations," Doctoral Thesis, University of California, San Diego. Stetter, H. J. (1965) "Asymptotic Expansions for the Error of Discretization Algorithms for Nonlinear Functional Equations," Num. Math. 7, 18-31. Stetter, H. J. (1973) "Analysis of Discretization Methods for Ordinary Differential Equations," Springer Verlag, New York. Stetter, H. J. 
(1974) "Economical Global Error Estimation, in Stiff Differential Systems," R. A. Willoughby (ed.), Plenum Press, New York. Van der Sluis, A. (1972) "The Remainder Term in Quadrature Formulae," Num. Math. 19, 49-55. Volkov, E. A. (1957) "A Method for Improving the Accuracy of Grid Solutions of the Poisson Equation (in Russian)," Vycisl. Mat. 1, 62-80, translation in American Mathematical Society Translations, Ser. 2, Vol. 35 (1964). Werner, W. (1968) "Numerical Solution of Systems of Quasilinear Hyperbolic Differential Equations by Means of the Method of Neben- Characteristics in Combination with Extrapolation Methods," Num. Math. 11, 151-169. BIBLIOGRAPHIC DATA SHEET 4. Title and Subtitle 1. Report No. UIUCDCS-R-76-820 Error Estimation and Iterative Improvement for the Numerical Solution of Operator Equations 7. Author(s) Bengt Lindberg 9. Performing Organization Name and Address Department of Computer Science University of Illinois at Urbana-Champaign Urbana, IL 61801 12. Sponsoring Organization Name and Address Department of the Air Force Air Force Office of Scientific Research (AFSC) Boiling Air Force Base, DC 20332 15. Supplementary Notes 3. Recipient's Accession No. 5. Report Date July 1976 8. Performing Organization Rept. UIUCDCS-R-76-820 No 10. Project/Task/Work Unit No. 11. Contract/Grant No. AF0SR-75-2854 M Type of Report & Period Covered 14. 16. Abstracts "" ~~ ^^— — — — — — — — — — A method for estimation of the global discretization error of solutions of operator equations is presented. Further an algorithm for iterative improvement of the approximate solution of such problems is given. The theoretical foundation for the algorithms are given as a number of theorems. Several classes of operator equations are examined and numerical results for both the error estimation algorithm and the algorithm for iterative improvement are given for some classes of ordinary and partial differential equations and integral equations. 17. Key Words and Document Analysis. 17a. Descri ptors operator equations, discretization error, iterative improvement 7b. Identifiers/Open-Ended Ter 7c. COSATI Field/Group 8. Availability Statement Unl imited 3RM NTIS-35 ( 10-70) 19. Security Class (This Report) ■ c UNCLASSIFIED 20. Security Class (This Page UNCLASSIFIED 21. No. of Pages 94 22. Price USCOMM-DC 40329-P71 m 14 1577