UIUCDCS-R-73-570
COO-1469-0220

THE EFFECT OF VARIABLE MESH SIZE ON THE STABILITY OF MULTISTEP METHODS*

C. W. Gear
K. W. Tu

April 1973

DEPARTMENT OF COMPUTER SCIENCE
UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN
URBANA, ILLINOIS 61801

* Supported in part by grant US AEC AT(11-1)1469.

1. INTRODUCTION

This paper is concerned with the stability over the finite interval t in [0, T] of multistep formulas subject to varying step sizes. Although practical algorithms vary the step size in order to use as large a step as possible consistent with a local control of the estimated error, the existing literature, with the exceptions noted below, deals only with the stability of fixed step formulas. The standard derivations of multistep formulas, e.g. Henrici [3], are based on equal intervals. There are two common ways of handling variable steps for multistep formulas. One is to interpolate through the computed points to get values at equally spaced points for use in a standard equal interval formula. The other is to derive multistep formulas directly based on unequal intervals. These techniques do not have the same stability characteristics. In 1962 Nordsieck [4] noted an apparent instability in his interpolation version of the Adams formula if the step size was varied too frequently. In 1969 Piotrowski [5] proved that a variable step form of the Adams-Moulton formula was stable and convergent. Brayton et al. [1] and Gear [2] (pp. 145-146) point out that the two techniques for changing step size are not equivalent, and Brayton et al. give test examples showing that both can cause instability, but that the technique of step variation used by Piotrowski might be more stable.

The stability of a method is affected by three factors: the underlying multistep formula, the technique used to handle variable steps, and the scheme used to select step sizes (the formula, technique, and scheme for short). Method will be used to refer to the combination of a formula and a technique. This paper introduces the idea of a step selection scheme and defines the stability and convergence of a method with respect to a scheme. A stability condition is derived; it is necessary for stability and, together with consistency, implies convergence and stability. It is easy to generate consistent formulas; the more difficult task is to determine whether a combination of formula, technique, and scheme is stable. The two common techniques for step changing are examined, and it is proved that the variable step technique is superior to the interpolation technique for Adams formulas. Specifically, it is proved that the variable step technique used with Adams formulas is stable and convergent with respect to any step selection scheme, while the interpolation method can be unstable if the steps are allowed to change too much.
However, it is proved that the interpolation technique with the k-step Adams formula is stable if the step selection scheme guarantees that the step is constant for at least k steps after each change.

The investigation is extended to all multistep formulas, whether explicit, implicit, or predictor-corrector. It is proved that if the underlying formula is stable, then both step changing techniques lead to stable methods if the rate of change of the step size is small. It is also shown that common error control algorithms lead to small rates of change in the step size.

2. BACKGROUND

The two step changing techniques will be called interpolation and variable step, respectively. Intuitively, we define them as follows. Suppose we are using a k-step formula for integration and suppose that an approximation y_{n-i} to the solution is known at k values of the independent variable t, say t_{n-i}, k > i ≥ 0. The interpolation technique for changing the step size to h consists of the following two steps:

1. Interpolate through the known points to get values of the solution at the equally spaced points t_n - ih, k > i ≥ 0.
2. Use the fixed step integration formula on these new values to find the value at t_{n+1} = t_n + h.

In the variable step technique we start with k unequally spaced points t_{n-i}, k > i ≥ 0, and compute the coefficients of the multistep formula

    y_{n+1} = α_{1,n} y_n + ⋯ + α_{k,n} y_{n-k+1} + β_{0,n} h_n y'_{n+1}
              + β_{1,n} h_{n-1} y'_n + ⋯ + β_{k,n} h_{n-k} y'_{n-k+1}                         (1)

so that it is exact to the required order, where h_n = t_{n+1} - t_n. Since the order cannot exceed 2[k/2] + 2 if the formula is to be stable for fixed step sizes, additional restrictions must be imposed. These consist of specifying the values of some of the coefficients α and β. The fixed step formula underlying the variable step technique is the formula obtained from (1) when equally spaced points t_{n-i} are used. For example, if we require that α_j = 0 for j ≥ 2 and that (1) have order k+1, then Adams-Moulton is the underlying fixed step formula. If we also ask that β_0 = 0 and lower the order to k, we get a variable step method based on the Adams-Bashforth formula.

The stability of an integration formula subject to a step changing technique may depend on the way in which the steps are varied. Therefore we must define stability with respect to a particular step selection scheme, and we first state

Definition 1. A step selection scheme is a function θ such that

    h_n = h θ(t_n, h)

where, for all h > 0 and 0 ≤ t ≤ T,

    1 ≥ θ(t, h) ≥ Δ > 0.

NOTE: As h is reduced to zero, the maximum step size is reduced to zero. The lower bound Δ serves to prevent zero step sizes for any nonzero h; these would not give useful numerical integration schemes.

In this paper we will be concerned only with differential equations y' = f(y, t) for which f(y, t) is Lipschitz continuous in the strip |y| < ∞, 0 ≤ t ≤ T < ∞. Small perturbations to the solutions of such equations at a given point cause only small perturbations later on. Stability means that small numerical disturbances at one point t_n cause small perturbations in the numerical solution at later times for all small h. Convergence means that the numerical solution can be made arbitrarily accurate as h → 0.

Definition 2. A method is stable with respect to a scheme θ if there exists a constant M < ∞ (dependent on the differential equation only) such that

    ||ŷ_m - y_m|| ≤ M ||ŷ_n - y_n||

for all 0 ≤ t_n < t_m ≤ T, where y and ŷ are two numerical solutions started from different values.
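To make the variable step technique concrete, the following sketch computes the coefficients of a k-step variable step Adams-Bashforth formula, i.e. (1) with α_{1,n} = 1 and the remaining α's and β_{0,n} equal to zero, by imposing exactness on polynomials of degree up to k. It is a minimal Python/numpy sketch with names of my own choosing, not code from the paper; the unscaled weights b_i it returns correspond to β_{i,n} h_{n-i} in (1).

    import numpy as np

    def variable_ab_weights(t_hist, t_next):
        # t_hist = [t_n, t_{n-1}, ..., t_{n-k+1}] (most recent first); t_next = t_{n+1}.
        # Returns weights b_i of  y_{n+1} = y_n + b_1 y'_n + ... + b_k y'_{n-k+1},
        # chosen so the rule is exact for y = (t - t_n)^j, j = 1, ..., k.
        k = len(t_hist)
        x = np.array(t_hist) - t_hist[0]              # nodes relative to t_n
        h = t_next - t_hist[0]                        # the current step h_n
        # exactness for y = (t - t_n)^j:  sum_i b_i * j * x_i**(j-1) = h**j
        A = np.array([[j * xi ** (j - 1) for xi in x] for j in range(1, k + 1)])
        rhs = np.array([h ** j for j in range(1, k + 1)])
        return np.linalg.solve(A, rhs)                # a Vandermonde-type system in the x_i

    # Equal past steps recover the classical weights, e.g. k = 2 with h = 0.1
    # gives approximately [0.15, -0.05], i.e. [3h/2, -h/2].
    print(variable_ab_weights([0.0, -0.1], 0.1))

The system stays solvable as long as the nodes x_i remain distinct, which is exactly what the lower bound Δ of Definition 1 guarantees.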
The corrector formula is the implicit analogue of (1); written with coefficients α*_{i,n} and β*_{i,n} and applied to the m-th iterate, it is

    y^{(m+1)}_{n+1} = α*_{1,n} y_n + ⋯ + α*_{k,n} y_{n-k+1} + β*_{0,n} h_n y'^{(m)}_{n+1}
                      + β*_{1,n} h_{n-1} y'_n + ⋯ + β*_{k,n} h_{n-k} y'_{n-k+1}.              (2)

A predictor-corrector formula is one in which a predicted value y^{(0)}_{n+1} is computed by the explicit equation

    y^{(0)}_{n+1} = α_{1,n} y_n + ⋯ + α_{k,n} y_{n-k+1}
                    + β_{1,n} h_{n-1} y'_n + ⋯ + β_{k,n} h_{n-k} y'_{n-k+1}                    (3)

and equation (2) is used to compute a fixed finite number of corrected values y^{(1)}_{n+1}, ..., y^{(M)}_{n+1}. The approximation y_{n+1} is defined as y^{(M)}_{n+1}, and y'_{n+1} is set to either f(y^{(M-1)}_{n+1}) or f(y^{(M)}_{n+1}), depending on whether or not a final function evaluation is used.

An implicit formula is one in which equation (2) is solved for y^{(m+1)}_{n+1} = y^{(m)}_{n+1} = y_{n+1}. One way of doing this is to iterate a predictor-corrector formula until successive iterates are approximately equal. This converges for h sufficiently small but may diverge for large h. Other methods, such as Newton-Raphson, can also be used to solve (2). The stability of a method based on an implicit formula is independent of the process used to solve equation (2), so we do not need to specify that process.

During the computation, a number of previous values of the function y_{n-i} and of the scaled derivatives h_{n-i-1} y'_{n-i} must be saved. Let us collect these together in the column vector

    y_n = [y_n, y_{n-1}, ..., y_{n-k+1}, h_{n-1} y'_n, h_{n-2} y'_{n-1}, ..., h_{n-k} y'_{n-k+1}]^T

for a k-step formula, where T denotes the transpose. (If some of these values are not needed, we can drop them from the vector. For example, for Adams formulas y_n would be [y_n, h_{n-1} y'_n, ..., h_{n-k} y'_{n-k+1}]^T.)

A numerical integration method is a procedure for computing y_{n+1} from y_n. We first describe it for the variable step technique. Let y^{(m)}_{n+1} be defined as

    y^{(m)}_{n+1} = [y^{(m)}_{n+1}, y_n, ..., y_{n-k+2}, h_n y'^{(m)}_{n+1}, h_{n-1} y'_n, ..., h_{n-k+1} y'_{n-k+2}]^T

where h_n y'^{(m)}_{n+1} is h_n f(y^{(m)}_{n+1}, t_{n+1}) for m ≥ 1, while for m = 0

    h_n y'^{(0)}_{n+1} = γ_{1,n} y_n + ⋯ + γ_{k,n} y_{n-k+1}
                         + δ_{1,n} h_{n-1} y'_n + δ_{2,n} h_{n-2} y'_{n-1} + ⋯ + δ_{k,n} h_{n-k} y'_{n-k+1},

the coefficients γ and δ being given by

    γ_{i,n} = (α_{i,n} - α*_{i,n}) / β*_{0,n}    and    δ_{i,n} = (β_{i,n} - β*_{i,n}) / β*_{0,n}.

From these definitions we can write

    y^{(0)}_{n+1} = A_n y_n                                                                    (4)

where

    A_n = [ α_{1,n}  α_{2,n}  ⋯  α_{k,n} | β_{1,n}  β_{2,n}  ⋯  β_{k,n} ]
          [        I_{k-1}        0      |              0              ]
          [ γ_{1,n}  γ_{2,n}  ⋯  γ_{k,n} | δ_{1,n}  δ_{2,n}  ⋯  δ_{k,n} ]
          [              0               |        I_{k-1}        0     ]

(the rows containing the identity blocks simply shift the saved values y_n, ..., y_{n-k+2} and h_{n-1} y'_n, ..., h_{n-k+1} y'_{n-k+2} down one position), and

    y^{(m+1)}_{n+1} = y^{(m)}_{n+1} + ℓ_n [h_n f(y^{(m)}_{n+1}, t_{n+1}) - h_n y'^{(m)}_{n+1}]  (5)

where

    ℓ_n = [β*_{0,n}, 0, ..., 0, 1, 0, ..., 0]^T                                                (6)

with the 1 in the position of h_n y'_{n+1}. Equation (4) represents the predictor: the evaluation of y^{(0)}_{n+1} and h_n y'^{(0)}_{n+1} as linear combinations of the known information, together with the shifting over of the saved information and the discarding of y_{n-k+1} and h_{n-k} y'_{n-k+1}. Equation (5) represents the corrector: the evaluation of f(y^{(m)}_{n+1}, t_{n+1}) and the updating of y^{(m)}_{n+1} and h_n y'^{(m)}_{n+1}. For convenience, we define

    F(y^{(m)}_{n+1}) = h_n f(y^{(m)}_{n+1}, t_{n+1}) - h_n y'^{(m)}_{n+1}.                     (7)

Then equation (5) can be written

    y^{(m+1)}_{n+1} = y^{(m)}_{n+1} + ℓ_n F(y^{(m)}_{n+1}).                                    (8)

If a predictor-only formula is used, we still formally apply one step of (8), with ℓ_n = ℓ_F = [0, ..., 0, 1, 0, ..., 0]^T (in this case the values of α* and β* are unimportant). If a final function evaluation is used, we apply (8) M times using the ℓ_n given in (6) and then apply one further step of (8) using ℓ_n = ℓ_F.
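The predict-and-correct cycle (4)-(8) can be written out directly. The following is a minimal Python sketch of one step in the saved-vector form, assuming the coefficient arrays of (1)-(3) are supplied; the function and variable names are my own and the sketch is illustrative rather than the paper's algorithm.

    import numpy as np

    def pc_step(yvec, t_next, h_n, f, alpha, beta, alpha_c, beta_c, M=1):
        # yvec = [y_n, ..., y_{n-k+1}, h_{n-1} y'_n, ..., h_{n-k} y'_{n-k+1}]
        # alpha, beta    : predictor coefficients alpha_{i,n}, beta_{i,n}, i = 1..k
        # alpha_c, beta_c: corrector coefficients alpha*_{i,n} (i = 1..k) and
        #                  beta*_{i,n} (i = 0..k), so beta_c[0] is beta*_{0,n}
        k = len(alpha)
        gamma = (np.asarray(alpha) - np.asarray(alpha_c)) / beta_c[0]
        delta = (np.asarray(beta) - np.asarray(beta_c[1:])) / beta_c[0]
        A = np.zeros((2 * k, 2 * k))
        A[0, :k], A[0, k:] = alpha, beta        # first row of A_n: predicts y_{n+1}
        A[k, :k], A[k, k:] = gamma, delta       # row d = k+1:      predicts h_n y'_{n+1}
        for i in range(1, k):                   # the remaining rows shift the saved values
            A[i, i - 1] = 1.0
            A[k + i, k + i - 1] = 1.0
        ell = np.zeros(2 * k)
        ell[0], ell[k] = beta_c[0], 1.0         # the vector of (6)
        y = A @ np.asarray(yvec, dtype=float)   # predictor, equation (4)
        for _ in range(M):                      # M corrections, equations (7) and (8)
            y = y + ell * (h_n * f(y[0], t_next) - y[k])
        return y                                # the new saved vector

For the Adams formulas the α block reduces to (1, 0, ..., 0) and the y portion of the vector can be shortened as noted above; the sketch keeps the full vector for generality.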
Now let us consider a fixed step method in preparation for formalizing the interpolation technique. In this case the α, β, γ, and δ in A and ℓ are independent of n. Suppose we have just completed the step from t_{n-1} to t_n using step h_{n-1}. The values saved in y_n are the computed approximations at the points t_n - i h_{n-1}, 0 ≤ i < k. Before stepping to t_{n+1} = t_n + h_n we must interpolate to the points t_n - i h_n. This is a linear process and can be represented by first premultiplying y_n by an interpolating matrix C_n. Consequently, we write the predictor step for the interpolation process as

    y^{(0)}_{n+1} = A C_n y_n.                                                                 (9)

The corrector step (8) is unchanged except that ℓ_n = ℓ is independent of n.

Both for theoretical development and computational ease, it is convenient to perform a transformation based on Nordsieck's [4] form of the Adams formula for the interpolation technique. This consists of computing and saving Q y_n rather than y_n, where Q is a nonsingular matrix. The particular form of Q used is obtained as follows. The set of values of y_{n-i} and y'_{n-i}, 0 ≤ i < k, determines a unique polynomial of degree 2k-1. Represent that polynomial instead by its value at t_n and the values of its first 2k-1 scaled derivatives h^j_{n-1} y^{(j)}_n / j!, 1 ≤ j < 2k. (This leads to what is called a 2k-value method in Gear [2].) If y_n is the set of saved values of y and hy' based on step h_{n-1}, and

    a_n = [y_n, h_{n-1} y'_n, h²_{n-1} y''_n / 2, ..., h^{2k-1}_{n-1} y^{(2k-1)}_n / (2k-1)!]^T

is the set of values representing the same polynomial of degree 2k-1, then Q is defined by a_n = Q y_n. Q is independent of y_n and h_{n-1}. If we apply this transformation to (9) and (8) we get

    Q y^{(0)}_{n+1} = (Q A Q^{-1})(Q C_n Q^{-1}) Q y_n                                         (10)

    Q y^{(m+1)}_{n+1} = Q y^{(m)}_{n+1} + (Q ℓ) F(y^{(m)}_{n+1}).                              (11)

To simplify the presentation we will write y_n rather than a_n, A rather than Q A Q^{-1}, C_n rather than Q C_n Q^{-1}, ℓ rather than Q ℓ, and F(c) rather than F(Q c) when we are talking about the transformed scheme. Consequently, equations (10) and (11) are identical to (9) and (8). The advantage of making the transformation is that

    Q C_n Q^{-1} = diag[1, h_n/h_{n-1}, (h_n/h_{n-1})², ..., (h_n/h_{n-1})^{2k-1}]             (12)

so that the interpolation process is simplified. This was Nordsieck's motivation.

Finally, we note that (9) is identical to (4) if A_n is interpreted as A C_n for the interpolation technique. Consequently, equations (4) and (8) represent the predictor-only formula or the predictor-corrector formula in either step changing technique. The corrector-only formula can sometimes be implemented by iterating the corrector to convergence and treated theoretically by setting y_{n+1} = lim_{m→∞} y^{(m)}_{n+1}. However, this process may not converge (as for stiff equations), in which case the corrector must be solved by other means. We note that the corrector equation (8) always adds a multiple of ℓ to the current value. Consequently, the final value has the form

    y_{n+1} = y^{(0)}_{n+1} + ω ℓ = A_n y_n + ω ℓ                                              (13)

where ω is a scalar. The value of ω is such that

    F(A_n y_n + ω ℓ) = 0.                                                                      (14)

(Although we have used predictor coefficients in A_n, some straightforward arithmetic will show that the final value of y_{n+1} is independent of these coefficients when the corrector equation is solved by (13) and (14).)
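In the transformed (Nordsieck) representation, equation (12) says that a change of step size is nothing more than a diagonal rescaling of the saved vector. A minimal sketch (the function name is mine):

    def rescale_nordsieck(a, r):
        # a = [y, h*y', h^2*y''/2!, ..., h^q*y^(q)/q!];  r = h_new / h_old.
        # Multiplying component j by r**j applies the diagonal matrix of (12).
        return [r ** j * aj for j, aj in enumerate(a)]

    # Example: halving the step scales the j-th scaled derivative by (1/2)^j.
    print(rescale_nordsieck([1.0, 0.2, 0.02, 0.001], 0.5))

This is why the interpolation technique is conveniently implemented in the Nordsieck form: no interpolation weights need to be recomputed when the step changes.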
We must now develop the error propagation formula for equations (4) and either (8) or (13) and (14). The global error e_n is defined by

    e_n = y_n - y(t_n)                                                                         (15)

where y(t_n) is the value of the components of the vector y_n evaluated along the desired solution of the differential equation. The truncation error d_n is the error introduced by one step of the method, starting from the solution of the differential equation. Thus d_n is defined by the equations either

    ỹ^{(0)}_{n+1} = A_n y(t_n),   ỹ^{(m+1)}_{n+1} = ỹ^{(m)}_{n+1} + ℓ_n F(ỹ^{(m)}_{n+1}),   ỹ_{n+1} = ỹ^{(M)}_{n+1}

if M corrector iterations are used, or

    ỹ_{n+1} = ỹ^{(0)}_{n+1} + ω̃ ℓ = A_n y(t_n) + ω̃ ℓ                                         (16)

where F(ỹ_{n+1}) = 0 if the corrector is solved exactly, and in either case

    d_n = ỹ_{n+1} - y(t_{n+1}).

From the definitions of e_{n+1} and d_n,

    e_{n+1} = y_{n+1} - y(t_{n+1}) = y_{n+1} - ỹ_{n+1} + ỹ_{n+1} - y(t_{n+1})
            = y_{n+1} - ỹ_{n+1} + d_n.                                                         (17)

(Note that if round-off errors are to be considered, they can be included in d_n at this point, so that the computation of y_{n+1} can be considered to be exact.)

If M corrector iterations are used, write e^{(m)}_{n+1} = y^{(m)}_{n+1} - ỹ^{(m)}_{n+1}. Then

    e^{(m)}_{n+1} = e^{(m-1)}_{n+1} + ℓ_n F(y^{(m-1)}_{n+1}) - ℓ_n F(ỹ^{(m-1)}_{n+1})
                  = [I + ℓ_n ∂F^{(m)}/∂y] e^{(m-1)}_{n+1}                                       (18)

by the mean value theorem. Here ∂F^{(m)}/∂y is the row vector of partial derivatives of F with respect to the components of y, evaluated somewhere in the interval (y^{(m-1)}_{n+1}, ỹ^{(m-1)}_{n+1}). From equation (7) we see that

    ∂F^{(m)}/∂y = [h_n f_y^{(m)}, 0, ..., 0, -1, 0, ..., 0]

where f_y^{(m)} is ∂f/∂y evaluated in that interval. The process leading to (18) can be repeated, and since e^{(0)}_{n+1} = A_n e_n we get

    e_{n+1} = [I + ℓ_n ∂F^{(M)}/∂y] ⋯ [I + ℓ_n ∂F^{(1)}/∂y] A_n e_n + d_n.                      (19)

We write this in the form

    e_{n+1} = S_n e_n + d_n.                                                                    (20)

Let e_i^T be the unit row vector consisting of a one in the i-th position and zeros elsewhere; thus e_1^T is [1, 0, ..., 0]. Let the subscript d denote the component corresponding to the h_n y'_{n+1} entry in y_{n+1} (d is k+1 for the untransformed method, 2 for the transformed method). Then

    ∂F^{(m)}/∂y = h_n f_y^{(m)} e_1^T - e_d^T.                                                  (21)

Therefore, from (19), (20), and (21),

    S_n = ∏_{m=1}^{M} [(I - ℓ_n e_d^T) + h_n f_y^{(m)} ℓ_n e_1^T] A_n                            (22)
        = [(I - ℓ_n e_d^T)^M + {products of M factors of the form h_n f_y^{(m)} ℓ_n e_1^T
           or (I - ℓ_n e_d^T), with at least one of the former}] A_n.

We define

    S̄_n = (I - ℓ_n e_d^T)^M A_n.                                                               (23)

Then we can write (22) as

    S_n = S̄_n + h_n Ŝ_n                                                                        (24)

where Ŝ_n is a matrix whose elements are polynomials in h_n, f_y^{(m)} and entries of A_n and ℓ_n.

Lemma 1. If α, β, α* and β* are uniformly bounded, or if A_n and ℓ_n are uniformly bounded, then ||Ŝ_n|| is bounded for all 0 ≤ h_n ≤ u.

Proof. The elements of Ŝ_n are polynomials in h_n whose coefficients are polynomials in f_y and the entries of A_n and ℓ_n. f_y is bounded by the Lipschitz constant, while the entries of A_n and ℓ_n, except possibly those in the d-th row of A_n, are bounded if α, β, α* and β* are bounded. The d-th row of A_n contains elements of the form (α - α*)/β*_{0,n}, and could be unbounded if β*_{0,n} can approach zero. However, from (22) and (24), the d-th row of A_n enters Ŝ_n only through products in which A_n is premultiplied either by (I - ℓ_n e_d^T), which zeros the d-th row and changes the first row from the α's to the α*'s, or by ℓ_n e_1^T, which involves only the first row of A_n. Consequently, the coefficients of h_n in Ŝ_n are bounded, so for any u > 0, ||Ŝ_n|| is bounded for all 0 ≤ h_n ≤ u. If A_n and ℓ_n are uniformly bounded, the result follows similarly. Q.E.D.

The behavior of the error e_n is determined by equation (20). When ||Ŝ_n|| is bounded, we can use equation (24) to relate the stability of the method to the properties of S̄_n. The following condition will be shown to be necessary for a method to be stable.

Definition 4. A method satisfies the stability condition with respect to a step selection scheme if there exists a C < ∞ such that the products

    S̄^m_n = S̄_{m-1} S̄_{m-2} ⋯ S̄_n                                                            (25)

satisfy

    ||S̄^m_n|| ≤ C                                                                              (26)

for all 0 ≤ t_n < t_m ≤ T.
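For the scalar test equation y' = λy the decomposition (24) and the boundedness asserted in Lemma 1 can be checked directly. The sketch below uses, purely for illustration, the fixed step coefficients of the AB2 predictor with the order three Adams-Moulton corrector; the names and the choice of coefficients are mine.

    import numpy as np

    lam, M = -3.0, 2                                   # y' = lam*y, M corrector iterations
    beta   = [1.5, -0.5]                               # AB2 predictor (alpha = alpha* = [1, 0])
    beta_c = [5.0/12.0, 2.0/3.0, -1.0/12.0]            # AM corrector: beta*_0, beta*_1, beta*_2
    delta  = [(b - bc) / beta_c[0] for b, bc in zip(beta, beta_c[1:])]
    A   = np.array([[1.0, beta[0],  beta[1]],          # A_n for the Adams pair, saved vector
                    [0.0, delta[0], delta[1]],         #   [y_n, h y'_n, h y'_{n-1}]
                    [0.0, 1.0,      0.0]])
    ell = np.array([beta_c[0], 1.0, 0.0])              # the vector of (6); here d = 2
    e1, ed = np.eye(3)[0], np.eye(3)[1]
    Sbar = np.linalg.matrix_power(np.eye(3) - np.outer(ell, ed), M) @ A        # (23)
    for h in [0.1, 0.01, 0.001]:
        S = np.linalg.matrix_power(np.eye(3) - np.outer(ell, ed)
                                   + h * lam * np.outer(ell, e1), M) @ A       # (22)
        print(h, np.linalg.norm(S - Sbar) / h)         # ||S_n - S_bar_n|| / h_n stays bounded

As h decreases, the printed ratio tends to a finite limit, as (24) and Lemma 1 require.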
If an implicit method is used, the corrector is solved exactly, and we must derive equations (20) and (24) by a different technique. Substitute (13) and (16) in (17) to get

    e_{n+1} = A_n e_n + (ω - ω̃) ℓ + d_n.                                                       (27)

(The subscript n has been omitted from ℓ for clarity later.) We know that

    0 = F(y_{n+1}) - F(ỹ_{n+1}) = (∂F/∂y)(y_{n+1} - ỹ_{n+1}) = (∂F/∂y){A_n e_n + (ω - ω̃) ℓ},

so that

    (ω - ω̃) = -{(∂F/∂y) ℓ}^{-1} {(∂F/∂y) A_n e_n}.

Note that both quantities enclosed in braces are scalars. Substituting in (27) we get

    e_{n+1} = A_n e_n - ℓ {(∂F/∂y) ℓ}^{-1} {(∂F/∂y) A_n e_n} + d_n = S_n e_n + d_n

where, using (21),

    S_n = [I - ℓ {(∂F/∂y) ℓ}^{-1} (∂F/∂y)] A_n
        = [I - ℓ {ℓ_1 h_n f_y - ℓ_d}^{-1} (h_n f_y e_1^T - e_d^T)] A_n.

Here ℓ_i is the i-th component of ℓ. If we choose u such that the scalar ℓ_1 h_n f_y - ℓ_d is bounded away from zero for 0 ≤ h_n ≤ u, then

    S_n = [I - ℓ e_d^T] A_n + h_n Ŝ_n = S̄_n + h_n Ŝ_n.                                         (28)

The form of S̄_n is the same as the previous form with M = 1 and ℓ scaled so that ℓ_d = 1, as it always is in the methods discussed in this paper. We also need to extend Lemma 1 to this case.

Lemma 1a. ||Ŝ_n|| defined by (28) is bounded if α, β, α* and β* or A_n and ℓ_n are uniformly bounded.

Proof. From (28) and the preceding equation, with ℓ_d = 1,

    Ŝ_n = (S_n - S̄_n)/h_n = f_y {ℓ_1 h_n f_y - ℓ_d}^{-1} ℓ (ℓ_1 e_d^T - e_1^T) A_n
        = -f_y {ℓ_1 h_n f_y - ℓ_d}^{-1} ℓ e_1^T A_n + ℓ_1 f_y {ℓ_1 h_n f_y - ℓ_d}^{-1} ℓ e_d^T A_n.

All but possibly the d-th row of A_n are bounded. The first term does not involve the d-th row of A_n and is therefore bounded for 0 ≤ h_n ≤ u. The second term involves the d-th row of A_n but scales it by ℓ_1 = β*_{0,n}; since the elements of the d-th row have the form (α - α*)/β*_{0,n} and (β - β*)/β*_{0,n}, the product is bounded. Hence ||Ŝ_n|| is bounded. If A_n and ℓ_n are uniformly bounded, the result follows directly. Q.E.D.

4. STABILITY AND CONVERGENCE

Theorem 1. If a method is stable with respect to a step selection scheme θ, it satisfies the stability condition (26) with respect to that scheme.

Proof. Consider the differential equation y' = 0, for which the Ŝ_n defined by (24) is zero. Evidently the difference between two numerical solutions satisfies

    y_{n+1} - ŷ_{n+1} = S̄_n (y_n - ŷ_n).

If the ||S̄^m_n|| are not uniformly bounded, there exist t_n and t_m and bounded y_n - ŷ_n such that

    y_m - ŷ_m = S̄^m_n (y_n - ŷ_n)

is arbitrarily large, implying that the method is not stable with respect to θ. Hence stability implies the stability condition. Q.E.D.

The convergence theorem depends on the following lemma.

Lemma 2. Suppose that, for the difference equation

    x_{n+1} = S̄_n x_n + h_n Ŝ_n x_n + h_n Δ_n,                                                 (29)

there exist constants k_0, k_1, and k_2 such that

    ||Ŝ_n|| ≤ k_1 for all n,   ||Δ_n|| ≤ k_2 for all n,   and   ||S̄^m_n|| ≤ k_0 for all m ≥ n ≥ 0,

where S̄^n_n = I and S̄^m_n is defined by equation (25). Then, if k_1 > 0,

    ||x_N|| ≤ [k_0 ||x_0|| + k_2/k_1] e^{k_0 k_1 (t_N - t_0)} - k_2/k_1,                        (30)

while if k_1 = 0,

    ||x_N|| ≤ k_0 ||x_0|| + k_0 k_2 (t_N - t_0).                                                (31)

Proof. From (29) we have

    x_N = S̄^N_0 x_0 + Σ_{n=0}^{N-1} S̄^N_{n+1} h_n (Ŝ_n x_n + Δ_n).

Therefore

    ||x_N|| ≤ k_0 ||x_0|| + Σ_{n=0}^{N-1} k_0 h_n (k_1 ||x_n|| + k_2).                           (32)

Consider the case k_1 > 0 first. We show (30) by induction. It is evidently true for N = 0, since k_0 ≥ ||S̄^0_0|| = 1. Assuming its validity for n < N, we substitute in (32) to get

    ||x_N|| ≤ k_0 ||x_0|| + Σ_{n=0}^{N-1} k_0 k_1 h_n [k_0 ||x_0|| + k_2/k_1] e^{k_0 k_1 (t_n - t_0)}
            ≤ [k_0 ||x_0|| + k_2/k_1] [1 + Σ_{n=0}^{N-1} k_0 k_1 h_n e^{k_0 k_1 (t_n - t_0)}] - k_2/k_1
            ≤ [k_0 ||x_0|| + k_2/k_1] e^{k_0 k_1 (t_N - t_0)} - k_2/k_1,

since k_0 k_1 h_n e^{k_0 k_1 (t_n - t_0)} does not exceed the integral of k_0 k_1 e^{k_0 k_1 (t - t_0)} over [t_n, t_{n+1}], so that the bracketed sum is at most e^{k_0 k_1 (t_N - t_0)}. This completes the inductive proof of (30). If k_1 = 0, equation (31) follows from (32) directly. Q.E.D.
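A scalar instance makes the bound (30) easy to check. In the sketch below S̄_n = 1 (so k_0 = 1) and Ŝ_n and Δ_n sit at their bounds k_1 and k_2; the numerical values are arbitrary choices of mine.

    import math

    k0, k1, k2 = 1.0, 2.0, 0.5
    h, N = 0.01, 500
    x0 = 1.0
    x = x0
    for n in range(N):
        x = x + h * k1 * x + h * k2                  # worst-case scalar instance of (29)
    tN = N * h
    bound = (k0 * x0 + k2 / k1) * math.exp(k0 * k1 * tN) - k2 / k1   # the bound (30)
    print(x, bound)                                  # x stays below the bound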
The convergence theorem is now straightforward.

Theorem 2. If a method satisfies the stability condition with respect to a step selection scheme θ, if α, α*, β and β* or A_n and ℓ_n are uniformly bounded, and if the truncation error d_n satisfies

    ||d_n|| ≤ h_n ε(h) for all n,

where ε(h) → 0 as h → 0, then the method is stable and convergent with respect to θ.

For the variable step Adams formulas, the uniform boundedness of the coefficients required by Theorem 2 holds for any step selection scheme (this is the content of Theorem 4). The essential step is as follows. If the mesh points involved in the step to t_{n+1} are normalized to points ω_i, 0 ≤ i ≤ k, then by Definition 1 these satisfy bounds that are uniform in n,

    ω_i ≤ 1,                                                                                    (37)

and

    |ω_i - ω_j| ≥ Δ   if i ≠ j.                                                                 (38)

Dividing the exactness conditions (36) by r h^r gives a linear system (39) for the β*_{i,n}. The right hand side of (39) and the coefficients of the β*_{i,n} on the left hand side are bounded uniformly in n by (37). The determinant D of the coefficients is a simple multiple of the Vandermonde determinant, namely

    D = [∏_{k ≥ i > j ≥ 0} (ω_i - ω_j)] [∏_i (ω_i - ω_{i+1})].

By (38) this is bounded away from zero; hence the solution of (39), that is, the set of coefficients β*_{i,n}, is bounded uniformly in n. Q.E.D.

Theorem 5. The variable step Adams-Bashforth (AB), Adams-Bashforth-Moulton predictor-corrector (ABM), and Adams-Moulton (AM) methods are stable and convergent for any step selection scheme.

Proof. The vector of saved values is y_n = [y_n, h_{n-1} y'_n, ..., h_{n-k} y'_{n-k+1}]^T, so the matrix A_n is given by

    A_n = [ 1   β_{1,n}  β_{2,n}  ⋯  β_{k,n} ]
          [ 0   δ_{1,n}  δ_{2,n}  ⋯  δ_{k,n} ]                                                  (40)
          [ 0        I_{k-1}          0      ]

while ℓ_n is [0, 1, 0, ..., 0]^T for AB and [β*_{0,n}, 1, 0, ..., 0]^T for ABM and AM. Hence, from equation (23),

    S̄_n = (I - ℓ_n e_2^T)^M A_n = [ 1   η_{1,n}  η_{2,n}  ⋯  η_{k,n} ]
                                   [ 0   0        0        ⋯  0       ]                         (41)
                                   [ 0        I_{k-1}          0      ]

where η_{i,n} = β_{i,n} for AB and η_{i,n} = β*_{i,n} for ABM and AM. From Theorem 4 the η_{i,n} are uniformly bounded; therefore S̄_n is uniformly bounded, and hence the ||S̄^{n+m}_n|| are uniformly bounded for 0 ≤ m ≤ k. From the structure of S̄_n we see that for m ≥ k

    S̄^{n+m}_n = S̄^{n+k}_n,                                                                     (42)

since after k factors the last k components of S̄^{n+k}_n x are zero and each further factor leaves the first component unchanged. Hence the ||S̄^m_n|| are uniformly bounded for all m ≥ n, and the method satisfies the stability condition. From Theorem 4, A_n and ℓ_n are uniformly bounded. Finally, we can expand y(t_{n-i}) and h_{n-i-1} y'(t_{n-i}) in Taylor series about t_n with a remainder involving h^r_n y^{(r)}(ξ). If the order is greater than r - 2, all but the remainder terms vanish, and we can bound ||d_n|| by h_n (h^{r-1}_n C max ||y^{(r)}||). Thus the three conditions of Theorem 2 are satisfied. Q.E.D.
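The uniform bound on the products (25) asserted by Theorem 5 is easy to observe numerically. The sketch below (the names and the random step sequence are mine) builds S̄_n of (41) for the two step variable step Adams-Bashforth formula over an admissible step sequence and monitors the norms of the products; by (42) the product in fact stops changing after k = 2 factors.

    import numpy as np

    rng = np.random.default_rng(0)
    Delta = 0.25                                    # the lower bound of Definition 1
    h = 0.1 * rng.uniform(Delta, 1.0, 200)          # an admissible step sequence h_n
    prod, max_norm = np.eye(3), 0.0
    for n in range(2, len(h)):
        hn, hn1, hn2 = h[n], h[n - 1], h[n - 2]
        # variable step AB2 in the form (1):
        #   y_{n+1} = y_n + b1*h_{n-1}*y'_n + b2*h_{n-2}*y'_{n-1}
        b1 = hn / hn1 + hn * hn / (2.0 * hn1 * hn1)
        b2 = -hn * hn / (2.0 * hn1 * hn2)
        Sbar = np.array([[1.0, b1,  b2 ],           # (41): first row kept,
                         [0.0, 0.0, 0.0],           # the h_n y'_{n+1} row zeroed,
                         [0.0, 1.0, 0.0]])          # the last row shifts h y'_n
        prod = Sbar @ prod                          # the product (25), newest factor on the left
        max_norm = max(max_norm, np.linalg.norm(prod))
    print(max_norm)   # bounded independently of the length of the sequence (Theorem 5)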
Theorem 6. If the step selection scheme is such that at least k steps of constant size are taken between step changes, then the k-step interpolation AB, ABM, and AM methods are stable and convergent.

Proof. Once again we show that the hypotheses of Theorem 2 are satisfied. We use the Nordsieck vector to express the method as a (k+1)-value method. Then the matrix S̄_n is given by

    S̄_n = [I - ℓ e_2^T]^M A C_n = R C_n.                                                       (43)

The matrix R has the important properties that the simple eigenvalue one corresponds to the right eigenvector e_1 and that all other eigenvalues are zero (Gear [2], pp. 138-140). Consequently R^k x = μ_x e_1, where |μ_x| / ||x|| is uniformly bounded for all x (Tu [6], p. 20). The eigenvalue one of C_n also corresponds to the eigenvector e_1, and when h_n = h_{n-1}, C_n = I. From the hypothesis of the theorem it is not possible to have C_i ≠ I and C_j ≠ I with i ≠ j and |i - j| < k. Consequently, the product

    S̄^m_n = R C_{m-1} R C_{m-2} ⋯ R C_n

either contains fewer than 2k+2 matrices, or can be reduced to

    S̄^m_n = R C_{m-1} ⋯ R C_q R^k C_{q-k} R^{q-k-n},   where 0 ≤ q - k - n < k.

Since the ratio of adjacent steps is bounded by Δ^{-1}, ||C_n|| is uniformly bounded; hence ||C_n R|| ≤ k_4 uniformly. Therefore, if S̄^m_n contains 2k+2 or more matrices,

    S̄^m_n z = R C_{m-1} ⋯ R C_q R^k C_{q-k} R^{q-k-n} z = R C_{m-1} ⋯ R C_q R^k x,

where ||x|| ≤ k_5 ||z||. Hence

    S̄^m_n z = R C_{m-1} ⋯ R C_q μ_x e_1 = μ_x e_1,

so that ||S̄^m_n z|| = |μ_x| ≤ k_6 ||x|| ≤ k_5 k_6 ||z|| (since |μ_x| / ||x|| is bounded). Therefore ||S̄^m_n|| ≤ k_5 k_6. If S̄^m_n contains fewer than 2k+2 matrices, ||S̄^m_n|| is bounded by the finite product of the norms of its factors. Hence the method satisfies the stability condition. Theorem 3 shows that A and ℓ are bounded, and d_n is bounded by the same argument as used in Theorem 5. Hence Theorem 2 proves convergence and stability. Q.E.D.

Tu [6] points out that the interpolation form of the Adams formula is not necessarily stable under an arbitrary step selection scheme. He shows that for the three step ABM formula with alternating steps

    h_{2i} = ω h_{2i+1} = h_{2i+2} = ω h_{2i+3} = ⋯

the product S̄^m_n over i complete cycles has an eigenvalue [(ω - 1)²/(4ω)]^i, which is unbounded as i increases for sufficiently large ω. Examples of instability when ω = 10 are given in that thesis, along with examples showing that the variable step technique is stable for the same step selection scheme. This points out that the variable step technique is more stable in some cases. Another example in Tu's report substantiates the claim of Brayton et al. that the variable step technique is more stable: the problem y' = -y with h = 0.05 and h = 0.005 was integrated using a three step backward differentiation formula, and it was apparently stable for the variable step technique and apparently unstable for the interpolation technique. It is hypothesized that the variable step technique can also cause instabilities, but no examples have yet been found.

In practice we do not vary the step in such an extreme way. When the step changes are controlled to be small, both techniques are stable, as shown below.

Theorem 7. If a method satisfies the stability condition for fixed steps, it also satisfies it with respect to any step selection scheme that produces step size changes small in the sense that

    h_{n+1}/h_n = 1 + O(h)   as h → 0.                                                          (44)

NOTE: If θ(t, h) = θ(t) is a boundedly differentiable function of t, then the hypothesis of the theorem is satisfied, since

    h_{n+1}/h_n = θ(t_{n+1})/θ(t_n) = 1 + h_n θ'(ξ_n)/θ(t_n) = 1 + O(h).

Proof. We use the hypothesis of the theorem to decompose S̄_n into the form

    S̄_n = S̄ + h_n S̃_n                                                                         (45)

where S̄ is the form of S̄_n when constant steps are used and S̃_n can be bounded. (This step will be delayed until Lemma 3 below.) Then we can examine ||S̄^m_n|| as follows. For fixed n set

    x_m = S̄^m_n x_n,   m ≥ n.                                                                  (46)

Then

    x_{m+1} = S̄_m x_m = S̄ x_m + h_m S̃_m x_m.                                                  (47)

If the constant step method satisfies the stability condition, then ||S̄^m|| ≤ k_0. Lemma 3 will show that ||S̃_n|| ≤ k_1. Therefore Lemma 2 can be applied to (47) with k_2 = 0, and hence

    ||x_m|| ≤ k_0 ||x_n|| e^{k_0 k_1 (t_m - t_n)} ≤ k_0 e^{k_0 k_1 T} ||x_n||,

so the stability condition (26) holds.

The hypothesis (44) is satisfied by common error control algorithms. These select the step so that the estimated local error, which has the form φ(t_n) h_n^{r+1} + O(h_n^{r+2}), is equal to ε (or to ε h_n if the error per unit step is controlled). Hence

    h_n = (ε / φ(t_n))^{1/q} + O(h_n²),   where q = r+1 or r,

provided that φ(t_n) is not zero. If φ(t) is bounded away from zero and φ'(t) is bounded above, then

    h_{n+1}/h_n = 1 + O(h_n)

and h_min/h_max ≥ Δ > 0. Consequently, we can expect either method to be stable if the fixed step method is stable. However, in practice we do not send h to the limit, and we often do not bother to adjust the step if the change is small. Therefore, there is reason to prefer the variable step approach.
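Tu's alternating step example and the hypothesis of Theorem 6 can both be examined numerically in the Nordsieck form (43). The sketch below uses, for illustration, the order three Nordsieck Adams corrector vector ℓ = (5/12, 1, 3/4, 1/6) together with the Pascal matrix A; these standard values are my assumption here and are not necessarily the exact formula used in [6]. It compares the spectral radius of the cycle matrix when the step ratio alternates between ω and 1/ω every step with that of a cycle in which the step is held constant for k = 3 steps after each change.

    import numpy as np

    A = np.array([[1., 1., 1., 1.],                 # Pascal matrix: the Nordsieck predictor
                  [0., 1., 2., 3.],
                  [0., 0., 1., 3.],
                  [0., 0., 0., 1.]])
    ell = np.array([5./12., 1., 3./4., 1./6.])      # order-3 Nordsieck-Adams corrector vector
    e2 = np.array([0., 1., 0., 0.])
    R = (np.eye(4) - np.outer(ell, e2)) @ A         # S_bar_n = R C_n, cf. (43)

    def C(r):                                       # the step-ratio matrix of (12)
        return np.diag([1.0, r, r ** 2, r ** 3])

    def rho(M):                                     # spectral radius
        return np.max(np.abs(np.linalg.eigvals(M)))

    w = 10.0
    alternate = R @ C(1.0 / w) @ R @ C(w)           # the step ratio changes every step
    held = np.linalg.matrix_power(R, 3)             # three constant steps: C = I
    spaced = held @ R @ C(1.0 / w) @ held @ R @ C(w)
    print(rho(R), rho(alternate), rho(spaced))
    # rho(R) = 1 (eigenvalue one, all others zero); rho(alternate) is about 2.02 for
    # w = 10, so the products (25) grow and the stability condition fails; rho(spaced)
    # is 1, consistent with Theorem 6.

For ω = 10 the computed cycle eigenvalue agrees with Tu's value (ω - 1)²/(4ω) quoted above.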
Brayton et al. [1] point out that the amount of work in evaluating A_n y_n is less in the variable step approach than in the interpolation/Nordsieck approach, but that the coefficients in A_n and ℓ_n must be recomputed at each step. For a large system of equations this is not important, since A_n is the same for every equation in the system; for a few equations it is a serious consideration. This leads us to recommend variable step methods for large systems of equations and interpolation methods with the Nordsieck vector for small systems.

LIST OF REFERENCES

[1] Brayton, R. K., Gustavson, F. G., and Hachtel, G. D., "A New Efficient Algorithm for Solving Differential-Algebraic Systems Using Implicit Backward Differentiation Formulas," Proceedings of the IEEE, 60, #1, pp. 98-108, 1972.

[2] Gear, C. W., Numerical Initial Value Problems in Ordinary Differential Equations, Prentice-Hall, Inc., New Jersey, 1971.

[3] Henrici, P., Discrete Variable Methods in Ordinary Differential Equations, Wiley, New York, 1962.

[4] Nordsieck, A., "On the Numerical Integration of Ordinary Differential Equations," Math. Comp., 16, pp. 22-49, 1962.

[5] Piotrowski, P., "Stability, Consistency and Convergence of Variable K-Step Methods," Conference on the Numerical Solution of Differential Equations, Lecture Notes in Mathematics 109, Springer, Berlin, 1969.

[6] Tu, K. W., "Stability and Convergence of General Multistep and Multivalue Methods with Variable Step Size," Department of Computer Science Report 526, University of Illinois at Urbana-Champaign, July 1972 (Ph.D. Thesis).
Abstract: The effects of two different techniques for implementing variable mesh sizes in multistep formulas are investigated. It is proved that one is more stable than the other in some cases, but that both are stable when the step changes are small. The practical implications of these results are discussed.

Key words: interpolation, variable step.