and the requirement that the method have order i. No special action is needed to change order. The vector of saved values can formally be defined to include k past values of the derivative by

$$\mathbf{z}_n = [\,y_n,\; h_n y'_n,\; h_{n-1} y'_{n-1},\; \ldots,\; h_{n-k+1} y'_{n-k+1}\,]^T,$$

and the method can be represented by the matrix

$$A_n = \begin{bmatrix}
1 & \beta_{1,n} & \beta_{2,n} & \cdots & \beta_{k-1,n} & \beta_{k,n}\\
0 & 0 & 0 & \cdots & 0 & 0\\
0 & 1 & 0 & \cdots & 0 & 0\\
\vdots & & \ddots & & & \vdots\\
0 & 0 & 0 & \cdots & 1 & 0
\end{bmatrix}$$

and the vector $\ell = [0, 1, 0, \ldots, 0]^T$. The method changing operations are represented as matrices, in this case by the identity, so $O_{i,j} = I$.

A2. If the Nordsieck form of the Adams method is used, and step changing is handled by interpolation, we have from equations (9) through (12)

$$\mathbf{y}_n = \Bigl[\,y_n,\; h_n y'_n,\; \frac{h_n^2}{2!}y''_n,\; \ldots,\; \frac{h_n^k}{k!}y^{(k)}_n\,\Bigr]^T, \qquad A_n = A\,C_n,$$

where A is the Pascal triangle matrix of binomial coefficients,

$$A = \begin{bmatrix}
1 & 1 & 1 & \cdots & 1\\
  & 1 & 2 & \cdots & \binom{k}{1}\\
  &   & 1 & \cdots & \binom{k}{2}\\
  &   &   & \ddots & \vdots\\
  &   &   &        & 1
\end{bmatrix},
\qquad
C_n = \mathrm{diag}\Bigl[1,\; \frac{h_n}{h_{n-1}},\; \Bigl(\frac{h_n}{h_{n-1}}\Bigr)^2,\; \ldots,\; \Bigl(\frac{h_n}{h_{n-1}}\Bigr)^k\Bigr],$$

and, if the method is currently equivalent to an i-step method, $\ell_n$ is the vector $\ell$ for an i-step method extended to k+1 entries with zeros.

In this case, changing the order may involve some operations. Decreasing the order can be done by dropping the higher order derivatives. Since they are ignored in lower order methods, we can take $O_{i,j} = I$ if $i \le j$. When the order is increased, we could elect to start with zeros as approximations to the new derivatives required, in which case we could take $O_{i,j} = I$ also for $i > j$. Another common way of increasing order is to allow an order increase of only one at a time and, if the order has just been increased to j, to estimate $h^j y^{(j)}/j!$ by computing $1/j$ times the backward difference of $h^{j-1}y^{(j-1)}/(j-1)!$. To do this, we need to save the previous value of the latter scaled derivative, so we must modify A to be

$$A_n = \begin{bmatrix}
1 & 1 & \cdots & 1 & 0\\
  & 1 & \cdots & i & 0\\
  &   & \ddots & \vdots & \vdots\\
  &   &        & 1 & 0\\
  &   &        & 1 & 0
\end{bmatrix}
\qquad \longleftarrow\ (i+2)\text{-th row}
\tag{2.1}$$

when an i-th order method is being used and i < k; the leading $(i+1)\times(i+1)$ block is the Pascal matrix, and the single one in the (i+2)-th row copies the current value of the i-th scaled derivative into the (i+2)-th position. Then

$$O_{i+1,i} = \begin{bmatrix}
1 &        &   &               &               \\
  & \ddots &   &               &               \\
  &        & 1 &               &               \\
  &        &   & \dfrac{1}{i+1} & -\dfrac{1}{i+1}
\end{bmatrix}
\qquad \longleftarrow\ (i+2)\text{-th row}
\tag{2.2}$$

produces the required difference, since the (i+2)-th row is used to save the previous value of the i-th scaled derivative when the order i is less than k. We will call this the difference method of changing order.
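The bookkeeping in Example A2 is easy to exercise directly. The following sketch is our own illustration and not part of the report; the helper names and the test polynomial are invented for the purpose. It applies the rescaling matrix $C_n$ and the difference operator of equation (2.2) to a Nordsieck vector built from a polynomial whose scaled derivatives are known exactly.

```python
import numpy as np
from math import factorial

def nordsieck(coeffs, t, h, k):
    """Exact Nordsieck vector [y, h y', h^2 y''/2!, ..., h^k y^(k)/k!] at time t
    for the polynomial with the given coefficients (lowest degree first)."""
    p = np.polynomial.Polynomial(coeffs)
    return np.array([h**j / factorial(j) * p.deriv(j)(t) for j in range(k + 1)])

def rescale(h_new, h_old, k):
    """C_n = diag[1, r, r^2, ..., r^k] with r = h_new/h_old."""
    r = h_new / h_old
    return np.diag([r**j for j in range(k + 1)])

def order_increase(i, k):
    """O_{i+1,i} of equation (2.2): the identity, except that the (i+2)-th row
    forms 1/(i+1) times the backward difference of the i-th scaled derivative,
    whose previous value is kept in the (i+2)-th slot."""
    O = np.eye(k + 1)
    O[i + 1, i] = 1.0 / (i + 1)
    O[i + 1, i + 1] = -1.0 / (i + 1)
    return O

k, i, h, t = 3, 2, 0.1, 1.0
coeffs = [0.0, 0.0, 0.0, 1.0]                    # y(t) = t^3

# Interpolation step changing: C_n simply rescales the saved vector.
print(rescale(h / 2, h, k) @ nordsieck(coeffs, t, h, k))
print(nordsieck(coeffs, t, h / 2, k))            # identical

# Order increase 2 -> 3 by the difference method.
z = nordsieck(coeffs, t, h, k)
z[i + 1] = nordsieck(coeffs, t - h, h, k)[i]     # (i+2)-th slot: previous value
                                                 # of the i-th scaled derivative
z_new = order_increase(i, k) @ z
print(z_new[i + 1], nordsieck(coeffs, t, h, k)[i + 1])   # estimate vs. exact
```

For this cubic the difference estimate reproduces the new scaled derivative exactly; in general it is a first order approximation, which is all the order changing step requires.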
Difference between Examples A1 and A2

Since the Nordsieck representation is a transformation of the multistep representation, there do exist matrices $O_{i,j}$ for the Nordsieck representation such that changing the order in that representation is equivalent to the method used in Example A1. However, these matrices are complex (a discussion of what is needed for Adams methods can be found in Shampine and Gordon [2]). A representation is chosen with a view to making computations simple. Therefore, we expect that if the Nordsieck vector is used, the interpolation step changing and order changing techniques discussed in A2 will be used, whereas if a multistep representation is used, a variable step technique and the type of order changing discussed in A1 will be used. For this reason, we will talk about order changing with variable step and interpolation step changing techniques, meaning that the types of order changing discussed in Examples A1 and A2, respectively, are used.

2.1 Stability and Convergence

When the formula changing process is represented by a matrix, as above, we can derive an error equation similar to equation (20). It will be

$$\epsilon_{n+1} = T_n\,\epsilon_n + d_{n+1},$$

where $i_n$ is the number of the formula used on the step from $t_{n-1}$ to $t_n$. To simplify the notation, we will write $O_n$ for $O_{i_{n+1},\,i_n}$ and $T_n$ for the complete amplification matrix of the step, formula change included. Stability and convergence were defined in Definitions 2 and 3 with respect to a step selection scheme $\delta$. For variable formula methods we extend the definitions in the obvious way with respect to formula and step selection schemes.

Definition 2.3. A method is stable with respect to a step selection scheme $\delta$ and a formula selection scheme $I$ if there exists a constant $M < \infty$ (dependent on the differential equation only) such that $\|y_n - \hat y_n\| \le M\,\|y_m - \hat y_m\|$ for all $0 \le t_m \le t_n \le T$, where $y_n$ and $\hat y_n$ are two numerical solutions.

Definition 2.4. A method is convergent with respect to a step selection scheme $\delta$ and a formula selection scheme $I$ if the computed solution $y_n$ converges to $y(t_n)$ for any $0 \le t_n \le T$ as $h \to 0$ and the starting errors $\to 0$.

Definition 2.5. A method satisfies the stability condition with respect to a step selection scheme $\delta$ and a formula selection scheme $I$ if the matrices

$$\bar T^m_n = \bar T_{m-1}\,\bar T_{m-2}\cdots \bar T_n, \qquad m > n,$$

are uniformly bounded, where $\bar T_n$ is the value of $T_n$ evaluated for the differential equation $y' = 0$.

Theorem 2.1. If a method is stable with respect to $\delta$ and $I$, it satisfies the stability condition with respect to $\delta$ and $I$.

The proof is identical to the proof of Theorem 1.

In the discussion below, we will usually refer to $(k+1) \times (k+1)$ matrices and $(k+1)$-element vectors. Any vector norm and subordinate matrix norm can be used. When a matrix is partitioned and a norm of one of the partitions is used, that norm is defined as the norm of the $(k+1) \times (k+1)$ matrix obtained by replacing all other partitions by 0. Thus, if X is a $(k+1) \times (k+1)$ matrix and

$$X = \begin{bmatrix} A & B\\ C & D\end{bmatrix}, \qquad \text{then} \qquad \|B\| = \left\|\begin{bmatrix} 0 & B\\ 0 & 0\end{bmatrix}\right\|.$$

Thus, $\|X\| \le \|A\| + \|B\| + \|C\| + \|D\|$. The same definition is used for the norm of a part of a vector.

3. ADAMS TYPE METHODS

As with fixed order methods, varying order methods appear to be less stable with the interpolation technique than with the variable step technique. We will prove that a class of formulas which includes the Adams formulas is stable with respect to any formula and step selection schemes if the variable step technique is used. However, we can only prove that the Adams formula is stable for the interpolation technique if the step and order are periodically held constant for p steps after a change to an r-step formula, where p = r if the $O_{i,j}$ are the identity, and p = r+1 if the difference method is used.

The next theorem is a straightforward extension of Theorem 2.

Theorem 2.2. If a method satisfies the stability condition with respect to $\delta$ and $I$, if the quantities $\alpha$, $\beta$, $\alpha^*$, and $\beta^*$, or A and $\ell$, defined in [1] are uniformly bounded, if the $O_{i,j}$ are bounded, and if the truncation error satisfies

$$\|d_n\| \le h_n\,\epsilon(h) \qquad \forall\, n,$$

where $\epsilon(h) \to 0$ as $h \to 0$, then the method is stable and convergent, and there exist constants $k_3$ and $k_4$ such that

$$\|\epsilon_n\| \le k_3\,\|\epsilon_0\| + k_4\,\epsilon(h)$$

for $0 \le t_n \le T$.

Theorem 2.3. If a variable step method uses the corrector

$$y_{n+1} = \alpha^*_1\, y_n + \cdots + \alpha^*_k\, y_{n-k+1} + \beta^*_{0,n}\, h_n\, y'_{n+1} + \cdots + \beta^*_{m,n}\, h_{n-m}\, y'_{n-m},$$

where the $\alpha^*_i$ are independent of the step sizes and formula used (we call these "constant-$\rho$" methods), if the $\beta^*_{i,n}$ are uniformly bounded, and if

$$\rho(\xi) = -\xi^k + \alpha^*_1\,\xi^{k-1} + \cdots + \alpha^*_k$$

satisfies the stability condition, then the method satisfies the stability condition with respect to any step and formula selection schemes.

NOTE: If an explicit method is used, the predictor coefficients $\alpha_i$ must satisfy the constant-$\rho$ and stability conditions.
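Before turning to the proof, note that the stability condition on $\rho(\xi)$ is just the classical root condition on its zeros, and it is easy to examine numerically. The sketch below is our own illustration, not part of the report, and the coefficient sets are artificial examples: it forms the companion matrix of $\rho$ and watches the norms of its powers.

```python
import numpy as np

def companion(alpha_star):
    """Companion matrix of rho(xi) = -xi^k + a1 xi^(k-1) + ... + ak;
    its eigenvalues are the roots of rho."""
    k = len(alpha_star)
    C = np.zeros((k, k))
    C[0, :] = alpha_star            # first row carries the alpha* coefficients
    C[1:, :-1] = np.eye(k - 1)      # the rest shifts the saved y values down
    return C

def report(alpha_star, label, pmax=200):
    C = companion(alpha_star)
    roots = np.sort(np.abs(np.linalg.eigvals(C)))
    growth = max(np.linalg.norm(np.linalg.matrix_power(C, p), np.inf)
                 for p in range(1, pmax + 1))
    print(f"{label}: |roots of rho| = {np.round(roots, 3)}, "
          f"max ||C^p|| for p <= {pmax}: {growth:.3g}")

# Adams: rho(xi) = -xi^3 + xi^2, roots 1, 0, 0; the powers stay bounded.
report([1.0, 0.0, 0.0], "Adams, k = 3")
# An artificial choice with a root outside the unit circle; the powers blow up.
report([0.0, 1.5, -0.5], "root condition violated")
```

Theorem 2.3 says that this check on the constant $\alpha^*$ part alone settles stability of the variable step method, provided the $\beta^*_{i,n}$ stay bounded.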
Proof

From equation (23), the amplification matrix for $y' = 0$ is

$$\bar S_n = \begin{bmatrix} C & D_n\\ 0 & L\end{bmatrix},$$

where C is the $k \times k$ companion matrix of $\rho(\xi)$, whose first row is $(\alpha^*_1, \ldots, \alpha^*_{k-1}, \alpha^*_k)$; $D_n$ is the block whose first row is $(\beta^*_{1,n}, \ldots, \beta^*_{m-1,n}, \beta^*_{m,n})$ and whose other entries are zero; and L is the shift matrix that moves each saved scaled derivative down one position and introduces a zero in the slot for the new derivative, which vanishes for $y' = 0$. Since the $\beta^*_{i,n}$ are uniformly bounded, the $\bar S^{n+p}_n$ are uniformly bounded for $0 \le p \le m$. From the structure of $\bar S_n$, we see that $L^m = 0$; hence, if $p > m$,

$$\bar S^{n+p}_n = \begin{bmatrix} C^p & C^{p-m}\bigl(C^{m-1}D_n + C^{m-2}D_{n+1}L + \cdots + D_{n+m-1}L^{m-1}\bigr)\\ 0 & 0\end{bmatrix}.$$

Hence

$$\|\bar S^{n+p}_n\| \le \|C^p\| + \|C^{p-m}\|\;\bigl\|C^{m-1}D_n + \cdots + D_{n+m-1}L^{m-1}\bigr\|$$

for $p > m$. However, C is the companion matrix of the polynomial $\rho(\xi)$. Hence, if $\rho(\xi)$ satisfies the stability condition, $\|C^p\|$ and $\|C^{p-m}\|$ are bounded. Hence $\|\bar S^{n+p}_n\|$ is bounded for all $p \ge 0$. Q.E.D.

Corollary. If the $\beta_i$ and $\beta^*_i$ are chosen to achieve maximum order (at least m for the predictor, m+1 for the corrector), the method is stable and convergent for any step selection scheme, even if m is changed from step to step, and from predictor to corrector, as long as m is bounded, $\rho(\xi)$ satisfies the stability condition, $\rho(1) = 0$, and $m \ge 0$.

NOTE: This means that the variable step, variable order Adams method is stable for any step and order selection schemes.

Proof

Following the argument used in Theorem 4, the $\beta^*_{i,n}$ are solutions of the equations

$$\sum_{i=0}^{m} \beta^*_{i,n}\,(\omega_i - \omega_{i+1})\,\omega_i^{\,r-1} = -\frac{1}{r}\sum_{i=1}^{k} \alpha^*_i\,\omega_i^{\,r}, \qquad 1 \le r \le m+1,$$

in the notation of [1] (or $1 \le r \le m$ if $\beta^*_{0,n} = 0$ for an explicit method). Hence, the solutions $\beta^*_{i,n}$ (and $\beta_{i,n}$ for an explicit method) can be bounded above. Thus, the conditions of the preceding theorem are met, and the method satisfies the stability condition. The order of the corrector will be at least one if $m \ge 0$; hence the truncation error can be bounded by $\|d_n\| \le h_n\,\epsilon(h)$, where $\epsilon(h) \to 0$ as $h \to 0$, so the conditions of Theorem 2.2 are met. Hence, the method is convergent and stable. Q.E.D.
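The boundedness of the corrector coefficients asserted in this proof can be observed directly. The sketch below is our own illustration, not part of the report: it generates variable step Adams-Moulton weights by the standard interpolatory construction (a normalization slightly different from the $\beta^*_{i,n}$ above) and prints them for a few step patterns.

```python
import numpy as np
from numpy.polynomial import Polynomial

def adams_moulton_weights(t):
    """Weights w_j of the variable step Adams-Moulton corrector
    y_{n+1} = y_n + sum_j w_j * f(t_j), where t = [t_{n+1}, t_n, t_{n-1}, ...]
    and w_j is the integral over [t_n, t_{n+1}] of the j-th Lagrange basis
    polynomial on the nodes t."""
    w = []
    for j, tj in enumerate(t):
        L = Polynomial([1.0])
        for m, tm in enumerate(t):
            if m != j:
                L = L * Polynomial([-tm, 1.0]) / (tj - tm)
        P = L.integ()
        w.append(P(t[0]) - P(t[1]))
    return np.array(w)

# steps = (h_{n+1}, h_n, h_{n-1}), the step sizes counted backward from t_{n+1}.
for steps in [(1.0, 1.0, 1.0), (1.0, 0.2, 5.0), (1.0, 5.0, 0.2)]:
    t = [0.0]
    for h in steps:
        t.append(t[-1] - h)
    print(steps, np.round(adams_moulton_weights(t) / steps[0], 4))
# For equal steps the normalized weights are the classical 9/24, 19/24, -5/24, 1/24.
```

Even with step ratios of five the weights stay of moderate size, which is the boundedness the Corollary needs.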
Theorem 2.4. If a finite set of either explicit, predictor-corrector, or implicit methods are based on Adams formulas using the interpolation step changing technique, if the order of a corrector is equal to or one larger than the order of its predictor, and if the step and formula selection schemes are such that there exists a finite constant M with the property that in any M or more consecutive steps there are at least p fixed steps using a predictor of order r, where p = r if all of the $O_{i,j}$'s are the identity, or p = r+1 if the difference method (equations (2.1) and (2.2)) is used, then the methods are stable and convergent with respect to those selection schemes.

Proof

A simple extension of the proof of Theorem 6 will be used to show that the method satisfies the stability criterion. Then Theorem 2.2 shows that it is stable and convergent. For the equation $y' = 0$, $\bar T_n$ is given by

$$\bar T_n = \bigl[I - \ell_n e_2^T\bigr]\,A\,C_n\,O_n = R_n\,C_n\,O_n,$$

where $C_n$ is uniformly bounded by the definition of a step selection scheme, and $O_n$ is uniformly bounded (it is either I or as given in (2.2)). If $0 \le m \le M$, $\|\bar T^{n+m}_n\|$ is uniformly bounded. From the hypothesis of the theorem, we are guaranteed that in any M or more consecutive $\bar T_n$'s, there will be at least p adjacent ones in which $C_q = O_q = I$ (no change in step or order) and $A_q$ and $\ell_q$ are constant (no change in order). Thus,

$$\bar T^{n+m}_n = \bar T^{n+m}_{q+p}\;R^p\;\bar T^{q}_n,$$

where $q - n \le M$. R is given by

$$R = \begin{bmatrix} \bar R & 0\\ \tilde R & 0\end{bmatrix},$$

where $\bar R$ is the $(r+1)\times(r+1)$ matrix defined in (43) and $\tilde R$ is either all zero, if all the $O_{i,j}$ are the identity, or contains a unit entry in the top right position, if the order increasing technique uses the $A_n$ given in (2.1). From the properties of $\bar R$ given in [1], it follows that

$$R^p\,x = \gamma_x\,e_1.$$

As in Theorem 6, $\gamma_x \le C\,\|x\|$. Since $e_1$ is a unit eigenvector of R, $C_q$, and $O_q$, and hence of $\bar T_q$, we have

$$\|\bar T^{n+m}_n\,z\| = \|\bar T^{n+m}_{q+p}\,R^p\,\bar T^q_n\,z\| = \|\bar T^{n+m}_{q+p}\,\gamma_x\,e_1\| = |\gamma_x| \le C\,\|\bar T^q_n\,z\| \le C\,\bigl[\max_j \|\bar T_j\|\bigr]^M\,\|z\|,$$

where $x = \bar T^q_n z$. Hence, for $m > M$,

$$\|\bar T^{n+m}_n\| \le C\,\bigl[\max_j \|\bar T_j\|\bigr]^M.$$

Thus, the method satisfies the stability condition with respect to the step and formula selection schemes, and so is stable and convergent. Q.E.D.

4. GENERAL METHODS

Theorem 7 showed that a fixed formula method was stable with respect to step selection schemes producing small step changes for both variable step and interpolation methods. There is no direct, useful equivalent here: since the class of formulas used provides only one or a few formulas of each order, there is no such concept as a "small" formula change. The closest similar idea is that infrequent formula changes do not cause instability.

Theorem 2.5. If a variable formula method is such that:

(i) each formula $F_i$ is strongly stable,
(ii) each formula has order $\ge 1$,
(iii) the order changing operations $O_{i,j}$ are exact for the equation $y' = 0$, and $\|O_{i,j}\|$ is bounded,
(iv) the coefficients in the methods $M_i$ are uniformly bounded,

then for each formula $F_i$ there exists an integer $p_i$ such that the method is stable and convergent with respect to a step selection scheme $\delta$ that produces small step changes such that

$$\frac{h_{n+1}}{h_n} = 1 + O(h)$$

and a formula selection scheme $I$ that produces infrequent formula changes in the sense that at least $p_i$ steps are taken without formula change after the formula $F_i$ has been selected.

Proof

We will verify that the hypotheses of Theorem 2.2 are satisfied. Hypothesis (iv) of this theorem assures us that A and $\ell$ are uniformly bounded. Hypothesis (ii) tells us that there exists an $\epsilon(h) \to 0$ as $h \to 0$ such that

$$\|d_n\| \le h_n\,\epsilon(h) \qquad \forall\, n,$$

so we only need verify the stability condition, namely, that $\|\bar T^m_n\|$ is uniformly bounded. Although $\bar T_n$ is independent of the differential equation by definition, it does vary with step size and with order. First, we get rid of the step size variation by proving a slight extension of a result contained in Theorem 7. As there, we decompose $\bar T_n$ into $\hat T_n$, the operator for constant steps, and $h_n \tilde T_n$, where $\tilde T_n$ can be bounded. It follows directly from the proof of Theorem 7 that if $\hat T^m_n$ and $\tilde T_n$ are uniformly bounded, then $\bar T^m_n$ is uniformly bounded.

In order to bound $\hat T^m_n$, we must look at the structure of the different formulas and step changing operations used. We have $\hat T_n = S_{i_n}\,O_n$, where $S_{i_n}$ depends only on the formula $F_{i_n}$ used; that is, it is the amplification matrix for a constant step method applied to $y' = 0$. Since the methods have order $\ge 1$, $S_i$ has an eigenvector with ones in the positions corresponding to entries for $y_j$ in $\mathbf{y}_n$ and zeros elsewhere. This eigenvector, say $x_1$, corresponds to the eigenvalue one, and it is simply the statement that the method is exact for $y = 1$, $y' = 0$. Since the formulas are strongly stable, all other eigenvalues of $S_i$ are less than one in magnitude. By hypothesis (iii), $x_1$ is also an eigenvector of $O_{i,j}$ corresponding to $\lambda = 1$. Although $\hat T^m_n$ is a product of different matrices, if the changes are sufficiently infrequent it has the form

$$\hat T^m_n = S_{i_1}^{\,n_1}\,S_{i_2}^{\,n_2}\cdots S_{i_R}^{\,n_R},$$

with the changing operations O inserted at each formula change. We will show that the operator $\hat T^m_n$ can be bounded because it does not change $x_1$, and eventually decreases all other components if there are enough $S_i$'s of the same index next to each other. This is shown in Lemma 2.1 below (in more generality than needed at the moment). Q.E.D.
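The integer $p_i$ whose existence the theorem asserts can be estimated numerically once the amplification matrix of a formula is known. The sketch below is our own illustration with two artificial strongly stable example matrices (they are not formulas from the report): it finds the smallest p for which $S^p$ is within a chosen threshold of its limiting rank-one projector, in the spirit of the criterion used for the table at the end of this section.

```python
import numpy as np

def unit_projector(S):
    """Spectral projector onto the eigenvalue-one direction of S, assumed to
    be a simple eigenvalue, as it is for a strongly stable formula."""
    w, V = np.linalg.eig(S)
    right = V[:, np.argmin(np.abs(w - 1.0))].real
    wl, U = np.linalg.eig(S.T)
    left = U[:, np.argmin(np.abs(wl - 1.0))].real
    return np.outer(right, left) / (left @ right)

def steps_needed(S, threshold=1.0):
    """Smallest p with ||S^p - P|| < threshold: after p constant steps every
    component except the principal one has been damped by at least this much."""
    P = unit_projector(S)
    p, M = 1, S.copy()
    while np.linalg.norm(M - P) >= threshold:
        M = M @ S
        p += 1
    return p

# Two artificial strongly stable three-step formulas applied to y' = 0
# (state [y_n, y_{n-1}, y_{n-2}]); extraneous roots 0.4, 0.4 and 0.9, 0.9.
S_fast = np.array([[1.8, -0.96, 0.16], [1, 0, 0], [0, 1, 0]])
S_slow = np.array([[2.8, -2.61, 0.81], [1, 0, 0], [0, 1, 0]])
print(steps_needed(S_fast), steps_needed(S_slow))
# The second formula damps its extraneous components much more slowly, so many
# more constant steps are needed before another formula change is safe.
```

Any fixed threshold gives a finite answer because the extraneous eigenvalues lie strictly inside the unit circle; the threshold of one mirrors the norm criterion used for the $p_{ij}$ table below.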
Lemma 2.1. Let $\{S_i\}$ and $\{O_i\}$ be sets of $q \times q$ matrices with the following properties:

(i) If $\{\lambda_{i,j}\}$, $j = 1, \ldots, q$, are the eigenvalues of $S_i$, then there exist constants $\nu$ and $\eta$ such that

$$1 = |\lambda_{i,1}| = |\lambda_{i,2}| = \cdots = |\lambda_{i,\nu}| > \eta \ge |\lambda_{i,\nu+1}| \ge \cdots \ge |\lambda_{i,q}| \qquad \forall\, i.$$

(ii) There exists a constant linearly independent set of vectors $x_1, x_2, \ldots, x_\nu$ of norm 1 such that $S_i\,x_j = \lambda_{i,j}\,x_j$, $j = 1, \ldots, \nu$ and $\forall\, i$.

(iii) If $Q_i$ is such that its columns have a norm of one and $Q_i^{-1} S_i Q_i = J_i$ is the Jordan canonical form of $S_i$ (without loss of generality, we can assume that the first $\nu$ columns of $Q_i$ are $x_1, \ldots, x_\nu$), then there exist constants $C_1$, $C_2$ such that $\|Q_i^{-1}\| \le C_1$ and $\|Q_i\| \le C_2$ $\forall\, i$.

(iv) $O_i\,x_j = x_j$, $j = 1, \ldots, \nu$ and $\forall\, i$, and there exists a constant $C_0$ such that $\|O_i\| \le C_0$ $\forall\, i$.

Then for any constant $K > C_0 C_1 C_2$ there exists a set of integers $\{p_i\}$ such that

$$\Bigl\|\prod_{i=1}^{N} (S_i)^{p'_i}\,O_i\Bigr\| \le K \qquad \forall\, N \ge 0$$

whenever $p'_i \ge p_i$.

Proof

Hypotheses (ii) and (iii) imply that $J_i$ has the form

$$J_i = \begin{bmatrix} \Lambda_i & 0\\ 0 & \tilde J_i\end{bmatrix},$$

where $\Lambda_i = \mathrm{diag}[\lambda_{i,1}, \ldots, \lambda_{i,\nu}]$ and $\tilde J_i$ contains the Jordan blocks for the eigenvalues of magnitude $\le \eta < 1$. Hence,

$$S_i^{\,p} = Q_i\,J_i^{\,p}\,Q_i^{-1} = Q_i \begin{bmatrix} \Lambda_i^{\,p} & 0\\ 0 & U_{ip}\end{bmatrix} Q_i^{-1},$$

where $|\lambda_{i,j}^{\,p}| = 1$ for $1 \le j \le \nu$, and the elements of $U_{ip}$ are bounded by terms of the form $\binom{p}{j}\,\eta^{\,p-j}$ for $0 \le j \le q - \nu - 1$ and $p \ge j$. (These arise when a $(j+1)\times(j+1)$ Jordan block is raised to the power p.) Hence, p can be chosen to make the norm of $U_{ip}$ as small as desired.

Because the first $\nu$ columns of $Q_i$ are independent of i (hypotheses (ii) and (iii)), and because these columns are unit eigenvectors of $O_i$,

$$\Pi_i = Q_i^{-1}\,O_i\,Q_{i+1} = \begin{bmatrix} I & \Pi_{i,12}\\ 0 & \Pi_{i,22}\end{bmatrix},$$

and from hypotheses (iii) and (iv), $\|\Pi_i\| \le C_1 C_0 C_2$. (Note that $C_0 \ge 1$ because $O_i$ has a unit eigenvalue, and that $C_1 C_2 \ge 1$ because $1 = \|Q_i\,Q_i^{-1}\| \le C_2 C_1$.) By expansion,

$$M = \prod_{i=1}^{N} (S_i)^{p'_i}\,O_i = Q_1\,J_1^{\,p'_1}\,\Pi_1\,J_2^{\,p'_2}\,\Pi_2 \cdots \Pi_{N-1}\,J_N^{\,p'_N}\,Q_N^{-1}\,O_N.$$

$\Lambda_i$ is a diagonal matrix with unit length elements, hence $\|\Lambda_i^{\,p}\| = 1$. Expanding each $J_i^{\,p'_i}$ into its $\Lambda_i^{\,p'_i}$ and $U_{i\,p'_i}$ parts, the term containing no U factors is bounded by $C_2 C_1 C_0$, and every term containing one or more U factors picks up a factor $\|U_{i\,p'_i}\|$ for each. If we choose the $p_i$ so that $\|U_{ip}\| \le C_3$ whenever $p \ge p_i$, the remaining terms form a geometric series, and

$$\|M\| \le C_2\Bigl[1 + \frac{C_0 C_1 C_2\,C_3}{1 - C_3\,C_0 C_1 C_2}\Bigr] C_1 C_0 \le K,$$

provided $C_3$ is chosen small enough; this choice determines the $p_i$. Q.E.D.

Lemma 2.1 allows the matrices $S_i$ to have more than one eigenvalue on the unit circle, but they must have the same set of eigenvectors and simple elementary divisors for those values. Thus, weakly stable methods could be considered. However, the restriction that the eigenvectors match is usually difficult to achieve for other than the principal eigenvalue $\lambda = 1$. If the eigenvalues do not match, instability can occur, as shown in Example B.

Example B

Consider the two weakly stable methods based on

$$F_1:\quad y_{n+1} = y_{n-1} + \beta_{1,n}\,h_{n-1}\,y'_{n-1} + \cdots,$$
$$F_2:\quad y_{n+1} = y_n - y_{n-1} + y_{n-2} + \beta_{1,n}\,h_{n-1}\,y'_{n-1} + \cdots.$$

(The $\beta_{i,n}$ can be picked to get any order desired.) The vector of saved values will be

$$\mathbf{y}_n = [\,y_n,\; y_{n-1},\; y_{n-2},\; h_{n-1}\,y'_{n-1},\; \ldots\,]^T,$$

where the maximum number of derivatives used in the two formulas must be saved. The corresponding amplification matrices are

$$S_1 = \begin{bmatrix} 0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 1 & 0\end{bmatrix}, \qquad S_2 = \begin{bmatrix} 1 & -1 & 1\\ 1 & 0 & 0\\ 0 & 1 & 0\end{bmatrix}.$$

If the top left-hand $3\times3$ block blows up, we will not have a stable method, so we will discard the rest. Then we can show

$$S_1^{\,2k+2} = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 1 & 0 & 0\end{bmatrix}, \quad k \ge 0, \qquad\qquad S_2^{\,4\ell+1} = \begin{bmatrix} 1 & -1 & 1\\ 1 & 0 & 0\\ 0 & 1 & 0\end{bmatrix}, \quad \ell \ge 0.$$

If we use a simple formula changing process such that $O_{i,j} = I$, then we can consider

$$S_2^{\,4k+1}\,S_1^{\,2\ell+2} = \begin{bmatrix} 2 & -1 & 0\\ 1 & 0 & 0\\ 0 & 1 & 0\end{bmatrix}, \qquad k,\ \ell \ge 0.$$

Since

$$\bigl[S_2^{\,4k+1}\,S_1^{\,2\ell+2}\bigr]^N = \begin{bmatrix} N+1 & -N & 0\\ N & -N+1 & 0\\ N-1 & -N+2 & 0\end{bmatrix}, \qquad N \ge 1,$$

it is evident that no matter how large k and $\ell$ are, the product will still be unbounded.
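Example B is easily reproduced numerically. The few lines below are our own check, not part of the report: powers of each $S_i$ separately stay bounded, while powers of the mixed block grow linearly.

```python
import numpy as np

S1 = np.array([[0., 1, 0], [1, 0, 0], [0, 1, 0]])    # y_{n+1} = y_{n-1}
S2 = np.array([[1., -1, 1], [1, 0, 0], [0, 1, 0]])   # y_{n+1} = y_n - y_{n-1} + y_{n-2}

pw = np.linalg.matrix_power
norm = lambda M: np.linalg.norm(M, np.inf)

# Each weakly stable formula on its own: powers remain bounded.
print(max(norm(pw(S1, p)) for p in range(1, 200)),
      max(norm(pw(S2, p)) for p in range(1, 200)))

# The mixed block S2^(4k+1) S1^(2l+2) is the same matrix for every k, l >= 0,
# and its powers grow linearly, exactly as displayed above.
B = pw(S2, 5) @ pw(S1, 4)
for N in (1, 10, 100):
    print(N, norm(pw(B, N)))
```

The growth is only linear because the offending product has a defective eigenvalue on the unit circle rather than one outside it, but it is enough to violate Definition 2.5.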
Theorem 2.5 states that some integers $p_i$ do exist, but does not give a very useful constructive way of finding the smallest values. However, we can show by example that these integers are greater than one in some cases.

Example C

Consider methods based on the formulas

$$F_1:\quad y_{n+1} = 2y_n - \tfrac{5}{4}\,y_{n-1} + \tfrac{1}{4}\,y_{n-2} + \beta_{0,n}\,h_n\,y'_{n+1} + \cdots,$$
$$F_2:\quad y_{n+1} = \tfrac{3}{4}\,y_{n-1} + \tfrac{1}{4}\,y_{n-2} + \beta_{0,n}\,h_n\,y'_{n+1} + \cdots.$$

Both of these formulas are strongly stable. (The roots of $F_1$ and $F_2$ are 1, +1/2, +1/2 and 1, -1/2, -1/2, respectively.) If $\mathbf{y}_n = [y_n, y_{n-1}, y_{n-2}, \ldots]^T$ and if we ignore the derivative terms by considering the equation $y' = 0$, we find

$$S_1 = \begin{bmatrix} 2 & -\tfrac{5}{4} & \tfrac{1}{4}\\ 1 & 0 & 0\\ 0 & 1 & 0\end{bmatrix}, \qquad S_2 = \begin{bmatrix} 0 & \tfrac{3}{4} & \tfrac{1}{4}\\ 1 & 0 & 0\\ 0 & 1 & 0\end{bmatrix}.$$

Hence,

$$S_1\,S_2 = \begin{bmatrix} -\tfrac{5}{4} & \tfrac{7}{4} & \tfrac{1}{2}\\ 0 & \tfrac{3}{4} & \tfrac{1}{4}\\ 1 & 0 & 0\end{bmatrix}.$$

The eigenvalues of $S_1 S_2$ are 1 and $-\tfrac{3}{4} \pm \tfrac{\sqrt{2}}{2}$. Since $\bigl|-\tfrac{3}{4} - \tfrac{\sqrt{2}}{2}\bigr| \approx 1.46$ exceeds one, $(S_1 S_2)^n$ is unbounded, so alternating formulas each step will lead to instability.
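The arithmetic in Example C is quickly confirmed; the following check is our own addition, not part of the report. It also shows the switching cycle becoming harmless again when each formula is held for several steps before the change.

```python
import numpy as np

S1 = np.array([[2.0, -1.25, 0.25], [1, 0, 0], [0, 1, 0]])   # roots 1, 1/2, 1/2
S2 = np.array([[0.0,  0.75, 0.25], [1, 0, 0], [0, 1, 0]])   # roots 1, -1/2, -1/2

spectral_radius = lambda M: max(abs(np.linalg.eigvals(M)))

print(spectral_radius(S1), spectral_radius(S2))      # each formula alone
                                                     # satisfies the root condition
print(np.sort(np.abs(np.linalg.eigvals(S1 @ S2))))   # largest is about 1.46, so
                                                     # alternating every step is unstable

# Hold each formula for p steps before switching: the spectral radius of one
# switching cycle falls back toward one as p grows.
for p in range(1, 5):
    cycle = np.linalg.matrix_power(S1, p) @ np.linalg.matrix_power(S2, p)
    print(p, round(spectral_radius(cycle), 3))
```

This is the behaviour Theorem 2.5 and Lemma 2.1 predict: the extraneous components must be damped before the next change.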
The $p_i$ were calculated for the backward differentiation formulas using the interpolation step changing technique with the $O_{i,j} = I$. Let $S_i$ be the stability matrix for the i-th order method and let $Q_i J_i Q_i^{-1} = S_i$, $i = 1, \ldots, 6$. The $Q_i$ and $J_i$ are partitioned so that $\tilde J_i$ collects the Jordan blocks of the eigenvalues of $S_i$ that lie inside the unit circle. Each $Q_i$ is chosen so that the columns corresponding to the zero eigenvalues $\lambda_{ij}$ of $S_i$, induced by its extension from (i+1)-space to 7-space, are of the form $e_j$; $Q_i^{-1}$ is partitioned in the same way. The $p_{ij}$ are then chosen as the smallest integers for which the norm of the product of $\tilde J_i^{\,p_{ij}}$ with the corresponding off-principal block of the change of basis from order i to order j is less than one. Each $p_{ij}$ is the number of constant order steps required after a change from order i to order j. The table of $p_{ij}$ below was calculated by Mr. Mike Ostrar.

    i\j    1   2   3   4   5   6
     1     1   1   1   2   3   6
     2     1   1   1   2   3   6
     3     1   1   1   2   4   9
     4     1   1   1   1   4  11
     5     1   1   1   1   1   5
     6     1   1   1   1   2   1

If an order change of at most one is used, the number of steps $p_i$ needed after a change to order i is not more than i - 1. Although we did not calculate similar tables for the interpolation technique with differencing or the variable step technique, we believe the tables will be similar.

5. CONCLUSIONS

Again, on limited evidence, the variable step technique seems to be more stable than the interpolation technique when formula changing is considered, but it does seem valuable to restrict the frequency with which the formulas are switched in all but Adams methods. Otherwise, the recommendations of reference [1] are unchanged.

LIST OF REFERENCES

[1] Gear, C. W. and Tu, K. W., "The Effect of Variable Mesh Size on the Stability of Multistep Methods," submitted to the SIAM Journal on Numerical Analysis, 1972; Department of Computer Science Report No. 570, University of Illinois at Urbana-Champaign, 1973.

[2] Shampine, L. F. and Gordon, M. K., "Local Error and Variable Order, Variable Step Adams Codes," to appear, SIAM Journal on Numerical Analysis.