UIUCDCS-R-76-813                                                  COO-2383-0032

NUMERICAL SOLUTION OF STIFF ORDINARY DIFFERENTIAL EQUATIONS
USING COLLOCATION METHODS

by

BRUCE DAVID LINK

June 1976

DEPARTMENT OF COMPUTER SCIENCE
UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN
URBANA, ILLINOIS 61801

This work was supported in part by the Energy Research and Development Administration under contract US ERDA E(11-1) 2383 and submitted in partial fulfillment of the requirements of the Graduate College for the degree of Doctor of Philosophy in Computer Science.

ACKNOWLEDGEMENTS

I am very grateful to my advisor Daniel S. Watanabe for his many valuable suggestions and comments during the course of this project. He was always available and willing to help with problems. I am also grateful to C. W. Gear for many helpful discussions. My wife, Regina Link, was understanding and extremely patient during the work. In addition, she prepared Figs. 1.1-1.3 and typed the manuscript. The work was supported in part by the Energy Research and Development Agency under grant US ERDA E(11-1) 2383 and the Department of Computer Science of the University of Illinois.

This work is dedicated to my son, Hamilton Edmund Link.

TABLE OF CONTENTS

1. INTRODUCTION: NONLINEAR MULTISTEP METHODS
   1.1 Notation and Definitions
2. SIMPLE COLLOCATION METHODS
   2.1 Order
   2.2 Stability Regions
   2.3 Implementation Considerations
3. BLOCK COLLOCATION METHODS
   3.1 Order
   3.2 Stability Regions
   3.3 Stability and Convergence
4. QUADRATURE METHODS
LIST OF REFERENCES
APPENDIX I: LISTING OF CODE FOR COLLOC1 FAMILY OF METHODS
APPENDIX II: LISTING OF CODE TO FIND AND PLOT STABILITY REGIONS
VITA

NUMERICAL SOLUTION OF STIFF ORDINARY DIFFERENTIAL EQUATIONS USING COLLOCATION METHODS

Bruce David Link, Ph.D.
Department of Computer Science
University of Illinois at Urbana-Champaign, 1976

A new class of methods, collocation methods, is suitable for the integration of stiff ordinary differential equations. The order and stability regions of these methods are characterized. The methods are stable and convergent for infrequent formula changes and small stepsize changes. Block multistep methods which are stiffly stable up to order 24 and several families of multistep methods stiffly stable up to order 9 exist.
A similar class of quadrature methods includes a family with the same stability regions as Enright's second derivative methods but only requires first derivatives.

1. INTRODUCTION: NONLINEAR MULTISTEP METHODS

Consider the ordinary differential equation initial value problem (ODE)

    y' = f(y,t) ,    y(t_0) = y_0 ,                                          (1.1)

where f : R^{n+1} -> R^n and y : [t_0,T] -> R^n. We shall assume that f satisfies a Lipschitz condition in y and is continuous, so that a unique solution of the ODE exists and is in C^1[t_0,T]. Later we shall further assume that f has bounded continuous partial derivatives with respect to the y variables and that the solution has as many continuous derivatives as is necessary.

Differential equations which arise in the description of physical problems often have widely differing time constants. We call an ODE stiff if the set of eigenvalues of the Jacobian ∂f/∂y includes some members whose real parts are much more negative than those of other members. For example, in the equation

    y_1' = -500.5 y_1 + 499.5 y_2 ,
    y_2' =  499.5 y_1 - 500.5 y_2 ,                                          (1.2)
    y_1(0) = 2 ,    y_2(0) = 0 ,

the exact solution is

    y_1(t) = e^{-t} + e^{-1000t} ,
    y_2(t) = e^{-t} - e^{-1000t} .

The equation can be written in the form y' = P Λ P^{-1} y, where

    P = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} ,    Λ = \begin{pmatrix} -1 & 0 \\ 0 & -1000 \end{pmatrix} .

Because the eigenvalue -1000 is much more negative than the eigenvalue -1, the equation is stiff.

Stiff equations often occur in the analysis and simulation of networks, in almost all chemical kinetic studies, and in control problems, especially when system states must be modeled (Bjurel et al., 1970). In the solution of a stiff equation the components corresponding to some eigenvalues become negligibly small while others are still large enough to be of interest. This would be true in the above example for t larger than 0.03.

Methods for numerically solving ODE's calculate a sequence of approximations, {y_n}, to the solution of equation (1.1) on a mesh of time values {t_n} where t_0 < t_1 < ... < t_N = T. The stepsizes are defined to be

    h_n = t_n - t_{n-1} ,    h = max{h_n} .

Most methods operate step by step, and at each step use the past r points to calculate s future approximations. The specific values of r and s and the stepsizes h_n are often varied in order to satisfy an error tolerance.

In solving a stiff ODE numerically, one would like to choose stepsizes according to the dominant components of the solution. Most methods not specifically designed for stiff ODE's must restrict the stepsize for stability reasons, with the stepsize depending on the eigenvalue with the greatest magnitude. For example, Euler's formula defines y_n in terms of y_{n-1} as

    y_n = y_{n-1} + h_n f(y_{n-1}, t_{n-1}) .

If we apply Euler's formula to the ODE (1.2) with constant stepsize h, the solution is

    y_n = (I + h P Λ P^{-1})^n y_0 = P (I + h Λ)^n P^{-1} y_0
        = P diag( (1-h)^n , (1-1000h)^n ) P^{-1} y_0 .

We see that for the numerical solution to converge to zero as t -> ∞ (as the exact solution does), we must require |1 - 1000h| < 1, that is h < 0.002, due to the presence of the eigenvalue -1000. The stepsize must be small even after the component due to this eigenvalue is negligible, because roundoff errors will always reintroduce this component into the numerical solution. Thus the stability properties of Euler's method force the stepsize to be roughly 1000 times smaller than necessary to accurately follow the component of the solution actually present for t > 0.03.

This thesis presents some new classes of formulas suitable for stiff ODE's. To discuss these formulas and compare them, we need a few definitions.
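Before turning to those definitions, the stepsize restriction just described can be observed directly. The following minimal sketch in Python (an illustration only, not part of the code listed in the appendices) applies Euler's formula to the system (1.2) with a few fixed stepsizes; once h exceeds the stability limit the component belonging to the eigenvalue -1000 is amplified at every step and the computed solution grows, even though the exact solution decays.

```python
import numpy as np

# Jacobian of the linear test problem (1.2); its eigenvalues are -1 and -1000.
A = np.array([[-500.5,  499.5],
              [ 499.5, -500.5]])
y0 = np.array([2.0, 0.0])

def euler(h, t_end):
    """Explicit Euler, y_n = y_{n-1} + h*A*y_{n-1}, with fixed stepsize h."""
    y = y0.copy()
    for _ in range(int(round(t_end / h))):
        y = y + h * (A @ y)
    return y

# The exact solution at t = 0.1 is about (0.905, 0.905); by then the
# e^{-1000t} component is already negligible.
for h in (0.0005, 0.001, 0.003):
    print(f"h = {h:6.4f}   max |y(0.1)| ~ {np.max(np.abs(euler(h, 0.1))):.3e}")
```

With h = 0.0005 and h = 0.001 the computed solution at t = 0.1 agrees with the exact value to three figures, while with h = 0.003 it has grown to the order of 10^9.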
We say intuitively that a formula is of order k if the error

    e_n = y_n - y(t_n)

satisfies e_n = O(h^k) as h -> 0, for sufficiently smooth f. This definition mixes the concepts of order and stability. To avoid this, the definition is usually stated in the one step form: a formula is of order k if, given past values y_{n-i} = y(t_{n-i}), i = 1, ..., r, the future values calculated for the next step satisfy

    e_{n+i} = y_{n+i} - y(t_{n+i}) = O(h^{k+1}) ,    i = 0, ..., s-1 .

A method is stable for a stepsize selection scheme and (possibly) a formula selection scheme, both depending on h, if there exists an h_0 > 0 for each differential equation such that a change in the starting values by a fixed amount produces a bounded change in the numerical solution for all 0 < h ≤ h_0. This definition, like that for order, is concerned only with the method's behavior for sufficiently small stepsize. In practice we are concerned with larger h, and would often like to make h as large as possible. A useful concept is that of the region of absolute stability, defined as the set of h and λ for which a variation in a single value, y_n, will produce a change in subsequent values which does not increase from step to step when the formula is applied to the test equation y' = λy with fixed stepsize h. In the formulas considered in this study, the regions of absolute stability will have the form {(h,λ) | hλ ∈ B} for some subset B of the complex plane, so we identify the region of absolute stability with B. A formula is called A(α)-stable if the numerical solution tends to zero as n -> ∞ whenever the formula is applied to the test equation with constant stepsize h and nonzero λ satisfying |Arg(-λ)| < α. For an A(α)-stable formula the region of absolute stability thus contains the wedge cross-hatched in Fig. 1.1.

Fig. 1.1. Region of absolute stability for an A(α)-stable formula

A formula is called A-stable if it is A(π/2)-stable, and strongly A-stable if it is A-stable and

    lim_{hλ -> -∞}  limsup_{n -> ∞}  |y_n / y_{n-1}| < 1 ,

where {y_n} is the solution of y' = λy for stepsize h. Finally, a formula is called stiffly stable if there is a D such that the formula is absolutely stable in the region Re(hλ) < D, accurate around the origin, and A(α)-stable for some positive α. The region of absolute stability will then contain a region such as that shown in Fig. 1.2.

Fig. 1.2. Region of absolute stability for a stiffly stable formula (plotted in the μ = hλ plane)

Until recently, formulas suitable for stiff ODE's were limited to very low order. Before Gear's work (1969), most of these formulas had order at most two (Bjurel, 1970). Methods which were stiffly stable with higher order were one step methods (e.g., implicit Runge-Kutta), which were less efficient since they could not use past values to increase the order. Gear recognized that the backward differentiation formulas would be useful for stiff ODE's. Gear's formulas are a special case of the linear multistep formula

    α_{-r,0} y_{n-r} + ... + α_{0,0} y_n + α_{-r,1} h y'_{n-r} + ... + α_{0,1} h y'_n = 0 .

They are obtained by setting α_{-r,1} = ... = α_{-1,1} = 0, α_{0,1} = -1, and choosing the α_{i,0} for maximal order. These formulas were discussed by Henrici (1962), who dismissed them as not being useful. The formulas are called backward differentiation formulas because the α_{i,0} come from differentiating the polynomial interpolating the values y_{n-r}, ..., y_n. The backward differentiation formulas are stiffly stable for order up to 6. The most popular codes for solving stiff equations are based on these formulas. The formula of order 6 has an α of only 17.9°, however, and many people use only the first five. The restriction to low order has stimulated a search for better formulas that give stiff stability for higher order.
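Stability regions of the kind pictured in Figs. 1.1 and 1.2 can be traced numerically by the root-locus device described in Section 2.2 and used by the plotting code of Appendix II: the stability polynomial is evaluated on |ξ| = 1 and solved for μ = hλ. The minimal sketch below (a Python illustration, not code from the appendices) does this for the backward differentiation formulas and reports the leftmost excursion of the locus, which for r ≥ 3 should reproduce, up to the resolution of the sampling, the values of D listed in Table 1.2.

```python
import numpy as np

def bdf_boundary_locus(r, npts=2000):
    """Boundary locus of the r-step backward differentiation formula:
    substitute y_n = xi^n with xi = exp(i*theta) into the formula applied
    to the test equation y' = lambda*y, and solve for mu = h*lambda."""
    theta = np.linspace(0.0, 2.0 * np.pi, npts, endpoint=False)
    xi = np.exp(1j * theta)
    # BDF written with backward differences: sum_{j=1}^{r} (1/j)(1 - 1/xi)^j = mu
    return sum((1.0 - 1.0 / xi) ** j / j for j in range(1, r + 1))

for r in range(1, 7):
    mu = bdf_boundary_locus(r)
    # Leftmost point reached by the locus; for r = 1, 2 the locus stays in
    # Re(mu) >= 0 (A-stable), while for r >= 3 this is essentially D.
    print(r, round(float(mu.real.min()), 3))
```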
It has been an open question whether there exist high order stiffly stable formulas which are practical. Cryer's k-step multistep formulas (Cryer, 1973) have been shown by Jeltsch (1976) to be stiffly stable for arbitrarily high order. These methods are not practical, however, since the regions of instability become so small that the stepsize must be made very small to accurately follow the dominant part of the solution. The α's given by Jeltsch (1976) are also inferior to the α's of the BLOCK3 family given in Chapter 3 of this paper.

Jain and Srivastava (1970) found some multistep formulas stiffly stable up to order 11 by a computer search. The stability regions were promising, but the methods were not tested, so it is not known how they would perform. Formulas are given in Chapter 3 that are apparently A-stable with order 11 and stiffly stable up to order 23 (BLOCK3 formulas).

Bickart and Picel (1973) found block linear one step methods stiffly stable with order up to 25. However, the order is equal to the number of future points solved for at each step, so to attain order 25, a block of 25 future points must be solved for simultaneously. Since systems of ODE's are usually being solved, the total dimensionality would be multiplied by 25. This is usually impractical. The problem is caused by ignoring past values. Bickart and Picel (1973) give α's for methods up to order 10. These are inferior to the α's of the BLOCK2 methods in Chapter 3 (which solve for only two future points).

Enright (1974) discovered a class of linear multistep formulas using second derivatives. The formulas require the evaluation of the derivative of f, and solve for one future point at each step. Enright's formulas are stiffly stable up to order 9. The stability properties are the same as those of a family of quadrature methods given in Chapter 4 which do not require evaluation of the derivative of f.

This study presents several new families of formulas which avoid some of the problems mentioned above. The formulas use past values to obtain high order without simultaneous solution for an excessively large block of future values. Four families of formulas are stiffly stable with order up to 9 and have s = 1. Families of block multistep formulas with order up to 24 requiring at most four future points are also presented.

1.1 Notation and Definitions

The formulas considered here (called collocation formulas) use a polynomial to represent the past information being saved plus the future points being solved for. That is, given y_{n-r}, ..., y_{n-1} and assuming we wish to find s future points y_n, ..., y_{n+s-1}, we define a polynomial p(t) which takes on these values and possibly also collocates derivatives at specified points (see Fig. 1.3).

Fig. 1.3. Interpolating polynomial for (θ_0, θ_1 : 1, 1 ; 2, 2)

We denote a formula as follows:

    (θ_0, ..., θ_{s-1} : c_{-r}, ..., c_{-1} ; c_0, ..., c_{s-1})            (1.3)

(The θ_i will be explained below.) This notation indicates that the formula finds a polynomial p such that

    p(t_{n+i}) = y_{n+i} ,
    p'(t_{n+i}) = y'_{n+i} ,
    ...
    p^{(c_i - 1)}(t_{n+i}) = y^{(c_i - 1)}_{n+i} ,        i = -r, ..., s-1 ,

where

    y'_{n+i} = f(y_{n+i}, t_{n+i}) ,    y^{(j)}_{n+i} = D^{j-1} f(y_{n+i}, t_{n+i}) .        (1.4)

Note that the polynomial is a function of h_n and of y_{n+i}, t_{n+i} for i = -r, ..., s-1, which is not explicitly shown in the notation. Let C denote the sum Σ_{i=-r}^{s-1} c_i. The polynomial satisfies c_i conditions at the i-th point and is thus of degree at most C - 1.
Because the polynomial is defined using c_0 + ... + c_{s-1} pieces of future information, we need c_0 + ... + c_{s-1} equations, in addition to the saved values, to define p. The equations (1.4) above define the higher derivatives at future points in terms of the corresponding future values, so that s equations are necessary to define p, one for each future point. Most of the formulas considered here use a collocating condition and require the polynomial to satisfy the differential equation at s other points:

    p'(t_n + θ_i h_n) = f(p(t_n + θ_i h_n), t_n + θ_i h_n) ,    i = 0, ..., s-1 .        (1.5)

For practical reasons we usually require each c_i to be 1 or 2. Then the formula needs the right hand side of the ODE, f, but not its derivatives. These formulas have the same desirable stability properties as other formulas which do require the evaluation of Df. This is important because the exact form of Df is often unknown or too complicated to calculate, and an accurate approximation to Df is in general too expensive to compute. Because the theory developed below is for arbitrary positive integers c_i, collocation methods with c_i > 2 can be used to solve ODE's which allow easy computation of Df (for example, a linear time independent ODE).

An alternative source for the s additional equations is to require p to satisfy

    p(t_n + θ_i h_n) = p(t_{n-1} + θ_i h_{n-1}) + \int_{t_{n-1}+θ_i h_{n-1}}^{t_n+θ_i h_n} f(p(ξ), ξ) dξ ,    i = 0, ..., s-1 .        (1.6)

In practice, a quadrature formula is used to replace the integral. The quadrature formula does not affect the stability properties as long as it is exact for polynomials with degree equal to C - 1. If the integral equations are used then the formula will be denoted

    I(θ_0, ..., θ_{s-1} : c_{-r}, ..., c_{-1} ; c_0, ..., c_{s-1}) .

The integral formulas are more complicated to implement than the differentiation methods and will only be briefly considered in Chapter 4.

A summary of the properties of some families of formulas discovered is presented in Table 1.1. Here we use the notation 1^k to mean a block of k successive c_i all equal to 1.

Table 1.1 STABILITY SUMMARY

    Formula                                             Maximum order     Maximum order for
                                                        with α > 89°      stiff stability
    BACKDIF  (0 : 1^r ; 1)                                    2                  6
    COLLOC1  (1 : 1^r ; 2)                                    4                  9
    COLLOC2  (1 : 2, 1^{r-1} ; 2)                             4                  9
    BLOCK2   (1/2, 3/2 : 1^r ; 2, 2)                          9                 17
    BLOCK3   (1/2, 3/2, 5/2 : 1^r ; 2, 2, 2)                 15                 23
    BLOCK4   (1/2, 3/2, 5/2, 7/2 : 1^r ; 2, 2, 2, 2)         20                 24
    INTEG1   I(0 : 1^r ; 2)                                   4                  9
    INTEG2   I(0 : 2, 1^{r-1} ; 2)                            4                  9

As an example of the above notation, Gear's formulas (1969) can be written as the class of formulas

    (0 : 1^r ; 1) ,    r = 1, ..., 6 .

This means that the formula finds a polynomial p which interpolates r past points and 1 future point,

    p(t_{n+i}) = y_{n+i} ,    i = -r, ..., 0 ,

and the future point is chosen to satisfy

    p'(t_n) = f(p(t_n), t_n) ,

that is, p'(t_n) = f(y_n, t_n), so

    h_n p'(t_n) - h_n y'_n = 0 .

These formulas are A(α)-stable and stiffly stable with α and D as in Table 1.2. The regions of absolute stability are the exteriors of the curves shown in Fig. 1.4, which is included for comparison with other collocation formulas.

Table 1.2 BACKDIF (0 : 1^r ; 1)

    r    order      D        α(°)
    1      1                  90
    2      2                  90
    3      3      -0.083      86.0
    4      4      -0.667      73.4
    5      5      -2.327      51.8
    6      6      -6.070      17.9

(Fig. 1.4: boundaries of the regions of absolute stability for the BACKDIF formulas.)

In order to write the collocating polynomial p it is convenient to define the basis functions φ_jk associated with the formula (1.3). If -r ≤ j ≤ s-1 and 0 ≤ k ≤ c_j - 1, then φ_jk is defined to be the unique polynomial of degree at most C - 1 satisfying

    φ^{(ℓ)}_{jk}((t_{n+m} - t_n)/h_n) = k! δ_{jm} δ_{kℓ}        (1.7)

for all ℓ, m such that -r ≤ m ≤ s-1 and 0 ≤ ℓ ≤ c_m - 1. For convenience we define φ_jk ≡ 0 when k ≥ c_j. Then the interpolation polynomial p can be written
j=-r k=0 jk t-t h k yM. k! (1.8) In the fixed stepsize case with h = h, equation (1.7) assumes the simpler form U) •r < m < s-1 , 3K jm K * < I < c -1 . — — m (1.9) s-1 c.-l In the rest of the text, £ denotes £ £ j,k j=-r k=0 As with linear multistep methods (Gear and Tu, 1974) , either an interpolation or variable step technique may be used to change stepsizes. To conform to the nomenclature introduced in that paper, the combination of a set of formulas, the technique used to handle variable stepsizes, and the schemes used to select stepsizes and formulas is called a method . In the variable step technique, the formulas are always used directly as described above with <}>., defined as in (1.7). Thus the different stepsizes are taken into account in calculating the interpolating polynomial. In the interpolation stepsize changing technique, the saved information is kept with equal stepsizes t - t . n = h . If the step- r n r n+j n+j-1 n r size is changed, an interpolating polynomial is determined by the saved values (including any saved derivatives) . The new saved values are 14 determined by evaluating the interpolating polynomial at the new points. That is, if the method is saving the values and scaled derivatives (c -1) (c -1) h y Vr> Vn-r> — (c -1) ! at V^Si ' /Vr 1] c c .-r« i . n n+s-1 , ,.. y„^ . , h y' ..., r-r- at t + (s-l)h , n+s-1 n'n+s-1 (c ,-1)! n n s-1 represented by the interpolating polynomial p, and it is desired to change the stepsize to R , these values are replaced by (c -1) (c -1) K P (t ) n-r" V v n-r" "" (c -1) ! -r p(t J , fip 1 (t J , ' n+s-r ' n^ v n+s-r ' (C S r 1} (c s r 13 where t = t n + (j - s + l)fi . If the saved information is stored n+j n+s-1 KJ J n in Nordsieck form (Gear, 1971, p. 148), then the interpolation stepsize changing technique reduces to multiplication by a diagonal matrix. When the saved data is stored at equally spaced times, the basis functions are defined as in (1.9) and are independent of h . Thus the methods reduce to a fixed stepsize method along with the interpolation technique. SIMPLE COLLOCATION METHODS 15 In this chapter we will consider methods using formulas satisfying (0:c_ r , ..., c_ 1 ;c Q ) p'(t + 0h ) = f(p(t + 0h ), t + 0h ) , r n n r n n n n i^e.. , methods solving (1.5) with s = 1. For convenience define a jk = 4>j k (6), 6 jk = jk (6). Then (2.2) b< >ecomes v, k (k) h y . ) a., — j-i" = h f ., nk k! n jk J . k (k) h y v . I e jl*±± 0h h. jk k! n n (2.1) (2.2) (2.3) 2.1 Order To define the order of the formula (2.1), let y = y(t .)> i = -r, ..., 0, where y(t) is the exact solution of (1.1). Define p as in (1.8) , and let D (y) = h p'(t + 0h ) - h f(p(t + 0h ), t + 0h ) n nn n nn nn n be the amount by which the exact solution fails to satisfy the formula. Then the formula is defined to be of order k if D n (y) = 0(h k+1 ) ,k+l for all solutions y e C t t n , ^-l ' ^ t ' ie P ro P erties °f interpolating polynomials and the Lipschitz property of f , C-l. p»(t + 0h ) = y'(t + 0h ) + 0(h ) , r n ir ' n n f(p(t + 0h ), t + 0h ) = f(y(t + 0h ) + 0(h C ), t + 0h ) r n n' n n J ' n n n n J = y»(t + 0h ) + 0(h C ) . J n n K J Thus in general the order of the formula is C - 1, and if is chosen 16 such that p»(t + 9h ) = y»(t + 9h ) + 0(h C ) , n n n n the formula is of order C. Example 2.1 (-1/2: 2;2) The Hermite basis functions for this formula are _! (t) = 2x 3 + 3t 2 *_! x Ct) = t 3 + t 2 4> Ct) = -2t 3 - 3t 2 + 1 ^ jd) = T 3 + 2l 2 + T . With = -1/2, we find «-i.o = " 3/2 B -i,o ■ l ' 2 a. 
M - -1/4 B -l,l • 1/8 a o,o = 3/2 B o,o ■ 1/2 Vi ■ - 1/4 g 0>1 = -1/8 . Thus given y - the formula chooses y to satisfy n-1 n -3/2 y . - 1/4 h y'' + 3/2 y - 1/4 h y» = y n-l n'n-1 / n n'n h f(l/2 y , + 1/8 h y« + 1/2 y - 1/8 h y' , t - 1/2 h ) . n 7 n-l n'n-1 7 n n y n n n J Because -3/2 y . - 1/4 h y» + 3/2 y - 1/4 h y' = h y' ,. + 0(h 5 ) J n-\ n 7 n-l 7 n n'n n'n-1/2 the formula is of order 4 = deg(p) + 1. If the formula is applied to y 1 = Ay with h = h we find = 1 + 1/2 y + 1/12 y 2 ( } y n 1 - 1/2 y + 1/12 y 2 y n-l " l J The rational coefficient of y , in (2.4) is the [2,2] Pade approximant to e . Thus the method is A-stable (Birkhoff and Varga, 1965) . 17 Suppose we apply the formula (2.1) to exact past values y . = y(t . ), i = - r > •••> -1» choosing y so (2.2) is satisfied. Then the local truncation error is defined to be d = y - y(t ) . n n n (2.5) A formula was intuitively defined in Chapter 1 to be of order k if k+1 d = 0(h ). The definition in this chapter, which is technically more convenient to verify, is equivalent to the definition in Chapter 1, provided the coefficients of the method are well behaved. Theorem 2.1 A formula (2.1) with a bounded away from 0, and a n , , 3 n v k+l bounded as h -*■ is of order k if and only if d = 0(h ). n Proof: Let p(t) be the polynomial of the formula, defined using exact values, and p be the polynomial using exact past values but with y chosen to satisfy (2.2) . Then hp'(t + 6h ) - h f(p(t + 9h ), t + 9h ) = D , n r n n n r n n n n n h p'(t + 9h ) - h f(p(t + 0h ), t + 8h ) = , n r n n' n r n n n n k+l where D = 0(h ) by the definition of order. Let y . = y(t .) for n v \ 3 'n+i J v n+i/ i < 0. Then we can rewrite these last two equations using (2.3) as a jk jk K k ( k ) r+ . , h y (t .) k! - h f n X (k) I 3. t>k V '(t .) i, i > L « jk k! eh n n = D n , (2.6) h k y« I * -^1 - h f f k 3k k! *k (k) h y v . I 3 J^ll , t ♦ eh .- jk k! n n bk = . (2.7) Subtracting (2.6) from (2.7) and applying the mean value theorem to find a matrix J with norm bounded independently of h, we have k=0 . k=0 (2.8) I «, 18 If we assume that the first c -1 derivatives of f are bounded, then yCW „ y W (V = D K-l f(Vn , tn) .^.y = 0( ||y n - y(t n )|| ) , k = 1, ..., c Q -l . Thus a n d = -D + 0(h lid II ) . 0,0 n n " n" Because a„ n is bounded away from as h •*■ 0, d n " a^ + °0> II4JI ) • and if we assume the formula is order k, then d = 0(h k+1 ) . n k+1 Conversely, suppose d = 0(h ) . Because a n , and 3 n v are bounded II U y K U y K as h -* 0, it follows from (2.8) that D = 0(h k+1 ) . n J Thus the formula is of order k. Q.E.D. To show conditions under which the hypotheses of the theorem are satisfied, we need the next lemma. First we introduce stepsize change variables h. - h. . ~ i i-I 0. = r . h i-l Lemma 2.1 The coefficients a., and 3-, can be represented in the form jk jk * a jk = a 3 °k + R jk (6 n-r + 2' — V ' S jk = 6 jk + Qjk (6 n-r + 2' ■"> V • where a?, , 3?, are the coefficients of the fixed stepsize formula and R.. jk jk jk Q.. are rational functions in 6 ...... 6 such that R., (0, ..., 0) = x jk n-r+2' n jk K Q jk (0, ..., 0) =0. 19 Proof: It is enough to show that the Hermite basis function d>., can be jk written as a polynomial with coefficients that are rational in 5., i.e., <$>-v = X q- C6 OJ ..., 6 it 1 jk -_n x n-r+2 n (2.9) where q. is a rational function. Then C-l a., = (J)!, (6) = 7 iq.(6 ,,, ik r ik . * n i n-r+2 J J i=0 , ye i-1 and C-l B., = ..(6) = J q.(6 jk jk i=0 i^ n-r+2' , sX Thus a., and $., are rational functions of 6. 
and we arrive at the lemma jk jk i by defining jk v n-r+2 C-l I i=0 , 6 ) = J" i[q.(6 , ..., 6 ) - q.(0, ..., O)]0 n . L n n i n-r+2 n n i i-1 and C-l Q.,(6 , ..., 6 ) = J [q.(6 ~, ..., 6 ) - q. (0, ..., OJje 1 . x jk v n-r+2 n . f_ Ln i n-r+2 n n i^ ' ' ' J i=0 The representation (2.9) is trivial if $., = 0, so assume that -r _< j _< and _< k _< c. - 1. From (1.7) (J)., is defined to be the unique polynomial of degree at most C-l satisfying t - t n+m n jm k£ (2.10) for all £,m such that -r < m < and < I < c m-1 Let C-l 3k i=0 x Define m t - t n+m n h n h y — 1 , h n+i i=m+l n Then (2.10) becomes 20 1 00 0) -r -r 1 2co -r 1 w o w o C-l J -r w (C-l)oo C-l C-2 (C-l) ! (C^)T% C-c, j C-l j,-r k,0 j,-r k,l V(A,o ( V 1)!6 j,o\ )V i (2.11) Because oo. are distinct, the determinant of the matrix is nonzero. This i follows from the uniqueness of Hermite interpolator/ polynomials (Davis, 1963). Now h -1 h -1 h . . h . . . l+o . n j=i n+j + 1 3=1 n+j + 1 It follows that a) , . . . , uj_ are all rational functions of 6 », ..., 6 -r ' n-r+2 n and since the rational functions in r-1 variables form a field, the q. are also rational function in 6 -.,..., 6 , n-r+2' ' n Q.E.D. We introduce the usual stepsize selection scheme and assume the stepsize is given by h = hq(t ,h) n n where for all h > 0, t < t <_ T, 1 > ri(t,h) > A > Then (2.12) h. - h.^ n(t.,hj n(t._ r h) 6 i = — h. i-l n(t i _ 1 ,h) satisfies 1/A-l > 6. > A-l > -1 — l — 21 Because the matrix in (2.11) is nonsingular whenever 6 „, .... to n-r+2' ' 6 are all greater than -1, the rational functions appearing in Lemma 2.1 are continuous on the domain {(6 .,..., 6 ) I 6 - > -1, ..., 6 > -1} . n-r+2 n ' n-r+2 n The stepsize selection scheme restricts 6. to lie in a compact subset of this domain. Corollary 2.1 If a method is described by (2.1) with stepsize selection scheme (2.12), then the coefficients a.. , 3-i are bounded for all h, n. J ' jk ]k ' The remaining hypothesis in Theorem 2.1 requires a further restric- tion on the stepsize selection scheme, as is illustrated in the following example. Example 2.2 (6:1,1;1) with variable step technique. Recall 6 = (h - h ,)/h . . With this notation the basis n n n-1 n-1 functions can be written *-2,o w = & ; 2 T ' T + 1} • n T + 1 . 13) n 6 + 1 22 If > -1. then (2.13) shows 6 < -1 to make ol n = 0. Since n 0,0 6 > -1 for all stepsizes, we see that cl _ ^ for all stepsize selection schemes as long as 6 > -1. If 6 < -1 and a° / 0, then by continuity a n _ ^ for <5 suffi- \J y \J \J y \J 11 ciently small, and we can apply Theorem 2.1 for a stepsize selection scheme that satisfies 6 = 0(h) as h ->0. n The behavior of the method in the example is typical of collocation methods. Theorem 2.2 Suppose a method described by (2.1) with stepsize selection scheme (2.12) has a n / when the stepsize is constant and one of the following conditions is satisfied: 1) 9 > 0, 2) 6. = 0(h), for all i. k+1 Then the method is of order k if and only if d = 0(h ) . ' n Proof: By Corollary 2.1 and Theorem 2.1, it is enough to show a n n is bounded away from as h -> 0. If 6. = 0(h), then the continuity of a. n as a function of 6. shows i v ' ' ' 0,0 l a _ is bounded away from for sufficiently small h. Suppose > 0. We need to examine the structure of n , since a o,o= *i,o (9) - Let Tp = >T. ... >I W 1" \t be the abscissas {(t - t )/h : m = -r, ..., 0} with the m abscissa 1 n+m n n occurring c times. 
Then using divided differences, a., we can write & m & ' i' C-l i-1 *o o (T) = I a n (T _ T u ' i=l j=0 J and thus 23 *oV t > ■ X a i 1=1 ri-l n (t - t.) j=o J J It is clear from the structure of the divided difference table that the a. for i > have the same sign. Rather than prove this formally we illustrate by an example which shows the signs as +, -, or 0: T 4 T 3 T 2 1 The lower diagonal gives a., so a = 1, a.. = 0, and a ?> a„, a are all less than 0. The derivative i-1 n (t - t.) ^j=0 3 J • K 1 -l i " 1 = I (T - T ) n (T - T.) m=0 j=0 is a sum of products of (t - t.). Thus if t = 8 > these derivatives are all positive. Hence C-l i-1 i=l m=0 -1 i" 1 V . n n (e - v 3=0 C-l > I |a.| ie i-1 i=l > and a n n is bounded away from independently of h, Q.E.D. 24 2. 2 Stability Regions To find the absolute stability region of formula (2.1), we apply it to y' = Ay, with constant stepsize h. Then (2.3) becomes (hA) k y jk n + J _ k! = hA I 3 U k (hA) k y jk n+j_ k! (2.14) where the a.,, B-, are defined by (1.9) and are independent of h, A. Writing u : . : - hA and collecting terms, we see that (2.14) takes the form (2.15) a y + . . . + a rt y = -r'n-r (rn where c.-l - i k=0 k+1 a e.. L jk k! M jk k! J Solutions of (2.15) are y„ = i=l 1 1 if the E,. are distinct roots of the stability polynomial 7T(y,5) = a + a .. £ + Kt-,^j _ r -r+1 ta o« If tt has multiple roots, then the solutions will include terms of the form n C n with k > 0, which will tend to if and only if |£. < 1. Thus the l ■ l ■ region of absolute stability is {u \ all roots, £, of Tr(y,5) have |?| £ 1 and roots of modulus 1 are simple}. For stiff stability we are interested in the component of the absolute stability region connected to u = °°. The formulas whose stability regions are shown in this paper are all strongly stable at u = °°, that is, the roots of tt(°°,£;) are all less than 1 in magnitude. The set of roots of a polynomial is a continuous function of its coefficients, so such formulas are strongly stable in the exteriors of the curves 25 {y | there exists £ with |£| = 1 such that tt(u,0 = 0} . These loci can be plotted by regarding the stability polynomial as a polynomial in y with coefficients which are functions of £ and plotting the solutions of y for £ = e , to e [0, 2tt]. Example 2.3 (l:l r ;2), r = 1, ..., 8 The stability polynomials have the form *Cu,0 = (a_ r>0 - e_ r>0 y)+ ... ♦ (a_ lj0 - 3_ lj0 y)? r_1 + (a o,o + (a o,i - Vo )y ■ e o,i y2)?r • As y ■> °° the roots E, approach the roots of the polynomial and so are less that 1 in magnitude. Hence the formulas are strongly stable at °°. The results of Section 2.1 show that the order of (1:1 ;2) is r + 1, and the stability regions of Figs. 2.1 and 2.2 show empirically that the formulas are stiffly stable for r _< 8. The corresponding D's and a's are shown in Table 2.1. Note that the stability regions are much better than those of the backward differentiation formulas show in Fig. 1.4, This family of formulas has been implemented as a variable stepsize, variable order package using the interpolation stepsize changing tech- nique, and is further discussed in Section 2.3. Table 2.1 C0LL0C1 (l:l r ;2) r order D a( ( ') r order D a(°) 1 2 90 5 6 -0.237 83.1 2 3 90 6 7 -0.820 72.5 3 4 90 7 8 -2.079 55.6 4 5 -0.028 88. 7 8 9 -4.554 29.5 26 err 9 Rj r id i CM e h O 10 vO s • ^ . 
• r— 1 8~ II E U * R5 f \ * s w rH u o ►j w ■j o ** c_) CM GO •H tu 27 S"9l o> 00 u U o 00 (NI u o o u •H 28 r-1 Example 2.4 (1:2,1 ;2), r = 1, ..., 7 The stability polynomials have the form 0 * (a_ r>1 - 3_ r>0 )u - 3_ r>1 y 2 ] ♦ (a_ r+1>0 - 3^^105 + ... + (a_ 1)Q - 3. 1>0 u)f _1 + [a o,o + (a o,i " e o,o^" e o,i y2] ^ • As y -> oo the roots £ approach the roots of the polynomial S 0,l? r + B -r-l " ° • For r < 7 these roots are all less than 1 in magnitude, and the formulas r-1 are strongly stable at oo } as in Example 2.3. The order of (1:2,1 ; 2) is r + 2 and the stability regions in Fig. 2.3 show that the formulas are stiffly stable for order £ 9. The corresponding D's and a f s are shown in Table 2.2. Although the stability regions are similar to those of Example 2.3, the need for the derivative at the last past point makes them much less attractive to implement. Table 2.2 C0LL0C2 (1:2,1 ;2) r order D a(°) 1 3 90 2 4 90 3 5 -0.038 88.1 4 6 -0.334 80.1 5 7 -1.119 66.2 6 8 -2.742 46.8 7 9 -5.809 21.9 29 £-91 i 10 u S-l O r- u o o u to CM •H 50 2.3 Implementation Considerations In this section we discuss several topics necessary in writing a computer program which implements the formulas described in this chapter. Computer codes need to handle several practical problems. These include data representation, choice of stepsize and order to achieve a user specified error tolerance, approximate solution of the nonlinear equation (2.2), and the use of the solution of (2.2) to move the solution of the ODE forward by one time step. For simplicity, we consider the specific family of formulas described in Example 2.3. (See Appendix I for a program which implements these methods.) If other formulas are used, then care may be necessary in choosing the data representation and the stepsize changing technique. For r-1 example, the formula (6:2,1 ;2) requires y' at the last past point. If the variable step technique is used to change stepsizes, then either this derivative can be recalculated when it is needed, or all derivatives can be saved. The latter action would reduce the number of function evalua- tions at the expense of almost doubling the storage requirements. If the interpolation technique is used to change stepsizes, the saved data is usually transformed to the Nordsieck form, which represents all the saved data by the scaled derivatives at a single point of the polynomial inter- polating the saved data. If the saved data includes only the values used r-1 in the formula (9:2,1 ;2), then when the solution is moved ahead at each step, the component of the data due to y' must be explicitly calculated and added to each data value of the Nordsieck vector. This additional calculation reduces the convenience of using the Nordsieck form. If all the derivatives are saved, and the Nordsieck vector has length 2(r + 1), then recalculation of derivatives is unnecessary but the storage require- ment is almost doubled. Several of the formulas considered in this 31 report have the property that the c.'s increase monotonically with i. In this case, only the data values used in the formula need be saved. Similar considerations apply to formula changing. The saved data must be such that the past values needed by the new formula are available and correctly spaced in time. The implementation of the formulas of Example 2.3 uses the interpo- lation technique. 
When used with the Nordsieck form of data representa- tion, stepsize changing and prediction of the next solution value can be efficiently done (Gear, 1971, p. 148). At the n step, the data defining the polynomial (1.8) is saved and is represented by the value and r + 1 derivatives at t . Thus rather than saving n b 4 = ( V Vn> Vl •'•' y n-r )T > we save , , v r h 2 h (r+1) . n . . , n ,, __n (r+1) V nV 2 V '"' (r+1)! y n a = This is possible because the c.'s increase monotonically as was mentioned above. In this notation where p is the polynomial interpolating y . Thus the representation is a linear transformation of the natural representation a = Ty . (2.16) The equation which defined y , h y f given a , is generally nonlinear and is dependent on the accuracy of an approximate value, y , for its ease of solution. To obtain y^ we evaluate the defining polynomial given for t , at t . With our data representation, this is equivalent to multiplying by the Pascal matrix, a P = Pa . (2.17) — n — n-1 wi ■ re 32 P = 1 1 1 ! 2 1 1 ' r + 1 r + 1] 2 - 1 ; 1" Vi is the matrix whose (i, j) entry is J , i , j = , . . . , r + 1 . With this data representation, equation (2.2"), which defines y , can be easily written. Let c. be the unit vector with i element 1. Then — l 12 can be written e • Pa - f(e_ • Pa , t + h ) — 1 — n -0 ~n n n (2.18; Thus the polynomial interpolating saved values need not be explicitly found. Values of other than 1 could be handled by replacing a by C(0)a where CfG) is the matrix with elements c. . = S..9 , for i, i = 0, -ii ij ij J .... r + 1. To solve (2.18) we need the first two elements of Pa as functions — n of the unknowns v and h y' . To find the elements of a after y and h y' ' n n n — n n n n arc found we need an expression for it also. Let P be the first two rows of P, and let T be the first two columns of T. Then using (2.16) and f 2. 17) , it follows that a = a? + T — n n 1 P \ y - e n • a r n -0 — n h y' - e, • a p n n -1 — n (2.19) P. a - P n a P + P.T, In 1 -n 11 P ^ y - e n • a r n -0 -n h y 1 - e • a p n' n —1 — n (2.20) 33 The program in Appendix I uses the 2x2 matrix P T to solve (2.18) and the (r + 1) x 2 matrix T. to correct a^ for a . Note both matrices are 1 — n -n independent of stepsize (but do depend on order) . To choose the order and stepsize to use, a code needs to estimate the error committed at the next step as a result of its choices. The program in Appendix I attempts to estimate the local truncation error and keep it less than a user specified tolerance, e. Suppose the past values y n-r- , y are exact and let p(t; y ) be the polynomial (1.8) defining the formula with the parameter y explicitly shown. Then with E = h p'(t + 9h ; y(t )) - h f(p(t + 9h ) , t + 9h ) , n n r n n J n n r n n n n the local truncation error satisfies -E d = n n a. 0,0 + 0(h ||d ||) , as we saw in Theorem 2.1 We next estimate E . Let n T. = t > T > n — 1 — be the abscissas {t : m = -r, n+m c times. Let m • * T C-1 " 'n-r t" \\ 0} with the m abscissa occurring C-l w(t) = II (t - x.) i=0 X and let y[t] be the divided difference (Isaacson and Keller, 1966) y[t] = rt C-l (C) y V* " T c-i' j + + t ft - T 1 + T 1^1 J dt, dt 1 " Then the exact solution y(t) can be written as y(t) = p(t; y(t n )) + w(t)y[t] , 34 so that y'(t) = p'(t;y(t n )) + a)'(t)y[t] + w(t)y'[t] . It is easily seen by induction that if |y | < M^ on an interval containing t n-r , t , t, then n In particular, the derivatives of y[t] are bounded if the appropriate derivatives of y(t) are. 
Since we are using the interpolation technique, the stepsize may be taken as h for the past r steps. Then C w(t) = h w t-t and u)'(t) = h C_1 S' n t-t where c_ c w(x) = (t + r) " r . . . (t) Then E = h y'ft + 8h ) - h C w'(6)y[t + 0h ] - h C+1 w(6)y"[t + Oh ] n n n n n J L n n J n n n J - h f(y(t + Oh ) - h C w(9) y[t + 6h ] , t + 9h ) n J n n n J L n n J n n = h y'(t + 6h ) - h C w'(6)y[t + 9h ] + 0(h C+1 ) n n n n J L n n J n - h y'ft + 6h ) + h f (y(t + 6h )) h C w(8)y[t + 0h ] n 7 n n n y w n n n n n J = -h C w'(G)y[t + 6h ] + 0(h C+1 ) n J n n .(C) "cT = -h c w'(e) y r ,^ * o(h c+1 ) , .(C) where t < £ < t + 9h . The latter holds because y[t + 6h ] = y (£)/C ! n-r — — n n n n 35 However y is bounded, by assumption, so any value of £ may be chosen and maintain the equality. The program estimates y (£) at some convenient value and uses the above formulas to estimate d . In this case C = r + 2 n and the highest approximate derivative saved is y , so the desired derivative may be estimated using either the difference y - y or the difference between predicted and corrected values of y , since hS c fo yJJ = y(t) + Me) -2^ * 0(h C+1 ) This is what is done in the program. The stiff system of equations y» = -By + Uw , y(0) = -U(l,l,l,ir , T - 1 where B = U(diag( 3- ))U*> w = (z. 2 ) , and z = U y, was solved using the code in Appendix I and Gear's DIFSUB in two cases. The exact solution is Uz where 3, z. = i l-(l+3 i )exp(3 i t) ' In case 1, R = 1000, 3 = 800, 3 = -10, 3 4 = 0.001 and U = 1/2 -1 1 1 1 1 -1 1 1 1 1 -1 1 1 1 1 -1 In case 2, 3, = 100 + lOOOi, 3 2 = 3,, 3 3 = -10, 3 4 = 0.01 and U = 1/2 1 1 1 1 1 1 -1 -1 i i 1 -1 i i -1 1 36 Roth methods automatically vary the stepsize to keep the estimated local error less than a given tolerance £. Tables 2.3 and 2.4 give the maximum error in the components of y at the first mesh point after t = 10 1 and the number of function evaluations required to reach that point. In both cases the tolerance e is 10~ . Table 2.3 Comparison for Case 1 Error Evaluations Time DIFSUB COLLOC DIFSUB 72 COLLOC 1 io-3 •45 10 -6 .16io-5 101 lio-2 .74io-7 .H10-6 173 229 li o-l .2610-6 .87io-9 314 377 lioO .48io-6 .15io-6 448 611 li o + l .36 i -5 .92io-6 591 797 110+2 .26io-5 .60io-7 693 981 110+3 .13] o-5 . 38 1 0-6 777 1161 The collocation program requires more function evaluations than DIFSUB in both cases, if the maximum order in DIFSUB is reduced to 3 in case 2. If DIFSUB is allowed to use its order 4 formula in case 2, it restricts the stepsize and uses more function evaluations than COLLOC. It is not surprising that DIFSUB takes fewer function evaluations than COLLOC. DIFSUB is a production program which has been extensively tuned and tested, while COLLOC is a test package designed to check whether the formulas developed in this report worked in practice. As a consequence, the code was designed for clarity and ease of debugging rather than effi- ciency. The efficiency of COLLOC could be increased in several ways. For example, changing the nonlinear equation solver to perform a rank-one update 37 on the Jacobian, J, of f and using J to calculate the Jacobian of the system (2.18) would save about 20% of the function evaluations used in COLLOC. Observation of Tables 2.3 and 2.4 reveals that COLLOC is solving the equations much more accurately than DIFSUB. Thus tuning of the error estimation scheme would also improve COLLOC relative to DIFSUB. Finally, the error tolerance used in these test problems is rather large, favoring the less accurate DIFSUB. 
When the tolerance, e, is decreased, COLLOC improves relative to DIFSUB. Table 2.4 Comparison for Case 2 Error Evaluations Time DIFSUB 1 DIFSUB 2 COLLOC 3 DIFSUB 1 DIFSUB 2 COLLOC 3 lio-3 .llio-4 .69io-5 .34io-5 87 61 105 lio-2 .95 10 -4 .10io-3 .33 10 -4 349 215 341 li o-l .25i<,-4 .32io-4 . 13 10 -4 1128 961 1501 lioO .10io-4 .36io-4 .13io-5 1647 2268 2129 li o+l .20io-4 .33io-4 .21 10 -5 1801 13177 2343 lio+2 .93io-5 .14 10 -5 1913 -- 2529 lio+3 .22io-6 .87x0-7 1995 -- 2725 1 maximum order 3 1 maximum order 4 ' maximum order 5 38 BLOCK COLLOCATION METHODS In this chapter we will consider methods using formulas (9 0' •••' 9 s-l :C -r' •••' C -1 ; V •••' C s-1 } (3.1) satisfying p'(t + e.h ) = f(p(t + e.h ), t + e.h ), i = o, ..., s-i, (3.2) n in r n inn in * k j ^.c^. , the most general case of (1.5). For convenience define a.. Ed>'.,(9.), S- • = ^.,(9.). Then (3.2) becomes ij Y jk 3 ij r jk i ^ ( k ) rk (k) h y v v k n'n+i , _ v n k n n+j n , ) a. . — r-j— *- = h f ) 3- • — t~t^ > t + 9.h K in k! n f, ij k! n in jk J \jk J J i = 0, ..., s-1. (3.3) k k Note that in the notation of Chapter 2, a... = a_ . and 3-, = 3„ . . We jk 0,i ik 0,j will consider the block of s future points as equally spaced, for nota- tional simplicity. However, the theory in this chapter remains valid when the points are unequally spaced. 3. 1 Order For the theoretical discussion it is easiest to rewrite (3.3) as a vector equation grouping the future points together. Let p = fr/s"|, R = ps and let n+s-1 , Y m-1 Vi n-s , Y m-p y. n-R+s-l n-R (3.4) be the solution grouped as s-vectors, and let Y. be the vector with exact values v(t.) replacing y. . J ' J Define the s x s coefficient matrices 39 A - v u, js+s-1 0,js a s-1, js+s-1 a 3 = -P, k = 0, ., , max(c )-l and B. j,k 0, js + s-1 s-l,jsj 0,js s-1, js+s-1 S-lJSj 3 = -p, k = 0, ., , max(c )- 1 k k where a. =3- = if -R < j < -r or k > c. . (This amounts to ignoring 1,] H 1,J - - 3 6 6 the points with j between -R and -r which do not appear in (3.3).) We do not write m as a function of n because different formulas may be used to integrate an equation, so the relation between m and n may depend on the formula selection scheme. It is understood that m corresponds to n in the manner depicted in equations (3.4). Let c = max{c.}, and let £ denote J J jk 5 £ for the rest of this chapter. With these definitions, the equa- j=-p k=0 tion defining the future point Y can be written as m hV k) h k Y (k) I' A JB-ti = h f f b.. ^Vr 1 . t * Gh r.jk k! mr.jk k! m jk J \jk J T where 0= (6, ..., 9 ,) and f(Y,T) means the vector m (3.5) (f^ • Y, ^ • T), ..., fCe^ Y, e. T)) With this notation the order of the formula (3.1) may be defined. Let y • = y(t .), i = -r, ..., s-1, where y(t) is the exact solution of (1.1) Define p as in (1.8) and let 40 DJy) - h p'(t + 9-h ) - h f(p(t + 9 n h ), t + fl h ) n r n On n V1 n ir ' n ir h p'(t +9 .h ) - h f(p(t +9 .h ), t +6 .h ) ir n s-1 n n r n s-1 n n s-1 n ■ I A , mm+j jk jk k! h f m , h k Y~« I B.. -^P- , t + 0h r. i k k ! m m (3.6) be the amount by which the exact solution fails to satisfy the formula, Then the formula is defined to be of order k if D m (y) = o(h k+1 ) .k+1 or all solutions y e C [t ,T] (this means either that each element of '0 k + 1. E (y) or equivalently ||E (y) || is of order h ) . As in Chapter 2, the order of the formula is in general deg p = C-l = £ c. - 1, unless the 6. are chosen such that p'(t + 6.h ) = y'(t + e.h ) + 0(h C ) , i = 0, ..., s-1 , r n i n n l n in which case the formula is of order C. 
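Theorem 3.1 below shows, as Theorem 2.1 did for s = 1, that this definition is equivalent to requiring that the local truncation error committed in a single step from exact past values be O(h^{k+1}). That equivalence is easy to check numerically for any particular formula. The following minimal sketch in Python (an illustration of the definition only, not code from the appendices; for simplicity it uses the two-step BACKDIF formula, which has order 2, rather than a block formula) takes one step from exact past values for y' = -y and halves the stepsize repeatedly; the local errors shrink by a factor of about 2^{k+1} = 8.

```python
import numpy as np

lam = -1.0        # test problem y' = lam*y, exact solution y(t) = exp(lam*t)
t0 = 0.0

def bdf2_local_error(h):
    """Take one step of the two-step BACKDIF formula
        (3/2) y_n - 2 y_{n-1} + (1/2) y_{n-2} = h f(y_n, t_n)
    from exact past values and return d_n = y_n - y(t_n)."""
    y_nm2 = np.exp(lam * t0)           # exact value at t_{n-2}
    y_nm1 = np.exp(lam * (t0 + h))     # exact value at t_{n-1}
    t_n = t0 + 2.0 * h
    # For the linear test problem the implicit equation is solved directly.
    y_n = (2.0 * y_nm1 - 0.5 * y_nm2) / (1.5 - h * lam)
    return y_n - np.exp(lam * t_n)

errors = [abs(bdf2_local_error(0.1 / 2**i)) for i in range(6)]
for coarse, fine in zip(errors, errors[1:]):
    print(round(coarse / fine, 2))     # ratios approach 2**(k+1) = 8 for k = 2
```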
To attain the higher order, one ceo calculates the coefficient of y i , in the expansion of p' about the ^n+O.h r r i n time t + 6.h as a function of 6. and solve for s distinct 6. to make this n i n l l coefficient zero. This is illustrated in the following example. Example 3.1 (-/3/3, /f/3:2;2,2) The Hermite basis functions for this method are: _j (T) = 1/4 T 2 (T - l) 2 + 3/4 T 2 (T - 1) 2 (T + 1) *_1 ^T) = 1/4 T 2 (T - 1) 2 (T + 1) Q (T) = (T - 1) 2 (T + l) 2 4> 0jl (T) = T(T - 1) 2 (T ♦ l) 2 i x () (T) = 1/4 T 2 (T + l) 2 - 3/4 T 2 (T + 1) 2 (T - 1) 4)^(1) = 1/4 T 2 (T + 1) 2 (T - 1) 41 With = -/5/3, 6 = /3/3, we find a -l = " 5/6 " 4/g/ * a o,o = 8/9 ^ a° = 5/6 - 4/9/3 u , i a° = -5/6 + 4/9/3 a°_ = -8/9VJ if? = 5/6 + 4/9/3 a a. a a a a 1 0,-1 1 '0,0 1 0,1 : 1 1,-1 1 0,0 1 1,1 ' ■ -1/9 - 1/18/3 -4/9 -1/9 + 1/18/3 ■ -1/9 + 1/18/3 -4/9 -1/9 - 1/18/3 6° ^ = 5/18 + 1/9/3 3^ j = 1/18 + 1/54/3 B o,o ■ 4/g l$J = -4/27^ . = 5/18 - 1/9/3 gj = -1/18 + 1/54/3 3° = 5/18 - 1/9/5" ,0 8 1,0 ■ 4/9 ,0 3" = 5/18 + 1/9/3 37 = 1/18 - 1/54/3 B !,o ■ 4/27 ^ B: = -1/18 - 1/54/3 1,0 1,1 The corresponding A., and B.. matrices are r jk jk '-5/6 - 4/9/5" o" -5/6 + 4/9/5" 0, -1/9 - 1/18/5" 0" -1/9 + 1/18/3 0, 5/6 - 4/9/5 8/9/5" .5/6 + 4/9/3" -8/9/5", -1/9 + 1/18/3" -4/9 .-1/9 - 1/18/5" -4/9, 1,0 1,1 0,0 0,1 0,0 0,1 '5/18 + 1/9/5" 0' 5/18 - 1/9/5" o. 'l/18 + 1/54/5" o' 1/18 - 1/54/5" '5/18 - 1/9/5" 4/S r .5/18 + 1/9/3 4/£ ) r -l/18 + 1/54/5" -4/27/? -1/18 - 1/54/5" 4/ 27 A 42 Note dct(A ) ~ -40/27 /5 / so A . is nonsingular. Because h p»(t n + O.h ) = h y'(t + 6.h ) + 0(h 7 ) n n in n n in n (g) the method is of order (-> (the coefficient of h y . , is l/50(-9./9 + n y n+6.h v i i n (9.) 5 ) =0 when 2 = 1/5) l l Let the past values, Y ., i = -p, ..., -1, be exact and choose 1 m+i Y so (5.5) is satisfied. Then the local truncation error is defined to m be d = Y - Y m m m A theorem analogous to Theorem 2.1 holds, Theorem 5.1 A formula (5.1) with A „ nonsingular and ||A || , ||A , || , and HlT , I bounded as h -»■ is of order k if and only if d = 0(h ). 11 0,k " ' m ' Proof: With Y . , Y m defined as above we have h k~ ( k) m ., jk k! h k Y« ii y r . hi ^ D = £ A JLJ^i _ h f k' b.. -^VT 1 > t + 0h ik k! (3.7) 0=[ I A h k Y~0O m m+j j=-p k jk k! h k Y^ h f I IB 4V XE1 + IB mi.- r l J = -P k ik k! h k Y ( k) 0,k k! mm £a h k Y (k) m m 0,k k! (5.8) Subtracting (5.7) from (5.8) and applying the mean value theorem we see that V A "L (y^ - Y°°) + D = 0(h ||Y - Y || ) f 0,k k! *- m m J m m " m m" 43 (using boundedness of B , and the first c - 1 derivatives of f) . Thus , K A n _d = -D + 0(h |ld || ). (3.9) 0,0 m m m " m " Because A _ exists and is bounded as h -+ 0, d m = "^V™ + 0( - h m H d JI ) > m 0,0 m m ' m " and if the method is order k, then d = 0(h k+1 ) . m k+1 Conversely if d = 0(h ) , it follows immediately from (3.9) that D = 0(h k+1 ) , m so the method is of order k. Q.E.D. k k Lemma 2.1 extends directly to a., and 3- • with only a change in notation in the proof (to add the extra index). We restate it as follows: k k Lemma 3.1 The coefficients a.., $. . are rational functions in the stepsize 13 13 change variables 6 _,..., 6 . n-r+2 n k k In particular a.., 0. . are finite continuous functions of 6 ., ..., 6 r ij ij n-r+2 n in a neighborhood of (0, . . . , 0) . In fact, the rational functions are continuous and finite on the domain {(5 _, ..., 6 ) | 6 > -1, ..., 6 > -1} n-r+2 n ' n-r+2 n as in Chapter 2, since the matrix in (2.11) is nonsingular on this domain. 
If we assume that h is chosen by the stepsize selection scheme (2.12) (with n replaced by m) we get the analog of Corollary 2.1. Corollary 3.1 If a method described by (3.1) chooses stepsizes by (2.12), then the matrices A., , B.. are bounded for all h, m. jk jk Finally, if A exists when the stepsize is constant, then A exists and is bounded for small stepsize changes, by continuity of the 44 elements of A . Thus wc have Theorem 3.2 Suppose a method dcscrihed by (3.1) and choosing stepsizes by (2.12) has A nonsingular when the stepsize is constant and satisfies 6. = 0(h) for all i . k+1 Then the method is of order k if and only if d = 0(h ). m 3 . 2 Stability Regio ns The rationale for using block multistep methods is to obtain improved stability regions, since the methods of Chapter 2 were all found to be stiffly stable only for order at most 9. In this section we develop the characteristic polynomial for block multistep methods and exhibit several families, some of which are stiffly stable with order up to 24. The theory in this and the next section can be applied to general methods of the form (3.5) and not just to those methods derived from (3.2). Theorem 3.3 A formula of the form (3.5) has a region of absolute stability {u all roots, £, of tt(u,£) have |?| <_ 1 and roots of modulus 1 are simple} where k k+1 H»,K) - ?P+J) • jk J Proof: To find the absolute stability region of formula (3.5) we apply it to y' = Ay with constant stepsize h and obtain (hA) k Y (hA)"Y jk k jk m+j ik k! jk (3.10) with A., , B., independent of h, A, m. Writing u = hA and collecting terms, jk jk we find that (3.10) takes the form AY + . . . + A„Y = , - p m- p m 45 (3.11) where c k k+1 A. = I A.. £-- B.. JU- 3 k f J k k! J k k! (3.12) is an s x s matrix whose elements are polynomials in u. We solve this by converting it to a one step equation. A (y) i s nonsingular except for finitely many values of y. Except for these values of y we may multiply (3.11) by A and obtain Y = -A "A ,Y m -1 m-1 - Al X A Y -p m-p If we write Z = m m-1 m-p then we have where Z . = AZ m+1 m A = -V-i -A *A ) -2 I A -p+1 a o 1a - ■p - (3.13) Thus the solution is m U where Z must be defined by some other means. By writing A = P J P where J is the Jordan canonical form of A, we can see that the elements of A are linear conbinations of 46 {£. , m£ , m r. r. is an eigenvalue of A with Jordan block of size r } , with coefficients independent of m. Thus for Z to remain bounded inde- m pendently of the choice of Z , all eigenvalues of A must be at most 1 in magnitude, and those eigenvalues of magnitude 1 must correspond to simple Jordan blocks. To establish the theorem it is only necessary to calculate det(£l - A). Since A is a companion matrix we see that :(£I - A) = det(£ p + A^A^^' 1 + ... + A~ X A_ ) . deti This polynomial has the same roots if we multiply it by det(A ), so the eigenvalues are the solutions to the equation det(A r p + A .F^ 1 + .. . + A ) = . 0^ -1^ -p The left side is the polynomial denoted by tt in the statement of the theorem. Finally note that the values of u which make A singular result in an infinite root E, and hence occur outside the region of absolute stability. For if A (u) is singular then a similarity transform (which preserves determinants) may be applied to A £ + . . . + A to replace A by a matrix with zero first column. This results in a stability polynomial of at most degree sp-1 in E, which thus has at least one root at °° (in the sense of algebraic functions) . Thus the use of A is justified. Q.E.D. 
Example 3.2 BLOCKS: (1/2, 3/2, 5/2:l r ;2, 2, 2) r = 1, ..., 18 This family starts at order 6 and avoids the problem that many families have of starting at very low order and having to take many small steps initially. The methods are almost A-stable up to order 15 (i_.£. , ot > 89°), which is probably sufficient for most applications. The 47 stability regions then worsen slowly through order 20, up to which a is still at least 80° . The results of Section 3.1 show that the order of (1/2, 3/2, 5/2:1 ; 2,2,2) is r + 5. The stability regions illustrated in Figs. 3.1-3.3 show that the methods are stiffly stable for r _< 18. The corresponding D's and a's are given in Table 3.1. Note that by taking three future points the stability regions are dramatically improved over the stability regions of methods with one future point. Table 3.1 BL0CK3 (1/2 ,3/2,5/2 l r ;2,2,2; r order D a(°) 1 6 -0.369 78.6 2 7 -0.118 86.6 3 8 -0.022 89.3 4 9 -0.001 89.9 5 10 90 6 11 90 7 12 90 8 13 90 9 14 -0.007 89.8 10 15 -0.025 89.4 11 16 -0.058 88.7 12 17 -0.106 87.7 13 18 -0.172 86.3 14 19 -0.264 84.4 15 20 -0.401 81.5 16 21 -0.710 76.6 17 22 -1.397 66.3 18 23 -7.515 39.4 48 i u o vO II / — \ HlflHW[' S CO D 00 U u I— I r\j LO to U O to to DO •rH 51 Example 3.3 BL0CK2: (1/2, 3/2:l r ;2, 2) r = 1, ..., 14 This family is stiffly stable up to order 17 and almost A-stable up to order 9. The stability regions are shown in Figs. 3.4 and 3.5 with the corresponding D's and a's in Table 3.2. Note that although the stability regions of these methods are much better than the methods in Chapter 2, they are distinctly inferior to those of Example 3.2. Thus if high accuracy were desired in the integration of the ODE, the methods of Example 3.2 would probably be preferable. Table 3.2 BL0CK2 (l/2,3/2:l r ;2,2) r order D a(°) 1 4 -0.101 85.7 2 5 -0.004 89.8 3 6 90 4 7 90 5 8 90 6 9 -0.007 89.8 7 10 -0.058 88.6 8 11 -0.180 86.2 9 12 -0.391 82.6 10 13 -0.715 77.8 11 14 -1.192 71.9 12 15 -1.921 64.8 13 16 -3.164 56.9 14 17 -5.620 47.7 52 BfrM £ ^1 u o II u •—t (Nl to w* u o J CO n o < 1* • to . 8 •H f» U. - 1 ? 53 i— i i u u o oo II u CM to • CN ■ rtf, a US u o r ? LO tO o GO •H T (X, 54 Example 3.4 BL0CK4: (1/2, 3/2, 5/2, 7/2:l r ; 2, 2, 2, 2) r = 1, ..., 17 This family is stiffly stable up to order 24 and almost A-stable up to order 20. The stability regions are shown in Figs. 3.6-3.8, with the corresponding D's and a's in Table 3.3. For very high order the stability regions are superior to those of Example 3.2, but the improvement is probably not enough to justify the computer time required to solve the implicit equations for one more future point. Thus unless extremely high accuracy is desired the methods of Example 3.2 seem more practical. Table 3.3 BL0CK4 (1/2, 3/2, 5/2, 7/2 :l r ; 2, 2, 2, 2) r order D a(°) 1 8 -0.705 71.1 2 9 -0.358 81.2 3 10 -0.162 86.2 4 11 -0.059 88.6 5 12 -0.014 89.6 6 13 -0.001 90.0 7 14 90 8 15 90 9 16 90 10 17 90 11 18 90 12 19 -0.011 89.8 13 20 -0.047 89.1 14 21 -0.873 83.4 15 22 -2.581 72.7 16 23 -5.565 56.6 17 24 -10.993 25.8 55 to i— i i 00 h O vO II U /— \ CM CM CM r- CM lO CM IO CM u o 00 to •H 56 o> i u U o cn ii u I — ^ CM 5 CN tt r-- a ri (N LT5 cm 9 M CN u o DQ r- to •H 57 ■>* i o CN u o u o II u CM Csl to -^ u o -J 03 00 to •H 58 3.3 Stability and Convergence In this section we obtain sufficient conditions for the stability and convergence of block multistep methods of the form (3.1). These condi- tions apply to the methods of Chapter 2 as a special case. 
The results in this section are an extention of results of Gear and Watanabe (1974) . Because stability and convergence are properties of a method rather than of the underlying formulas, we must consider the technique used to change stepsizes and the schemes used to select stepsizes and formulas, as well as the formulas themselves. Of these, the stepsize changing technique needs the most explanation. As mentioned above, either the variable step or the interpolation technique may be used. The variable step technique is simpler to handle theoretically because the saved derivatives are defined in terms of the values y . by (1.4). Thus to show stability of the method, it is sufficient to establish the stability of the values y ., assuming f and its derivatives are bounded. For this reason, the stability theory will consider only the values y . to be saved, even though a code would also save the derivatives. With the interpolation technique it is no longer true after a step- size change that the saved derivatives are defined by equations (1.4), since the new derivatives are those of an interpolation polynomial at noncol locating points. Thus we must examine the error amplification for these derivatives. For notational simplicity we will restrict c to be 2 with the interpolation technique. Furthermore, as was discussed in Section 2.3, the interpolation technique depends on the saved data used in the interpolating polynomial. Let (R + s) = max{(p+l)s} where the maximum is taken over all formulas used in the method. We will assume for simplicity that the saved data before step m is T fon-1' "•' y n-(R+s) ' h n y n-l' ••*' h n 7 n-(R+s) ^ ' max max 59 Fewer values and derivatives could be used in defining the interpolating polynomial in special cases. For example let r , r be integers such that the formula used at the m step satisfies -r < -r and c. = 1 for i < -r . r m — 1 m Then the values y for -r < i < s-1 and hy' for -r < i < s-1 could 7 n+i m — — y n+i m — — be used to define the interpolation polynomial for stepsize changing. The formula selection scheme and r , r must be chosen to ensure that the past mm r data used by the next formula are correctly spaced. For example in the method implemented in Appendix I, r = r, r =0 and the formula changing scheme allows r to increase by at most one after each step. This abbre- viated data structure would complicate the notation used in this section, but would not effect the validity of the stability and convergence results. Note that the polynomial used in the interpolation technique is in general distinct from the polynomial defining the formula. We first examine the effect of one step of formula (3.5), with variable step technique, on the error. Let Y , , Y , be the numerical r n ' m+1 m+1 and exact solution, respectively, grouped as s-vectors (c.f. (3.4)). Denote the error and its formal derivatives by E (k) = y (k) _ ~(k) m+j m+j m+3 Then by (3.5) and (3.6) o = i A hV k ? m m+j jk jk k! - h m Ii h k Y (k) m m+i -» — TT^- , t + 0h jk k! m m I B.. -^P- , t ♦ 9h v, jk k! m m jk J + D m Let J denote a matrix whose i row is m 60 M. Then we choose the £. such that hV k ) h k E (k) ^,jk k! m m r, i k k! m jk J jk J Similarly we can find linear operators K . with norm bounded independently st of m,j (assuming that the (k + 1) ' partials of f are bounded) such that = h k K k . , h k K k . I 1 A JLJli- h j I 1 B.,^1 jk k! mm., jk k! U k jk E + D . m+j m Define Then we can write where e = (E , .... E .) , m v m m-p+r ' d = (C _1 n D , 0, ..., 0) T . 
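The stability polynomial of Theorem 3.3 and the companion matrices used throughout this chapter translate directly into a numerical test for membership of a point μ = hλ in the region of absolute stability of a block formula. The following minimal sketch in Python (an illustration only; the data layout and function name are mine and are not those of the Appendix II code) forms the matrices A_j(μ) of (3.12), builds the companion matrix from the proof of Theorem 3.3, and checks the eigenvalue moduli. It is verified on the one-step BACKDIF formula (0:1;1), whose region of absolute stability is |1 - μ| ≥ 1.

```python
import numpy as np
from math import factorial

def in_absolute_stability_region(mu, A_blocks, B_blocks, tol=1e-12):
    """A_blocks[l][k] and B_blocks[l][k] are the s-by-s matrices A_{-l,k} and
    B_{-l,k} of the block formula (3.5), for lags l = 0, ..., p.  The point
    mu = h*lambda is accepted when every eigenvalue of the companion matrix
    of Theorem 3.3 has modulus at most one (multiple roots on the unit
    circle are not examined, so boundary points may be misclassified)."""
    p = len(A_blocks) - 1
    s = A_blocks[0][0].shape[0]

    def A_of_mu(lag):
        # A_{-lag}(mu) = sum_k ( A_{-lag,k} mu^k/k! - B_{-lag,k} mu^(k+1)/k! ), cf. (3.12)
        return sum(A_blocks[lag][k] * mu**k / factorial(k)
                   - B_blocks[lag][k] * mu**(k + 1) / factorial(k)
                   for k in range(len(A_blocks[lag])))

    A0_inv = np.linalg.inv(A_of_mu(0))
    top = np.hstack([-A0_inv @ A_of_mu(lag) for lag in range(1, p + 1)])
    if p > 1:
        shift = np.hstack([np.eye(s * (p - 1)), np.zeros((s * (p - 1), s))])
        companion = np.vstack([top, shift])
    else:
        companion = top
    return np.max(np.abs(np.linalg.eigvals(companion))) <= 1.0 + tol

# One-step BACKDIF formula (0:1;1): s = 1, p = 1,
# alpha_{0,0} = 1, alpha_{-1,0} = -1, beta_{0,0} = 1, beta_{-1,0} = 0.
A_blocks = [[np.array([[1.0]])], [np.array([[-1.0]])]]
B_blocks = [[np.array([[1.0]])], [np.array([[0.0]])]]
print(in_absolute_stability_region(-1.0 + 0.0j, A_blocks, B_blocks))   # True:  |1 - mu| = 2
print(in_absolute_stability_region(0.5 + 0.0j, A_blocks, B_blocks))    # False: |1 - mu| = 0.5
```

Scanning μ over a grid, or along rays Arg(-μ) = constant, with a test of this kind is one way to confirm the values of D and α reported in Tables 3.1-3.3 once the coefficient matrices of the corresponding formulas have been computed.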
We first examine the effect of one step of formula (3.5), with the variable step technique, on the error. Let Y_{m+1} and Ỹ_{m+1} be the numerical and exact solutions, respectively, grouped as s-vectors (cf. (3.4)). Denote the error and its formal derivatives by

    E^(k)_{m+j} = Y^(k)_{m+j} - Ỹ^(k)_{m+j} .

Then by (3.5) and (3.6)

    0 = Σ_{j,k} A_{jk} h_m^k Ỹ^(k)_{m+j} / k!  -  h_m f( Σ_{j,k} B_{jk} h_m^k Ỹ^(k)_{m+j} / k! , t_m + θ h_m )  +  D_m .

Let J_m denote a matrix whose i-th row is the gradient of the i-th component of f evaluated at a suitable mean-value point. Then we choose J_m such that

    h_m [ f( Σ_{j,k} B_{jk} h_m^k Y^(k)_{m+j} / k! , t_m + θ h_m ) - f( Σ_{j,k} B_{jk} h_m^k Ỹ^(k)_{m+j} / k! , t_m + θ h_m ) ]
        = h_m J_m Σ_{j,k} B_{jk} h_m^k E^(k)_{m+j} / k! .

Similarly we can find linear operators K^k_{m,j}, with norms bounded independently of m and j (assuming that the (k+1)-st partials of f are bounded), such that

    0 = Σ_{j,k} A_{jk} (h_m^k / k!) K^k_{m,j} E_{m+j}  -  h_m J_m Σ_{j,k} B_{jk} (h_m^k / k!) K^k_{m,j} E_{m+j}  +  D_m .

Define

    e_m = (E_m, ..., E_{m-p+1})^T ,
    d_m = (C_{m,0}^{-1} D_m, 0, ..., 0)^T .

Then we can write

    e_m = S_m e_{m-1} - d_m ,

where

    S_m = [ -C_{m,0}^{-1}C_{m,-1}   -C_{m,0}^{-1}C_{m,-2}   ...   -C_{m,0}^{-1}C_{m,-p} ]
          [           I                       0             ...             0          ]
          [                                  ...                                       ]          (3.14)
          [           0                      ...              I             0          ]

with

    C_{m,j} = Σ_k A_{jk} (h_m^k / k!) K^k_{m,j}  -  h_m J_m Σ_k B_{jk} (h_m^k / k!) K^k_{m,j} ,

provided C_{m,0} is invertible. If the stepsizes are chosen according to the standard stepsize scheme (2.12) with δ_i = O(h), then the A_{jk} and B_{jk} are bounded, so C_{m,0} = A_{0,0} + O(h). Thus for h small (which we will assume in stability or convergence proofs), nonsingularity of C_{m,0} is equivalent to nonsingularity of A_{0,0}, which is in turn equivalent to nonsingularity of A for the fixed step formula, as was discussed in Section 3.1.

Let

    S_m = S̄_m + h_m Ŝ_m ,

where

    S̄_m = [ -A_{0,0}^{-1}A_{-1,0}   -A_{0,0}^{-1}A_{-2,0}   ...   -A_{0,0}^{-1}A_{-p,0} ]
          [           I                       0             ...             0          ]
          [                                  ...                                       ]          (3.15)
          [           0                      ...              I             0          ]

Thus

    Ŝ_m = (1/h_m) [ A_{0,0}^{-1}A_{-1,0} - C_{m,0}^{-1}C_{m,-1}   ...   A_{0,0}^{-1}A_{-p,0} - C_{m,0}^{-1}C_{m,-p} ]
                  [                    0                          ...                        0                      ]

We wish to show Ŝ_m = O(1) as h → 0. Consider one of the matrices making up the first row:

    (1/h_m)( A_{0,0}^{-1}A_{j,0} - C_{m,0}^{-1}C_{m,j} )
        = C_{m,0}^{-1} A_{0,0}^{-1} [ ( C_{m,0} A_{j,0} - A_{0,0} C_{m,j} ) / h_m ]
          + (1/h_m) [ A_{0,0}^{-1} C_{m,0}^{-1} - C_{m,0}^{-1} A_{0,0}^{-1} ] C_{m,0} A_{j,0} .

The first quantity in brackets is clearly bounded as h → 0, since every term in the numerator has a positive power of h_m. The second quantity is also bounded as h → 0 as a result of the following two lemmas. Thus Ŝ_m = O(1), as claimed.

Lemma 3.2  If C_m = A_m - h_m B_m with A_m, B_m bounded, A_m nonsingular and A_m^{-1} bounded, then for small enough h_m, C_m^{-1} = A_m^{-1} + h_m D_m with D_m bounded.

Proof:

    C_m^{-1} = [ A_m ( I - h_m A_m^{-1} B_m ) ]^{-1} = Σ_{i=0}^{∞} ( h_m A_m^{-1} B_m )^i A_m^{-1}
             = A_m^{-1} + h_m Σ_{i=1}^{∞} h_m^{i-1} ( A_m^{-1} B_m )^i A_m^{-1}
             = A_m^{-1} + h_m [ I - h_m A_m^{-1} B_m ]^{-1} A_m^{-1} B_m A_m^{-1} .

Thus we may let D_m = [ I - h_m A_m^{-1} B_m ]^{-1} A_m^{-1} B_m A_m^{-1}.  Q.E.D.

Lemma 3.3  If C_m = A_m + h_m B_m with A_m, B_m bounded, then

    A_m C_m - C_m A_m = h_m D_m

with D_m bounded.

The proof is a trivial calculation.

Finally we note that by Lemma 3.1 we can write S̄_m = S + h_m S̃_m, where S is the matrix (3.15) for the constant stepsize method and S̃_m is bounded. Thus we have the following lemma.

Lemma 3.4  Suppose a method described by (3.1) and choosing stepsizes by (2.12) with δ_i = O(h) has A_{0,0} nonsingular when the stepsize is constant. Suppose the method is applied to solve equation (1.1), where f has c - 1 continuous partial derivatives. Then the error at the m-th step satisfies

    e_m = S_m e_{m-1} - d_m ,

where S_m = S + O(h).
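As a concrete illustration of Lemma 3.4, the following minimal sketch applies one step of the error recursion e_m = S_m e_{m-1} - d_m using the companion structure of (3.14). The blocks C_{m,j} below are small random stand-ins rather than coefficients of any formula in this chapter, and the block size s and history length p are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    s, p = 2, 3                                   # hypothetical block size / history length

    # Stand-ins for C_{m,0}, C_{m,-1}, ..., C_{m,-p}.
    C = [np.eye(s) + 0.1*rng.standard_normal((s, s)) for _ in range(p + 1)]
    C0_inv = np.linalg.inv(C[0])

    # First block row: -C_{m,0}^{-1} C_{m,-j}; identity blocks on the subdiagonal
    # shift the stored error blocks, exactly as in (3.14).
    S = np.zeros((p*s, p*s))
    for j in range(1, p + 1):
        S[:s, (j-1)*s:j*s] = -C0_inv @ C[j]
    S[s:, :-s] = np.eye((p-1)*s)

    e_prev = rng.standard_normal(p*s)             # stand-in for e_{m-1}
    d_m = np.zeros(p*s)
    d_m[:s] = C0_inv @ (1e-3*rng.standard_normal(s))   # local truncation contribution

    e_m = S @ e_prev - d_m
    print("new leading error block E_m:", e_m[:s])
    print("shifted history matches:", np.allclose(e_m[s:], e_prev[:-s]))

Only the leading block E_m is newly computed at each step; the identity blocks simply shift the previously stored error blocks, which is why boundedness of products of the S_m matrices is the key issue below.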
We next consider the effect of one step of the interpolation technique. As mentioned above, c is restricted to 2. To include the effects of the method on the derivatives, we will extend the e_m vectors to include derivatives. Because formulas using different s may occur in a method, all the saved values y_{n+i} will be grouped together. It is convenient first to calculate a permutation of the error vector which groups the values and derivatives together in blocks of length 2s. As we shall see, the interpolation process introduces an extra past block into the error equation. To make room for this block, the error vectors will be further extended and the error amplification matrices S_m will have columns of zeroes added.

Let the points t_{m+j} be spaced equally with stepsize h_m, and let

    Z_{m+j} = (Y_{m+j}, h_m Y'_{m+j})^T ,        Z̃_{m+j} = (Ỹ_{m+j}, h_m Ỹ'_{m+j})^T ,

    A_j = [ A_{j,0}    A_{j,1} ]          B_j = [ B_{j,0}    B_{j,1} ]
          [    0      δ_{j,0} I ] ,             [ δ_{j,0} I     0    ] ,

and define

    H_{m+j} = Z_{m+j} - Z̃_{m+j} .

Then (3.5) and the equations y'_{n+js+i} = f(y_{n+js+i}, t_{n+js+i}) become

    Σ_{j=-p}^{0} A_j Z_{m+j} = h_m f( Σ_{j=-p}^{0} B_j Z_{m+j} , t_m + θ h_m ) ,        (3.16)

where now θ = (θ_0, ..., θ_{s-1}, s-1, ..., 0). If we extend D_m to be a 2s-vector by adding zeroes at the end, equations (3.16) and (3.6) yield

    0 = Σ_{j=-p}^{0} A_j H_{m+j}  -  h_m [ f( Σ_{j=-p}^{0} B_j Z_{m+j}, t_m + θ h_m ) - f( Σ_{j=-p}^{0} B_j Z̃_{m+j}, t_m + θ h_m ) ]  +  D_m .

As above, we may choose J_m bounded independently of h and m such that

    0 = Σ_{j=-p}^{0} A_j H_{m+j}  -  h_m J_m Σ_{j=-p}^{0} B_j H_{m+j}  +  D_m .

Let C_{m,j} = A_j - h_m J_m B_j. If we define

    e_m = (H_m, ..., H_{m-p})^T ,
    d_m = (C_{m,0}^{-1} D_m, 0, ..., 0)^T ,

then we can write

    e_m^{h_m} = S_m e_{m-1}^{h_m} - d_m ,        (3.17)

where

    S_m = [ -C_{m,0}^{-1}C_{m,-1}   ...   -C_{m,0}^{-1}C_{m,-p}   0 ]
          [           I            ...              0            0 ]
          [                        ...                             ]
          [           0            ...     I                     0 ]

The superscript on e indicates that the data points are spaced with stepsize h_m instead of the standard spacing corresponding to the (m-1)-st step, h_{m-1}. Note that C_{m,0} is invertible for small h because

    A_0^{-1} = [ A_{0,0}^{-1}   -A_{0,0}^{-1}A_{0,1} ]
               [      0                  I           ] .

As before,

    S_m = S̄ + O(h) ,

where S̄ is the corresponding matrix built from the blocks -A_0^{-1}A_{-1}, ..., -A_0^{-1}A_{-p}, identity blocks on the subdiagonal, and a final column of zeroes.

To allow formula changes we permute the elements of the error vector e_m and define

    e_m = (E_m, ..., E_{m-p}, h_m E'_m, ..., h_m E'_{m-p})^T .

Then (3.17) becomes

    e_m^{h_m} = S_m e_{m-1}^{h_m} - d_m ,

where S_m and d_m now denote the correspondingly permuted matrix and vector. We have S_m = S̄ + O(h), where S̄ has the first block row

    ( -A_{0,0}^{-1}A_{-1,0}, ..., -A_{0,0}^{-1}A_{-p,0}, 0, -A_{0,0}^{-1}A_{-1,1}, ..., -A_{0,0}^{-1}A_{-p,1}, 0 )        (3.18)

and identity blocks which shift the remaining saved value and derivative error blocks, with zero columns in the positions corresponding to the extra past block.

With the interpolation technique, the saved data are preprocessed to change the stepsize before the constant stepsize method is applied. We now add the effect of this preprocessing to the error equation. At the (m-1)-st step, before the change in stepsize, we have saved data

    (Y_{m-1}, ..., Y_{m-p-1}, h_{m-1} Y'_{m-1}, ..., h_{m-1} Y'_{m-p-1})^T ,

whose values correspond to the times t_{m-1} + i h_{m-1}. Denote this vector by T_{m-1}, and the corresponding exact values by T̃_{m-1}. Then e_{m-1}^{h_{m-1}} = T_{m-1} - T̃_{m-1} is the error before the stepsize is changed. The polynomial which interpolates the 2R values and derivatives of T_{m-1} is evaluated at the new time steps t_{m+i} = t_{m-1} + (s-1)h_{m-1} + i h_m to give the past values after the stepsize change (denoted by T_{m-1}^{h_m}, with corresponding exact values T̃_{m-1}^{h_m}). This evaluation is a linear transformation which may be written

    C_m = Q^{-1} C(h_m / h_{m-1}) Q ,

where Q changes the saved data into Nordsieck form at time t_{m-1} + (s-1)h_{m-1} and C(a) = diag(1, a, ..., a^{2R-1}). Thus

    T_{m-1}^{h_m} = C_m T_{m-1} ,
    T̃_{m-1}^{h_m} = C_m T̃_{m-1} + O(h^{2R}) ,
    e_{m-1}^{h_m} = C_m e_{m-1}^{h_{m-1}} + O(h^{2R}) .

Since 2R ≥ C, we can modify the discretization term d_m to include the O(h^{2R}) term above. Then the process of stepsize changing and the application of the formula can be combined to give the error equation

    e_m^{h_m} = S_m C_m e_{m-1}^{h_{m-1}} - d_m .

If the stepsize changing scheme (2.12) is used with δ_i = O(h) for all i, then h_m / h_{m-1} = 1 + O(h), so C_m = I + O(h). Thus S_m C_m = S_m + O(h), and Lemma 3.4 holds for the interpolation technique (with the symbols suitably reinterpreted, and with S_m replaced by S_m C_m).

Lemma 3.4 gives the effect of one step, using either the variable step or the interpolation technique, on a fixed formula (3.1).
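The diagonal matrix C(a) above is easy to picture in isolation. The following minimal sketch builds a Nordsieck vector (scaled derivatives h^j y^(j)(t)/j!) for an assumed test polynomial and checks that multiplying by C(a) reproduces the vector built directly with the new stepsize. The test function, order, and stepsizes are illustrative, and the conjugation by Q that converts between saved values/derivatives and Nordsieck form is not shown.

    import numpy as np
    from math import factorial

    def nordsieck(derivs, h):
        """Scaled-derivative (Nordsieck) vector from derivatives y^(j)(t), j = 0..q."""
        return np.array([d * h**j / factorial(j) for j, d in enumerate(derivs)])

    # Derivatives of y(t) = (1 + t)^4 at t = 1, computed by hand.
    derivs = [16.0, 32.0, 48.0, 48.0, 24.0]       # y, y', y'', y''', y''''
    h_old, h_new = 0.10, 0.07
    a = h_new / h_old

    z_old = nordsieck(derivs, h_old)
    C = np.diag([a**j for j in range(len(derivs))])   # C(a) = diag(1, a, a^2, ...)
    z_rescaled = C @ z_old

    # Rescaling the Nordsieck vector is exactly the same as rebuilding it with h_new.
    print(np.allclose(z_rescaled, nordsieck(derivs, h_new)))   # True

Because the scaling acts entry by entry in Nordsieck form, the full stepsize-change operator C_m = Q^{-1} C(a) Q reduces to the identity plus O(h) terms whenever a = 1 + O(h), which is the property used above.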
Changing formulas can be allowed by extending the error vectors to include the greatest number of past points allowed, and extending the error amplification matrices suitably. If the variable step technique is used, then the e_m vectors are extended to save R_max values, and the error amplification matrices are extended to the R_max x R_max matrices

    [ S_m   0 ]
    [  L    D ] ,

where L and D are chosen so that the extended S_m matrix has ones on the s-th subdiagonal. With the interpolation technique the e_m vectors are extended to save (R_max + s) values and derivatives. If the error amplification matrix is partitioned into (R_max + s) x (R_max + s) blocks,

    S_m = [ S_{0,0}   S_{0,1} ]
          [ S_{1,0}   S_{1,1} ] ,

then S_m is extended to the 2(R_max + s) x 2(R_max + s) matrix

    [ S_{0,0}   0   S_{0,1}   0 ]
    [    L      D      0      0 ]
    [ S_{1,0}   0   S_{1,1}   0 ]
    [    0      0      L      D ] ,

where L and D are as above. We assume this to be done with no change in notation. In addition, the interpolation technique is assumed to change all 2(R_max + s) saved values and derivatives to the new stepsize, which redefines C_m appropriately.

As mentioned at the beginning of this section, when the interpolation technique is used it is sometimes sufficient for stability and convergence for fewer than 2(R_max + s) values and derivatives to be converted to the new stepsize. The requirements on the stepsize changing matrix C_m are that C_m = I + O(h) for stability, and that the stepsize changing operation has error O(h) for accuracy. This is achieved if the data used in the interpolating polynomial are chosen using r_m, r̄_m as above.

The formula changing technique is such that when we use a new formula which requires more past values (or derivatives) than the old one, we use actual past data rather than arbitrary values, such as zero. Formally, we are introducing a formula change matrix (Q_{i,j} in the notation of Gear and Watanabe (1974)) which is the identity. The result is equivalent to using a matrix which updates all the derivatives in the Nordsieck representation when the formula is changed.

We now prove the main theorem of this section.

Theorem 3.4  If a family of methods of the form (3.1) is used to solve the ODE (1.1) in a manner such that
1) each formula F_i is strongly stable,
2) each formula has order ≥ 1, and
3) the interpolation technique with c = 2 or the variable step technique is used,
then for each formula F_i there exists an integer p_i such that the method is stable and convergent for a stepsize selection scheme (2.12) that produces small stepsize changes satisfying

    h_m / h_{m-1} = 1 + O(h)

and a formula selection scheme which produces infrequent formula changes in the sense that at least p_i steps are taken after the formula F_i has been selected.

Proof: The proof will use several technical lemmas shown by Gear and Tu (1974) and Gear and Watanabe (1974). The key condition is the stability condition, i.e., that there exists k_1 < ∞ such that the matrices S̄_n of Lemma 3.5 satisfy

    ‖ S̄_n S̄_{n-1} ... S̄_m ‖ ≤ k_1    for all 0 ≤ m ≤ n with t_n ≤ T .

The following lemma is proved in the same manner as Theorem 2 of Gear and Tu (1974).

Lemma 3.6  If a method satisfies the stability condition, if S̃_n (of Lemma 3.5) is uniformly bounded, and if the truncation error d_n satisfies ‖d_n‖ ≤ h ε(h) for all n, where ε(h) → 0 as h → 0, then the method converges, and there exist constants k_2 and k_3 such that the global error e_n satisfies

    ‖e_n‖ ≤ k_2 ‖e_0‖ + k_3 ε(h)    for 0 ≤ t_n ≤ T .

Since each formula is of order ≥ 1, the hypothesis on d_m is satisfied. Thus, establishing the stability condition will prove convergence. Similarly, by Theorem 1a of Gear and Tu (1974), the stability condition will imply stability.

Lemma 3.7  If a method satisfies the stability condition, and S̃_n (of Lemma 3.5) satisfies ‖S̃_n‖ ≤ k < ∞ for all 0 ≤ t_n ≤ T, then the method is stable.

The matrix S̄ depends only on the formula F_i used; it is the amplification matrix for a constant stepsize method applied to y' = 0 and may be denoted S̄_i. Since the formulas are exact for y' = 0, y(0) = 1, S̄_i has an eigenvector x_1 corresponding to the eigenvalue 1, with ones in the entries corresponding to the values y_{n-i} in (Y_m, ..., Y_{m-p})^T or T_n (depending on the technique) and zeroes elsewhere. The formulas are strongly stable at hλ = 0, so the other eigenvalues of S̄_i are less than 1 in magnitude. The following lemma, which is a special case of Lemma 2.1 of Gear and Watanabe (1974), shows that products of the S̄_i can be bounded as long as there are enough factors of the same index side by side.

Lemma 3.8  Let {S_i} be a set of matrices with the following properties:
1) Each S_i has one simple eigenvalue equal to 1 with the same eigenvector x_1, and all other eigenvalues are less than 1 in magnitude.
2) If Q_i is such that its columns have norm 1 and Q_i^{-1} S_i Q_i = J_i, the Jordan normal form of S_i (with the first column of Q_i chosen to be x_1), then there exist constants C_0, C_1 such that ‖Q_i^{-1}‖ ≤ C_0 and ‖Q_i‖ ≤ C_1 for all i.
Then for any constant K > C_0 C_1 there exists a set of integers {P_i} such that

    ‖ Π_{j=1}^{N} S_{i_j}^{q_j} ‖ ≤ K    for all N ≥ 1

whenever q_j ≥ P_{i_j}.

This establishes Theorem 3.4.  Q.E.D.

We note that the hypotheses of the theorem are satisfied for all the methods in the examples of this paper. Thus, for sufficiently infrequent formula changes and small stepsize changes, the methods are stable and convergent.
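Lemma 3.8 is the step of the proof that is easiest to see numerically. The sketch below builds a few hypothetical matrices sharing the eigenvector x_1 for the simple eigenvalue 1, with all other eigenvalues inside the unit circle, and accumulates a long product in which each matrix is held for several consecutive steps. The matrices, the repetition count q, and the random sequence of "formula changes" are illustrative choices, not amplification matrices of the formulas in this thesis.

    import numpy as np

    rng = np.random.default_rng(1)
    x1 = np.ones(3)

    def make_S(seed):
        """A 3x3 matrix with simple eigenvalue 1, eigenvector x1, others < 1."""
        r = np.random.default_rng(seed)
        V = np.column_stack([x1, r.standard_normal((3, 2))])   # first eigenvector is x1
        lam = np.diag([1.0, 0.6*r.random(), 0.6*r.random()])   # remaining eigenvalues < 1
        return V @ lam @ np.linalg.inv(V)

    S_list = [make_S(k) for k in range(4)]

    q = 5                                    # steps taken before each formula change
    P = np.eye(3)
    for step in range(200):
        S = S_list[rng.integers(len(S_list))]
        P = np.linalg.matrix_power(S, q) @ P
    print("norm of the accumulated product:", np.linalg.norm(P))

With enough consecutive repetitions each factor is close to the projection onto the common eigenvector x_1, and products of such projections do not grow; that is the content of the lemma, and for these mild examples a handful of repetitions already keeps the product norm modest.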
4. QUADRATURE METHODS

Methods which solve equation (1.6) in practice use a quadrature formula to estimate the integral. We choose the weights w_{ij} and abscissas γ_{ij} so that the quadrature formula

    p(t_n + θ_i h) = p(t_{n-1}) + Σ_j w_{ij} f( p(γ_{ij}), γ_{ij} ) ,    i = 0, ..., s-1 ,

integrates polynomials of degree C - 1 exactly.

To find the absolute stability region we apply the method to y' = λy with constant stepsize h. Then

    p(t_n + θ_i h) = p(t_{n-1}) + Σ_j w_{ij} λ p(γ_{ij}) = p(t_{n-1}) + ∫_{t_{n-1}}^{t_n + θ_i h} λ p(ξ) dξ .

By (1.8), and changing the variable of integration to τ = (ξ - t_n)/h,

    Σ_{j,k} φ_{jk}(θ_i) h^k y^(k)_{n+j} / k! = y_{n-1} + h λ Σ_{j,k} (h^k / k!) [ ∫_{-1}^{θ_i} φ_{jk}(τ) dτ ] y^(k)_{n+j} .

But y^(k)_{n+j} = λ^k y_{n+j}, so the preceding formula becomes

    Σ_{j,k} φ_{jk}(θ_i) (μ^k / k!) y_{n+j} = y_{n-1} + Σ_{j,k} (μ^{k+1} / k!) [ ∫_{-1}^{θ_i} φ_{jk}(τ) dτ ] y_{n+j} ,    i = 0, ..., s-1 .        (4.1)

For s > 1 this can be written as a vector difference equation, in which case the discussion of order, stability, and region of absolute stability presented in Chapter 3 applies. The remainder of this chapter will be devoted to the special case s = 1. Then (4.1) has the form

    a_{-r} y_{n-r} + ... + a_0 y_n = 0 ,        (4.2)

where

    a_j = Σ_{k=0}^{c_j - 1} [ φ_{jk}(θ_0) μ^k / k!  -  (μ^{k+1} / k!) ∫_{-1}^{θ_0} φ_{jk}(τ) dτ ]  -  δ_{j,-1} .

Here, as elsewhere, μ = hλ. As before, the region of absolute stability is

    { μ : all roots ξ of π(μ,ξ) = 0 have |ξ| ≤ 1, and |ξ| = 1 implies ξ is a simple root } ,        (4.3)

where

    π(μ,ξ) = a_{-r} + a_{-r+1} ξ + ... + a_0 ξ^r .

The specific methods whose stability regions are shown in this paper are all strongly stable at μ = ∞; that is, the roots of π(∞,ξ) are all less than 1 in magnitude. By continuity of the set of roots of a polynomial as a function of its coefficients, the methods are strongly stable in the exteriors of the locus

    { μ : there exists ξ with |ξ| = 1 such that π(μ,ξ) = 0 } .
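The weights w_{ij} in the quadrature formula at the head of this chapter are fixed entirely by the requirement that polynomials of degree C - 1 be integrated exactly over the interval from t_{n-1} to t_n + θ_i h. The following minimal sketch solves the corresponding moment (Vandermonde) system for one illustrative set of abscissas on an illustrative interval; exactness_weights is a hypothetical helper, not a routine used elsewhere in this thesis.

    import numpy as np

    def exactness_weights(gammas, a, b):
        """Weights w_j such that sum_j w_j q(gamma_j) = integral of q over [a, b]
        for every polynomial q of degree <= len(gammas) - 1."""
        C = len(gammas)
        V = np.vander(gammas, C, increasing=True).T   # row k: gamma_j**k
        moments = np.array([(b**(k+1) - a**(k+1))/(k+1) for k in range(C)])
        return np.linalg.solve(V, moments)

    a, b = 0.0, 1.5                                   # plays the role of [t_{n-1}, t_n + theta_i*h]
    gammas = np.array([0.0, 0.5, 1.0, 1.5])           # hypothetical abscissas
    w = exactness_weights(gammas, a, b)

    q = np.poly1d([2.0, -1.0, 0.5, 3.0])              # cubic test polynomial (degree C-1)
    exact = np.polyint(q)(b) - np.polyint(q)(a)
    print(np.isclose(w @ q(gammas), exact))           # True: exact through degree 3

A separate set of weights of this form would be obtained for each endpoint t_n + θ_i h, i = 0, ..., s-1.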
To define the order of the method, let ŷ_{n+i} = y(t_{n+i}), i = -r, ..., s-1, where y(t) is the exact solution of the ODE (1.1). Let p be defined as in (1.8), and let

    D_n(y) = p(t_n + θ_0 h) - p(t_{n-1}) - Σ_j w_j f( p(γ_j), γ_j )

be the amount by which the exact solution fails to satisfy the method. Then the method is of order k if D_n(y) = O(h^{k+1}) for all solutions y(t) which have at least k + 1 continuous derivatives. As was shown in the collocation case, this definition is equivalent to the general definition stated earlier. By the properties of interpolating polynomials and the Lipschitz property of f,

    p(t_n + θ_0 h) = y(t_n + θ_0 h) + O(h^C) ,        p(t_{n-1}) = y(t_{n-1}) ,

    ∫_{t_{n-1}}^{t_n + θ_0 h} f( p(ξ), ξ ) dξ = ∫_{t_{n-1}}^{t_n + θ_0 h} [ f( y(ξ), ξ ) + O(h^C) ] dξ
                                            = ∫_{t_{n-1}}^{t_n + θ_0 h} f( y(ξ), ξ ) dξ + O(h^{C+1}) .

Because the quadrature formula is assumed to have precision C - 1, the order of the method is at least C - 1. If θ_0 is such that t_n + θ_0 h is an interpolation point, then p(t_n + θ_0 h) = y(t_n + θ_0 h), so the order is at least C.

Two families of stiffly stable formulas are

    I(0 : 1^r ; 2) ,    r = 1, ..., 7 ,        (4.5)
and
    I(0 : 2, 1^{r-1} ; 2) ,    r = 1, ..., 6 .        (4.6)

The stability regions of (4.5) are presented in Fig. 4.1, with the corresponding D's and α's in Table 4.1. The stability regions of (4.6) are presented in Fig. 4.2, with the corresponding D's and α's in Table 4.2. The formulas of (4.5) have the same stability polynomials as Enright's second derivative methods (1974) even though they do not require the calculation of Df. These two families were discovered by Watanabe (personal communication).

The integral formulas

    I(0 : c_0 ; c_1)        (4.7)

when applied to y' = λy give the difference equation

    π_1(μ) y_n = π_0(μ) y_{n-1} ,

where deg(π_1) = c_1 and deg(π_0) = c_0. The order of the method is c_1 + c_0, so

    y(t_n) = [ π_0(μ) / π_1(μ) ] y(t_{n-1}) + O(h^{c_1 + c_0 + 1}) .

The exact solution satisfies y(t_n) = e^μ y(t_{n-1}), so that

    e^μ = π_0(μ) / π_1(μ) + O(μ^{c_0 + c_1 + 1})

as h → 0 with λ fixed. Thus π_0(μ)/π_1(μ) must be the [c_0, c_1] Padé approximant to e^μ (Wall, 1948, pp. 377-378). Thus by results of Birkhoff and Varga (1965) and Ehle (1969), the methods I(0 : c_0 ; c_1) are A-stable if c_1 = c_0 and strongly A-stable if c_1 = c_0 + 1 or c_1 = c_0 + 2.

The formulas (4.7) are useful when the derivatives of f are easy to evaluate, for example when f is a time-independent linear function. In general, however, quadrature formulas are less attractive to implement than the collocation formulas in the multistep case, since for the latter one does not need to calculate quadrature weights and abscissas at each step.

Table 4.1  INTEG1  I(0 : 1^r ; 2)

  r   order      D      α(°)
  1     3               90
  2     4               90
  3     5     -0.103    87.9
  4     6     -0.526    82.0
  5     7     -1.339    73.1
  6     8     -2.728    60.0
  7     9     -5.178    37.7

Table 4.2  INTEG2  I(0 : 2, 1^{r-1} ; 2)

  r   order      D      α(°)
  1     4               90
  2     5     -0.274    86.1
  3     6     -0.888    79.1
  4     7     -1.878    69.3
  5     8     -3.460    55.0
  6     9     -6.157    28.6

[Figures 4.1-4.2: stability regions for the INTEG1 and INTEG2 families, plotted as Im(h*lambda) against Re(h*lambda).]

LIST OF REFERENCES

Birkhoff, G., and Varga, R. S., 1965, "Discretization Errors for Well-Set Cauchy Problems, I," Journal of Mathematics and Physics, vol. 44, pp. 1-23.

Bickart, T. A., and Picel, Z., 1973, "High Order Stiffly Stable Composite Multistep Methods for Numerical Integration of Stiff Differential Equations," BIT, vol. 13, pp. 272-286.

Bjurel, G.; Dahlquist, G.; Lindberg, B.; Linde, S.; and Oden, L., 1970, "Survey of Stiff Ordinary Differential Equations," Report No. NA70.11, Dept. of Information Processing, Royal Inst. of Technology, Stockholm, Sweden.

Cryer, C. W., 1973, "A New Class of Highly-Stable Methods: A_0-stable Methods," BIT, vol. 13, pp. 153-159.

Ehle, B. L., 1968, "High Order A-stable Methods for the Numerical Solution of Systems of Differential Equations," BIT, vol. 8, pp. 276-278.
} 1969, "On Pade Approximations to the Exponential Function and A-stable Methods for the Numerical Solution of Initial Value Problems," Research Report SCRR 2010, Dept. of Applied Analysis and Computer Science, University of Waterloo. Enright, W. H., 1974, "Second Derivative Multistep Methods for Stiff Ordinary Differential Equations," SIAM Journal on Numerical Analysis , vol. 11, pp. 321-331. Gear, C. W. , 1969, "The Automatic Integration of Stiff Ordinary Equations," in_ Morrel, A. J. H., ed., Information Processing , 68 , North Holland Pub. Co., Amsterdam, pp. 187-193. , 1971, Numerical Initial Value Problems in Ordinary Differential Equations , Prentice Hall, Inc., Englewood Cliffs, N. J. 9 and Tu, K. W. , 1974, "The Effect of Variable Mesh Size on the Stability of Multistep Methods," SIAM Journal on Numerical Analysis , vol. 11, pp. 1025-1043. t and Watanabe, D. S., 1974, "Stability and Convergence of Variable Order Multistep Methods," SIAM Journal on Numerical Analysis, vol. 11, pp. 1044-1058. Henrici, P., 1962, Discrete Variable Methods in Ordinary Differential Equations , John Wiley and Sons, Inc., New York. Isaacson, E., and Keller, H. B. , 1966, Analysis of Numerical Methods , John Wiley and Sons, Inc., New York. Jain, M. K., and Srivastava, V. K., 1970, "High Order Stiffly Stable Methods for Ordinary Differential Equations," Dept. of Computer Report No. 394, University of Illinois, Urbana, Illinois. 80 Jeltsch, R., 1976, "Stiff Stability and Its Relation to A - and A(0)- Stability," SIAM Journal on Numerical Analysis , vol. 13, pp. 8-17. Wall, H. S., 1948, Analytic Theory of Continued Fractions , van Nostrand, New York. 81 APPENDIX I LISTING OF CODE FOR COLLOC1 FAMILY OF METHODS 82 COLLOC: PROC(N.T.Y.H.HMIN.HMAX.EPS.K.MAXDER.FAILFLAG.FAIL) REOROERi DCL (N.K.MAXDER.FAILFL AG) FIXED BIN. / < T.Yf*. *). H.HMIN.HMAX.EPS) FLOAT ( I 6 ) .F AI L LABEL; /* N NO. OF FIRST ORCER DIFFERENTIAL EQUATIONS T THE INDEPENDENT VARIABLE V CONTAINS THE DEPENDENT VARIABLES AND THEIR SCALED DERIVATIVES. V(J.I) CONTAINS THE J-TH DERIVATIVE OF r( I ) MULTIPLIED BV H** J /F ACTORI AL < J ) . WHERE H IS THE CURRENT STEPSIZE. ONLY YlO.I) NEED BE PROVIDED BY THE CALLING PROGRAM ON INITIAL ENTRY, H THE STEPSIZE TO BE ATTEMPTED ON THE NEXT STEP. H MAY BE ADJUSTED UP OR DOWN BV THE PROGRAM AFTER TAKING THE STEP HMIN MINIMUM STEPSIZE THAT CAN BE USED HMAX MAXIMUM STEPSIZE THAT CAN BE USED EPS ERROR TEST CONSTANT. SINGLE STEP ERROR ESTIMATES MUST BE LESS THAN THIS IN NORM. THE STEPSIZE AND/OR ORDER IS AOJUSTED TO THIS END K ORDER TO BE USED FOR ATTEMPTING NEXT STEP. IF K<1 AN INITIAL STEP WILL BE ATTEMPTED. OTHERWISE THE NEXT STEP WILL CONTINUE FROM THE LAST. UPON RETURN K HOLDS CURRENT ORDER AND INDEX OF HIGHEST AVAILABLE DERIVATIVE MAXDER THE MAXIMUM DERIVATIVE THAT CAN BE USED. EQUAL TO THE HIGHEST ORDER. SHOULD BE <= 9 FAILFLAG A COMPLETION CODE WITH THE FOLLOWING MEANINGS: STEP WAS SUCCESSFUL FAIL A LABEL TO WHICH CONTROL WILL BE TRANSFERED IN CASE A SUCCESSFUL STEP CANNOT BE TAKEN */ /* A PROCEDURE DI FF UN ( T.H • Y • N) IS REQUIRED WHICH SETS Y( 1 .*) = H*F(T,Y<0.*) ) WHERE THE DIFFERENTIAL EON BEING SOLVED IS V»»F(T.Y) *• DCL (X(N).D. ALPHA. S AVE( tMA XDER .N ) . FI N M FLOAT< 1 6) . < ALPHAD.DD.ALPHAU.DU, ALPHAM) FLOAT ( 16 ). H_OLD STATIC INITIO) FL0ATC16). ERRCOEF(2:9.3 ) FL0ATI16) STATIC INIT{ *• 5E0. 1.48E0. 2.50E0. .971E0. .4**EO. 2.96E0. .392E0. .219E0, 6.76E0. .233E0. .147E0. 22.7E0. .161E0. .110E0. 99.9E0. .121E0. .0869E0. 543. 5E0. .0955E0. .0716E0. 3516. EO. .0783E0. 
* }• INITIALIZE STATIC FIXED BIN INITIO). /* INITIALIZED TO REFRESH H, =1 IF OLD H EXISTS */ (I.J.STEP_CT STATIC IN IT ( ) • FFLAG 1FIXED BIN; 83 IF K<2 THEN DO ; CALL DIFFUN< T. M.Y.N) ; do 1=1 to n; save(o.i) = rco.i) ♦ hmin*yii. r j/h; eno; CALL 0IFFUN(T+HMIN,H.SAVE.NI ; OO 1=1, n; Y{2.1) = (SAVE! 1. I )-Y( 1. I) )*H/HMIN/2E0; END; k=2; h_olo=oeo: initialized ; end: IF h -= H_OLD C H_OLD -*a 0E0 THEN DO; ALPHA ** H/H_OLO; CALL change_h; initialize = 0; end: h_old a h; do 1 = 1 to n; DO J=0 TO k': save< j.i ) = y( j.i) ; end; end; try: call difstep*x( n; end; d = s0rt(d)*errc0ef(k,2» : alpha = (eps/d)**( 1e0/(k+1 ) ) *.9e0; if d > eps then do; /* step faileo — repeat */ if h=hmin then go to fi; T = t-h; do i=i to n; do j=o to k; y< j. i) = save! j.i i ; end; end; h = max(h*alpha.hmin> ; alpha = h/h__old; call change_h; go to try; end; else if step_ct<0 then do: /*too soon to change k or h */ STEP_CT = STEP_CT*i; h_old =» h; end; else do; /*calculate step £ order changes */ if k>2 then do; /* calc alpha to lower k */ dd = oeo; DO 1=1 to n; dd = dd ♦ y(k. i)*y(k.i ); end; dd = sort (dd) *errcoef(k. 1); alphad = ceps/do)**< 1e0/k)*.9e0; eno; else alphao=oeo; if k*.9e0: end; else alphau = oeo; alpham = max( alphad. alpha. alphau) ; 84 IF ALPHAM>l.lEO THEN DO:/* CHANGE STEPSIZE t/| ORDER *• IF ALPHAM=ALPHAO THEN K=K-i; ELSE IF ALPHAM = ALPHAU THEN DO; oo 1=1 to n; YCK.I) = (V(K-l,I)-(H/H_OLO)**(K-l)« SAVEIK-1. I) )/x; eno; end; m_olo = h; h = min(h*alpham.hmax) » end; else oo; /* change too small to bother with */ h_olo = h; end; end; IF STEP_CT>=0 6 Khmin then do; /* intercept nonlina failure *• t = t-h; oo 1=1 to n; do j=o to k; y( j.i ) = savei j.i> ; end; end; h = maxih/ 2e0.hmin1; alpha = h/h_olo; call change_h; initialize = 0; GO to try; end: else go to f2; 85 OIFSTEPCPROCCN.T .Y.H.K ) DCL (N.K) FIXED reorder; BIN.(T.H,Y<*.4) ) FLOAT! 16) ; /* AT ENTRY Y(J,I) CONTAINS AN APPROXIMATION TO THE J-TH DERIVATIVE OF Yd) TIMES H**J/F ACTOR I AL ( J ) . CORRESPONDING TO TIME T. THE PROCEDURE TAKES A STEP OF SIZE H AT ORDER K AND RETURNS UPDATED VALUES OF T,Y. IN ADD I T ION. X ( 1 : N » IS RETURNED WITH THE DIFFERENCES BETWEEN CORRECTED AND PREDICTED VALUES OF V AT NEW TIME T<= OLD T«-H), I.E. X=YC-YP */ DCL (YTHETAIO: X .N). YTP<0:i.N|, YPRED I CTEDI : 1 . N ) . DEL Y( : 1 . N ) I FLOATC 16) . ( I.J.L) FIXED BIN. TTI2I9. 2:9.0: 1) FLOATU6) STATIC INITC /* ORDER /* 2 */ -I -1.000000COOOOOOOOEO. (14) OEO. 1 */ 1.000000 00 00 0000 OEO. /* ORDER 3 */ /* -7/4 -3/4 -1 .750 00 000000 00 00E0. -7.5 00 00 000 000 00 00E-1. (12) OEO. 3/2 1/2 */ 1 .S00000OO00OOOOOE0. 5.00 00 000 0000C00 0E-1 . /* ORDER 4 */ /* -85/36 -5/3 -1 1/36 -2.361 1111111111 11E0. -1 .666666666666667E0. -3.0 5555 5555555556E-1. (10) EO • 11/6 1 1/6 */ 1.83 33 333 33333333E0. 1.00 00 00000000000EO. 1 • 666666666666667E-1 • /* ORDER 5 */ /* -415/144 -755/288 -1 19/144 -25/288 -2. 88 194 44444444 44 EO. -2. 621 52 7777777 778E0. -8. 2 63 88888 8888889 E-l. -8.6 8C55555 5555556E-2. (8) OEO. 25/12 35/24 5/12 1/24 */ 2. 08 33333 3333333 3E0. 1. 45833333333333 3E0» 4. 166666666666667E-1, 4.166666666666667E-2. /* ORDER 6 */ /* -12019/3600 -343/96 -2149/1440 - 133/480 -137/7200 -3.33861 11111111 11E0. -3.5 7291666 6666667E0. -1 .4923611 1 111 1 1 11E0. -2.770833333333333E-1 • -1 .902777777777778E-2. (6) OEO. 137/60 15/8 17/24 1/8 1/120 */ 2. 2833333 333 33333E0. 1.8 750 00 00 00 00 OEO. 7. 08 333333333333 3E-1. 1. 2500000000 00 000E-1. 8. 333333333333333E-3. 
86 /* ORDER 7 */ /* -13489/3600 -16219/3600 -6503/2880 -1631/2880 -1 009/14400 -49/14400 -3. 74694 44 4 4 44 44 44 EO. -4.5 052 7777777 7 778E0. -2.25798611 111 1 I HE 0. -5. 6 63 1 944 4 444 4 4 44 E- I • -7.0 06 94 44 44444444E-2. -3.4 02777777777778E-3. (4) 0E0. 49/20 203/90 49/48 35/144 7/24 1/720 */ 2 • 45 00 00 00 00 00 00 0E0. 2. 25555555 55 55556E0. 1. 02 08 333 333 33 33 3 EO, 2.4 30 555555555556E-1. 2.91 66666 6666666 7E -2. 1 .36888888 88 88 889E-3. /* ORDER 8 */ /* -726301/176400 -9743/1800 -31 1821/1 00800 -17/18 -8069/50400 -179/12600 -121/235200 -4.1 17352607709750E0. -5.41277777 777 7778E0. -3. 093 46 23 01587 30 1E0. -9. 4 44 44 444 44444 44 E-l • -1 .6 0099 206 349 20 63E-1 • -I .4 20 63 492 06349 20 E-2. -5. 1445578 2 31 29251 E-4. (2) OEO. 363/140 469/180 967/720 7/18 23/360 t/180 1/5040 */ 2.59 28571 428571 43E0. 2. 60 55555 5 555555 6E0. 1 .34 3055555555556E0. 3. 88 88 8888 88 8888 9E-1, 6. 38 88888888 88889E-2. 5.555555555555556E-3. 1.984 1269 84 I 2698 4 E-4. /* ORDER 9 */ /* -3144919/705600 -17763211/2822400 -5 34731/1344 00 -753029/537600 -19637/67200 -1 379/38400 -6779/2822400 -761/11289600 -4.457084750566893E0. -6. 29365469 1043084 EO. -3. 97865 327 38095 24E0. -1 .400723586309524E0. -2.9 22 1726190476 19E-1. -3.591 145833333333E-2. -2.401 856575963719E-3. -6.740 7171201814 05E-5. 761/280 29531/10080 267/160 1069/1920 9/80 13/960 1/1120 1/40320 */ 2.71 78571 42 8571 4 3E0. 2. 9296626984 12698E0. 1 .668750000000000EO. 5.56 77 08333333333E-1 • 1.12 50 00000000 00 OE-1. 1 . 3541 666 666 66 66 7E -2. 8. 92 85 71 4 2 85 71 42 8E -4, 2.480158730158730E-5 )• /* PREDICT */ DO L=0 TO K-i; DO J=»K-1 BY -1 TO L! do 1=1 to n; y = ypredictedi !•*>; do j=p to k; do 1=1 to n; YTP(O.I) a YTP(O.I) ♦ YfJ.I); YTP(l.I) = YTP(l.I) ♦ J*Y(J.I); end; end: t = t+h; x{*) = oeo: call nonlinaco.n.eps • max ( 4*n • 1 6) .fa2 .fflag .computef) : DO 1=1 TO n; do j=2 to k; y(j.i> = y ♦ ttck, j,0)*delyio»i ) + TT(K. J, I )*DELYt 1, I J; end; eno; return; computef: prociequs.floc) reorder; dcl ecus fixed bin.floc label; dcl (i.j) fixed oin,t_theta(2:9.0:l.0:i ) fl0ati16) static init( /* ORDER 2 */ /* -2 0.000000000 00 0000 EO. -2. 00OOO0OO0OOOOOE0. 2 3 */ 2. 000000 000000 00 OEO. 3. 00 00 00 000000 OEO. /* ORDER 3 */ /* -3/2 -23/4 -I .500000000000000EO. -5. 750 00 00 0000 000 EO. 3 11/2 */ 3. 000000 00000 00 OEO. 5. 50 00 000 00 00 00 OEO. /* ORDER 4 */ /* -10/3 -197/18 -3. 333333333333333E0. -1 .09444*44 4 44 44 44 El. 4 25/3 */ 4. 00 00000 000 00 00 OEO. 8.333333333333333E0. /* ORDER 5 */ /* -65/12 -2501/144 -5. 41 666666666 6667E0. -1.736805555555556E1. 5 137/12 */ 5. 00 00000 00 0000 OEO. 1. 141 6666 666 6666 7E1 • 88 /* ORDER 6 */ /* -77/10 -4973/200 -7. 7 0000 00 00000000 EO. -2.4 8650 000 000 0000E1. 6 14 7/10 */ 6. 0000 00 0000 0000 OEO. 1 . 4 700 000000000 0E1. /* ORDER 7 */ /* -203/20 -13327/400 -1.015 00 00000000 00 Eli -3.331 7500 0000 0000E1 • 7 363/20 *• 7. 00 00 0000 0000 00 OEO. 1 .81 50000 0000000 0E1. /* ORDER 8 */ /* -446/35 -208903/4900 -1 .2 74 2S571428S714EL -4.263326530612245E1 • 8 761/35 */ 8. 00 00 000 000 0000 OEO* 2. 1 742 85 71 428571 4E1. /♦ ORDER 9 */ /* -4329/280 -4134649/78400 - 1.546 07 I 428571 4 28E1 t -5. 2 73 7869897959 I8EI1 ON OVERFLOW GOTO FLOC; 7129/280 */ 9. 00 0000 00 000000 OEO. 2.546071428571428EI ) do 1=1 to n; Y(O.l) = YPREDlCTEOiO.il ♦ X(I|. end: call diffun(t.h.y,n»; DO 1=1 TO n; DELY(0. I) = X( I >; dely(l.i) = y( 1 , i )-ypredicted( 1. i) ; end; do 1=1 to n: ytheta(o.i) = ytpio.i) ♦ t_thetac k . . 
) *del y< * i ) ♦ T_THETA(K.O.i )*DELY<1 ,1); end; call diffun(t+h.h.ytheta.n|; do 1=1 to n; F(I) = + T_THETA(K»1 .0>*DELYI O.I > ♦ T_THETA(K. 1. 1)*DELY< 1. I) - YTHETAIl.II )/MAX(lEO.H) ; end; end computef; end difstep; 89 nonl i na:proc< equs. order ,tol , maxf .fa i l. type. computef ) reorder ; /* basic algorithm from comp j vol 12 no 4 nov 1969 pp 406-a08 *• /* by c.g.broyden */ /* this procedure solves the nonlinear simultaneous equations f( i i (x< 1 ) ,x(2) x(n)) = 0* 1=1.2. ••• .n. it assumes that two n vectors. x and f. and a procedure computef(equs.fail) have already been declared. using the contents of array x as data the procedure computef should calculate the appropriate residuals ano assign these to the array f. before calling nonlina initial values of the elements of x should have been assigned, of the two formal parameters of computef the first. an integer* indicates which set of nonlinear equations is to be solved. i.e. which particular mapping of x onto f should be carried out by computef this enables any one of an arbitrary number of sets of nonlinear equations to be solveo. the second parameter. fail. isa label to which control should be transferred in the event of failure during execution of computef. the formal parameters of nonlina are as follows: (1) equsi integer) • fulfils the same function as the first formal parameter of computef. (2) order! integer ). the number of equations and unknowns in the set of equations selected bv equs. (3) tol(float( 16) ). the largest acceptable value of abs(f)**2. (4) maxfifixed bin), the maximum acceptable number of evaluations OF F. * (5) FAIL(LABEL), THE LABEL OF THE FAILURE EXIT. (6) TYPE(FIXED BIN). SEE BELOW. THE INTEGER TYPE IS A GUIDE TO THE KIND OF FAILURE THAT MAY HAVE OCCURRED. IT ASSUMES THE FOLLOWING VALUES: 0. NO FAILURE. 1. MAXIMUM NUMBER OF FUNCTION EVALUATIONS EXCEEDED. POSSIBLE CAUSES: TOL OR MAXF TOO SMALL. PROBLEM TOO NONLINEAR. INITIAL MATRIX TOO INACCURATE. POSSIBLE ACTION: INSPECT ABS(F) AND IF SMALL INCREASE EITHER MAXF OR TOL. IF ABS(F) LARGE USE NONLINB. 2. DIVISIGN BY ZERO WHILE UPDATING H, STORE ELEMENTS OF F IN DIFFERENT ORDER AND IF THIS FAILS USE NONLINB. 3. FAILURE IN COMPUTEF. USE N0NLIN8. 4. DIVISION BY ZERO WHILE INITIALIZING H. COMPUTE ELEMENTS OF F IN DIFFERENT ORDER. 5. FAILURE IN COMPUTEF WHILE INITIALISING H. CHOOSE IMPROVED INITIAL ESTIMATE OF SOLUTION. 6. OVERFLOW. IMPROVE INITIAL ESTIMATE *• DCL {EQUS, ORDER. MAXF. TYPE) FIXEO BIN; DCL COMPUTEF ENTRYIFIXEO BIN. LABEL)! DCL TOL FL0ATI16). FAIL LABEL! DCL (SA.SB.SX) FL0ATI16); DCL II. J.K.FCOUNT) FIXED BIN! DCL (OLD_ORDER) STATIC INITIO) FIXED BIN! /* OLD_ORDER HOLDS DIMENSIONS OF CURRENT H */ DCL <: ocl cfi.f2) label; DO 1=1 to order; xc i > = x(i ) ♦ pc II ; vci) = F(i 1 ; end; call computef(equs.fl j ; fcount = fcount+i; oo 1=1 to order; z(i) = fcii-vci); end; sa=o: DO 1=1 to order: sb=o; DO J=l to order; SB = SB ♦ HC I, J)*Z< j>; end; V( I ) = SB-P(l); SA = SA + SB*P(I); END /+CALCULATION OF HZ-P AND PHY ♦/ ; IF SA=0 THEN DO; DO 1=1 to order; sa = sa*f( i )*f(i| ; end; if sa 1e-70 then go to f2 ; end; return; /* no modification of h */ end; DO J=l to order; sb = o; do 1=1 to order; SB = SB ♦ P( I>*HC It j> : end; sb = sb/sa; do 1=1 to order: hci.j) = h( i . 
j)-sa*v( i ); end; /^modification of h */ end; end step; if order -.= clo_order then do; old_order = order; FREE H; allocate h; initialize =o; end; restart: type = o; call ccmputef(equs.f5) ; if initialize = then do; /* initialize iteration matrix h */ do 1=1 to order; pci) = o: hc i »i> = ieo; do j=i+i to order; hc i* j).hc j» i ) = o; end: end; do k=i to order: pck) = eps*maxceps.abscvco.k ) ) |;call stepcf5.fa); pck> = oeo; end; end; FCOUNT = i ; do 1=1 to order; sa = o; do j=i to order; SA = SA-HC I. j)*fc j) ; end; pc I ) » sa; END /* CALCULATION OF STEP VECTOR P *•; 91 repeat: call step(F3.F2); sb = o; sx=o; do 1=1 to order; sa = o; do j=i to order; sa = sa-h(i,j)*f(j); eno: pc i ) = sa; sb = max (sb.abs(sa) ) ; sx = max(sx.abs( x( i) 1 ) ; end; sa=o: do 1 = 1 to order; sa *= sa ♦ f(d*f(d; end: if dflag>=10 & t>40eo then if sa <1e-10& sb/sx < .01e0 then goto exit; if fcount >= maxf then goto fl s goto repeat; f6: type = 6; goto fail; f5: type=5; goto fail; fa; type=4; goto fail; F3: type=3: goto fail; F2: type=?; goto fail; fi: type=i; if initialize -•= then do ; initi alize=0; X(*) = oeo; go to restart; end; else go to fail; exit: IF FCOUNT < MAXF/2 THEN I NIT I ALI ZE=1 ; ELSE INIT I AL I ZE=0 ; end nonlina; END COLLOC; 92 APPENDIX II LISTING OF CODE TO FIND AND PLOT STABILITY REGIONS 93 INPUT : PROC; C(O). .... C(S-l)) 1 *• dcl ( rr.ss.rs.cf *> > fixed bin, method char<*); dcl(R_max,s_max) fixed bin; dcl c_max fixed bin,theta<*) float( 16) .descr char(400)varying; dcl string chari400) varying, char char ( 1 ) • template char(aoo) varying. buffer char! 80) varying, i fixed decimal. (j, k,l,m,num,r, s| fixed bin; read: get listidescr); PUT PAGE EDIT(DESCR) ( a) ; METHOD = descr; get string(descr) data(r,s); string=«»: /*in case of error in q.r.s */ if r<0 | s<1 | r>r_max | s>s_max then call err0ri5); rr=r; ss=s; /*cant get data with parameters ♦• rs = ceil(rxs) ; j= index (descr, • ( • >; if j = then call errordj; string=substr(descr. j) | | • s« : /* $ TO INSURE STRING NONEMPTY */ /* GET THETA( I) */ DO 1 = end; IF CHAR THEN CALL ERRORC2); TO S-l ; IF SUBSTRISTRING. 1 . I ) = »: STRING = SUBSTR( STRING, 2) ; K = VERIFYISTRING, • 1 23 456789 . E+- • } ; IF K<2 THEN CALL ERR0RC3); BUFFER = SUBSTR(STRING,1 ,K-1) ; STRING = SUBSTR(STRING,K) ; CHAR = SUBSTRISTRING.l ,1 ); IF CHAR -.= •,• t CHAR -.= •:• THEN CALL ERR0RI4); GET STRINGIBUFFER | | • •• ) L 1ST ( THET A< I ) ) ; -.= •:• THEN CALL ERROR<6); /* GET C(J) */ string = substr( string, 2) ; /* check syntax */ templ ate=translate( string, • dddddddddd* • , • 1 23456 789d* ) ; char = substritemplate. 1 . 1) ; m=o; /*no ; seen */ l=0; /*number of c(i)s */ num=0; /*1=seen num; -i=seen comma or semi*/ 94 next_num: oo while1 then call errorcs); IF L-»=R THEN CALL ERR0RI7); IF NUM<0 THEN CALL ERROR(II); num=-i; end; else if char=*d* then do; if num>0 then call error(a); NUM=1 ; L=L+1 : oo whilec char=*d* >; call bump; eno; end; else if char=*.* then 00; IF NUM<1 THEN CALL ERROR(ll); NUM=-1 ; CALL bump; end; else if char=*)* then 00; if m=0 then call err0r(8); if l-.=r + s then call err0r<9); if r=0 then string=translate(string. • •«••;)•}; else string=translate( str ing. *.♦ • . •;)• ); get string(string) list((ccj) do j=-r to s- i ) ) ; C_MAX = C(-R) ; DO I=-R+l TO S-i; C_MAX = MA X ( C_M AX ,C ( I ) ) ; END; return; end; . else call ERR0R(4); goto next_num; bump: proc; t fmpl a t e= sub str( template. 
2) ; CHAR=SUBSTR{ TEMPLATE.l tl) • end; error: proccmess_number) ; dcl mess_number fixed bin; dcl message(12) static char(30» varying init« • NO ( • • • • TOO FEW THETAS. 1 • •VACUOUS THETA.'. •IMPROPER DELIMITER.*. 'BAD VALUE FOR O. R. OR S.». •TOO MANY THETAS. •» •WRONG NUMBER OF PAST C(I)S.». •WRONG NUMBER OF ;•• •WRONG NUMBER OF FUTURE C(IlS.*t *. •VACUOUS C< I ) •* • •C_MAX TOO LARGE* l; PUT SKIP(2) EDIT{ MESSAGE! MESS_NUMBER|. • INPUT I GNOREO. * ) ( A ) ; PUT SKIPI2) EDIT( STRING)! A» ; GOTO read; end; end input; 95 stabpol: proc(r.s.rs.c.c_max.theta.p.mu_max.xi_max.next_method) ; DCL THETAI*) FLOAT! 16>. /*OFF-STEP COLLOCATION POINTS */ . /* METHOD INTERPOLATES TO (C(I)-DST DERIVATIVE AT XIN + I) */ RS. /* NUMBER OF PAST BLOCKS*/ C_MAX. /*MAXIMUM C(I) */ MU_MAX. /*DEGREE OF STABILITY POLY IN MU */ XI_MAX /*DEGREE OF STABILITY POLY IN XI */ ) FIXED BIN. /♦PERTINENT DIMENSIONS OF C ARE <-R:S-l> */ P(*.*) FLOAT! 16) ♦ /*HOLDS COFS OF STABILITY POLY*/ /* STABILITY PLOYNOMIAL IS SUM P ( I , J ) *( MU ** I ) *( X I ** J ) 1=0. .... MU_MAX; J=0. .... XI_MAX */ NEXT_METHOD LABEL; DCL N FIXED BIN; /* DIMENSION OF INTERPOLATING POLYS*/ /* ELEMENT(I.J) IS COEF OF ( X I **I ) ( MU** J ) */ dcl l a(o :s-i .o:s-i >. 2 eli o:c_max.o:rs> FL0AT<16>. poly(0:s*c_max.0:s*rs» fl0ati16) ; a = oeo; poly = oeo; call set_up_a(r .s.c.rs. n) ; call find_det_a(r.s.rs.c_max,p,mu_max.xi_max) ; return; set_up_a: procir.s.c.rs.n); dcl i r, s,c( *) .rs.n) fixed bin; dcl ( i « j.l.k, jj , index, x(0:50) ) fixed bin; dcl (taei 0:50) ,v.d) floatu6): dcl fac(0:io) fl0ati16) i ni t( 1 eo . 1 eo .2 eo .6e0 . 2ae0 • 1 20e0 • 720E0.504 0E0.4 032 0E0.362 880E0.362a800E0> ; if c_max>10 then do; put skip edit('c_max too large.') (a); goto next_methoo; end; n=-i; /*find degree of interpolating poly */ DO I=-R TO s-i; N=N+C(I ) ; end; if n>50 then do ; put skip edit! 'degree of interpolating •. •polynomial too large • ) ( 2 a); GOTO next_method; end; 96 INOEX»0; /* SET UP ORDINATES FOR DIVIDED • DIFFERENCE TABLE *• DO I=-R TO s-i; DO J = l TO C( n; x(index) = ii index=index+1 i end; end; DO J = -R to s-i; K=RS+FLOOR( J/S) ; JJ=MOD( J.S) ; DO L = TO C( J)-l : CALL TABLE( J.L.N.X | ; DO 1 = TO S-l ; CALL DERIVSCTHETAd ) .V.D ); /* LET(A(I,J) = A( I« J)+P( 1 )/FAC(l_) *MU**L -P< 0)/FAC = A( t. JJ).EL(L.K) +D/FACCL»; A(I . JJ) .ELCL+1 .Kl = A(I.JJ).EL(L+1.K) - V/FAC(Ll; end; end; end; return; table: proc< ii # jj.n.x) ; dcl f i. j. ii • jj.n.xf*)) fixed bi n. ( cor * last » fl0atc16); /* computes table of divided differences to characterize phi(iitjj). at return, t ab( j ) , j = . . . . , n is a formac array which contains the lower rising diagonal of the table with the ordinate x(0) at the bottom */ do i=n by -1 to ; DO J=0 TO n-i: IF X(II=X(I+J) i. J«JJ fc X(I)=II THEN CUR=i; ELSE IF X(I)=X THEN CUR=0; ELSE CUR=(LAST-TAB( J-l) )/ ( x( i*j)-x( i )> ; IF J = cur; end; end; end table; 97 derivs: proc ,v .d. last. cur) float; /♦USING CURRENT INSTANCE OF TAB COMPUTE ASSOCIATED POLYNOMIAL AND I DERIVATIVE EVALUATED AT XX */ /* V IS VALUE AND D IS DERIVATIVE */ W(0)=1 ;v=tab ; DO J=l TO n; w(j) = w( j-i )*(xx-xcj-i )) ; v = v + tab( j)*w( j) ; end; LAST = W( i) ; w< i ) = w( o) ; D = W(l )**TAB(1 ) ; DO J = 2 to n; cur = w< j-i )*« xx-x( j-i) >+last; LAST = W(J) ; w(j) = cur: d = d+vm j)*tab( j) ; end; end derivs: end set up a; FIND_DET_A: PROC(R, s.rs,c_max.p,mu_max.xi_max) ; dcl (r.s,rs.c_/4ax) fixed bin.p(*.*) float{16). ( mu_max ,xi_max) fixed bin; dcli i . 
j.k, jmax) fixed bin; /* FORM F>OLy=DET{A) */ /* DO 1=0 TO s-i; DO J=0 TO s-i ; PUT SKIPI2) EDI T(»A(« .I.J,» ) = ») FL0ATC16) IF Sal THEN DO 1=0 TO C_MAX; DO J = TO rs: POLY CI. J) = end ;end ; else if s=2 then call cross c . . 1 . pol y > ; else if s=3 then begin; dcl (cof0 .cof1.cof2) ( 0:2*c_max.o call crossc i . 1 . 2.cof0) ; call crossc 1 .0.2. cof1 ); call crossc 1 .0. i. c0f2); call sumc poly.0.0.cof0 j ; call difcpoly.l ,0*.cofi 1 ; call sumcpoly.2.0.cof2) ; end: else if s=4 then begin; DCL CCl.C2.C3t C4.C5.C6 ) C : 2*C_M A X . 0: 2*R S) FL0AT(16). ( COFO.COFI ,C0F2 ,C0F3) CO : 3*C_MAX » : 3*RS > FLO AT { 16 » ; CALL CROSS(5.0,I ,C1); CALL CROSSC2.0.2.C2) ; CALL CROSS(2.0.3.C3); CALL CROSS! 2, 1 ,2.C4>; CALL CROSS(2.1 .3.C5 ) ; CALL CR0SS(2.2.3,C6) ; cofo=oeo; cofi=oeo; COF2=0E0; COF3=0E0; CALL SUMCCOF0.1 CALL DIFCCOF0.2 CALL SUMCCOF0.3 CALL DIFCCOF1.0 CALL SUMCCOF1.2 CALL DIFCCOF1.3 CALL SUM(COF2.0 CALL OIF(COF2.1 CALL SUM(COF2.3 CALL DIFCCOF3.0 CALL SUM(C0F3.1 CALL DIFCC0F3.2 ELSE do; CALL CALL CALL CALL end; PUT SUMCPOLY.O SUM! POLY. 1 SUM(POLY,2 SUMIPOLY.3 l .C6) ; 1 .C5> ; l .C4) ; l .C6); i pC31 ; l »C2l ; l .cs); l .C3) ; l .ci ) ; i ,C4»; l tC2i ; l ,ci) ; o.cofo) ; o.cofi) ; 0.C0F2) ; 0.COF3) ; SKIPI2) E0IT('S>4 NOT I MPL I MENTED • ) (A) ; GO TO next_method; end; XI_MAX a S*RS; MU_MAX = S*C_MAX; DO 1=0 TO MU_MAX; DO J=0 TO XI_MAX; pci .j) = polyci .ji ; end; end; PUT SKIPC2) EDITC»MU_MAX * ».MU_MAX, C A.FC2) .A.FC2) > ; put skipc2); do k=o by 5 to xi_max; j.max = minck+4.xi_max) ; PUT SKIPC3) EDITC (• XI** • , J DO J=K TO J_MAX ) ) C X C 6 ) , C J_MAX-K+ 1 ) CX(8) ,A.F(2) ,XC8) )) ; XI_MAX >XI MAX} 99 PUT SKIP<2) EOIT( CHU**' . I, (P( I, J) DO J=K TO J.HAX} 00 1=0 TO MU_MAX) ) (A,F(2) .( J_MAX-K«-I ) E( 22 ,14 ) .SKIP) ; end; cross: proc(kk ,i i, jj, result) ; /* form determinant result = a( i i ,kk) *a ( jj*kk+1 )-a( jj.kk)*a( i i.kk+1 ) dcl resultc*,*) float < 1 6 ) * ( 1 1 . j j » kk ) fixed bin; dcl < i . j.k,l ,m1 .m2.kkp1 . 12, j2) fixed bin; KKP1=KK+1 ; DO L=0 TO 2*RS; Ml =MAX( OiL-RS) ; M2 = MIN* a( jj.kkp1 ).el(i2.j2) - a( jj,kk).el( i . j>* a( i i.kkp1 ) .el( 12* j2) ; end; end; end; end ; end cross; sum: proc< aa, i i * jj,o ; dcl (aa.ch*,*) floatc16); • * form a =a+b*c as polys in 2 variables */ dcl ( i , j.k.l .ml ,m2. i i « j j. add, 12. j2.b1 ,b2,c1 ,c2)fixed bin* dcl temp float! 16); A0D=1 ; common:bi=c_max; b2=rs; c1=h80und(c, 1 ) ; c2=hb0un0(c,2) ; if hb0undi aa ,1 kbh-c1 | hbound c a a, 2 xb2 +c2 then do * put skip edit(«a too small * . bl • b2 ,c 1 • c2 ) { a . 4 f<10>); stop; end; DO L=0 TO C2+B2; put skip; m1=max( 0.l-c2) ; m2=min; DO K=0 TO Cl+Bl ; TEMP=OEO; DO I=MAX( 0.K-C1 ) TO MIN(K.Bl); I2=k-i; DO J = M1 TO M2 ; j2=l-j; temp = temp + a( i i, jj) ,el< i, j )*c( 12, j2) ; end; end; if add=1 thfn aa ( k , l ) =a a( k ,l ) +temp; else aa(k.l) = aa(k,l)-temp; end;end; return; dif: entry( aa, i i , jj,c) ; add=o ; goto common; end sum; end find_det_a; end stabpol; 100 findrts: proc ( p. mu_max , x i_ma x . x. y . num_po i nts .0 .ps i .next_ method ) ; dcl p(*.*) float<16). / *cofs of stability poly*/ (mu_max. /*degree of poly in mu */ xl_max. /+oegree of poly in xi *• num_points)fixed bin.(o.psi) float. -.=0e0 then goto test_mu; end; end; test_mu:do io=o to mu_max; do j-jo to xi_max; if p(io.j) -.= oeo then goto reduce; end; end; reduce: if io=o & jo=o then return; do i=io to mu_max; do j=j0 to xi_max; p(I-io.j-jo) = p(I.j>; end; end; mu_max = mu_max-io; xi_max = xi_max-j0; if xi_max=o then do; put skip edit<»xi_max = 0. 
method skipped* ha) ; go to next_method; end; end input; 101 xi_at_inf:proc (p,mu_max.xi_max) ; dcl p<*.*> float* 16) . < mu_ma x , xi_max ) fixed bin; dcl traubz entryifixed b i n ( 3 1 ) t ( * ) c omplex flo at ( 1 6> . ( *) co mple x FLOAT ( 16 ) . <*)FLOAT(l6). (*)COMPLEX FLOAT! 16). (*> COMPLEX FLOATC16) .< *)COMPLEX FLOAT ( 1 6) • ( * ) COMPLE X FL0ATI16). (*)COMPLEX FLOAT! 16) .FIXED BIN(31)> OPTI ONS (FORTR AN) ; DCL (CI21 ) .ZCALI20) . DUM2 ( 20 ) . OUM3 I 2 ) . DUM4( 19 1.DUM5I 19), DUM6I19)) COMPLEX FL0AT(16). DUMM20) FLOAT( 16) . (N.NRTFND) FIXED BIN(31|; DCL (I,NUM_ZERO) FIXED BIN; /* XI «S AT INFINITY ARE THE ROOTS OF THE COEFFICIENT OF MU**MU_MAX. WHICH IS A POLYNOMIAL IN XI */ DO N=XI_MAX BY -1 TO WH ILE ( P ( MU_M AX .N ) =0E0 ) ; END; IF N<1 THEN GOTO STABLE; DO 1=0 TO n; cci+11 = p( mu_max.n-i) ; end; do i=n by -1 to while(c(i+1 )=0e0) ;end; num_zero=n-i ; n=i ; if n1 .000001E0 THEN GOTO UNSTABLE; ejjd; do i=i to nrtfnd; if abs( zcali i) )>0 .999999e0 then goto marg_stable; end; stabletput skip(2) edit! 'stable at i nf i n i ty • ) ( a) ; return; marg_stable:put SKIP(2) EOIT( • marginally stable AT INFINITY* )I A) ; GO TO next_method; UNSTABLEIPUT SKIP(2) EDIT( • unstable at INFINITY. METHOD SKIPPED. «MAl ; goto next_method; end xi_at_inf: 102 find_roots:proccp.mu_max.x i_max,x.v,num_roots,o.psd ; dcl p(*.*) float( 16) , ( mu_ma x , xi _max ) fixed bin. ( x( *.*) .y( *,*) ) float. num_roots fixed bin; dcl traubz entry(fixed b i n( 3 1 ) . ( * ) compl ex flo at < 1 6 ) . < *) co mple x float! 16) , ( *) flo ati 16) .(*) complex float ( 16) .(*) complex float (16) .( +jcomplex flo at ( 1 6 ) . ( * ) comple x fl0at(16|. (*)ccmplex float( 16) .fixed bin(31>> opt i ons (fortr an ) ; dcl (c(21).zcal(20) , dum2c20 ) .dum3(20) .dum4< 19) ,dum5( 19) • dum6u9)) complex fl0at(16). dumk20) float( 16) . (n.nrtfnd) fixed bin(31>; dcl (theta. del_theta.di st, dtemp ) fl0at(16). ( d.psi , ptemp.pi init(3. 14159) ) float (6). (total. iterations. calls) fixed bin initio). (z.xi) complex fl0ati6) • ( i. j. inoex.k.l) fixed bin; /♦have eliminated marginally stable at infinity cases so we know infinity is not a value of mu for |xl|=l. thus WE KNOW SUM P(MU_MAX. J) *; end; /* general case */ do k=3 to num_roots: theta = theta+del_theta; call set_up_c( complex ( co s( theta) . sin( theta) ) ); DO 1=1 TO n; z = complex(3*x( k-l ,1 )-3*x(k-2, i h-x(k-3» i ) ■ 3*y(k-1 . i >-3*y(k-2.i )*y(k-3.i )); complex (x(k, i ) .y(k, i ) ) = newton(z); end; end; * /* FIND D, THETA */ D=1E70; psi=oeo; DO 1=0 TO NUM_ROOTS; DO J=I to n; IF Xd.JXD THEN D=X(I,J); if x( i, j)**2*y( i . j)**2 > 1e-12 then do; ptemp=atan( abs( y( i.j)),x(i,j)); if ptemp>psi then psi=ptemp; end; end; end; PSI = (3. 141 5926536-PSI ) +180E0/3. 1415926536; PUT SKIP(2> EDIT(»D = '.D, 'THETA = », PS I »• AVERAGE NEWTON*. • ITERATIONS = '.TOTAL/CALLS) (A.F(14.8)tX(2).A,F(10,6),X(5).2 A,F( 9,6) ) ; /* PRINT OUT ROOTS */ DO K=l BY 4 TO N; L=MIN(N.K+3) ; PUT SKIP(3) EDIT( ( •ROOT* ,1 DO I=K TO L) J (X( 11 ). 
A.F(2) .X( 13) ) ; put skip; do 1=0 to num_roots; put skip edit((x(i.j).y(i.j) do j=k to ll) (2 F(14.8) ,X(2) ); end; end; return; 104 set_up_c:proc = temp; end; end set_up_c; newton:proc( z) returns (complex float(6)>; dcl z complex float<6»; DCL ( B( 21 ) .DERIV.Z0.Z1) COMPLEX FL0AT(16)t I FIXED BIN ; CALLS=CALLS+1 ; zi=z: i terations=0; loop: iterations=iterations+i : * zo=zi ; B(i ) = C(i); DO 1=2 TO N+i; B( I ) = B( 1-1 )*Z0+C( i); end; DERIV= B{ 1) ; DO 1=2 to n; deriv = deriv*z0+b( i); end; Zl = ZO-BIN+1 )/DERIV; IF ABS( Zt-ZO)/MAX(ABS(Zl) ,ABS(Z0) » > 1E-6 & I TERAT I ONS<20 THFN GOTO LOOP; IF ITERATIONS=20 THEN DO; PUT SKIPO) EDIT( 'NEWTON FAILED'XA); GOTO next_method; end; total =tot alliterations; RETURN! Zl ); END newton; end find_roots; end findrts; 105 plotrts:proc(x,y.num_points , num_roots, method) ; dcl { x( *. *) .y< *.*) ) float. ( num_poi nts . num_roots> fixed bin; dcl method char(*)5 dcl(ccp1pl entry(float, float. fixed bin(31i). ccp4sc entry! (* )float, float, fixed b i n ( 3 1 ) ,fi xed b(n(31). (2) FLOAT). CCP5AX ENTRYIFLOAT. FLOAT. CHAR{*) .FIXED BI N{ 3 1 ) . FLOAT . FLOAT. ( 2)FLOAT) • SYMEOL ENTRY(FLOAT. FLOAT, FLOAT, CHAR(*) .FLOAT. FIXED BINOII) )OPTIONS(FORTRAN> ; DCL ; SCX(1> = FLOOR! X_MIN/SCX(2) )*SCX( 2) ; X_DIST = CEIL( ( X_MAX-SCX( 1 ) )/SCX(2) ); Y_DIST=8E0; IF X_DIST>13E0 THEN DO; * ext< i ) = x_min; EXT(2) = X_MAX; CALL CCP4SC (EXT. 13E0.2. 1 .SCX); SCY(2> = SCX(2>; SCY( 1 ) = oeo ; X_DIST = 13E0; y_dist = ceil(y_max/scy(2) > ; end; call ccp5ax( oeo.oeo. • re ( h *l ambda ) • ,-12. x_d i st * oe 0. scx) ; call ccp5ax(-scx( 1 ) /scx (2) .oeo. • i magi h* lambda ) • . 14. y_dist.90e0.scy) ; i=index(method,» ( • i ; Ml = SUBSTR(METHOD. I .46) ; M2 = SUBSTR( METHOD, 1+46.46) ; CALL SYMBOL! OEO ,-.75E0, .2 5E0.M1 , OEO. 46); CALL SYM3 0L(OEO.-1.125,.25EO.M2.0E0.46); DO J=l TO NUM_ROOTS; CALL CCP1PL( (X(0, J)-SCX(l) )/SCX(2) . ABS ( Y ( . J ) ) /SCY ( 2 ) , 3) : DO 1=1 TO NUM_POINTS; CALL CCP1PL (