UNIVERSITY OF ILLINOIS GRADUATE COLLEGE
DIGITAL COMPUTER LABORATORY
REPORT NO. 164

HYBRID METHODS FOR INITIAL VALUE PROBLEMS IN ORDINARY DIFFERENTIAL EQUATIONS

by C. W. Gear

June 19, 1964

Abstract

Methods for the integration of initial value problems for the ordinary differential equation dy/dx = f(x,y) which are a combination of one-step procedures (e.g., Runge-Kutta) and multistep procedures (e.g., Adams' method) are discussed. A generalization of a theorem from Henrici (3) proves that these methods converge under suitable conditions of stability and consistency. This, incidentally, is also a proof that predictor-corrector methods using a finite number of iterations converge. Four specific methods, of orders 4, 6, 8 and 10, have been found. Numerical comparisons of the first three of these have been made with Adams', Nordsieck's and the Runge-Kutta methods.

1. Introduction

The original motivation for this work was a desire to achieve some of the flexibility of Runge-Kutta type methods, which allow easy starting and step-size changing procedures, and at the same time achieve the increased speed due to the fewer function evaluations and the higher order possible with multistep methods such as Adams' method.
Multistep predictor-corrector methods are also characterized by readily yielding a number (the difference between the predictor and the corrector) that is a reasonable estimate of the local truncation error. This number can be used to control the step size automatically. Runge-Kutta methods typically require a further function evaluation to get such an estimate [e.g., see the Kutta-Merson process, Fox (1), page 24].

One approach to this problem has been made by Nordsieck (4), who recasts Adams' method in such a way that the derivatives of the approximating polynomial, rather than information at several function points, are retained. These are independent of step size, so that it can be changed at will. The original approach made by this author was the following:

1) Reduce the amount of information (the number of points) being retained, to minimize the start-up and step-change problems.

2) Calculate information at the mid-points as part of the process. Doubling the step size then requires no interpolation. If the mid-points are available, halving would also require no interpolation (although it is true that interpolation requires very little arithmetic in comparison to the rest of the job).

Therefore, the investigation started with the class of methods described by

    y_{n+1/2} = Σ_{i=0}^{k} [ a_{0i} y_{n-i} + h β_{0i} f_{n-i} ]

    f_{n+1/2} = f(x_{n+1/2}, y_{n+1/2})

    y^p_{n+1} = Σ_{i=0}^{k} [ a_{1i} y_{n-i} + h β_{1i} f_{n-i} ] + h γ_{10} f_{n+1/2}        (1.1)

    f_{n+1} = f(x_{n+1}, y^p_{n+1})

    y_{n+1} = Σ_{i=0}^{k} [ a_{2i} y_{n-i} + h β_{2i} f_{n-i} ] + h γ_{20} f_{n+1/2} + h γ_{21} f_{n+1}

applied to the differential equation dy/dx = f(x,y) (the superscript p denotes a predicted value). For these methods the extraneous roots of the linear difference equations for the error are discussed. There are 2k + 1 non-principal roots, of which k + 1 are zero if h ∂f/∂y = 0. For k = 1 the other non-principal root can also be made equal to zero by a suitable choice of the parameter. When k = 2, a 6th order method can be found. It also has one free parameter in the corrector, which can be chosen to minimize the extraneous roots of the difference equation.
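The error-estimate idea mentioned above can be illustrated with a standard two-step Adams-Bashforth predictor and trapezoidal corrector (ordinary methods, not the hybrid scheme (1.1) of this report). The factor 1/6 below comes from the error constants 5/12 and -1/12 of that particular pair, and the test problem y' = -y is an arbitrary illustrative choice:

```python
import math

def pc_step_with_estimate(f, x, y_prev, y_cur, h):
    # Two-step Adams-Bashforth predictor with trapezoidal corrector --
    # standard formulas, used here only to illustrate the estimate.
    f_prev, f_cur = f(x - h, y_prev), f(x, y_cur)
    pred = y_cur + h * (3 * f_cur - f_prev) / 2
    corr = y_cur + h * (f_cur + f(x + h, pred)) / 2
    # Local errors are (5/12)h^3 y''' (predictor) and -(1/12)h^3 y'''
    # (corrector), so (corr - pred)/6 estimates the error left in corr.
    return corr, (corr - pred) / 6

h = 0.1
f = lambda x, y: -y
corr, est = pc_step_with_estimate(f, h, 1.0, math.exp(-h), h)
actual = corr - math.exp(-2 * h)    # true error of the corrected value
```

For this step the estimate agrees with the actual error to within a modest factor, which is all that is needed to drive step-size control.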
Zero roots can only be achieved by returning to a 5th order method. The predictor y^p_{n+1} also has one free parameter if it is 5th order (which will only cause a 7th order error in the corrector). This parameter can be chosen to get 6th order in the predictor, but it then has undesirable stability effects on the method. In Section 4, various choices of these parameters are discussed, and the method is compared with the Adams 6th order method, Nordsieck's method and the Runge-Kutta method on the integration of the Bessel function J(x) for x = 6(h)6138, with h = 1/32, 1/16 and 1/8.

Section 5 discusses higher order methods that can be achieved. Coefficients of 8th and 10th order methods which are stable are given. The 8th order method was also used to integrate J(x), x = 6(1/16)6138, with good accuracy. The 10th order method was not used, as it almost certainly has undesirable stability properties when ∂f/∂y is non-zero.

A recent paper by Gragg and Stetter (2) deals with a generalization of this method in which the non-mesh point may be any point fixed in relation to x_n. The generalization taken in Section 2 of this paper allows a number of non-mesh points to be used. A theorem is proved giving sufficient conditions for the convergence of these methods. These conditions are that each of the predictors (the estimates of the solution at a set of points) is at least of order zero, that the corrector (the final estimate of y at the next mesh point) is at least of order 1, and that the corrector is stable. This general formulation contains Runge-Kutta and multistep predictor-corrector methods as subsets. The formulation is explicit since, in practice, only a finite number of iterations of the corrector can be used; all but the last can be viewed as a set of predictors.

2. A General Formulation and Proof of Convergence

Consider the sequence of operations

    p_{0,n+1} = Σ_{i=0}^{k} [ a_{0i} y_{n-i} + h β_{0i} f_{n-i} ]

and

    p_{j,n+1} = Σ_{i=0}^{k} [ a_{ji} y_{n-i} + h β_{ji} f_{n-i} ] + h Σ_{i=0}^{j-1} γ_{ji} f_{p_i}        (2.1)

for j = 1, 2, ..., J, where h is the step size (x_n = a + nh) and f_{p_i} = f(p_{i,n+1}) (the quantities y, f and p are assumed to be vectors representing both the independent variable x and the one or more components of the dependent variable). p_{J-1,n+1} will be taken as the predicted value of y_{n+1}, and p_{J,n+1} will be taken as the corrected value. To avoid a final evaluation of the derivative f, we define f_{n+1} by f_{n+1} = f(p_{J-1,n+1}).

If k = 0, this method is the general explicit Runge-Kutta method. On the other hand, if the coefficients are such that

    a_{ji} = a_{j-1,i},  β_{ji} = β_{j-1,i}  for i = 0, 1, ..., k and j = 2, 3, ..., J,
    γ_{j,j-1} = γ_{j-1,j-2},

and all other γ_{ji} are zero, then this method represents the use of a multistep predictor followed by J applications of a corrector formula.

To discuss the error and convergence of this method we introduce the usual notation e_n = y_n - y(x_n), where y(x) is the solution of the differential equation and y_n is the value at x = x_n calculated by equations (2.1) from a suitable set of starting values y_0, y_1, ..., y_k. We use the definition of convergence from Henrici (3); that is, the method is convergent if, for every problem dy/dx = f(x,y), x ∈ [a,b], y(a) = A, satisfying Lipschitz and continuity conditions for x ∈ [a,b] and y ∈ (-∞, +∞), e_n → 0 as x_n → x ∈ [a,b] and y_i → A for i = 0, ..., k as h → 0.

The method is said to be stable if each of the roots of the polynomial equation

    ρ(ζ) = ζ^{k+1} - Σ_{i=0}^{k} a_{Ji} ζ^{k-i} = 0

is either inside the unit circle, or on the unit circle and simple. The method is consistent if the corrector is of order 1 or greater, and if all of the predictors p_j, j = 0, ..., J-1, are of order zero or greater. Formally this means that

    1 - Σ_{i=0}^{k} a_{ji} = 0        (2.2a)

for j = J-1, J and for those j such that γ_{Jj} ≠ 0.
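A minimal sketch of one step of the general scheme (2.1) in present-day Python follows. The coefficient arrays and the convention of carrying x as component 0 of the vector y (so that dx/dx = 1) follow the text; the particular coefficients used below are the standard ones for Heun's second-order Runge-Kutta method, chosen only to show that explicit Runge-Kutta is the k = 0 special case:

```python
def general_step(yvecs, fvecs, h, f, alpha, beta, gamma):
    # One step of scheme (2.1).  Each "y" is a vector whose component 0 is x
    # itself (dx/dx = 1), the text's convention, so the stage abscissae need
    # no separate bookkeeping.  yvecs[i], fvecs[i] hold y_{n-i}, f_{n-i};
    # alpha[j][i], beta[j][i], gamma[j][i] are the a_ji, b_ji, g_ji of (2.1).
    k = len(yvecs) - 1
    stages, fstages = [], []
    for a_j, b_j, g_j in zip(alpha, beta, gamma):
        p = [sum(a_j[i] * yvecs[i][c] + h * b_j[i] * fvecs[i][c]
                 for i in range(k + 1))
             + h * sum(g_j[i] * fp[c] for i, fp in enumerate(fstages))
             for c in range(len(yvecs[0]))]
        stages.append(p)
        fstages.append(f(p))
    return stages            # stages[-1] is the corrected value p_{J,n+1}

# k = 0, J = 1 with these coefficients is Heun's 2nd order Runge-Kutta
# method, an instance of the Runge-Kutta subset of (2.1).
alpha, beta, gamma = [[1.0], [1.0]], [[1.0], [0.5]], [[], [0.5]]
f = lambda p: [1.0, -p[1]]   # u' = -u, with x carried as component 0
y, h = [0.0, 1.0], 0.01
for _ in range(100):
    y = general_step([y], [f(y)], h, f, alpha, beta, gamma)[-1]
```

After 100 steps the vector holds x = 1 and an O(h^2)-accurate approximation of e^{-1}, consistent with the second-order special case.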
(Those predictors that are not used in the corrector need not even be of order zero; and if the β_{J,i}, i = 0, ..., k, and γ_{J,J-1} are all 0, then (2.2a) need not hold for j = J-1.) In addition,

    k + 1 - Σ_{i=0}^{k} (k-i) a_{Ji} = Σ_{i=0}^{k} β_{Ji} + Σ_{i=0}^{J-1} γ_{Ji}.        (2.2b)

The theorem to be proved is: the method converges if it is stable and consistent. The proof closely parallels the proof of a similar theorem in Henrici (3) (Theorem 5.10).

Proof. Define

    p̄_{j,n+1} = Σ_{i=0}^{k} [ a_{ji} y(x_{n-i}) + h β_{ji} f(y(x_{n-i})) ] + h Σ_{i=0}^{j-1} γ_{ji} f(p̄_{i,n+1})        (2.3)

for j = 0, 1, ..., J. Let

    L*_h(x_n, y) = p̄_{J-1,n+1} - y(x_{n+1})  and  L_h(x_n, y) = p̄_{J,n+1} - y(x_{n+1}).        (2.4)

Thus L*_h and L_h are the truncation errors in the final predictor and in the corrector respectively. We will first show that consistency implies that L*_h and L_h/h → 0 as h → 0, uniformly for x ∈ [a,b], and then show that these conditions and stability imply convergence.

Define

    χ(δ) = max |y′(x*) - y′(x)|  over  |x - x*| ≤ δ,  x, x* ∈ [a,b].

χ(δ) exists and → 0 as δ → 0, since dy/dx = f(x, y(x)) exists and is continuous in the closed interval [a,b]. Hence

    y′(x_{n-i}) = y′(x_n) + θ_i χ(ih)

and

    y(x_{n-i}) = y(x_n) - ih [ y′(x_n) + θ′_i χ(ih) ],

where |θ_i| ≤ 1, |θ′_i| ≤ 1 and x_n, x_{n-i} ∈ [a,b]. Substituting these expansions into (2.3) with j = J-1, subtracting y(x_{n+1}) = y(x_n) + h [ y′(x_n) + θ χ(h) ], and using Σ_{i=0}^{k} a_{J-1,i} = 1, every surviving term carries a factor of h; therefore L*_h(x_n, y) = O(h) → 0 as h → 0.

Similarly, expanding L_h(x_n, y) = p̄_{J,n+1} - y(x_{n+1}): since f satisfies a Lipschitz condition, f(p̄_{i,n+1}) can be replaced by y′(x_n) + O(h) + O(χ(kh)) for each i with γ_{Ji} ≠ 0 (for those i, Σ_m a_{im} = 1). The O(1) terms then cancel by (2.2a), and the O(h) terms cancel by the identity

    1 + Σ_{i=0}^{k} i a_{Ji} - Σ_{i=0}^{k} β_{Ji} - Σ_{i=0}^{J-1} γ_{Ji}
        = [ k + 1 - Σ_{i=0}^{k} (k-i) a_{Ji} ] - Σ_{i=0}^{k} β_{Ji} - Σ_{i=0}^{J-1} γ_{Ji} - k [ 1 - Σ_{i=0}^{k} a_{Ji} ] = 0,

which follows from (2.2a) and (2.2b).
What remains gives L_h(x_n, y) = h B χ(kh) + O(h²), where B is bounded as h → 0; hence L_h/h → 0 as h → 0.

Define the error in the predictor as e*_n; that is, e*_n = p_{J-1,n} - y(x_n). Then

    e_{n+1} = p_{J,n+1} - p̄_{J,n+1} + L_h(x_n, y)        (2.5)

and

    e*_{n+1} = p_{J-1,n+1} - p̄_{J-1,n+1} + L*_h(x_n, y).        (2.6)

When we substitute for the p - p̄ terms in (2.5) and (2.6), terms of the form h γ_{ji} [ f(p_{i,n+1}) - f(p̄_{i,n+1}) ] will be included; by the Lipschitz condition these can be replaced by h Λ (p_{i,n+1} - p̄_{i,n+1}), where Λ is bounded. Since the formula is explicit, the p_{i,n+1} - p̄_{i,n+1} terms can be substituted for repeatedly; the process stops after a maximum of J steps, and results in a polynomial in h with bounded coefficients. Thus (2.5) and (2.6) can be rewritten as

    e_{n+1} = L_h(x_n, y) + Σ_{i=0}^{k} a_{Ji} e_{n-i} + h Σ_{i=0}^{k} [ e_{n-i} P_{1i}(h, x_n) + e*_{n-i} P_{2i}(h, x_n) ]        (2.7)

and

    e*_{n+1} = L*_h(x_n, y) + Σ_{i=0}^{k} a_{J-1,i} e_{n-i} + h Σ_{i=0}^{k} [ e_{n-i} P_{3i}(h, x_n) + e*_{n-i} P_{4i}(h, x_n) ],        (2.8)

where the P_{1i}, P_{2i}, P_{3i} and P_{4i} are polynomials in h with bounded coefficients.

Henrici (3), Lemma 5.5, shows that if ζ^{k+1} - Σ_{i=0}^{k} a_{Ji} ζ^{k-i} is the polynomial of a stable method, and if

    1 / [ 1 - Σ_{i=0}^{k} a_{Ji} ζ^{i+1} ] = Σ_{p=0}^{∞} δ_p ζ^p,        (2.9)

then there exists a constant Γ < ∞ such that |δ_p| ≤ Γ, p = 0, 1, .... Multiply equation (2.7) by δ_{N-n-1} and sum for n = k to N-1. From (2.9), δ_0 = 1 and δ_m - a_{J0} δ_{m-1} - a_{J1} δ_{m-2} - ... - a_{Jk} δ_{m-k-1} = 0, so the left-hand side telescopes to e_N, and we get

    e_N = Σ_{n=k}^{N-1} δ_{N-n-1} L_h(x_n, y) + (bounded multiples of e_i and e*_i, i = 0, ..., k)
          + h Σ_{n=k}^{N-1} δ_{N-n-1} Σ_{i=0}^{k} [ e_{n-i} P_{1i}(h, x_n) + e*_{n-i} P_{2i}(h, x_n) ].        (2.10)

The last term of the right-hand side involves each e_n and e*_n no more than k + 1 times.
Therefore (2.10) gives the bound

    |e_N| ≤ Γ (N - k) max_n |L_h(x_n, y)| + M Σ_{i=0}^{k} ( |e_i| + |e*_i| ) + h C Σ_{n=0}^{N-1} ( |e_n| + |e*_n| ),

where M and C are bounded for all h < some h_0. Now N - k ≤ (b - a)/h and |L_h(x_n, y)|/h ≤ μ(h) → 0 as h → 0, so

    |e_N| ≤ (b - a) Γ μ(h) + S(h) + h C Σ_{n=0}^{N-1} ( |e_n| + |e*_n| ),        (2.11)

where S(h) is a function of the error in the initial conditions which → 0 as h → 0, and C is bounded for h < h_0. Substituting (2.11) into the e_{n-i} terms of (2.8) gives

    |e*_N| ≤ |L*_h(x_{N-1}, y)| + (b - a) Γ μ(h) Σ_{i=0}^{k} |a_{J-1,i}| + h C* Σ_{n=0}^{N-1} ( |e_n| + |e*_n| ),        (2.12)

where C* is bounded for h < h_0. Let E_n = max( |e_n|, |e*_n| ) and compare (2.11) and (2.12) (noting that L*_h(x_{N-1}, y) → 0 as h → 0) to get

    E_N ≤ h Σ_{n=0}^{N-1} B E_n + ν(h),

where B is bounded and ν(h) → 0 as h → 0. Initially e_i = e*_i = E_i for i = 0, 1, ..., k; let these be bounded by K → 0 as h → 0. Then

    E_N ≤ ( ν(h) + K ) (1 + hB)^N ≤ ( ν(h) + K ) e^{(b-a)B} → 0 as h → 0.        (2.13)

Therefore stability and consistency imply convergence.

As in most discussions of convergence, the error bound given by (2.13) is of little practical use. Techniques similar to those in Henrici (3), Section 5.3, can be used to get better asymptotic expressions for the error in particular cases.

3. A Fourth Order Method

If k = 1 is used in equations (1.1) and the coefficients are chosen to give the maximum stable order possible, the equations become

    y_{n+1/2} = y_{n-1} + (h/8) ( 9 f_n + 3 f_{n-1} ) + O(h⁴)

    y^p_{n+1} = 2 y_n - y_{n-1} + (h/3) ( 4 f_{n+1/2} - 3 f_n - f_{n-1} ) + O(h⁵)        (3.1)

    y_{n+1} = y^p_{n+1} - 6α ( y_n - y_{n-1} ) + αh ( f_{n+1} - 4 f_{n+1/2} + 7 f_n + 2 f_{n-1} ) + O(h⁵),

with f_{n+1} = f(x_{n+1}, y^p_{n+1}), where 0 < α < 1/3 for stability.

If it is assumed that ∂f/∂y = λ is constant (if f and y are vectors, λ is a matrix, in which case this analysis must be viewed formally), the equations for the error growth are, in the case of general k,

    e^p_{n+1} = Σ_{i=0}^{k} ( a_{1i} + hλ β_{1i} ) e_{n-i} + hλ γ_{10} Σ_{i=0}^{k} ( a_{0i} + hλ β_{0i} ) e_{n-i}
and

    e_{n+1} = Σ_{i=0}^{k} ( a_{2i} + hλ β_{2i} ) e_{n-i} + hλ γ_{21} e^p_{n+1} + hλ γ_{20} Σ_{i=0}^{k} ( a_{0i} + hλ β_{0i} ) e_{n-i}.

These are a pair of simultaneous difference equations with constant coefficients, each of degree k + 1. Looking for a solution of the form e_n = ξⁿ, e^p_n = A ξⁿ, a (2k + 2)th degree polynomial equation for ξ is obtained. At λ = 0, k + 1 of these roots are 0 and the remainder are the roots of the stability polynomial

    ξ^{k+1} - Σ_{i=0}^{k} a_{2i} ξ^{k-i} = 0.

In the fourth order method given above, these roots are 1 (the principal root) and 1 - 6α. This method does not have sufficient accuracy to justify its use in most circumstances, but a simple comparison was made between it (with α = 1/6, which makes the extraneous root zero), the 4th order Runge-Kutta method, and the 4th order Adams-Bashforth-Adams-Moulton predictor-corrector method. The equations

    y₁′ = y₂,  y₂′ = -y₁,  y₃′ = y₃,  y₄′ = -y₄;
    y₁(0) = 0,  y₂(0) = 1 = y₃(0) = y₄(0)

were integrated for x = 0(.1)50, and it was found that the method lies between Runge-Kutta and Adams' method in accuracy. This is to be expected, since Adams' method uses the largest spread of points and Runge-Kutta the smallest, and since the coefficient in the error terms is proportional to the distance of the various points used from the interpolated point. The 5 digit results for y(50) are shown in Table 1.

    METHOD            SIN X    ERROR   COS X    ERROR   10^-22 EXP(X)  ERROR   10^21 EXP(-X)  ERROR
    Fortran Library   -.26237          .96497           .51847                 .19287
    Runge-Kutta       -.26241   -4     .96495    -2     .51845          -2     .19288          +1
    Hybrid Method     -.26245   -8     .96494    -3     .51843          +6     .19289          +2
    Adams             -.26228   +9     .96507   +10     .51850          +3     .19283          +4

    Table 1. Values at x = 50 using points x = 0(.1)50 in equations (3.1), calculated on the IBM 7094. (Errors are in units of the last digit.)
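The fourth-order scheme (3.1), as reconstructed above, can be exercised in a few lines; the choice α = 1/6 (which zeros the extraneous root) and the test problem y' = -y are illustrative, not taken from the report's runs:

```python
import math

def hybrid4_step(y_prev, y_cur, f_prev, f_cur, h, f, x, a=1/6):
    # One step of the reconstructed equations (3.1), k = 1.  The free
    # parameter a must satisfy 0 < a < 1/3; a = 1/6 makes the extraneous
    # root 1 - 6a of the stability polynomial zero.
    y_half = y_prev + h / 8 * (9 * f_cur + 3 * f_prev)      # O(h^4) half point
    f_half = f(x + h / 2, y_half)
    y_pred = 2 * y_cur - y_prev + h / 3 * (4 * f_half - 3 * f_cur - f_prev)
    f_pred = f(x + h, y_pred)                               # f_{n+1} = f(y^p)
    y_next = (y_pred - 6 * a * (y_cur - y_prev)
              + a * h * (f_pred - 4 * f_half + 7 * f_cur + 2 * f_prev))
    return y_next, f_pred

f = lambda x, y: -y
h, n = 0.01, 100
ys = [1.0, math.exp(-h)]                 # exact starting values
fs = [f(0.0, ys[0]), f(h, ys[1])]
for m in range(1, n):
    y_next, f_next = hybrid4_step(ys[-2], ys[-1], fs[-2], fs[-1], h, f, m * h)
    ys.append(y_next)
    fs.append(f_next)
err = abs(ys[-1] - math.exp(-1.0))       # global error at x = 1
```

With h = 1/100 the global error at x = 1 is far below h^4, consistent with fourth-order convergence.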
4. A Sixth Order Method

Using equations (1.1) with k = 2, and choosing coefficients to get a 5th order method, we arrive at the equations

    y_{n+1/2} = -(225/128) y_n + (25/16) y_{n-1} + (153/128) y_{n-2}
                + h [ (225/128) f_n + (75/32) f_{n-1} + (45/128) f_{n-2} ]

    y^p_{n+1} = (a 5th order predictor in y_{n-i} and f_{n-i}, i = 0, 1, 2, and f_{n+1/2},
                 containing the free parameter α₁ as the multiplier of Z)        (4.1)

    y_{n+1} = y^p_{n+1} + α₂ Z + β W,

where Z and W are particular 5th order auxiliary expressions in the retained values and f_{n+1/2} [the printed coefficients of y^p_{n+1}, Z and W are not recoverable from this copy], and α₁, α₂, β are constants that must satisfy additional conditions for stability. Each of these equations is 5th order, and the truncation error in y_{n+1}, assuming that y_n, y_{n-1} and y_{n-2} are exact, is a fixed numerical multiple of

    h⁶ y⁽⁶⁾(ξ) ( 58 + 23 α₂ - 504 β ) + O(h⁷).        (4.2)

Thus along the line 504β - 23α₂ = 58 in the (α₂, β) plane, the corrector is 6th order.

If we assume that λ = ∂f/∂y is constant, as in Section 3, then the error satisfies a difference equation of order 2k + 2 = 6. At λ = 0, three of these roots are 0, the principal root is 1, and the remaining two satisfy a quadratic equation (4.3) whose coefficients are linear in α₂ and β. For the method to be stable, these roots must either lie inside the unit circle, or lie on the unit circle and be mutually unequal and unequal to 1. Figure 1 shows the region of stability in the (α₂, β) plane. Inside the triangle ABC shown there (its vertices lie on the lines α₂ = -2 and α₂ = +2), the method is stable. The lines AC and BC, except for the points A and B, correspond to simple roots on the unit circle. The point D (at α₂ = 1) is the point at which both roots of (4.3) are 0, the point of "maximum stability." Also shown in Figure 1 is the line of 6th order accuracy. This crosses the interior of the triangle ABC, meaning that points on the line segment PQ correspond to 6th order stable methods. It is not possible to get a 7th order method by picking a particular point, for two reasons.
One is that such a method can be shown not to be stable, even if f_{n+1} and f_{n+1/2} are assumed to be of at least 6th order; the second is that f_{n+1/2} can only be of 5th order, and hence the term γ h f_{n+1/2} will contribute an error of the form γ λ h⁷ y⁽⁶⁾. In choosing a method from the line PQ, one could minimize the largest non-principal root. Because many of the preliminary calculations were being done by hand, the point E (at α₂ = 1), which is only a short distance from the minimum, was chosen; this leads to simpler numbers for the human computer. At this point the largest non-principal root is .19.

The choice of α₁ does not affect the results when λ = 0. However, when λ ≠ 0, α₁ modifies both the error and the stability properties. In order to decide on a suitable α₁, the six roots of the difference equations for e_n and e^p_n were calculated for various values of α₁ and λh, with (α₂, β) fixed at the point E. The difference between the principal root and exp(λh) is a crude measure of the truncation error in one step. hλ_min, the smallest value of hλ for which the largest non-principal root was smaller than the principal root, and hλ_max, the largest value of hλ for which the largest non-principal root is less than one, were calculated for various α₁.

Figure 1. Region of Stability and 6th Order Line

Also the differences between the principal root and exp(λh) for λh = +.1 and -.1, Δ⁺ and Δ⁻ respectively, were calculated for values of α₁. (Note that at one value of α₁ the predictor is also of 6th order, so one expects better accuracy, but not stability, in this neighborhood.) The results are shown in Figures 2 and 3. As was expected, the error decreases as α₁ moves down towards that value, but the range of λh for which the equation is stable decreases. Any α₁ between -.5 and +.4 would appear to be a reasonable choice, the former being preferred if the equation is unstable, the latter for stable equations.
In order to compare different parameter combinations for k = 2, the equations

    y₁′ = y₂,  y₂′ = -y₁;  y₁(0) = 0,  y₂(0) = 1

were integrated for x = 0(.1)200 on an IBM 7094. Four parameter combinations were used, corresponding to the points D (5th order, "maximally stable") and E (a 6th order method), with α₁ = 0 and 1 in each case. The results are shown in Table 2. As expected, the 6th order method using α₁ = 0 is slightly superior.

Nordsieck gives the results of integrating the Bessel function J(x) from x = 6 to x = 6138 by his method, which is a variant of the Adams 6th order process. Therefore this equation has been integrated from the same starting values that Nordsieck used (J(6) = .000 001 201 950; dJ(6)/dx = .000 002 986 480). The integration was carried out by Runge-Kutta; by Adams-Bashforth-Adams-Moulton 6th order (with single correction where it was stable (h = 1/32) and double correction where single correction was unstable (h = 1/16 and h = 1/8)); by the 6th order hybrid with k = 2, corresponding to the point E in Figure 1 with α₁ = 0; and by an 8th order method discussed in the next section. The step sizes 1/8, 1/16 and 1/32 were used in order to make meaningful comparisons with Nordsieck's method; the results that he quotes, although for variable step size, took average step sizes of 1/8 and 1/16. Since this method involves two evaluations of the derivative, it seemed pertinent to use Adams' single corrector, with one derivative evaluation, for h = 1/32. All of the methods (except for Runge-Kutta) were started by a Runge-Kutta integration at 1/8 the step size, in order not to introduce starting errors into the comparison. The results are summarized in Tables 3 and 4.

Figure 2. Δ⁺ and hλ_max against α₁

Figure 3. Δ⁻ and -hλ_min against α₁
[Tables 2, 3 and 4. Numerical results: Table 2 compares the four parameter combinations above; Tables 3 and 4 give the computed values and errors of the Bessel function at x = 6136 and x = 6138 for the Runge-Kutta, Adams, Nordsieck, 6th order hybrid and 8th order hybrid methods at the step sizes stated. The tabular data is not legible in this copy.]
The values of J(6138) and J(6136) quoted by Nordsieck are 10⁻⁹ × 1362485 and -10⁻⁹ × 9745831 respectively. The errors shown in the tables are based on these values and the calculated values rounded to 7 significant digits, and thus can be in error by ±1. The results were obtained by single precision floating point arithmetic on ILLIAC II, which represents about 13 decimal digits (22 base-4 digits). Therefore the roundoff in the integration should not be significant in the comparisons.

5. Higher Order Methods

For k = 1 and k = 2 it was possible to achieve (2k + 2)th order accuracy with equations (1.1). This relation can also be seen to hold for k = 0, when the equations represent the second order Runge-Kutta method. The question of whether this is true for all k naturally arises. Unfortunately the answer is no, but it is true for k = 3 and 4, giving rise to 8th and 10th order methods respectively. Parameters for both cases are given below, but the 10th order method has undesirable stability properties due to large roots of the stability polynomial, and hence it is not recommended. For k = 5, 6, 7, 8 and 9, methods of order 2k + 2 are not stable.

These results were obtained by a numerical investigation of methods for k = 3, 4, ..., 9 on ILLIAC II. For each k, the coefficients of the method were calculated from the 2k + 3 linear simultaneous equations obtained by a Taylor series expansion to 2k + 3 terms. Since there are 2k + 4 parameters in the corrector, these coefficients are linear functions of one parameter, which was taken as γ₂₁, the coefficient of f_{n+1}. The roots of the stability polynomial

    ζ^{k+1} - Σ_{i=0}^{k} a_{2i}(γ₂₁) ζ^{k-i} = 0

were then studied.
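A root study of this kind - scanning the free parameter and tracking the largest non-principal root - can be sketched as follows. The true coefficients a_{2i}(γ₂₁) of the 8th and 10th order correctors are not reproduced here, so the polynomial family below is a hypothetical stand-in with the same qualitative features (a fixed principal root at ζ = 1, and non-principal roots depending linearly on a free parameter g); the mesh search with successive refinement mirrors the procedure described in the text:

```python
import cmath

def spurious_radius(g):
    # Largest non-principal root of an illustrative stability polynomial
    # rho(z) = (z - 1)(z^2 + p z + q); the principal root z = 1 is factored
    # out, and p, q depend linearly on the free parameter g -- a hypothetical
    # stand-in for the gamma_21-dependence of the corrector coefficients.
    p, q = g, 0.25 + 0.5 * g
    d = cmath.sqrt(p * p - 4 * q)
    return max(abs((-p + d) / 2), abs((-p - d) / 2))

def search(lo=-2.0, hi=2.0, passes=4, n=81):
    # Crude mesh search: a coarse scan locates the region where the dominant
    # non-principal root is small, then the mesh is repeatedly narrowed
    # around the best point found so far.
    best_g, best_r = None, float("inf")
    for _ in range(passes):
        step = (hi - lo) / (n - 1)
        for i in range(n):
            g = lo + i * step
            r = spurious_radius(g)
            if r < best_r:
                best_g, best_r = g, r
        lo, hi = best_g - step, best_g + step
    return best_g, best_r

g_best, r_best = search()   # stable choices exist wherever r_best < 1
```

For this family the search settles near g ≈ -0.41, where the two non-principal roots coalesce and the spurious radius is smallest.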
As |γ₂₁| goes to ∞, the largest root of this polynomial is asymptotic to a multiple of γ₂₁, and the remaining roots are O(1). Therefore it is only necessary to examine values of γ₂₁ in a region near the origin to try to find stable methods. The searching method was crude, consisting of a series of runs, each calculating the roots of the stability polynomial for a regular mesh of points on the γ₂₁ axis. The first run took a fairly large interval between mesh points in order to locate the region where the dominant root would be small; successive mesh refinements narrowed in on this area until it was possible to plot the roots of the equation. When k is larger than 4, the largest root of the equation fails to get below +1, as γ₂₁ is increased from -∞, before a second root of the equation exceeds +1. These two roots coalesce and become complex conjugates, remaining outside the unit circle, then coalesce again on the negative real axis and become real, after which one of them goes off towards -∞.

Particular cases of the method for k = 3 and k = 4 are given in Table 5. The largest non-principal roots of these methods are .274 and .695 respectively. The latter is so large as to make that method unstable for values of λh much different from 0, so it has not been used in any further work. The 8th order method was used to integrate J(x); the results at x = 6136 and x = 6138 are shown in Tables 3 and 4 for a step size of 1/16. The method is unstable for h = 1/8, and h = 1/32 was not tried because of the accuracy already obtained at h = 1/16. For h = 1/16, each of these methods takes 32 evaluations per unit step in x.

As in the case of k = 2, there is a degree of freedom in the predictor which is not shown in Table 5. It is possible that the 8th and 10th order methods could be made more stable for some values of λh by a better choice of the predictor. For example, a multiple, α, of

    h f_{n+1/2} + 28.331163194 y_n + 17.63671875 y_{n-1} - 37.32421875 y_{n-2} - 8.643663194 y_{n-3}
        - 11.256510416 h f_n - 43.33984375 h f_{n-1} - 27.15234375 h f_{n-2} - 2.194010416 h f_{n-3}

may be added to the k = 3 predictor without changing the order from 7. The error term of the predictor is then proportional to

    h⁸ (d⁸y/dx⁸) [ 576 + 288.75 ( α + 1.4961038 ) ].

Positive α would improve the stability slightly.

[Table 5. Stable Methods for k = 3 (8th order) and k = 4 (10th order): the coefficients a_{ji}, β_{ji}, γ_{ji} of the half-point formula (j = 0), predictor (j = 1) and corrector (j = 2), rounded to 9 digits. Most entries are not reliably legible in this copy.]

Nordsieck's method differs from Adams' method only in the predictor and in the effect when the step size is changed, but this apparently introduces a considerable amount of error. Automatic step-size changing probably would pay off in cases where there is a sudden change of behaviour in the function; for those cases one expects Nordsieck's method to be superior. A step-changing mechanism can be programmed on the basis of the difference between the predictor and corrector of the hybrid methods presented. Although this is a term of order h^{2k+2} y^{(2k+2)} + O(h^{2k+3}), it is indicative of the behaviour of the average value of the error term, which includes terms in h^{2k+2} y^{(2k+2)}(x) and h^{2k+3} λ y^{(2k+2)}(x).

The hybrid 6th order method has about three times the error of Adams' double corrector in one case, and about 1/3 in the other. It has the advantage of using two additional points instead of 4 (5 if Adams-Bashforth is used as a corrector). If storage space, and hence the number of points, is a criterion in large vector problems, then the 8th order hybrid method is more accurate but still requires fewer additional points. If one is prepared to use more function evaluations and/or to use other points besides the half point, probably higher order stable methods exist, and these would pay off in special cases; but it seems unlikely that one would want to go beyond order 8 or 10, as it appears to become harder to control the larger number of extraneous roots, or to have much knowledge of the high order derivatives.

Gragg and Stetter (2) consider correctors similar to the corrector used in this paper with J = 2. They choose the point of evaluation of the first predictor to optimize the (stable) order of the corrector. This introduces an additional degree of freedom into the corrector, which, with the degree of freedom already existing in the correctors used here, allows a corrector of order 2k + 4 rather than 2k + 2 to be chosen. Their surprising result is that for k ≤ 3 (the k of Gragg and Stetter is one larger than this one), these optimal-order methods are stable. The result for k > 3 is not yet known. Because the new corrector order is greater, the predictors must be of a correspondingly higher order, meaning that additional mesh points must be used. Since these additional points are needed, it would seem desirable to look for methods which make use of them to reduce the error term, or to put the additional non-mesh point, or points, at a desirable location.
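A step-changing mechanism of the kind described above might be programmed along the following lines. This is a schematic driver, not the report's code: the halving/doubling thresholds (tol and tol/64) are arbitrary illustrative choices, and the Euler/trapezoidal pair supplying the predictor-corrector difference is likewise only a simple stand-in for one of the hybrid formulas:

```python
import math

def controlled_steps(step_with_estimate, y0, h0, x0, x_end, tol):
    # Schematic step-size driver: step_with_estimate(x, y, h) returns the
    # corrected value together with est = |predictor - corrector|, the
    # local-error indicator discussed above.  Thresholds are illustrative.
    x, y, h = x0, y0, h0
    while x < x_end - 1e-12:
        h = min(h, x_end - x)            # do not step past the endpoint
        y_new, est = step_with_estimate(x, y, h)
        if est > tol:                    # too large: reject and halve
            h *= 0.5
            continue
        x, y = x + h, y_new              # accept the step
        if est < tol / 64:               # comfortably small: double
            h *= 2.0
    return y

def euler_trap(f):
    # Stand-in one-step predictor-corrector pair (Euler / trapezoidal),
    # self-starting, used only to drive the controller above.
    def step(x, y, h):
        f0 = f(x, y)
        pred = y + h * f0
        corr = y + h / 2 * (f0 + f(x + h, pred))
        return corr, abs(pred - corr)
    return step

f = lambda x, y: -y
y_end = controlled_steps(euler_trap(f), 1.0, 0.1, 0.0, 1.0, 1e-4)
```

The controller rejects the initial over-large step, settles on a step for which the indicator stays below the tolerance, and reaches x = 1 with an error well inside the tolerance.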
If storage space and hence the number of points is a criterion -25- BIBLIOGRAPHY L. Fox, L. Numerical Solution of Ordinary and Partial Differential Equations. Pergamamon Press. London, (1962). 2. Gragg, W. B., and Stetter, H. J. Generalized Multistep Predictor Corrector Methods. JACM. 11, 2. (April, 196^) pp. 188-209. 3. Henrici, P. Discrete Variable Methods in Ordinary Differential Equations. Wiley, New York. (1962). I Nordsieck, A. On Numerical Integration of Ordinary Differential Equations. Math. Comp. 16,7. pp. 22-^9, (Jan. 1962). -26-