UIUCDCS-R-78-933
UILU-ENG 78 1725
C00-2383-0049

August 1978

AN EFFICIENT NUMERICAL METHOD FOR HIGHLY OSCILLATORY ORDINARY DIFFERENTIAL EQUATIONS

by Linda Ruth Petzold

DEPARTMENT OF COMPUTER SCIENCE
UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN
URBANA, ILLINOIS 61801

Thesis for the degree of Doctor of Philosophy in the Graduate College of the University of Illinois at Urbana-Champaign.

ACKNOWLEDGMENTS

I would like to thank Professor C. W. Gear for his support, encouragement, advice and enlightening comments during the work on this thesis and my studies at the University of Illinois. I am also indebted to Mitchell Roth and Richard Roloff for their useful advice, and to Barbara Armstrong for the special effort put into producing the final typed copy of this thesis. Finally, I am grateful to the Department of Energy for their financial support.

TABLE OF CONTENTS

1. INTRODUCTION
   1.1. Oscillating Problems
   1.2. Summary of Relevant Literature
2. SOLVING THE DIFFERENTIAL EQUATIONS
   2.1. Description of the Step-Advancing Method
   2.2. Formulas for Solving Difference Equations
   2.3. Variable Period Formulation
3. DETERMINING THE PERIOD OF A NEARLY PERIODIC FUNCTION
   3.1. Description of the Procedure
   3.2. Formulas for the Newton Algorithm
4. THEORY
   4.1. Convergence of Period-Finding Algorithm
   4.2. Smooth Solutions of Oscillating Systems
5. ERROR PROPAGATION
   5.1. Error Due to Small-Step Integration
   5.2. Error Due to Large-Step Integration
   5.3. Summary of Error Considerations
   5.4. Effects of Changing the Choice of Period
6. IMPLEMENTATION CONSIDERATIONS
   6.1. Data Organization for Outer Integration
   6.2. Details of Implementation
   6.3. Computational Results
   6.4. Suggestions for Improved Programs
7. STIFF OSCILLATING PROBLEMS
   7.1. Stiff Problems
   7.2. Stability Regions
   7.3. Techniques for Stiff Oscillating Problems
8. CONCLUSION
LIST OF REFERENCES
APPENDIX
VITA

1. INTRODUCTION

1.1. Oscillating Problems

One of the open problems in initial-value problems for ordinary differential equations is that of problems with highly oscillatory solutions. Some authors (see [1, 11, 17, 18]) have called these "stiff oscillatory" problems because they usually have large eigenvalues. However, this can be a misleading name, because it invites comparison with non-oscillatory stiff problems although the difficulties inherent in these two classes of problems are different.
In the conventional stiff problem, the large eigenvalues correspond to components which are not present in the solution (except possibly in some transient regions where the problem is not considered stiff [7]). The solution has to be found over an interval which is long compared to the time constants corresponding to the large eigenvalues. In an oscillating problem, the solution exhibits a high frequency oscillation superimposed on a long term slowly varying signal. We can distinguish two problems here. The first is to find the actual solution, and the second is to find the long term behavior of the solution without following the oscillation closely. In fact, there are many classes of problems that need to be studied in this area. The first classification concerns the source of the oscillation. We might classify them in the following categories:

1. a linear oscillation caused by a pair of large imaginary eigenvalues,
   a. in a linear system
   b. in a nonlinear system
2. a nonlinear oscillation,
3. an oscillation caused solely by a driving term.

In case 1a, the oscillations can be damped by some means so that the resulting long term behavior of the non-oscillatory components can be followed. This can be done by any method which is damping along the imaginary axis. Methods based on the first subdiagonal Padé approximant have this property. Thus, the implicit Euler, implicit Runge-Kutta, or modified Obrechkoff methods can be used. (It does appear that the order of a one-step method will be limited to 2q - 1, where q is the number of stages or derivatives used.)

In case 1b, damping the oscillations may lead to erroneous results, because the amplitude of the oscillation may affect the behavior of other components via nonlinear coupling. A number of people have studied the existence of formulas which conserve the energy in the oscillation. For first order equations the only such formulas are variants of formulas based on the diagonal Padé approximation, namely, the trapezoidal, implicit Runge-Kutta, and Obrechkoff methods. See, for example, Lambert and Watson [13]. As an eigenvalue tends to infinity along the imaginary axis, these methods "slow down" the oscillation. For example, if the problem y' = iqy, y(0) = 1 is integrated using the trapezoidal rule with step size h, where hq is large, the result is a sampling of the numerical solution every half cycle less 4/(hq) radians. As hq becomes very large, the solution tends to (-1)^n exp(-4in/(hq)), where n is the step number. It is true that after enough steps the complete wave will have been sampled, but the number of steps is so large that a local "average" of the effect of coupling of the oscillation to other components cannot be estimated with the desired accuracy. Therefore, it is necessary to estimate the effects of the oscillatory terms by tracking their amplitude and shape rather carefully.

In case 2, nonlinear oscillations, the problem can be even more difficult. Nonlinear oscillations are often caused by rapidly varying eigenvalues. These values lie in the positive half plane when a component is small, allowing it to grow quickly; then they move into the negative half plane, causing it to decay again. A standard example of this is the Van der Pol equation

x'' - λ(1 - x²)x' + x = 0,

which has eigenvalues in the positive half plane when |x| < 1 and in the negative half plane when |x| > 1. Therefore, it is not sufficient to use methods that compute or estimate the system eigenvalues to determine the oscillations.
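The "slowing down" just described is easy to observe numerically. The sketch below (Python; the values of q and h are illustrative and not from the thesis) applies the trapezoidal rule to y' = iqy and compares the one-step amplification factor with the limiting form (-1) exp(-4i/(hq)) for large hq.

```python
import numpy as np

# Trapezoidal rule for y' = i*q*y:  y_{n+1} = y_n*(1 + i*h*q/2)/(1 - i*h*q/2).
q, h = 1.0e4, 0.1                         # so that h*q = 1000 >> 1
amp = (1 + 0.5j*h*q) / (1 - 0.5j*h*q)     # exact amplification factor per step
limit = -np.exp(-4j / (h*q))              # predicted limit (-1)*exp(-4i/(h*q))

print(abs(amp))            # 1.0: the oscillation is never damped
print(abs(amp - limit))    # small: the limiting form is already accurate

# The phase advance per step is pi - 4/(h*q) instead of the true h*q, so
# the numerical solution drifts through one full (aliased) cycle only
# after about pi*h*q/2 steps -- the slow sampling described in the text.
print(np.pi * h * q / 2)   # number of steps needed to sample one wave
```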
Oscillations due to periodic driving terms may be the easiest to deal with, because their frequency is known, although they probably occur least often in practice. In this case, it appears simple to estimate the effect of a cycle of the oscillation periodically, provided that the system response to the rapid components in the oscillation can be determined in a few cycles.

This thesis explores a method that, in principle, can deal with linear or nonlinear oscillations. The underlying idea is to define an "envelope" and attempt to compute that. Unfortunately, a specific solution to an ordinary differential equation does not have an envelope; it is simply a space curve, as shown by the solid line in Figure 1.1. However, if the frequency of oscillation is very large, the uncritical observer would have no difficulty with the statement "the dotted line in Figure 1.1 is the envelope." Thus, the objective is to define a smooth curve z(t) which passes through the points [t_0 + nT, y(t_0 + nT)] for n = 0, 1, 2, .... We will call this the quasi-envelope. It has the property that the pointwise solution of the differential equation can be recovered from it and the differential equation by integration from [t_0 + nT, z(t_0 + nT)] for no more than one cycle. It also has the property that, for autonomous systems, a similar solution, for example, the segment PQ in Figure 1.1, can be obtained by starting from any point P = [t, z(t)]. Note that the slope of the envelope is approximately (z(t + T) - z(t))/T, which is (y(t + T) - y(t))/T, where y is a solution of the differential equation passing through [t, z(t)]. The basis of the method is to use this, or higher order approximations to the slope of z, and thus derive an approximate differential equation for z which can be solved using a large time step. Details are given in Chapter 2.

[Figure 1.1. ODE Solution and Quasi-envelope]

This method assumes a knowledge of the period, so we first need to know approximately how long one cycle is, and if the period is changing, we need to be able to correct our estimate of it. Chapter 3 describes a method for correcting an estimate of the period, which works given a reasonably good initial guess. This method is based on minimizing a norm of the difference between the periodic part of the function and the same part displaced by the period. Some theoretical results are proved in Chapter 4. The error propagation properties of this method are considered in Chapter 5. The concepts of order and stability of methods for solving ordinary differential equations are extended to the methods considered here, and some results concerning the order of these methods are given. A program was written which implements this method with variable stepsize and order. Implementation considerations and some computational results are discussed in Chapter 6. Chapter 7 considers a class of oscillating problems which seem to be especially hard to solve. These are stiff oscillating problems. Some analogies are drawn between these problems and conventional stiff problems, and the concept of a region of absolute stability is extended to methods for solving oscillating problems. Formulas which are generalizations of backward differentiation formulas for solving ordinary differential equations are derived. Some questions related to solving oscillating problems are mentioned, and the main results of this thesis are summarized in Chapter 8.
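As a concrete instance of the quasi-envelope idea, the following sketch (Python; the test function is illustrative, not from the thesis) samples a rapidly oscillating function at multiples of its period and checks that the one-period secant slope, the quantity (y(t + T) - y(t))/T above, is indeed slowly varying.

```python
import numpy as np

# y(t): a fast oscillation of period T = 1 with a slowly decaying amplitude.
T = 1.0
y = lambda t: np.exp(-0.01*t) * np.cos(2*np.pi*t)

t0 = 0.1
n = np.arange(50)
z = y(t0 + n*T)            # quasi-envelope samples z(t0 + n*T) = y(t0 + n*T)

# The secant slope over one period approximates the slope of the envelope:
slope = (y(t0 + T) - y(t0)) / T
print(slope)                         # ~ -0.01*y(t0): small and slowly varying
print(np.max(np.abs(np.diff(z))))    # successive envelope samples barely move
```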
1.2. Summary of Relevant Literature

Several different approaches for dealing with problems of these types have been attempted. A brief review of the relevant literature will be given here. The methods which are most closely related to the ones presented here are the multirevolution methods, which were first introduced by astronomers in 1957 for calculating the orbits of artificial satellites. References to these early papers are Mace and Thomas [15] and Taratynova [20], and an overview is given in the review paper by Velez [22]. The first step in the multirevolution methods is to determine a reference point on the orbit, for example, the node, apogee or perigee. Then a small-step algorithm is used to integrate the differential equations from one reference point to the next. The values at the reference points are used in formulas such as the ones derived here in section 2.2 to step the integration ahead over an integral number of orbits.

There are several ways in which the multirevolution methods differ from the algorithm presented here. In those papers, the authors were concerned with computing future orbits (i.e., oscillations) accurately, whereas here we are mainly concerned with "ignoring" the details of the oscillation. The key to this is the introduction of the smooth "quasi-envelope" z(t). For some problems, however (for example, those with high frequency periodic forcing functions), it is necessary to know the phase of the oscillation, although it does not always seem possible to do this while skipping a large number of cycles. Some aspects of this problem will be discussed later in section 5.2. The second principal difference is that the multirevolution methods require a reference point which was determined by the physical properties of the orbital problems, while with the method in Chapter 3 we can, in principle, solve any problem whose solution is a slowly changing oscillation. No information about the nature of the problem from which the equations arose is necessary, and the method need not be changed for different types of problems. Very little theoretical justification for the multirevolution methods has been attempted. Modified multirevolution methods were derived by Graff and Bettis [9] to integrate exactly products of linear and periodic functions.

In a context having nothing to do with numerical methods, Minorsky [16] used essentially the same idea which motivates the method presented here to study the existence and stability of periodic solutions of ordinary differential equations. A mapping is defined by sampling the solution at periods of the oscillation, and then used as a theoretical tool to study existence and stability of periodic solutions (for example, if the period is known exactly, a periodic solution maps into a single point). The asymptotic methods of Bogoliubov and Mitropolsky [3] are important analytical techniques for obtaining approximate solutions to differential equations which depend on a small parameter. In the numerical analysis literature, Miranker and Wahba [18] deal with the equation y'' + λ²y = f(t), and replace y by averages of y over intervals of length Δ to construct a method with coefficients depending upon Δ which advances the averages of y. Information from the system eigenvalues is used by Amdursky and Ziv [1] to transform an oscillating linear system into another linear system which has highly oscillatory coefficients, but of small amplitude, so they can be neglected.
Miranker and Veldhuizen [17] deal with the model equation

dx/dt = (A/ε)x + g(t, x), A = ( 0 -1 ; 1 0 ),

with ε small. The solution is approximated by a series of functions of t/ε (this is like a Fourier series). The coefficients of these functions are functions of t. Kreiss [12] discusses ways to choose initial values so that the resulting solution is smooth. The methods of Gautschi [5] integrate exactly trigonometric polynomials of a given order if the frequency is known in advance. This method is applicable for stepsizes on the order of the period T.

2. SOLVING THE DIFFERENTIAL EQUATIONS

In this chapter a method will be presented which "solves" the differential equation by stepping over many periods (or near-periods) at a time. The idea is explained in section 2.1, formulas are derived in section 2.2, and the method is extended to problems with slowly varying periods of oscillation in section 2.3.

2.1. Description of the Step-Advancing Method

First we will explain the basic idea and try to motivate it. Suppose we are given a problem

(2.1.1) y' = f(y, t), y(0) = y_0,

where the solution y is periodic or nearly periodic of period T. Then from (2.1.1) it follows that f(y(t), t) is nearly periodic. Integrate (2.1.1) from 0 to kT, where k is an integer:

(2.1.2) y(kT) - y(0) = ∫_0^{kT} f(y(t), t) dt = Σ_{i=0}^{k-1} ∫_{iT}^{(i+1)T} f(y(t), t) dt.

Due to the near periodicity of f, for different values of i these integrals are nearly the same. If we know the period, we can use the initial values to do a small-step integration over one period to find the first of these integrals (i = 0). If f is changing slowly enough, we can approximate the sum in (2.1.2) by

k ∫_0^T f(y(t), t) dt.

Another way to write this would be

(2.1.3) y(H) - y(0) = H f̄(0),

where H = kT, and f̄(t) is the average over one period starting at t, or

f̄(0) = (1/T) ∫_0^T f(y(t), t) dt.

Replacing 0 by t, (2.1.3) looks like Euler's method applied to the initial value problem

(2.1.4) (a) ȳ'(t) = (1/T) ∫_0^T f(y(t+s, t), t+s) ds; ȳ(0) = y(0),

where y is defined by

(b) (d/ds) y(τ+s, τ) = f(y(τ+s, τ), τ+s); y(τ, τ) = ȳ(τ).

In a geometrical sense, what we are doing here could be construed as finding the "envelope" of the periodic function y. It is easy to see this by looking at Figure 1.1. We can integrate through one period to calculate an estimate of the slope of the secant line from A to B. We then use this, instead of the tangent, to project to P, which is many cycles away. Note that P may not be anywhere near the solution to the original equation, in the usual sense. Starting with P as the initial value, we will integrate for one period along the chain line using small steps (this solves equation (2.1.4b) for y). In the picture, the curve that we are finding looks like the upper half of the envelope of the oscillating function. From this it seems likely that we could use nearly any method for problems with slowly varying solutions to solve equation (2.1.4) (e.g., Adams methods), whose solution should be slowly varying as long as the period is known accurately enough.

2.2. Formulas for Solving Difference Equations

Not all of the reasoning above is very precise, and we will try to correct that to some extent here. To this end, define a function z(t) in terms of its values in [0, T] by

z(t+T) = z(t) + Tg(z, t), z(0) = y(0),

where

(2.2.1) g(z, t) = (1/T)[y(t+T, t) - y(t, t)],

and y satisfies

(d/ds) y(t+s, t) = f(y(t+s, t), t+s), y(t, t) = z(t).
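Before examining this definition further, it may help to see the basic step-advancing process in code. The sketch below (Python) uses a small-step RK4 integration over one period to form the secant slope g of (2.2.1), and then takes a large Euler step covering many cycles; the test problem and all parameter values are illustrative, not from the thesis.

```python
import numpy as np

def rk4_over_period(f, t, y, T, nsteps=200):
    """Small-step classical RK4 integration of y' = f(t, y) over [t, t+T]."""
    h = T / nsteps
    for _ in range(nsteps):
        k1 = f(t, y)
        k2 = f(t + h/2, y + h/2*k1)
        k3 = f(t + h/2, y + h/2*k2)
        k4 = f(t + h, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)
        t += h
    return y

# Test problem: y' = (i*omega - eps)*y, a fast oscillation whose amplitude
# decays slowly; its quasi-envelope decays like exp(-eps*t).
omega, eps = 200.0, 0.01
f = lambda t, y: (1j*omega - eps) * y
T = 2*np.pi / omega

t, z = 0.0, 1.0 + 0.0j
H = 50 * T                        # one outer step skips 50 cycles
while t < 10.0:
    g = (rk4_over_period(f, t, z, T) - z) / T   # g(z, t) from (2.2.1)
    z = z + H * g                 # Euler step for the quasi-envelope
    t += H

print(abs(z), np.exp(-eps*t))     # computed envelope vs. exact decay
```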
Note that we have not yet specified the values of z on (0, T), so the above defines z only on the set {kT}, k a positive integer. Note also that z(kT) = y(kT), because

y(T) - y(0) = T[(1/T) ∫_0^T f(y(t), t) dt] = T[(1/T) ∫_0^T f(y(t, 0), t) dt] = Tg(z, 0) = z(T) - z(0) = z(T) - y(0),

so y(T) = z(T), and it follows by induction that y(kT) = z(kT). If z is equal to y on [0, T], then z is equal to y everywhere. However, that is not what we want; rather, we would like a z(t) that is very smooth, so that it can be computed on a coarse mesh. We will see later how to define z on [0, T] so that it is smooth (slowly varying).

We would like to express the backward difference of z in terms of g and differences of g. This will give an analogue of the Adams-Moulton formula. We will use the following relations:

∇z(t) = z(t) - z(t-H), ∇ = I - e^{-HD} (D is the differentiation operator)
Δz(t) = z(t+H) - z(t), Δ = e^{HD} - I
Ez(t) = z(t+H), E = e^{HD}

Proceeding formally, we have

(2.2.2) g_N = ((e^{TD} - I)/T) z_N

(2.2.3) z_N - z_{N-1} = (I - e^{-HD}) z_N.

From (2.2.2) we have

(2.2.4) z_N = ((e^{TD} - I)/T)^{-1} g_N.

Substituting (2.2.4) into (2.2.3) we obtain

(2.2.5) z_N - z_{N-1} = ∇((e^{TD} - I)/T)^{-1} g_N.

Now HD = -log(I-∇), so TD = -(T/H) log(I-∇), so that

((e^{TD} - I)/T)^{-1} = ( -(1/H) log(I-∇) + (T/(2H²)) log²(I-∇) - ... )^{-1}.

Plugging this into (2.2.5) we obtain

z_N - z_{N-1} = ∇H[-log(I-∇)]^{-1} (I + (1/2)(T/H) log(I-∇) + (1/12)(T/H)² log²(I-∇) + ...) g_N.

Then

(2.2.6) z_N - z_{N-1} = H{ [-∇(log(I-∇))^{-1}] - (1/2)(T/H)∇ - (1/12)(T/H)² ∇log(I-∇) + ... } g_N.

Note that the terms in square brackets are the Adams-Moulton coefficients, and the others become small as T/H becomes small. This then gives an analogue of the Adams-Moulton formula in terms of the g_N. Other such formulas may be derived similarly. These formulas, applied to y, have also been derived in [8, 15, 20 and 22]. Section 1.2 describes these papers. Following Graff [8], we will call these the generalized Adams formulas. Note that the first order method in (2.2.6) is just the backward Euler method applied to (2.1.4). Another interesting property of the generalized Adams methods (except for the first order method in (2.2.6)) is that as T → H, the coefficients in the formulas approach those of the forward Euler method, which is exact (up to errors made by the small-step method) for T = H.

2.3. Variable Period Formulation

There are many problems which may be solved using the technique of skipping over cycles which have solutions that are composed of oscillations with a slowly varying period, modulated by slowly varying terms. (Here "period" is defined as the solution to the minimization problem in Chapter 3.) In any reasonably general implementation of this method, we would like to deal with these problems without having to distinguish them from problems with a constant period of oscillation. This is accomplished here by means of changing the independent variable so that in the new variable the period of oscillation is a constant. Then the problem is solved with constant-period formulas, like those that were just derived. The appropriate change of variables will now be described. The change of variables from t to ξ should have the property that the period T(t) of the oscillation becomes a constant τ in the new variable ξ. (τ is just a scaling factor.) This is expressed as

ξ(t + T(t)) - ξ(t) = τ, ξ(t_0) = 0,

or as

(2.3.1) t(ξ + τ) - t(ξ) = T(t(ξ)), t(0) = t_0.
We will define y(ξ) to mean y(t(ξ)), and z(ξ) to mean z(t(ξ)). Since we would like to be solving

(2.3.2) z(ξ + τ) = z(ξ) + τ g(z, ξ)

in order to use the constant period formulas, it follows that g(z, ξ) must be defined as

(2.3.3) g(z, ξ) = (T(t(ξ))/τ) g(z(t(ξ)), t(ξ)).

To effect this change of variables, we will be solving one extra difference equation. The equations we will be solving are summarized below:

(2.3.4) z(ξ + τ) = z(ξ) + τ g(z, ξ), z(0) = z(t_0),
t(ξ + τ) = t(ξ) + τ (T(t(ξ))/τ), t(0) = t_0.

3. DETERMINING THE PERIOD OF A NEARLY PERIODIC FUNCTION

The procedure to be discussed here does not actually determine the period or almost-period of a function. For a periodic function, given a reasonable initial estimate for the period, say T_0, it produces a sequence {T_N} of estimates to the true period T, which should converge to T as N → ∞. In this chapter, the problem of what is meant here by the period of a nearly periodic function will be discussed, and an algorithm to find the period for systems of functions with a common period will be outlined.

3.1. Description of the Procedure

First suppose we are given a nearly periodic function y(t) defined on an interval I = [0, L], which is the sum of a slowly-varying function g(t) and a periodic function h(t) which has period T. We will also assume that an initial estimate T_0 to the period T is given. By requiring g to be slowly-varying we mean that g does not change much over any interval of length T, or, more precisely, that

T^m (d^m g / dt^m)

is small. What we would like to do is to look at the function y over several periods of h, and use this information to estimate the period of h, and to determine the behavior of the underlying function g over this interval. As an example to motivate the algorithm, suppose first that y(t) is periodic with period T. Then y(t) = y(t + T) for all t in I. Another way of saying this is that ||y(t) - y(t + T)|| = 0 on I. This suggests that one might be able to find the period T by minimizing ||y(t) - y(t + T)|| over all T in I which are bounded away from zero. Now if y is the solution of a differential equation, it would be impractical to minimize

||y(t) - y(t + T)||_2 = ∫ (y(t) - y(t + T))² dt,

with the integral taken over all of I, as that would imply that we know y over all of I. Instead, we will try to minimize over the interval from zero to the last estimate of the period. That is, we find

min_{T* ≥ ε > 0, T* ∈ I} ∫_0^{T_N} (y(t) - y(t + T*))² dt,

and T_{N+1} is defined as the value of T* for which the minimum is attained. As there may be many such values, T_{N+1} will be the one to which the algorithm converges (this will probably be the one closest to T_N).

There is some notation which will be used below which should be explained before we can proceed. A function f with a bar over it will mean f(t + T_N) (the subscript N will change after each iteration and should be clear from the context). A function f means f(t). The integrals will all be over the interval [0, T_N], although in practice they may be over any interval of length T_N with the left endpoint fixed throughout the procedure.

In general, we will have y(t) = g(t) + h(t), with g ≠ 0. In this case, we will start out with T_0 as before, and with a k-dimensional space of functions P (k < ∞) in which we will approximate g. The most obvious space P to take is the space of polynomials of degree ≤ k. Since a constant shift of a periodic function, that is, c + h(t), is also periodic, the space P should not include constant functions.
It is necessary that g may be approximated rather closely by functions in P. At each step of the iteration we would like to find the function p_{N+1} which most closely approximates g on [0, T_N], and a new estimate T_{N+1} of the period. Thus, it will be necessary to solve two minimization problems at each step. First we find

(3.1.1) min_{p_{N+1} ∈ P} ∫_0^{T_N} (y - p_{N+1} - (ȳ - p̄_{N+1}))² dt.

When p_{N+1} is expressed in terms of the basis functions of P, this just leads to k linear equations to determine the coefficients. Then we find

(3.1.2) min_{T_{N+1} ≥ ε > 0, T_{N+1} ∈ I} ∫_0^{T_N} (y - p_{N+1} - ȳ + p̄_{N+1})² dt.

So, at each step of the algorithm a pair {p_N, T_N} of quantities which describe g and h are produced. It is possible to obtain several different algorithms from this strategy by interchanging the order in which the two minimization problems are solved.

3.2. Formulas for the Newton Algorithm

The previous section presented the basic idea. Now the computations will be described more precisely. Let {ℓ_i(t)}, i = 1,..., k, be a basis for P. Write p_N(t) = Σ_{i=1}^k a_{i,N} ℓ_i(t). Then to solve the minimization problem (3.1.1), take the partial derivatives of the function to be minimized with respect to the coefficients a_{i,N}, and set them equal to zero. This leads to

(3.2.1) Σ_{i=1}^k a_{i,N+1} ∫_0^{T_N} (ℓ_i - ℓ̄_i)(ℓ_j - ℓ̄_j) dt = ∫_0^{T_N} (ℓ_j - ℓ̄_j)(y - ȳ) dt, j = 1,..., k,

which is a set of k linear equations that we can solve for the a_{i,N+1}, i = 1,..., k. These equations are nonsingular as long as P excludes constant functions. To determine the minimum in (3.1.2), take the partial derivative of the function to be minimized with respect to T_{N+1}, and set that equal to zero:

(3.2.2) ∫_0^{T_N} (p̄'_{N+1} - ȳ')(y - p_{N+1} - ȳ + p̄_{N+1}) dt = 0.

This is a nonlinear equation in the variable T_{N+1} which we can solve by Newton's method if the initial estimate T_N is close enough to the minimum. This is partly why T_0 needs to be a good estimate, and why the period may be allowed to change only slowly. This leads to the iteration

T_{N+1} = T_N - [∫_0^{T_N} (p̄'_{N+1} - ȳ')(y - p_{N+1} - ȳ + p̄_{N+1}) dt] / [∫_0^{T_N} (p̄''_{N+1} - ȳ'')(y - p_{N+1} - ȳ + p̄_{N+1}) dt + ∫_0^{T_N} (p̄'_{N+1} - ȳ')² dt].

In practice, first an initial estimate for the period is given; then (3.1.1) and (3.1.2) are done in succession until T_N converges (N increasing) to some value, within some specified tolerance. This is not Newton's method for (3.2.2), because at each step N changes, so that p_N changes. It would be possible to fix p_N in (3.2.2) and iterate that equation for T_{N+1} until it converges each time, but this seems slightly less efficient. Note that if T_N converges, the coefficients of the linear equation (3.2.1) also converge, so p_N converges also. The integrals needed in the iteration can be computed using any sufficiently accurate quadrature formula. They are easily computed using Riemann sums. It will be shown later that if y can be expressed as the sum of a polynomial and a function of period T, then this algorithm is stable with respect to small perturbations to T or p. This procedure may be extended in an obvious way to a system of n functions (a sketch of the scalar procedure is given below). That is, if y is a vector of functions, each of which is the sum of a slowly varying function and a function of period T (note that the period must be the same for all functions in the vector y), then we can find a vector of polynomials p_N which approximate the slowly varying functions, and an approximation T_N to the period T.
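A minimal sketch of the scalar procedure follows (Python). The function names are illustrative, the polynomial fit uses a least-squares solve in place of forming (3.2.1) explicitly, and the Newton correction of (3.2.2) is approximated by finite differences rather than the analytic derivatives; as the text requires, it should converge only when the initial guess is good.

```python
import numpy as np

def refine_period(y, t, T0, degree=3, iters=10):
    """Alternating minimization for the period of the sampled function y
    on the uniform grid t, starting from the initial estimate T0."""
    dt = t[1] - t[0]

    def objective(T):
        n = int(round(T / dt))          # integrate over roughly [0, T_N]
        tt = t[:n]
        ybar = np.interp(tt + T, t, y)  # y(t + T) by linear interpolation
        # (3.1.1): fit p in span{t, ..., t^degree} (no constant term) so
        # that y - p is as nearly T-periodic as possible on [0, T].
        A = np.stack([tt**k - (tt + T)**k for k in range(1, degree + 1)],
                     axis=1)
        b = y[:n] - ybar
        a, *_ = np.linalg.lstsq(A, b, rcond=None)
        d = b - A @ a                   # (y - p)(t) - (y - p)(t + T)
        return np.sum(d**2) * dt        # Riemann sum for the integral

    T = T0
    for _ in range(iters):
        # (3.1.2): one Newton step on d(objective)/dT = 0, with the
        # derivatives replaced by central differences.
        h = 1e-4 * T
        f0, fp, fm = objective(T), objective(T + h), objective(T - h)
        d1 = (fp - fm) / (2*h)
        d2 = (fp - 2*f0 + fm) / h**2
        if d2 > 0:
            T = T - d1 / d2
    return T

# Example: slow drift plus a unit-period oscillation, initial guess 1.07.
t = np.linspace(0.0, 8.0, 8001)
y = 0.05*t + np.sin(2*np.pi*t)
print(refine_period(y, t, 1.07))        # converges near the true period 1.0
```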
For the vector case, the first minimization would be

(3.2.3) min_{p_{N+1} ∈ P} || ∫_0^{T_N} (y - p_{N+1} - ȳ + p̄_{N+1})² dt ||_q,

where P is now the space of vectors of functions whose components are in P. This will be minimized if each component of the vector is minimized. So for each component of y, the set of linear equations (3.2.1) will be solved. For the period we want to find

min_{T_{N+1} ≥ ε > 0} || ∫_0^{T_N} (y - p_{N+1} - ȳ + p̄_{N+1})² dt ||_q = min_{T_{N+1} ≥ ε > 0} Σ_{i=1}^n ( ∫_0^{T_N} (y^{(i)} - p^{(i)}_{N+1} - ȳ^{(i)} + p̄^{(i)}_{N+1})² dt )^q,

where the superscript (i) denotes the i-th component of a vector. If we again take partial derivatives and set them equal to zero, the result will be a nonlinear equation in T_{N+1}, which can be solved by Newton's method. For the case q = 1 (this seems to be the simplest), the iteration becomes the componentwise sum of the scalar iteration:

(3.2.4) T_{N+1} = T_N - [Σ_{i=1}^n ∫_0^{T_N} (p̄'^{(i)}_{N+1} - ȳ'^{(i)})(y^{(i)} - p^{(i)}_{N+1} - ȳ^{(i)} + p̄^{(i)}_{N+1}) dt] / [Σ_{i=1}^n ( ∫_0^{T_N} (p̄''^{(i)}_{N+1} - ȳ''^{(i)})(y^{(i)} - p^{(i)}_{N+1} - ȳ^{(i)} + p̄^{(i)}_{N+1}) dt + ∫_0^{T_N} (p̄'^{(i)}_{N+1} - ȳ'^{(i)})² dt )].

Then (3.2.3) and (3.2.4) can be done in succession until the period converges. The procedure we described above determines p except for the constant term. We can determine the constant term by solving

min_a ∫_0^{T_N} (y - p - a)² dt.

4. THEORY

In this chapter some results of a more theoretical nature will be stated and proven. First we will prove a theorem about the stability of the algorithm of Chapter 3 for finding the period, and then some properties of smooth solutions to equation (2.2.1) will be explored.

4.1. Convergence of Period-Finding Algorithm

THEOREM 4.1. If y = p + ω, where p ∈ P and ω is periodic with period T, the iteration defined by equations (3.2.1) and (3.2.2) is stable if we start within a sufficiently small neighborhood of p and T.

PROOF. Let

(4.1.1) T_N = T + δ_N, p_N = p + δp_N, and p_{N+1} = p_N + c^T_{N+1} ℓ,

where ℓ is the set of basis functions for P, and c_{N+1} is a set of coefficients (that is, c^T_{N+1} ℓ is in P). First we solve a minimization problem for p_{N+1}:

min_{c_{N+1}} ∫_0^{T_N} (y - p_{N+1} - (ȳ - p̄_{N+1}))² dt.

Differentiating with respect to c_{N+1} to find the minimum, we have

∫_0^{T_N} (y - p_{N+1} - (ȳ - p̄_{N+1}))(ℓ - ℓ̄) dt = 0.

Introducing the perturbations δ_N to T and δp_N to p, we have

(4.1.2) δ_N [y - p_N - c^T_{N+1} ℓ - (ȳ - p̄_N - c^T_{N+1} ℓ̄)](ℓ - ℓ̄)|_{t=T_N} - δ_N ∫_0^{T_N} {(ȳ' - p̄'_N - c^T_{N+1} ℓ̄')(ℓ - ℓ̄) + [y - p_N - c^T_{N+1} ℓ - (ȳ - p̄_N - c^T_{N+1} ℓ̄)] ℓ̄'} dt - ∫_0^{T_N} [δp_N + c^T_{N+1} ℓ - δp̄_N - c^T_{N+1} ℓ̄](ℓ - ℓ̄) dt = 0.
Define g_ as the row vector which has one as its first element, and the first k - 1 elements of £ as its last k - 1 elements. Now if i(t) = z(t + T), then z"(t) - z(t) =|z-£Z= (I - £ ) Z . Now from the assumption above, i(t) - z(t) can be expressed in terms of the 1 T "1 T elements of £ , so for some matrix B we have z(t) - z(t) - £ Bz, so T I T " £ T = £ _1 B as operators. (Note that z(t) is arbitrary.) We will assume that B is nonsingular. This is easy to show if the space spanned by {g Q ,..., g k > (the elements of £ plus a constant function) is the same as the space spanned by {l, t,..., t }. See Lemma 4.1 at the end of this theorem on page 28 for a proof of this. Note that y - p - (y - p) - W - -U> - at the steady state periodic solution. At the steady state, equation (4.1.2) gives -*»/«' (/L - i T L)dt - / (6p N+1 - op N+1 )(iL T L - I T L)dt = . Represent 6p as £ T 6p where 6p_ is a k-vector. Then we have T T / (V(_g T - £ T )Ldt - / - 6 N / a)»6P N+1 <& ~ i^ Ldt = ° T -T T -T _ X T or - 5 W / 0)' £ BLdt - / £ B 6p K - BLdt = . N i T Now £~ B 6 £ is a scalar, so we can transpose it and pull constants out of the integral to obtain 25 T T - 6 N / U' jf 1 BLdt - 6^ + R T [/ ^V^dt]: 0)' £^ BLdt - 6^ +1 B x | -N+1 B U &■ &. dt J BL ■ T T -1 -1 Let N = / £ j> dt. Then we have (4 - X - 4) « H t/ «' if^dtjBL + « £ J +1 B T NBL - . Equation (4.1.3) gives T T After some rearranging this becomes T (4-1.5) 6 N+1 / ( U ') 2 dt + 6 £ J B T / «. i-lflt . o . Now set % +1 - 6j£ + 6c T L T . def C4.1.6) &J = ^T^^T.T From (4.1.4) we obtain T (4-1.7) fir/ W ' e" 1 HMW. * * „V « N [/ W fi dt]BL + 6 £l jB T NBL + 6c T M = , where M = L B NBL. Assume M is nonsingular. (For the algorithm as described in Chapter 3 this is certainly true.) Then solving (4.1.7) for 6c T we have (4 - 1 ' 8) 6 ^ = -l^l a.' g'^dtjBLM- 1 + ^NBLM" 1 ] . Equations (4.1.5) and (4.1.6) give T T T T 'S, ~)dt - . (4.1.9) 6^ / K) 2 dt + S£ T B T J (ulg -i )dt + ^ TiV T ^^.^ Substituting (4.1.8) into (4.1.9) and (4.1.6) we obtain 26 T T 6 , N+l 6 N _ 6 ^N+1_ >j I. T T T ([/ o)'^" 1 dt]BLM _1 L T B T [/ <*>'.£" dt]) (4.1.10) / (o,') 2 dt II. t T [-B /<*)\g~ dt+B NBLM L B /w'g dt] _ o III. -1 T -1 T - [/ w'£ dt]BLM L IV. T -IT I - B NBLM L / (w'Tdt Call this matrix E. Note that we have kept the limits on all the integrals fixed at [0, T]. This is all right when t is not changing from iteration to iteration (which is the case when it is coupled to the methods of Chapter 2 to find the period) . In case one might want to use this in a mode where t changes by an increment of T after each iteration, it is necessary to shift the origin of 6p to the beginning of the new interval. This gives N+l N+l ^ ^6 ~ N 10 T A E, where A = "l 2T 3T 2 41^ 1 3T \ 1 X Thus far equation (4.1.10) applies to cases which are more general than the algorithm explained in Chapter 3. We will now conclude the proof of the theorem as stated by considering a special case which includes our algorithm. Suppose now that L is square and nonsingular (this is true if you recompute the polynomial every time) . Choose the basis for & and £ _1 to be the ortho-normalized Legendre polynomials over [0, T]. Then N - I. (Recall tht M = L T B T NBL) . We would like to show that the eigenvalues of E have modulus less than one, as then the algorithm would be stable. 27 From Lemma 4.1 (page 28) we know that B is nonsingular. Then we have m" 1 = L~W L T , and block IV of E is I - bWm" 1 ! 1 = I - bWVV'V'V =I-i,o. 
Then the nonzero eigenvalues of E are the eigenvalues of block I, which is a scalar. I = [/ 03' £ X dt BLL"VV L T L T B T / co' £ _1 dt]// (w») 2 dt o T T T T (4.1.11) i = [/ u i ^ dt / u» i \it]/[/ (o)') 2 dt] o -1 T Now since £ = [ g() (t),..., g^Ct)], then T T T T / o)' & dt = [/ a)' g (t)dt,..., / a)' gl .dt] , o o ° o k_1 so that we have T _ ± T T k-1 T / 0)' £ dt / 0)' £ dt = I (/ a)' g.(t)dt) 2 . j-o i Express 0)» in a series of Legendre polynomials OO m w? ~ z a g (t), where a. = / aj f g (t)dt . i-0 x i Now if we assume 0)' e L 2 [0, T] (i.e., / (o)') 2 dt < -) , then since {g } is n a complete orthonormal system in L 2 [0, T] , we have 28 w' = E a.g.(t), and i=0 x x oo oo X loo' I I S = * a 2 = E |/ oa' g.(t)dt| 2 2 i=0 x i=0 So \\u'\\l > I |/ co' g.(t)dt| 2 . Since |kl| 2 = / (w') 2 'dt, we have 2 _:^r> n x k-1 T o . . . , ? T . i=0 from (4.1.11) that < I < 1. Now, really I < 1 because if not, we would have 1=1, which would mean that 00 t k-1 T 2 £ |/ OJ' g.(t)dtT = E |/ «' 8 i (t)dt| . i=0 X i=0 This would in turn imply that T \f a) 1 g.(t)dt| =0 Vi > k , x k-1 that is, that a. = Vi > k. This says that w' = I a ± g ± (t) , so that o>' 1 i=0 is a polynomial. But 00 is periodic so 00' is periodic, but polynomials cannot be periodic (excluding constants, and assuming T > 0) , so this is a contradiction. So I < 1, so that all of the eigenvalues of E are < 1. This proves the theorem | | . Lemma 4.1 . Under the conditions stated in Theorem 4.1, B is nonsingular. Proof. Define a (nonsingular) matrix C mapping one basis into the other. 29 £ (t) = 8 (t)" MO = C J = C£(t) def So £ (0 >(t + T) - ca( t + T), and j<°> ( t ) . C a (t), and this implies that £ (0) (t + T) - £ (0) (t ) = C (a(t + T) - _a( t )) . Now if we can show that the space spanned by the elements of -T T (£ (t) - £ (t)) has dimension k, then B will be nonsingular as -T T T -1 t" 1 & &. ~ R B > and £ has dimension k. In the basis {l, t,..., t k }, we have that £L(t + T) = 1 T 1 a(t) 2T 1 3 2 T 3T 3T 1 Call this matrix D. So a(t + T) - £ ( t ) = (D _ I)£(t) . Partition D _ z as lndlcated below. D - I = T ! o T 2T ■ T 3 3T 2 3T • o_ The submatrix above has rank k (the product of diagonal elements is not (if T is not 0)), so it is nonsingular, so D - I has rank k. We have 30 £ (0) (t + T) _ £ (0) (t) = C(q(t + T) -jg(t)) - C(D - I) £ (t) . C has rank k + 1, D - I has rank k, and £ has dimension k + 1. From a well-known theorem of linear algebra, the rank of the product C(D - I) satisfies p(C) + p(D - I) - (k + 1) < P(C(D - I)) < min(p(C), p(D - I)) . Plugging everything in we have (k + 1) + k - (k + 1) < P(C(D - I)) < k, so p(C(D - I)) = k . So the dimension of _g_ (0) (t + T) - _g_ ( (t) is k. But £ (0) (t + T) - £ (0) (t) = [-j—^-—^], S ° ^ dimensl ° n ° f ^(t + T) - j»(t) must be k, and it follows that B is nonsingular. | | 4.2. Smooth Solutions of Oscillating Systems What we are trying to accomplish with the method of Chapter 2 is to find a "smooth" (slowly varying) function z(t) which approximates the long-term behavior of the solution to the original equation in the sense that they are close at integral multiples of the period, starting from the same initial condition. Theorem 4.2 will be concerned with defining z as the limit of the solutions of a sequence of initial value problems . More specifically, suppose we are given a function g(z) as defined in (2.2.1). Assume that g has bounded derivatives and that T is fixed (and small). Recall that z(t) was defined for all t > in terms of z(t) on [0, T] by (4.2.1.) 
(4.2.1) z(t + T) = z(t) + Tg(z).

For non-autonomous problems, t will be considered to be a component of the vector z, so that we can write g = g(z). Then we would like to define z on [0, T] so that it is as smooth as possible. To do that, let D be the differential operator. For smooth functions,

(4.2.2) z(t + T) = z(t) + Σ_{j=1}^∞ ((TD)^j / j!) z(t).

From (4.2.1) and (4.2.2) we have

Tz'(t) = Tg(z) - Σ_{j=2}^∞ ((TD)^j / j!) z(t),

(4.2.3) z' = g(z) - Σ_{j=2}^∞ (T^{j-1} D^j / j!) z.

THEOREM 4.2. If g(z) has continuous bounded derivatives and L < ∞, then the functions z_ℓ defined by (4.2.6) below satisfy, for 0 ≤ t ≤ L,

(i) ||z^{(p)}_ℓ|| is bounded and z^{(p)}_ℓ is continuous, 2 ≤ ℓ < ∞, 0 ≤ p < ∞;
(ii) ||z^{(p)}_ℓ - z^{(p)}_{ℓ-1}|| ≤ β_p T^{ℓ-2}, 3 ≤ ℓ < ∞, 0 ≤ p < ∞;
(iii) E^m_ℓ = ||z_ℓ(mT) - z(mT)|| ≤ C_ℓ T^{ℓ-1} for 0 ≤ mT ≤ L.

PROOF. Define the sequence {z_ℓ} by truncating (4.2.3) and lagging the higher derivatives:

(4.2.6) z'_ℓ = g(z_ℓ) - Σ_{j=2}^{ℓ-1} (T^{j-1}/j!) z^{(j)}_{ℓ-1}, z_ℓ(0) = y(0),

with z'_2 = g(z_2), z_2(0) = y(0). Part (i) follows by differentiating (4.2.6) repeatedly and using the boundedness of the derivatives of g.

(ii) Setting ω_ℓ = z_ℓ - z_{ℓ-1}, we have ω_ℓ(0) = 0 and

(4.2.7) ω'_ℓ = (dg/dz) ω_ℓ - Σ_{j=2}^{ℓ-2} (T^{j-1}/j!) ω^{(j)}_{ℓ-1} - (T^{ℓ-2}/(ℓ-1)!) z^{(ℓ-1)}_{ℓ-1}.

Since z^{(ℓ-1)}_{ℓ-1} is bounded, and ω^{(j)}_{ℓ-1} is bounded by C_j T^{ℓ-3} (part (ii) for ℓ - 1), equation (4.2.7) bounds ω_ℓ by kT^{ℓ-2}. Equation (4.2.6) can be differentiated to show that ω^{(p)}_ℓ is continuous and bounded by O(T^{ℓ-2}).

(iii) E^m_ℓ = ||z_ℓ(mT) - z(mT)||
= ||z_ℓ(mT) - z_ℓ((m-1)T) - Tg(z_ℓ((m-1)T)) + z_ℓ((m-1)T) - z((m-1)T) + z((m-1)T) - z(mT) + Tg(z((m-1)T)) + Tg(z_ℓ((m-1)T)) - Tg(z((m-1)T))||
≤ ||z_ℓ(mT) - z_ℓ((m-1)T) - Tg(z_ℓ((m-1)T))|| + (1 + kT) E^{m-1}_ℓ
= Q_ℓ + (1 + kT) E^{m-1}_ℓ, E^0_ℓ = 0.

If we show that |Q_ℓ| ≤ K T^ℓ, the result follows (by the argument of Lemma 4.2 below), namely that E^m_ℓ ≤ C_ℓ T^{ℓ-1} for mT ≤ L. We have that

(4.2.8) Q_ℓ = ||∫_0^T z'_ℓ[(m-1)T + s] ds - Tg(z_ℓ[(m-1)T])|| = ||Σ_{j=1}^{ℓ-1} (T^j/j!) z^{(j)}_ℓ - Tz'_ℓ - Σ_{j=2}^{ℓ-1} (T^j/j!) z^{(j)}_{ℓ-1} + O(T^ℓ)||,

by expanding z_ℓ in a Taylor series with remainder and using the definition of z'_ℓ. Everything is evaluated at t = (m-1)T unless otherwise indicated. Since ||z^{(j)}_ℓ - z^{(j)}_{ℓ-1}|| ≤ C_j T^{ℓ-2}, we can replace z^{(j)}_{ℓ-1} in equation (4.2.8) with z^{(j)}_ℓ, introducing an O(T^ℓ) error; the remaining Taylor terms then cancel exactly against the definition of z'_ℓ. Therefore,

Q_ℓ = ||∫_0^T (s^{ℓ-1}/(ℓ-1)!) z^{(ℓ)}_ℓ ds|| + O(T^ℓ) = O(T^ℓ),

since ||z^{(ℓ)}_ℓ|| is bounded. It follows that E^m_ℓ ≤ C_ℓ T^{ℓ-1} for mT ≤ L. ||

Lemma 4.2. Under the conditions stated above, if E^0 = 0 and E^m ≤ (1 + kT)E^{m-1} + K_1 T², then E^m ≤ K_1 T(mT) e^{mkT}.

Proof. By induction on m. This is obviously true for m = 0. Assume it is true for m - 1. For m, we have

E^m ≤ (1 + kT)E^{m-1} + K_1 T² ≤ (1 + kT) T² K_1 (m-1) e^{(m-1)kT} + K_1 T²,

by the induction hypothesis. Now 1 + kT ≤ e^{kT}, so that

E^m ≤ T² K_1 (m-1) e^{mkT} + K_1 T² ≤ K_1 T(mT) e^{mkT}. ||

(The same argument, with the perturbation K T^ℓ in place of K_1 T², gives E^m ≤ K(mT) T^{ℓ-1} e^{mkT}, which is the bound needed in part (iii).)

5. ERROR PROPAGATION

Since the large stepsize H is greater than T, the error cannot be made arbitrarily small. Setting H equal to T corresponds to the small-step method by itself, and hence is not of interest here. In order to allow H to become arbitrarily small, and still be bigger than T, it is sometimes useful to visualize the method operating on a sequence {P_q} of problems where the period T_q becomes arbitrarily small as q → ∞ without changing the smooth solution much. In this way, we can let H → 0 and at the same time have H ≥ T_q for q large enough. Errors arise in the computation from several sources. These are, roughly:

(i) Error due to small-step integration (over intervals of one period),
(ii) Error due to large-step integration (to find the smooth solution, with stepsize H),
(iii) Roundoff.
In particular, we can let the stepsize be chosen arbitrarily, or we can require H to satisfy H = NT where N is an integer (this will be called the synchronized mode). While the former alternative has some interesting features, the synchronized mode seems to lend itself to an easier theoretical development with fewer, and weaker, assumptions. We will extend the concepts of order, stability, and convergence from numerical methods for ordinary differential equations to the generalized formulas of Chapter 2 and use these concepts to investigate the generalized Adams formulas. We will then look briefly into the total error introduced by this algorithm in section 5.3. The effect of changing the choice of T by a small amount will be studied in section 5.4. 5a - Error Due to Small-Step Integ ration In this section we will explore the effect on the computed solution of smail changes in the function y over one period. In other words, because we must compute y over one period of the oscillation with a routine which is not exact, what is the effect of this on the rest of the computation? 38 In order to talk about this, some notation will first be established. Recall the function z defined by z(t + T) = z(t) + Tg(z, t) , (5.1.1) g(z, t) = y ty(t + T, t) - y(t, t)] , with 4- y(t + s, t) = f(y(t + s, t), t + s) ds y(t, t) = z(t) Because we can only find y approximately, what we are really computing instead is a function y which satisfies (5.1.2) d£(t + s, t) = f(y(t + s, t), t + s) , ds y(t, t) = z(t) where f = f + r(y(t + s, t) , t + s) , and r is hopefully small. In other words, the numerical solution by the underlying method over one period is the exact solution to a problem which is slightly perturbed from the original problem. The numerical solution defines new functions z and g by (5.1.3) g(z, t) = i [y(t + T, t) - y(t, t)] , y(t, t) = z(t) , and (5.1.4) z(t + T) - 2(t) + Tg(z, t) , where T is the period of the numerical solution. We will suppose that the function T(t) is known in advance (otherwise, the analysis is much too complicated). The numerical solution y(t, 0) is assumed to be computed over each interval of length T in succession, starting again at the beginning of each cycle with 39 initial values which are the values of $ at the end of the last cycle. With these assumptions, we have for k a positive integer, (5.1.5) g (k T) = £ (k $ s 0) t Finally, for the generalized methods in nonsynchronized mode we must suppose that y(t + s, t) can be defined, with M continuous derivatives, in between small steps. Then we can define z by (5.1.6) g £ (0) = 8(0) % ■ g(z r t) - I —Jci , 2 < £ < M . j= 2 J- Note that if (5.1.5) holds, the difference between z(kT) and y(kT) is just the global error of the small-step method on [0, kT]. The objective of the next two sections will be to compare y(kT) to z C (kT) by way of the inequality ||y(kT) - z c (kT)|| < ||y(kT) - z-(kT)|| ,, +||2(kT) - z c (kT)|| . For the formulas in nonsynchronized mode it is necessary in addition to compare ?(kT, 0) to z £ (kT), and then to compare z Q (kT) to z C (kT). 5 - 2 * Error Due to Large-Step Integration The errors made by the generalized methods of Chapter 2 will be considered here. Most of the discussion will be centered around the synchronized case, where H is constrained so that the method skips over integral numbers of cycles. This is the computational approach taken in [8, 15, 20 and 22]. Later we will examine how the results can be generalized to the nonsynchronized case. 
It is assumed here that the small-step integration is exact. That is, only the errors introduced in solving the difference equation (2.2.1) are examined in this section. Of course, it is always possible to view the difference equation as being defined by the results of the small-step integration, as in the previous section. To begin, let H = NT (N an integer), and suppose for now that T is a constant. Then from z(t + T) = z(t) + Tg(z, t), we have

g(z, t) = (1/T) Δz(t) = (1/T)(z(t + T) - z(t)).

In addition, it is easy to see that

(5.2.1) z(t + NT) = Σ_{i=0}^N C(N, i) Δ^i z(t),

where C(N, i) denotes the binomial coefficient. With H = NT, we need only define g at multiples of the period, t_{k/N} = kT = (k/N)H. Thus, a better notation for g is

(5.2.2) g_{k/N}(z) = g(z, t_{k/N}).

The solution y(t) of the original differential equation and the choice of T define a set of functions {g_{k/N}(z)}, where

(5.2.3) T g_{k/N}(z) = y(t_{k/N} + T, t_{k/N}) - z(t_{k/N}).

We will assume here that the functions g_{k/N}(z) satisfy

(5.2.4) ||∂^j (Δ^i g_{k/N}(z)) / ∂z^j|| ≤ C_{ij}(CT)^i, for j = 0, i = 0, 1,..., N, and for j = 1, i = 0,

and |C_{ij}| ≤ K, where Δg_{k/N}(z) = g_{(k+1)/N}(z) - g_{k/N}(z), and the higher order differences are defined similarly. With these assumptions we can compute bounds on the error without having to bound derivatives of g and z with respect to t. If g(z, t) is nearly periodic with period T, then {g_{k/N}(z)} will be a slowly changing sequence of functions.

To gain some insight into how the propagation of error for the generalized methods might be analyzed, we will first take a closer look at the simplest method, the generalized forward Euler method. Suppose then that z_n is the solution computed by the generalized forward Euler method. Then we have, for t_n = nH,

(5.2.5) z_{n+1} = z_n + Hg(z_n, t_n).

Letting e_n = z(t_n) - z_n and subtracting (5.2.5) from (5.2.1), we have

(5.2.6) e_{n+1} = e_n + NT[g(z(t_n), t_n) - g(z_n, t_n)] + C(N, 2) T Δg(z(t_n), t_n) + ... + C(N, N) T Δ^{N-1} g(z(t_n), t_n).

Then, using (5.2.4),

(5.2.7) |e_{n+1}| ≤ |e_n|(1 + NTC_{01}) + D_n, e_0 = 0,

where

D_n = Σ_{i=2}^N C(N, i)(CT)^{i-1} C_{i-1,0} T.

Note that e_n is identically zero if N = 1, as we would expect. In fact, this is true for all of the generalized Adams formulas, except the generalized backward Euler method. As indicated earlier, we will discuss convergence by means of a sequence {P_q} of problems, where the solution to the q-th problem is nearly periodic with period T_q, and T_q → 0 as q → ∞. This defines functions {g_{k/N,q}(z_q)} for the q-th problem, just as in the case of a single problem. We consider only sequences where {g_{k/N,q}} and {z_q} satisfy

(5.2.8) ||∂^j (Δ^i g_{k/N,q}(z_q)) / ∂z^j|| ≤ C_{ij}(CT_q)^i, for j = 0, i = 0,..., N, and for j = 1, i = 0,

and |C_{ij}| ≤ K, where

(5.2.9) H = N_q T_q,

and C_{ij} is independent of the problem q. One might visualize the sequence of problems as having the same 'envelope' (actually, it is a little less restrictive than this), where the oscillations 'inside' become more and more dense as q becomes larger, as in the following pictures.

[Figure 5.1. q Small]  [Figure 5.2. q Large]

If we were solving an equation, in the conventional sense, which had for its solution the 'envelope', we would choose the stepsize H to follow the solution. In discussing convergence, we would let H → 0 and see whether the computed solution tends to the true solution. Here, for one problem, H ≥ T_q, so we consider solving all of the problems satisfying (5.2.8) for which H ≥ T_q, and for q large, we look at how close the computed solution is to the true quasi-envelope.
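The first-order behavior implied by the recurrence (5.2.7) can be checked empirically. The sketch below (Python; the problem and parameters are illustrative) runs the generalized forward Euler method on a linear test problem, for which g can be written down exactly, halving the synchronized outer step H = NT and watching the error at a fixed time halve as well.

```python
import numpy as np

omega, eps = 400.0, 0.01
T = 2*np.pi / omega
exact = lambda t: np.exp((1j*omega - eps)*t)    # true solution, y(0) = 1

def outer_euler_error(N):
    """Generalized forward Euler with synchronized step H = N*T.
    For this linear problem, g(z, t) = z*(exp((i*omega - eps)*T) - 1)/T
    exactly, so only the outer (large-step) error is being measured."""
    H = N * T
    gfac = (np.exp((1j*omega - eps)*T) - 1.0) / T
    t, z = 0.0, 1.0 + 0.0j
    while t + H <= 8.0:
        z = z + H * gfac * z
        t += H
    return abs(z - exact(t))

e1, e2 = outer_euler_error(80), outer_euler_error(40)
print(e1, e2, e1/e2)        # ratio near 2: the error is O(H)
```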
Condition (5.2.8) is analogous to bounding derivatives for ordinary differential equations. For j = 0, (5.2.8) requires for each problem that the secant lines of length T_q at multiples of T_q change slowly along the solution and along integral curves near the solution. For j = 1, it requires for each problem that changes in g_{k/N,q}(z_q) and its differences are not rapid with respect to changes in z_q. Thus, the solution over one period must depend slowly on the value of z at the beginning of the period. This condition has the effect of constraining the multiplier of the error at each (large) step to be small. It is not satisfied for stiff oscillating systems, which will be discussed in Chapter 7. If integral curves near the solution are nearly periodic (with period T_q), then these conditions should be satisfied. It seems to be a difficult problem to determine whether a given sequence of problems satisfies (5.2.8) from the original differential equation alone. With some qualitative information about the behavior of the solution and curves near the solution we can at least decide whether it is plausible that the conditions might be satisfied. An example of a sequence of problems which satisfies (5.2.8) is given below:

y'_q = iλ_q y_q + f_q(t), y_q(t_0) = y_0, with f_q(t) = e^{iλ_q t}.

This has solution y_q(t) = e^{iλ_q t}(t + c_q), where c_q = e^{-iλ_q t_0} y_0 - t_0. After some algebra we find that, with T_q = 2π/λ_q,

g_{k/N,q}(z_q(t_{k/N})) = e^{iλ_q t_{k/N}} = 1 for all k,

so that all differences Δ^i g_{k/N,q} vanish for i ≥ 1. This obviously satisfies (5.2.8) and can be easily solved with the generalized methods (in synchronized mode). Note that it cannot be solved (with H > T_q) in nonsynchronized mode, because the function g(z, t) = e^{iλ_q t} oscillates with frequency λ_q, and thus must be followed with small steps.

Now we can proceed with the discussion of the generalized forward Euler method. Recall that (5.2.7) bounded |e_{n+1}| in terms of |e_n| and D_n. We can bound the solutions of that recurrence by the usual methods (see [6], p. 12, for example) to obtain, for the q-th problem,

(5.2.10) |e_{n,q}| ≤ D_q (e^{nHC_{01}} - 1)/(HC_{01}),

for 0 ≤ nH ≤ L. (Note the notation. The letter q will always refer to the q-th problem. The letter n will always refer to the time t_n = nH. If the method is applied to a single problem, the subscript q will not appear.) Now,

D_q = Σ_{i=2}^{N_q} C(N_q, i)(CT_q)^{i-1} C_{i-1,0} T_q ≤ (K/C)(e^{CH} - 1 - CH),

so that

(5.2.11) |e_{n,q}| ≤ K(e^{CH} - 1 - CH)(e^{nHC_{01}} - 1)/(C H C_{01}).

Since we can take H as small as we want, with H ≥ T_q, by taking q large enough, (5.2.11) implies that |e_{n,q}| is O(H), in the sense that for all q such that H ≥ T_q, |e_{n,q}| ≤ KH, 0 ≤ nH ≤ L. The constant K does not depend on H or q. In general, for any method, the statement

"|e_{n,q}| is O(H^p)"

will mean that for any sequence of problems satisfying (5.2.8), there are positive constants r_0 and H_0 such that for each H ∈ (0, H_0], if T_q/H ≤ r_0 and 0 ≤ nH ≤ L, then |e_{n,q}| ≤ KH^p. The constant K is independent of H and q, and r_0 depends only on the method.

The generalized forward Euler method is a one-step method. That is, it uses values of z and g from one prior time step. An ℓ-step method uses values of z and g from ℓ previous time steps. A generalized ℓ-step method will have the form

(5.2.12) Σ_{i=0}^ℓ [α_i(T/H) z_{n+i} + H β_i(T/H) g_{n+i}] = 0,

where H = NT, z_n is the solution computed at time t_n, and t_n = nH. e^{(H)}_{n,q} is the (global) error of the method, applied to the q-th problem at time t_n with stepsize H. That is, it is the difference
Analogously to ordinary differential equations, we have the following definition: Definition 5.1 . A generalized £-step method is convergent if there is a constant r > such that for any sequence of problems satisfying (5.2.8), we have lim sup e - U , H+0 T n ' q {qh?±rn> H - 0' uniformly for j< nH £ L. Thus, a method is convergent if |e | is 0(H ) with p > 0. n , q To investigate the convergence of generalized £-step methods we will first examine several other properties of the methods, which are analogous to similar properties of methods for ordinary differential equations. A condition that is satisfied by convergent linear multistep methods for ordinary differential equations is stability . A method is stable if for any differential equation satisfying a Lipschitz condition on an interval [0, L] , a small perturbation in the initial values causes a bounded change in the numerical solution as H -> 0, with nH = L, where n is the number of steps taken. One way to guarantee this is to require the polynomial I p(5) = Z a,? 1 1=0 to satisfy the root co ndition . That is, the roots of p(?) = are required to be inside the unit circle, or on the unit circle and simple. (For motivation of this see Gear [6] or Henrici [10].) 47 Analogously, for the generalized linear £-step method (5.2.12), we will have the following definition. De finition 5.2 . A method of the form C5 ? 1?\ *o :* i-ue iorm u.^.iz; is uniformly stable if the polynomials P r (5> = I a.(r)? 1 , r =± 1=0 N satisfy the root condition for all r In some Interval [0, ,„), and o 1 (r) and B.(r) are continuous functions of r on [0, r J, (r > 0) . Another concept from ordinary differential equations that will be useful here is that of order. The order of a formula is a measure of the (local) accuracy of the formula. That is, it is a measure of the amount hy which (5.2.12) fails to he satisfied if z^ and ^ are ^ D efinition 5.3 . A method (5.2.12) is of order j, if Jo [a i ( N> z ^„ +1 >] = o whenever z is a polynomial of degree p, for all r (r . I) such that £ r < r Q . (As usual, Az is the forward difference of z over an interval of length T). A method of order p, with ,>,„,,, h „ „ . v, «iui p *_ i, will be called consistent . Then we have the following theorem, which in its weakest form states that a uniformly stable and consistent generalized £-step method is convergent. THEOREM 5.1. For a sequence of problems satisfying (5.2.8), the error e^ of a uniformly stable generalized d-step method of order p, with H sufficiently small, is 0(H P ). PROOF. First, we will find a bound for D n . the local error n , q in solving the q-th problem with a stepsize H. Let H = T N . Then 48 (5.2.13) D n>q - Z [a.(f )z q (t n+i ) + N q $.(^)A Zq (t n+i )] I Z i=0 I iN iN (5.2.14) = I V [c ti (f)A^ q (t n ) + N q 3 i (f)A j+1 Zq (t n )]( . q ) i= o j=o * "q q q ^ [Q j,q ]AJz q (t n } ' £ IN 1 lN where Q. - I [a.^ )( *> + Ng.^M J>1 . J »q T=n a J q &N +1 q z J-o £ IN i iN Z [a.(f )( j q )+Vi ( ^ i=o 1 w q J q By the definition of order p, Q. = for j = 0, 1,..., p. Deleting the terms corresponding to Q , j = 0, 1,..., P, we have I iN -,. IN (5.2.16) |D | < I Z { & ^' q (V<]'l n ' q i=o j=p+i q iN 1 n+1 1N + Z q N 3.(^)A J +1 z (t )( j=p q X N q q n J Since Az (t.) = T g(z(t ), t ), we have from (5.2.8) that q k. q q K- K |A\(t n )l i VtCl/' 1 . 
Substituting this into (5.2.16) and using the Triangle Inequality, we obtain (5.2.17) |D | < Z (|a.(f)| | Z q (CT ) J ( q ) n ' q i=0 X % C j-p+1 q 3 1 iN • ±N n + Ve K lBil - a > and i e £ ( F" } i < »• q q we have that le I is 0(H P ).|| n,q' l l It is easily seen that the generalized Adams formulas are uniformly stable, because p^E.) is independent of N, and is thus the N same as p(£) for conventional Adams formulas, so it satisfies the root condition. In addition, the coefficients (3.^)} are polynomials in — , and hence are uniformly bounded for < — < r « - N - 0* It is also easily found that the generalized Adams formula which is of order p when T = is of order p for T / (it is exact for polynomials of degree < p). Thus, we have trivially from Theorem 5.1: Corollary 5.1. The generalized Adams formulas of order p are convergent, and le is 0(H P ). n,q' A slightly more circuitous route may be taken in order to find the error of the generalized methods in nonsynchronized mode (when we are not skipping over an integral number of cycles). Instead of assuming (5.2.8), we need to assume » ll \ to be of order T P . Then the p+1 difference |z C (mT) - z(mT)| is 0(H P ). 5.3. Summary of Error Considerations Combining the results of the last two sections, it follows that the generalized methods are limited in accuracy by the small-step method used to compute the solution over one period. By choosing the period, and stepsize (and order) of the outer method appropriately to follow the smooth solution, computation of the solution can be speeded up, with little sacrifice in accuracy. It seems clear that the optimal choices of stepsize and order would keep the (global) errors of the outer method and the inner method (if it were applied over the entire interval), approximately the same order of magnitude. To see this, notice that ||z C (mT) - y(mT)|| < ||z C (mT) - z (mT) || + || y (mT ,0) - y(mT) || . For mT near L, the first term represents the global error due to the large- step discretization. The second term represents the global error of the small-step method over [0, mT]. There are several ways to approach the problem of bounding the errors of the two methods combined. Considering the generalized methods to be operating on the numerical solution by the small-step method as in section 5.1, then the sequence of numerical solutions by the small-step method is required to satisfy (5.2.8). 53 We could have alternatively dealt with the errors made by the small-step method by lumping them into the terms D . This is n,q straightforward and does not seem to offer as much insight as the approach taken in the previous sections. We have considered two possible modes in which the generalized methods can be used. It is probably safest to always operate the methods in synchronized mode, though it is desirable to understand the limitations of this. Thus, we will take a closer look at how the method operates in both modes. For an autonomous problem with H » T and T(t) varying slowly, the nonsynchronized approach generally works quite well. Unless T(t) is varying very slowly the large steps we use to follow the components of z are solving t(6 + t) = t(£) + x T(t T (£)) , with. absolute error that may be large enough so that the phase information would be lost. The error in t at the end of the interval will not be too severe, however, in the sense that if z(t) is a slowly varying function, z(L + 6) = z(L) + 0(6). 
So a small error in t just translates into an error of the same order in z at the end of the interval, for a well-behaved, autonomous problem.

There are other problems, however, for which constraining H to be a multiple of T may be necessary in order to obtain a solution. If T(t) is changing very slowly, or if we are happy with skipping over just a few cycles at a time, it is possible to retain the phase information. (That is, the error in t will be small relative to T(t).) The feature distinguishing problems that cannot be solved with large steps without constraining H as above is that the function g(z, t), instead of changing slowly in time, may be nearly periodic with period T. Then, although g is changing rapidly, if we sampled g only at multiples of T, the resulting sequence of points would be changing slowly. Such a phenomenon occurs in the problem

    y'' = -λ²y + λ² sin(λt),

and is likely to occur in any problem that has a high frequency periodic forcing function. The effectiveness of using the methods in synchronized mode to determine both the amplitude and phase information of the solution is thus limited by how fast T(t) is varying (as the phase is very sensitive to time), as well as by the smoothness of the quasi-envelope z(t). Analysis of the nonsynchronized case provides some insight into how and when the method works if the phase information is lost.

5.4. Effects of Changing the Choice of Period

In this section we will investigate the effect of changing the choice of the period T by a small amount. One way we might view T is that it is given to us by some oracle. Once T(t) is specified for each time, the functions g(z, t) and z(t) are specified. In that sense, a slight change in the choice of T(t) would not be an error. However, it does affect the computation. It is clear that some choices of T are better than others, because they will define functions z(t) which are smoother and easier to follow with large steps. There may be other properties that we would like z to have which could be obtained by a suitable choice of T. For example, if y is periodic, we would like T to be the period, so that z would be constant.

We would like T(t) to be changing slowly, and thus, if it is being computed by some iterative algorithm like the one in Chapter 3, it would be useful to have some idea of when to stop the algorithm, so that the errors introduced in computing T do not interfere with the computation of z in a noticeable way. Since

    g(z, t) = [y(t + T, t) - y(t, t)] / T,

we have

    ∂g/∂T = -[y(t + T, t) - y(t, t)] / T² + (1/T) ∂y(t + s, t)/∂s at s = T
          = (1/T) [f(y(t + T, t), t + T) - g(z, t)].

For small changes in T, we have approximately that

(5.4.1)    Δg ≈ (ΔT/T) [f(y(t + T, t), t + T) - g(z, t)].

The outer method uses g in the form

(5.4.2)    Σ_{i=0}^{ℓ} α_i z_{n+i} = H Σ_{i=0}^{ℓ} β_i g_{n+i}.

Let B = max_i |β_i|. If the error tolerance for the outer integration is ε, then the errors in HBg should be less than aε, where a < 1. So |HBΔg| < aε, or

(5.4.3)    ΔT ≤ aεT / (HB |f(y(t + T, t), t + T) - g(z, t)|).

In a practical program, since approximations to all of the quantities in (5.4.3) are known, we will control the relative error in T to satisfy (5.4.3). (That is, the iteration to find T will be terminated when (5.4.3) is satisfied.) This will hopefully control z to be smoother, and at least more predictable.
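A minimal sketch of this stopping test follows; the routine and its argument names are illustrative only (they are not the names used in the program of the Appendix), and it simply evaluates the right side of (5.4.3).

      DOUBLE PRECISION FUNCTION DELTAT(A, EPS, T, H, B, FDIFF)
C        EVALUATES THE RIGHT SIDE OF (5.4.3)
C        A      SAFETY FACTOR, A .LT. 1
C        EPS    LOCAL ERROR TOLERANCE OF THE OUTER METHOD
C        T      CURRENT PERIOD ESTIMATE
C        H      OUTER STEPSIZE
C        B      MAXIMUM OVER I OF ABS(BETA(I))
C        FDIFF  NORM OF F(Y(T+PERIOD), T+PERIOD) - G(Z,T)
      DOUBLE PRECISION A, EPS, T, H, B, FDIFF
      DELTAT = A*EPS*T/(H*B*FDIFF)
      RETURN
      END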
It is interesting to observe what happens when T is chosen to be consistently higher or lower than what one would expect. For example, if we had y = sin(λt) and T = 2π/λ - ε, the resulting z would no longer be constant but would be a slow oscillation. As always, z will have the property that the solution y can be recovered from it by integration for one cycle starting at a multiple of T. If y consists of two oscillations with slightly different frequencies, and T is chosen somewhere between the periods of the two oscillations, two slow oscillations in opposite directions will result, and y can be recovered from z as before. As long as the frequencies are close enough together, z will vary slowly enough that we can follow it with large steps.

6. IMPLEMENTATION CONSIDERATIONS

The development of an automatic program implementing the generalized Adams methods will be considered here. Section 6.1 examines a data representation for the generalized Adams formulas which facilitates changing stepsize and order. Implementation details are discussed in section 6.2. Computational results are in section 6.3. In section 6.4, suggestions are made for increasing program efficiency.

6.1. Data Organization for Outer Integration

In this section we will see how to implement the generalized formulas as a predictor-corrector process. A data organization scheme which is similar to Nordsieck's form of Adams methods (see [6]) will be described; it will enable us to change stepsize and order easily.

Using the change of variables described in section 2.3, append t onto the end of z. Then we will solve

(6.1.1)    z(ξ + τ) = z(ξ) + τ g(z(ξ)),

where the first n components of g are as before and the (n+1)-st component is g_{n+1} = T(t(ξ))/τ, so that z_{n+1} = t. This change of variables will be assumed. It is useful to notice that the (n+1)-st component of g is the estimate for the period T(t) (divided by τ) at each step. Thus, computation of T(t) is treated like a function evaluation. The predictor equation is used to form an initial estimate for the period at each step (except the first step, where the user-supplied estimate is used), which is then corrected by the Newton iteration of Chapter 3.

The procedures described here can be applied to the vector z simply by applying them individually to each component of z at every time step, so we will consider only the scalar difference equation

(6.1.2)    z(t + T) = z(t) + T g(z, t).

In this chapter the generalized ℓ-step explicit method will be written as

(6.1.3)    z_n = Σ_{i=1}^{ℓ} (α_i(r) z_{n-i} + H β_i(r) g_{n-i}),

where r = T/H. The generalized ℓ-step implicit method is given by

(6.1.4)    z_n = Σ_{i=1}^{ℓ} (α*_i(r) z_{n-i} + H β*_i(r) g_{n-i}) + H β*_0(r) g(z_n).

In a predictor-corrector method, the explicit formula, called the predictor, is used to form an initial guess for the implicit formula, called the corrector. The corrector formula is then iterated a fixed number of times (say M), or until it converges. Using the formulas in (6.1.3) and (6.1.4) we have

(6.1.5)    z_{n,(0)}   = Σ_{i=1}^{ℓ} (α_i(r) z_{n-i} + H β_i(r) g_{n-i}),
           z_{n,(m+1)} = Σ_{i=1}^{ℓ} (α*_i(r) z_{n-i} + H β*_i(r) g_{n-i}) + H β*_0(r) g(z_{n,(m)}).

Here we will use the generalized Adams-Bashforth formula of order k as the predictor (k = ℓ), and the generalized Adams-Moulton formula of order k as the corrector. These formulas require values of z and g, which can be stored in the vector w_n:

(6.1.6)    w_n = [z_n, Hg_n, Hg_{n-1}, ..., Hg_{n-k+1}]^T.

It is easier to compare different organizations of the computation if (6.1.5) is rewritten in terms of the vector w_n. Then the prediction step is given by

(6.1.7)    w_{n,(0)} = B w_{n-1},

where for the generalized Adams methods

(6.1.8)    B = [ 1  β_1(r)  β_2(r)  ...  β_k(r) ]
               [ 0  γ_1(r)  γ_2(r)  ...  γ_k(r) ]
               [ 0    1       0     ...    0    ]
               [          . . .                 ]
               [ 0    0      ...     1     0    ]

Here β_i(r), i = 1, ..., k, are the coefficients of the generalized Adams-Bashforth method of order k, and the γ_i(r), i = 1, ..., k, are given by

    γ_i(r) = (β_i(r) - β*_i(r)) / β*_0(r),

where β*_i(r), i = 0, 1, ..., k, are the coefficients of the generalized Adams-Moulton formula of order k.
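For example, when k = 2, w_n = [z_n, Hg_n, Hg_{n-1}]^T and

    B = [ 1  β_1(r)  β_2(r) ]
        [ 0  γ_1(r)  γ_2(r) ]
        [ 0    1       0    ]

The first row forms the Adams-Bashforth prediction of z_n, the second row forms the value of Hg_n that makes the corrector pass below reproduce (6.1.5), and the third row merely saves the old value Hg_{n-1}.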
The corrector iteration in (6.1.5) will then be rewritten as

(6.1.9)    w_{n,(m+1)} = w_{n,(m)} + c G(w_{n,(m)}),

for m = 0, 1, ..., M. G(w_n) is defined by

(6.1.10)    G(w_n) = -Hg_n + Hg(z_n).

Here Hg_n is the second component of w_n, and z_n is the first component of w_n. Note that G(w_n) is the amount by which w_n fails to satisfy (6.1.2). Thus, the goal is to make G(w_n) = 0. The vector c is chosen so that (6.1.7) followed by (6.1.9) is equivalent to the iteration defined by (6.1.5). For the generalized Adams methods, we have

(6.1.11)    c = [β*_0(r), 1, 0, ..., 0]^T.

Table 6.1 lists expressions for β*_0(r) for the generalized Adams-Moulton formulas.

It is inconvenient to change the stepsize from H_{n-1} to H_n when using the representation (6.1.6) of the past values of z and g. Changing stepsize by a ratio α_n = H_n/H_{n-1} corresponds to premultiplying w_{n-1} by a matrix C(α_n). C(α_n) computes the values of g at the points (t_n, t_n - H_n, ..., t_n - (k-1)H_n) from the values of g at the points (t_n, t_n - H_{n-1}, ..., t_n - (k-1)H_{n-1}) by interpolation. The step-changing operation is represented by

(6.1.12)    w̃_{n-1} = C(α_n) w_{n-1}.

Table 6.1. Adams-Moulton Coefficients β*_0(r)

    k (Order)    β*_0(r) for Order k
    1            1
    2            (1 - r)/2
    3            (5 - 6r + r²)/12
    4            (9 - 12r + 3r²)/24
    5            (251 - 360r + 110r² - r⁴)/720
    6            (475 - 720r + 250r² - 5r⁴)/1440
    7            (19,087 - 30,240r + 11,508r² - 357r⁴ + 2r⁶)/60,480
    8            (36,799 - 60,480r + 24,696r² - 1029r⁴ + 14r⁶)/120,960
    9            (1,070,017 - 1,814,400r + 784,080r² - 40,614r⁴ + 425r⁶ - 3r⁸)/3,628,800
    10           (2,082,753 - 3,628,800r + 1,643,760r² - 100,926r⁴ + 2250r⁶ - 27r⁸)/7,257,600
    11           (134,211,265 - 239,500,800r + 112,923,360r² - 7,960,480r⁴ + 266,090r⁶ - 4785r⁸ + 10r¹⁰)/479,001,600

The step-changing matrix C(α) for the vector w_n defined by (6.1.6) is nontrivial. To find a more convenient representation for the past values of z and g, first note that the vector w_n uniquely determines a polynomial w_n(t) of degree k which satisfies

(6.1.13)    w_n(t_n) = z_n,
            (H/T)[w_n(t_j + T) - w_n(t_j)] = Hg_j,    j = n, n-1, ..., n-k+1.

Instead of representing w_n(t) in terms of (z_n, Hg_n, ..., Hg_{n-k+1}), we can equivalently store

(6.1.14)    a_n = [w_n(t_n), (H/T) Δw_n(t_n), (H²/2!T) Δw'_n(t_n), ..., (H^k/k!T) Δw_n^{(k-1)}(t_n)]^T,

where Δw_n(t) = w_n(t + T) - w_n(t). This is similar to Nordsieck's form of Adams methods, except that one of the derivatives has been replaced by Δ. Recall the Nordsieck vector,

    [y_n, Hy'_n, (H²/2!) y''_n, ..., (H^k/k!) y_n^{(k)}]^T.

We can write (6.1.7) and (6.1.9) in terms of the representation (6.1.14). To see this, note that a_n and w_n represent the same polynomials. They are related by the linear transformation

(6.1.15)    a_n = S w_n.

For example, when k = 2,

    [ w_n(t_n)           ]   [ 1   0     0  ] [ z_n      ]
    [ (H/T) Δw_n(t_n)    ] = [ 0   1     0  ] [ Hg_n     ]
    [ (H²/2T) Δw'_n(t_n) ]   [ 0  1/2  -1/2 ] [ Hg_{n-1} ]

It is easy to demonstrate that for all k, S will be independent of T/H. It is the same as the matrix which transforms conventional Adams methods to Nordsieck form.

With the past information stored in a_n, the predictor becomes

(6.1.16)    a_{n,(0)} = A a_{n-1},

where A = S B S⁻¹. A can also be determined by using the fact that the predictor is of order k.
That is, if w(t) is any polynomial of degree k, and

    w̄(t_n) = [w(t_n), (H/T) Δw(t_n), (H²/2!T) Δw'(t_n), ..., (H^k/k!T) Δw^{(k-1)}(t_n)]^T,

then

(6.1.17)    w̄(t_n) = A w̄(t_{n-1}).

Thus if k = 2 and A = (a_ij), we have

    w(t_n) = a_11 w(t_{n-1}) + a_12 (H/T)[w(t_{n-1} + T) - w(t_{n-1})]
             + a_13 (H²/2T) (d/dt)[w(t + T) - w(t)] at t = t_{n-1}.

Expanding around t_{n-1}, and equating coefficients of the first k derivatives of w, we find a_11 = 1, a_12 = 1, a_13 = 1 - T/H. Since rows 2 through (k+1) of A extrapolate Δw in terms of its past derivatives, these coefficients do not depend upon T/H. They are the same as the corresponding rows of the Pascal triangle matrix, whose (i,j) element is C(j-1, i-1), for k+1 ≥ j ≥ i ≥ 1. For example, when k = 3,

(6.1.18)    A = [ 1  1  α_1(r)  α_2(r) ]
                [ 0  1    2       3    ]
                [ 0  0    1       3    ]
                [ 0  0    0       1    ]

The coefficients α_i(r) are given in Table 6.2. Note that if T/H = 0, A is the Pascal triangle matrix.

Table 6.2. Perturbed Coefficients for Nordsieck Prediction

    i    α_i(r)
    1    1 - r
    2    1 - (3/2)r + (1/2)r²
    3    1 - 2r + r²
    4    1 - (1/6)(15r - 10r² + r⁴)
    5    1 - 3r + (5/2)r² - (1/2)r⁴
    6    1 - (7/2)r + (7/2)r² - (7/6)r⁴ + (1/6)r⁶
    7    1 - 4r + (14/3)r² - (7/3)r⁴ + (2/3)r⁶
    8    1 - (1/10)(45r - 60r² + 42r⁴ - 20r⁶ + 3r⁸)
    9    1 - 5r + (15/2)r² - 7r⁴ + 5r⁶ - (3/2)r⁸

(Each α_i(r) equals 1 at r = 0, so that A reduces to the Pascal triangle matrix, and equals 0 at r = 1.)

The corrector iteration is rewritten in terms of the vector a_n as

(6.1.19)    a_{n,(m+1)} = a_{n,(m)} + ℓ [Hg(z_{n,(m)}) - Hg_{n,(m)}],

where Hg_{n,(m)} is the second component of a_{n,(m)}, z_{n,(m)} is the first component of a_{n,(m)}, and

(6.1.20)    ℓ = S c.

For example, when k = 2 we have

    ℓ = S c = [ 1   0     0  ] [ β*_0(r) ]   [ β*_0(r) ]
              [ 0   1     0  ] [   1     ] = [   1     ]
              [ 0  1/2  -1/2 ] [   0     ]   [  1/2    ]

The first column of S is always [1, 0, ..., 0]^T, and c has the form [β*_0(r), 1, 0, ..., 0]^T. Thus the components of ℓ will be the same as ℓ for conventional Adams methods in Nordsieck form, except that β*_0 depends upon r. (This reduces to Nordsieck's form of Adams methods as r → 0.) The components of ℓ are listed in Table 6.3.

[Table 6.3, the components of ℓ for the generalized Adams methods, is not recoverable from this scan.]

In this representation, changing the stepsize is simple, since the components of a_n are scaled differences and derivatives; it corresponds to premultiplying a_{n-1} by C(α_n), where C(α_n) = diag[1, α_n, α_n², ..., α_n^k]. Thus the predictor-corrector technique we will use is given by

(6.1.21)    a_{n,(0)}   = A C(α_n) a_{n-1},
            a_{n,(m+1)} = a_{n,(m)} + ℓ H[g(z_{n,(m)}) - g_{n,(m)}].
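To fix the ideas, here is a minimal sketch, for k = 2, of one outer step in the form (6.1.21). It is illustrative only: GFUN, a routine returning g(z, t), and the other names are hypothetical, the error test and order selection of the actual program are omitted, and the time argument is simply passed through. BETA0 is the Table 6.1 entry β*_0(r) = (1 - r)/2.

      SUBROUTINE STEP2(A0, H, HOLD, T, R, BETA0, GFUN)
C        A0     THE VECTOR A OF (6.1.14) FOR K = 2, UPDATED IN PLACE
C        H      NEW STEPSIZE, HOLD THE PREVIOUS ONE
C        R      RATIO T(PERIOD)/H
C        BETA0  CORRECTOR COEFFICIENT BETA*0(R) FROM TABLE 6.1
      DOUBLE PRECISION A0(3), H, HOLD, T, R, BETA0
      DOUBLE PRECISION ALPHA, GVAL, CORR
      INTEGER M
      EXTERNAL GFUN
C     RESCALE THE HISTORY, C(ALPHA) = DIAG(1, ALPHA, ALPHA**2)
      ALPHA = H/HOLD
      A0(2) = ALPHA*A0(2)
      A0(3) = ALPHA*ALPHA*A0(3)
C     PREDICT WITH A, FIRST ROW (1, 1, 1-R), PASCAL ROWS BELOW
      A0(1) = A0(1) + A0(2) + (1.D0 - R)*A0(3)
      A0(2) = A0(2) + 2.D0*A0(3)
C     CORRECT TWICE WITH L = (BETA*0(R), 1, 1/2)
      DO 10 M = 1, 2
         CALL GFUN(A0(1), T, GVAL)
         CORR = H*GVAL - A0(2)
         A0(1) = A0(1) + BETA0*CORR
         A0(2) = A0(2) + CORR
         A0(3) = A0(3) + 0.5D0*CORR
   10 CONTINUE
      RETURN
      END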
6.2. Details of Implementation

A program was written (see the Appendix for the FORTRAN code) which implements the techniques discussed in the previous chapters. In this section the structure and use of this program will be described. This program is intended to demonstrate the efficiency of the methods developed here, and the feasibility of using a reasonably general, adaptive routine for solving non-stiff oscillating problems. As this is the first effort to accomplish this, it is far from perfect, and suggestions for improvements based on the experiences of programming and using this routine are presented in section 6.4.

Because this routine is, by necessity, fairly long, it may be helpful to begin by breaking it up into its component parts. The function of each subroutine will be explained separately. Figure 6.1 shows how the parts are arranged to form the whole routine.

The driving routine sets up the parameters and does the initializations for the rest of the routines. The routine next writes out the parameters, and calls DIFF (the outer integration routine) to do one large step. If DIFF returns with no errors, results from the last step are printed, and another step is taken, as long as t is less than TEND. If an error occurred in DIFF, the routine prints out a message and stops.

Figure 6.1. Diagram of the Generalized Adams Code. (The driving routine calls the outer integration routine, which calls the period finder, which calls the inner integration routine, which calls the function subprogram for the problem to be solved.)

The structure of DIFF is very nearly the same as that of DIFSUB (the inner integration routine). DIFSUB can be found in [6, Chapter 9], and it is suggested that readers who are not familiar with this routine read about it there. Using the data organization scheme which was discussed in the previous section, it is only a matter of changing a few lines of DIFSUB to enable it to solve difference equations of the type that are treated here. Some advantages of the two routines being so much alike (aside from the obvious ease of programming) are listed below. DIFSUB was chosen as the inner integration routine because it is well-known and versatile, and it seemed desirable to demonstrate that it is easy to use any 'black box' to serve that function. Also, the Nordsieck data representation is quite convenient for interpolation, which is precisely what the routine PERIOD (the period-finder) does with the data it receives from DIFSUB. Because understanding the entire program already necessitates understanding DIFSUB, it seems expedient for the two routines to be very similar to each other, so that following the whole program is a much less formidable task than if these two routines were appreciably different. Naturally this involves some sacrifice in efficiency.

The changes (from DIFSUB to DIFF) are the following. First, the parameter list must be expanded so that it contains parameters and work arrays used by the routines it calls (principally PERIOD). All calls to DIFFUN are replaced by calls to PERIOD. An array ALPHA holds the coefficients from Table 6.2, and its components are recalculated whenever the stepsize or order has been changed. Then, in the prediction step, the coefficients ALPHA are used as described in section 6.1. The elements of ℓ (in the routine, this is a vector called A), which differ from those in DIFSUB only in the first component, are calculated whenever there is a change in stepsize or order. The only change in logical structure from DIFSUB to DIFF is that method coefficients are recalculated at the same order when there is a change in stepsize in DIFF, while this is not necessary in DIFSUB. In the present form of the program, the code corresponding to solving stiff problems (MF = 1 or 2) could be eliminated, since we are not considering stiff oscillating systems. (There are several problems with solving such systems that have not been adequately investigated; they will be discussed in the next chapter.)

With these changes, DIFF solves the difference equations that arise from oscillating problems. Errors are estimated, and the stepsize and order are changed, in exactly the same way as in DIFSUB. That is, the single-step truncation error is controlled to be less than EPS. For a k-th order corrector the local truncation error is

(6.2.1)    C*_{k+1}(T/H) Δ^{k+1} z.

The strategy in DIFSUB is to choose the order (from the current order, one higher, and one lower) that gives the largest H when the local truncation error is set equal to EPS. However, here the error coefficient C*_{k+1} depends on T/H (whereas in conventional Adams methods, C_{k+1} is a constant).
Setting (6.2.1) equal to EPS and solving for H thus amounts to solving a nonlinear equation. Since the error coefficients are very nearly the same as the (conventional) Adams-Moulton coefficients for T/H small, and T/H will generally be small for the type of problems that would be solved with this routine, we avoid the trouble of solving the nonlinear equations by assuming the coefficients are the same as for the corresponding (conventional) Adams method. This is a poor approximation only when H is near T (generally the first few steps at low order). The error coefficients C*_{k+1} for the generalized Adams-Moulton formulas are listed in Table 6.4, where it is easy to verify that they tend to k!C_{k+1} for the conventional Adams formulas when T/H tends to 0, and that they tend to 0 when H tends to T.

Table 6.4. Error Constants for Generalized Adams-Moulton Formulas

    k (Order)    C*_{k+1}(r) (Error Constant)
    1            -1/2 + (1/2)r
    2            -1/6 + (1/6)r²
    3            -1/4 + (1/4)r²
    4            -19/30 + (2/3)r² - (1/30)r⁴
    5            -9/4 + (5/2)r² - (1/4)r⁴

    [The entries for k = 6 through 11 are not recoverable from this scan.]

    C*_{k+1} reduces to k!C_{k+1} ([6, p. 156]) when r = 0.

The 'function evaluations' of DIFF are really calls to PERIOD, which solves the minimization problem of Chapter 3. This routine estimates the period of the oscillation and computes the function g. PERIOD computes a tolerance DELTA which is used to terminate the Newton iteration, based on the reasoning in section 5.4. In this implementation, the set P (see Chapter 3) was chosen to be empty, because in experiments using linear and quadratic polynomials, results did not differ significantly from using constants.

The strategy in PERIOD is as follows. DIFSUB is used to integrate the original differential equation from the current time TINIT for one estimated period TN. The time and value of Y at every KSKIP steps are accumulated in the arrays TIME and YSAVE. The state vector for the last step is stored as (YMID, TMID, HMID, etc.). The next (estimated) period is integrated by DIFSUB starting from TMID, and every time t is greater than or equal to TIME(k) + TINIT for the current k, Y(t) is interpolated back to that time. These values are then used, along with Y(TIME(k)), to compute the integrals needed for the Newton iteration. The quadrature is done simply, using Riemann sums over these intervals. The sums are accumulated every time t passes TIME(k) + TINIT for each k. When the difference between one estimate of the period and the last is less than DELTA, the iteration is said to have converged, and g is computed using the values of Y at the ends of the intervals.
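A minimal sketch of the Newton correction inside such a period-finder follows. It assumes, as one plausible realization of the Chapter 3 algorithm, that the period is chosen to minimize the integral of (y(t + T) - y(t))² over one cycle, and that the integrals are formed as Riemann sums over uniformly spaced samples (so the common spacing cancels from the Newton quotient). The routine and its argument names are illustrative, not those of PERIOD.

      DOUBLE PRECISION FUNCTION TCORR(M, YSAVE, YSHFT, DY)
C        YSAVE(K)  SAMPLES Y(T(K)) OVER ONE ESTIMATED PERIOD
C        YSHFT(K)  Y INTERPOLATED AT T(K) + TN, TN THE CURRENT ESTIMATE
C        DY(K)     Y' AT T(K) + TN
      INTEGER M, K
      DOUBLE PRECISION YSAVE(M), YSHFT(M), DY(M), RNUM, RDEN
      RNUM = 0.D0
      RDEN = 0.D0
      DO 10 K = 1, M
         RNUM = RNUM + (YSHFT(K) - YSAVE(K))*DY(K)
         RDEN = RDEN + DY(K)*DY(K)
   10 CONTINUE
C     GAUSS-NEWTON STEP FOR THE LEAST-SQUARES PHASE CONDITION;
C     THE NEW PERIOD ESTIMATE IS TN + TCORR
      TCORR = -RNUM/RDEN
      RETURN
      END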
The changes to DIFSUB to accommodate it to this program are very minor. Basically it is necessary, since DIFSUB may be called in several different contexts, that all of the information needed to begin where we left off is included in the parameter list. The parameter list is expanded somewhat to include these variables. The routines the user must supply are exactly the ones needed by DIFSUB. Also, the user must supply a good (accurate to within about 5 or 10 percent) starting guess TN for the period of the oscillation to the driving program. This may be deduced from the nature of the physical problem; from the largest eigenvalue, if the problem is nearly linear; or, if necessary, by just integrating with DIFSUB for a little while and observing where the solution begins to repeat itself.

The main input parameters (to DIFF) which must be supplied by the user are described below. For a more complete description of the parameters, see the comments at the beginning of DIFF in the Appendix. For convenience they have been broken up into three categories, according to which routine they are (mainly) used in. Names in parentheses are the actual names used for the variables in the program, when different from the first name given.

Parameters for Inner Integration

These should be set as though the inner integration routine (DIFSUB) were being used to solve the problem over the whole interval.

H (HI) - Stepsize for the inner integration. The routine will begin the integration with stepsize H, unless that stepsize is too big to achieve the requested error tolerance. Later, H may be adjusted by the routine for purposes of accuracy or efficiency.

EPS (EPSI) - The routine (DIFSUB) tries to adjust the stepsize and order so that the Euclidean norm of the local error is less than or equal to EPS. Here, error refers to the errors made by the inner integration routine alone. It is the relative error that is controlled, unless the solution component is less than one in magnitude, in which case the absolute error is controlled. Normally, for a periodic problem over a long time interval, EPS for the inner integration should be quite small.

HMIN (HMINI) - The minimum stepsize to be used in the inner integration.

HMAX (HMAXI) - The maximum stepsize to be used in the inner integration.

MF (MFI) - The method indicator for the inner method. If MF = 0, Adams methods are used. If MF = 1 or 2, stiff methods are used. If MF = 2, the routine computes an approximation to the Jacobian by differencing.

MAXDER (MAXDEI) - The maximum order to be used in the inner integration. It must be less than 13 for Adams methods and less than 7 for stiff methods.

Parameters for Outer Integration

H - The stepsize to begin the outer integration. This may be changed by DIFF to achieve the error tolerance criteria.

EPS - The routine (DIFF) tries to adjust the number of cycles skipped so that the Euclidean norm of the local error made by the outer integration routine is less than or equal to EPS.

HMIN - The minimum stepsize to be used by DIFF. In the examples in this chapter, HMIN was set to five times the estimated period, because it was felt that skipping fewer than five cycles would be inefficient.

HMAX - The maximum stepsize to be used by DIFF.

MF - The method indicator for DIFF. It is always 0, because we are always using generalized Adams methods.

MAXDER - The maximum order method to be used by DIFF. MAXDER must be less than 12.

INTFLG - If INTFLG is 1, DIFF will skip over only integral numbers of cycles of the oscillation. INTFLG = 1 is recommended.

Parameters for Period-Finder

TN - Initial estimate for the period. This must be accurate to at least 5 or 10 percent. The more accurate it is, the faster the routine will be in getting started. Values of TN are improved by the routine PERIOD.

MAXL - The main function of MAXL is to allocate storage. A maximum of MAXL values of Y in one period will be saved for the period-finding algorithm. If more space is needed, PERIOD will stop and print a message. It is possible to use less storage by increasing KSKIP (within limits).
KSKIP - Every KSKIP-th result from DIFSUB will be used in the quadrature inside the Newton iteration to find the period of the oscillation. KSKIP = 1 or 2 is recommended.

MAXCNT - The maximum number of iterations that will be allowed in one call to PERIOD for the Newton iteration.

6.3. Computational Results

This section describes results of applying the program which implements the generalized Adams methods to several test problems. All computations were done on an IBM 360/75 in double precision.

Problem 1 is an example of a problem that can be solved efficiently if H is constrained to skip over integral numbers of periods, but not if H is allowed to vary arbitrarily. The problem is

(6.3.1)    y'' + λ²y = a sin(λt),    y(0) = 1,  y'(0) = -a/2λ.

Converting to a system of first order equations, this becomes

(6.3.2)    y_1' = λ y_2,
           y_2' = -λ y_1 + (a/λ) sin(λt),
           y_1(0) = 1,  y_2(0) = -a/2λ².

This has the solution

(6.3.3)    y_1(t) = (1 - (a/2λ)t) cos(λt),
           y_2(t) = -(1 - (a/2λ)t) sin(λt) - (a/2λ²) cos(λt).

We will compare the solutions obtained by the program to (6.3.3) at intervals of time which were chosen by the program. (These intervals were, of course, chosen for purposes of efficiency. It is easy to obtain results whenever you want them, however, by using the predictor formula and the vectors of stored values at these points to extrapolate to other points.)
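The function subprogram for this problem is essentially a direct transcription of (6.3.2). A minimal sketch in the form DIFSUB expects (DIFFUN(T, Y, DY), with Y holding the current solution values), with λ and a set to the Table 6.5 values, might read:

      SUBROUTINE DIFFUN(T, Y, DY)
C        RIGHT-HAND SIDE OF (6.3.2) WITH LAMBDA = 1000, A = 100
      DOUBLE PRECISION T, Y(2), DY(2), XLAM, A
      DATA XLAM/1.D3/, A/1.D2/
      DY(1) = XLAM*Y(2)
      DY(2) = -XLAM*Y(1) + (A/XLAM)*DSIN(XLAM*T)
      RETURN
      END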
In this test, problem and program parameters were chosen to have the values in Table 6.5. (An explanation of what the parameters are used for is given in section 6.2.) Note that non-stiff methods can be used for the small-step (inner) integration because the stepsizes are small relative to the period.

Table 6.5. Problem 1 - Parameters

    Problem parameters:    λ = 1000, a = 100
    Inner integration:     H = .1D-5, EPS = .1D-6, HMIN = .1D-13, HMAX = 1.D0, MF = 0, MAXDER = 10
    Outer integration:     H = .2512D-1, EPS = .1D-3, HMIN = .314D-1, HMAX = 5.D0, MF = 0, MAXDER = 10,
                           TEND = 15.D0, INTFLG = 1 (synchronized mode) or 0 (nonsynchronized mode)
    Period-finder:         TN = .628D-2, MAXL = 400, MAXCNT = 5, KSKIP = 2

The program was run in synchronized mode (stepping only over integral numbers of periods), and with DIFSUB alone (with parameters identical to the parameters for the inner integration) for comparison. Results are shown in Table 6.6, and the errors made by the two routines are compared, along with the number of function evaluations (of the original function f(y, t) from the equation y' = f(y, t)), in Table 6.7. Note that the generalized methods were much faster than DIFSUB alone, both in terms of function evaluations and execution time.

Table 6.6. Problem 1 - Results (both solution components at each time)

    time           true solution            solution by gen. methods   solution by DIFSUB
    .2513274D-1    .9987434   -.5D-4        .9987451   -.5032188D-4    .9987458   -.4766485D-4
    .5026548D-1    .9974867   -.5D-4        .9974902   -.5063552D-4    .9974922   -.4513941D-4
    2.664071       .8667964   -.5D-4        .8669955   -.6509385D-4    .8670935   -.2940519D-3
    5.277876       .7361062   -.5D-4        .7363785   -.1620934D-3    .7366541   -.6477888D-4
    8.846726       .5576634   -.5D-4        .5581064   -.2100309D-3    .5584882   -.3004953D-3
    12.41558       .3792145   -.5D-4        .3798945   -.1672133D-3    .3802331   -.1978808D-2
    13.29522       .3352390   -.5D-4        .3359656   -.1543100D-3    .3362903    .2296590D-3
    14.17487       .2912542   -.5D-4        .2920207   -.1529315D-3    *
    15.05451       .2472740   -.5D-4        .2480030   -.2412899D-3    *

    Period = .006283186
    * DIFSUB was stopped here after executing for 5 minutes, 45 seconds. Execution time for the generalized Adams program was approximately 1/2 minute.

Table 6.7. Problem 1 - Errors and Efficiency

    time           error (gen. methods, synch.)   error (DIFSUB)            f. evals. (gen.)   f. evals. (DIFSUB)
    .2513274D-1    -.1737D-5   .3218D-6           -.2400D-5   -.2335D-5     568                429
    .5026548D-1    -.3474D-5   .6355D-6           -.5500D-5   -.4860D-5     809                810
    2.664071       -.1991D-3   .1509D-4           -.2971D-3    .2440D-3     1291               40,635
    5.277876       -.2723D-3   .1120D-3           -.5479D-3    .1477D-4     1990               80,457
    8.846726       -.4430D-3   .1600D-3           -.8248D-3    .2504D-3     2678               134,829
    12.41558       -.6799D-3   .1172D-3           -.1018D-2    .1928D-2     3335               182,645
    13.29522       -.7266D-3   .1043D-3           -.1051D-2   -.2796D-3     4213               192,881
    14.17487       -.7664D-3   .1029D-3                                     4633
    15.05451       -.7289D-3   .1912D-3                                     5251

The same example was run in nonsynchronized mode (skipping over arbitrary numbers of periods). The minimum stepsize for the outer integration was chosen to be five periods. The routine took 22 steps with the minimum stepsize, then took a few steps with larger stepsizes but changed the period, and ran out of time (2 minutes) at t = 3.177514. This was slightly less efficient than DIFSUB alone, and illustrates the advantages of skipping over only integral numbers of periods for problems with high frequency forcing functions such as this one. It is evident that the program should be able to recognize situations such as this, when the inner integration routine could do just as well alone (if it is not recognized that only integral numbers of cycles should be skipped). This could be done by counting the number of periods integrated through at each step, and comparing this to the number of periods skipped on that step. If the former exceeds the latter for several steps (for example, four, as this is likely to be the case on the first several steps because of start-up problems), the routine could stop, or switch to the inner integration routine alone. In this example, it would have stopped after 9 steps (45 periods).

Problem 2 describes the nonlinear oscillations of a slightly damped pendulum. The angle x of deviation from the origin for a pendulum of mass 1, length ℓ, and damping coefficient μ is described by

(6.3.4)    x'' + μx' + (g/ℓ) sin(x) = 0.

Starting from an angle of 1 radian and releasing the pendulum, the initial conditions are x(0) = 1, x'(0) = 0. This initial angle is large enough that it is not possible to obtain an accurate solution to (6.3.4) from the linearized equation. Values of the parameters for the problem and the program are given in Table 6.8. Note that a change of variables was made so that the units of time are thousands of seconds.

This problem differs from Problem 1 in several ways which should be noted here. First, it is an autonomous problem, and thus we expect not to see the synchronization difficulties which were present in Problem 1, which were due to the high frequency forcing function. Secondly, the period of the oscillation is changing slowly. It turns out this makes it more difficult to retain the phase information of the oscillation, although the amplitude information is quite easily obtained.

For the program, (6.3.4) was split into a system of first order equations:

(6.3.5)    x_1' = x_2,
           x_2' = -μx_2 - (g/ℓ) sin(x_1).
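The corresponding function subprogram is again immediate. A minimal sketch, with g, ℓ, and μ set to the Table 6.8 values and with names chosen only for illustration, is:

      SUBROUTINE DIFFUN(T, Y, DY)
C        RIGHT-HAND SIDE OF (6.3.5); GRAV, ELL, AMU STAND FOR
C        G = 9.8D6 M/(1000 SEC)**2, L = 2 M, MU = .1/(1000 SEC)
      DOUBLE PRECISION T, Y(2), DY(2), GRAV, ELL, AMU
      DATA GRAV/9.8D6/, ELL/2.D0/, AMU/0.1D0/
      DY(1) = Y(2)
      DY(2) = -AMU*Y(2) - (GRAV/ELL)*DSIN(Y(1))
      RETURN
      END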
[t + I§ Tooli ] • We will compare the smooth solution z(t) computed by the generalized methods to a(t), the amplitude of the first approximation. Results are shown in Table 6.9, and the program results are compared to results obtained by DIFSUB alone in Table 6.10. The solution obtained by the generalized methods in these tables was obtained with the program in synchronized mode, though for this problem, in contrast to Problem 1, skipping over non-integral numbers of periods does not impair the accuracy or efficiency of the program. In fact, the program is slightly faster in that case. It was found that the numerical solution for this problem (with these parameters) was bounded above by the first approximation and below by the second approximation at each step. Though we do not know exactly how accurate the two approximations are, this behavior seems to be reasonable, Also, it should be noted that the differences between the numerical solution 81 and the first and second approximations are much smaller if a smaller initial angle (amplitude) is used (say, \ radian instead of 1 radian). This is probably due to the fact that the asymptotic approximations are much more accurate for small amplitudes, as they approximate sin(x) by the first two (or three) terms of its Taylor series. The amplitude information obtained by the generalized methods is also very near to the amplitude of the solution by DIFSUB alone. All of these observations would seem to indicate that the amplitude information in the numerical solution is quite accurate. Table 6.8. Problem 2 - Parameters Problem Parameters g = 9.8 x 10 6 m/(1000 sec) 2 I = 2m y = .1/(1000 sec) Parameters for Inner Integration H = .ID- 5 EPS = .1D-6 HMIN = .1D-13 HMAX = 1.D0 MF = 1 MAXDER = 6 Parameters for Outer Integration H = .1204D-1 EPS - .ID- 2 HMIN = .1505D-1 HMAX = 5. DO MF = MAXDER = 10 TEND = 20.00 INTFLG = 1 Parameters for Period-Finder TN = .301D-2 MAXL = 400 MAXCNT = 5 KSKIP = 2 82 ^N m » M0 H -3- MO CO O On H n en cd ■u w u •H >n rO C O •H OS on CO on ON ON CM o ON oo ON On On m oo On CO M0 00 CO M0 On CO 00 M0 m 0. O O o O o ON On On ON ON On ON ON 00 00 00 00 co co co CO co CM CM CM CM CM CM CM CM CM CM CM CM o oo - r^ On ON C/l fn M O c v — ' o •H co 4-1 a CJ o c •H 3 ■u m cO D H cO > OJ o •H 4-> 3 rH O CO w 43 3 ^ rH <" ctj * m CD oo| OJ s ■H 4J •K CM ON NO 00 CO H co CO O ON On en O 3 3 •rH rH B o CO OJ u x: O 4-1 14-1 00 OJ s 4-i ■H 4-1 3 CJ OJ X CD 5-i OJ OJ J-l OJ OJ lH o CJ B crj 5-i 00 o CO B CO id T3 en 86 CM On m ON on O CO co CM CN m cm on oo nO vO ON O CN n oo m co oo o ON co st NO ON m N m o C - PQ a s Q 4J O u r~. rH st 00 o CM 00 m On co r«. NO r^ CM O CM o on CM vo st r^ H St St <-i rH 00 on r>- St r» o\ r^ O 00 m NO r~- rH rH CM co . |s« r^ no [*■>• 00 m rH rH CM ON St On o CM m rH 00 CM CO O 00 H r>. CO rH r-\ CO CO r-» O m O NO co r^« o St r^ rs ON on 00 tH CO nO st CO m 00 O CM 00 St ON on cfl H CO o >. 4-1 E O TJ •H OJ 4-1 N 3 -H 00 OO o rH m m St CO r^ vO J-i CM m 00 00 o ON co St r-« o m vO o 00 o in rH r^ 00 00 m CO ON ^ O r^ r^ 00 CO r^ o St ON 00 ON r~ r^ r^ r-> CM CO on 00 <-i co m St st r^ o co no m st on ON ON 00 r^. St r-\ O o ON 00 vD st ON ON ON ON ON ON ON ON ON 00 00 00 00 1 Q 1 a oo st st ON m st en nO n- r>» NO st m o ^ O r-{ 00 m o ON co co co r-\ r^ rH CM rH ON st o r-^ CO st CO NO NO CM 00 CM m St CO ^ CM ON st C7\ HMIN. This is what we would expect, because the corrector is being solved by simple functional iteration, which does not converge when HA is large. 
Of course, this is just an example which has been concocted for the purposes of this discussion. Practical problems which are both stiff and oscillating seem to arise in simulating certain electrical oscillators. For example, the program described in Chapter 6 was used to try to solve equations which describe a Colpitts oscillator near the periodic steady state. (For a description of this problem see [21].) This test failed, except when using very small stepsizes H (of the order of T). The nature of the problem seems to indicate that it is an example of a stiff oscillating problem.

Table 7.1. Problem 3 - Parameters

    Problem parameters:    λ = 1500, μ = 1000 (Problem 3a); λ = 10000, μ = 1000 (Problem 3b)
                           Initial values: x_1 = 1, x_2 = 0
    Inner integration:     H = .1D-5, EPS = .1D-6, HMIN = .1D-13, HMAX = 1.D0, MF = 1, MAXDER = 6
    Outer integration:     H = .2512D-1, EPS = .1D-3, HMIN = .314D-1, HMAX = 5.D0, MF = 0, MAXDER = 10, INTFLG = 1
    Period-finder:         TN = .628D-2, MAXL = 400, MAXCNT = 5, KSKIP = 2

Table 7.2. Problem 3 - Results

    time            first component of    second component of    function
                    computed solution     computed solution      evaluations
    Problem 3a
    .2513274D-1     .9999997              -.1373227D-5           765
    .5026547D-1     1.000001              -.2746897D-5           1089
    .8168142D-1     .9999944              -.4462097D-5           3370
    ERROR IN OUTER INTEGRATION. KFLAG = -3

    Problem 3b
    .2513274D-1     1.000000              -.2250640D-6           2062
    .5026549D-1     1.000000              -.2390671D-6           2929
    .2010619        .9999975              -.1111906D-5           6408
    .2324779        1.000005              -.1973452D-5           9817
    ERROR IN OUTER INTEGRATION. KFLAG = -3

7.2. Stability Regions

One way of studying a numerical method is to see how well the numerical solution models the behavior of the solution to some simple problem. For example, the equation

(7.2.1)    y' = λy

is often used to study methods for stiff problems. This is a good problem to study because stiff problems can often be characterized in terms of the eigenvalues (of the linearized system). Generally, in a stiff problem, some component of the solution has an eigenvalue corresponding to it which has a large negative real part, while other eigenvalues are small. After a short time, this component of the solution is no longer large enough to be important, but the large eigenvalue still poses problems for most methods not designed for stiff problems. If the system were diagonalized, the part corresponding to this component would be equation (7.2.1) with Re(λ) << 0. This equation has solution y(t) = e^{λt}. To model y(t), the numerical solution should behave like e^{λt} if Re(λ) << 0, so the method should be damping the solution to zero for these λ. On the other hand, the method should be accurate for λ near the origin, to follow the components of the solution which are still of interest.

A useful concept in studying methods for stiff problems is that of the region of absolute stability. This is defined, for a method with constant positive stepsize h, as the set of hλ for which the numerical approximations tend to zero as the number of steps tends to infinity, when the method is applied to the test equation y' = λy. To avoid having to restrict the stepsize for stability reasons, it is desirable that the region of absolute stability include the points hλ where Re(hλ) << 0. A very desirable property is that of A-stability. A method is A-stable if its region of absolute stability includes the entire left half-plane.
To investigate similar behavior for the step-advancing methods and stiff oscillating problems, we will use the test equation

(7.2.2)    z(t + T) = z(t) + T g(z, t),    where g(z, t) = z (e^{λT} - 1)/T,

which has solution z(t) = e^{λt}. The definitions of the region of absolute stability and of A-stability will remain unchanged from the conventional definitions, except that (7.2.2) is the test equation instead of (7.2.1) and, in general, the formulas, and hence the stability regions, depend on r, where r = T/H. In this way, stability regions of a method will help us study how well the numerical solution by the step-advancing formulas models the behavior of the true quasi-envelope. The most interesting observation seems to be the following.

THEOREM 7.1. The generalized trapezoidal method is A-stable for all values of T/H.

PROOF. Let w = e^{λT}. Then the method applied to (7.2.2) gives

    z_{n+1} = z_n + ((H - T)/2)((w - 1)/T) z_{n+1} + ((H + T)/2)((w - 1)/T) z_n.

Solving for z_{n+1} we obtain

    z_{n+1} = [1 + ((H/T + 1)/2)(w - 1)] / [1 - ((H/T - 1)/2)(w - 1)] z_n.

If a = H/T - 1 and b = H/T + 1, then b - a = 2 and b > 1. We have

    z_{n+1} = - (w - a/b) / ((a/b)w - 1) z_n.

Since |a/b| < 1, it is easy to see (Levinson and Redheffer [14, p. 273]) that |w| < 1 maps onto

    | (w - a/b) / ((a/b)w - 1) | < 1.

Thus z_{n+1} tends to zero as long as |w| < 1. Since w = e^{λT}, this implies the method is A-stable for all values of T/H. ||

Stability regions of other generalized methods depend on the parameter r. One way to find a stability region is to find its boundary. The boundary is a subset of the set of Hλ for which a solution of the difference equation, given by the method applied to (7.2.2), has modulus one. This set can be obtained by substituting e^{iθ} = ξ into the equation defining the method applied to (7.2.2), and solving for e^{λT} - 1, to obtain

    e^{λT} - 1 = f(r, e^{iθ}),

where f depends on the method. This gives

(7.2.3)    Hλ = (1/r) log(f(r, e^{iθ}) + 1).

Letting θ range through [0, 2π] gives a set of points which includes the boundary of the stability region. Since the complex log is a multiple-valued function, it is apparent that the stability region repeats itself, translated a distance of 2πk/r along the imaginary axis, for k an integer. This corresponds to the lack of resolution by the difference equation (7.2.2) of frequencies which are multiples of the one at which we are sampling. It is sufficient for the present application to consider only the principal value of log in (7.2.3).
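As an illustration of (7.2.3), consider the first-order generalized Adams method z_n = z_{n-1} + Hg_{n-1}. Applied to (7.2.2) it gives z_n = z_{n-1}[1 + (e^{λT} - 1)/r], so on the boundary e^{λT} - 1 = r(e^{iθ} - 1), that is, f(r, e^{iθ}) = r(e^{iθ} - 1). A minimal sketch of the boundary computation follows; the routine and argument names are illustrative.

      SUBROUTINE BNDRY1(R, NPTS, XRE, XIM)
C        BOUNDARY LOCUS (7.2.3) FOR THE FIRST-ORDER GENERALIZED
C        ADAMS METHOD.  H*LAMBDA = LOG(1 + R*(EXP(I*THETA) - 1))/R,
C        RETURNED AS REAL AND IMAGINARY PARTS XRE, XIM.
      INTEGER NPTS, J
      DOUBLE PRECISION R, XRE(NPTS), XIM(NPTS)
      DOUBLE PRECISION PI, THETA, U, V
      PI = 4.D0*DATAN(1.D0)
      DO 10 J = 1, NPTS
         THETA = 2.D0*PI*DBLE(J - 1)/DBLE(NPTS)
         U = 1.D0 + R*(DCOS(THETA) - 1.D0)
         V = R*DSIN(THETA)
C        PRINCIPAL VALUE OF LOG(U + I*V), DIVIDED BY R
         XRE(J) = 0.5D0*DLOG(U*U + V*V)/R
         XIM(J) = DATAN2(V, U)/R
   10 CONTINUE
      RETURN
      END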
Since a stiff oscillating problem is likely to give rise to a difference equation whose solutions behave similarly to solutions of (7.2.2) when Re (HA) « 0, and stability regions of generalized Adams methods do not include these points, it is easy to see why we cannot use these formulas for solving stiff oscillating problems, except for very small H. In the next section we will consider the possibility of using methods with stability regions that include the points where Re (HA) « for solving stiff oscillating problems. 7.3. Techniques for Stiff Oscillating Problems Stiff oscillating problems, while closely related to non-stiff oscillating problems and to conventional stiff problems, seem to be harder to solve (efficiently) than either of these other cases. Because of the similarities between these problems and conventional stiff problems, it seems likely that methods which work well on conventional stiff problems might be extended to oscillating problems by using the idea of solving for the quasi-envelope, and could be effective in solving stiff oscillating problems. For this reason we will examine some generalizations of the backward differentiation formulas (BDF). The equations arising from these implicit formulas are usually solved (for conventional BDF) with Newton's method, since simple functional iteration does not converge for stiff problems using large stepsizes. The principal difficulty in extending 99 these methods to oscillating problems is finding an efficient means of estimating the Jacobian of g, ^ , for Newton's method, or finding a form of functional iteration that will converge for ||h ^\\ large. We will discuss some possibilities for this, but there do not seem to be any easy answers. One way of deriving conventional BDF for solving y' = f (y, t ) is to pass an interpolating polynomial Pr through the last k values of y(y n , y^!"--* y n _ k )> and then differentiate it and set the derivative p^O, to -f^, y.. It is easy to modify this procedure so that the resulting formulas solve the difference equation z(t + T) = z ( t ) + T g(z, t) . To do this we must first find the polynomial ?n interpolating the k past values of z (which are separated in time by intervals of length H) , and then set the forward difference of p^ (over an interval of length T) at t n , to Tg(z t ). That is, P n (t n + T) " »n (t „> = g(z , t ) . 1 n n Using this procedure we obtain a form of generalized BDF. (These formulas will be called GBDF(I).) The formula for the GBDF(I) of order k is ( 7 ' 3 -D Hg = I (- l) 1 (" r " 1) y^ i=l where (" r " h = ( " r " 1)( ~ r ~ 2)...(- r - j) _ T i i! ' r " H ' and V is the backward difference over an interval of length H. One of the nice features of these formulas is that they tend to the conventional BDF as r tends to 0, so that for small r, their stability regions are very nearly the same as stability regions of conventional BDF, 100 which are suitable for solving stiff problems for k £ 6. Another desirable feature is the simple polynomial derivation, which makes it easy to envision nice data representations for implementing these formulas with variable stepsize and order. (For example, it is easy to generalize the representation used by Byrne and Hindmarsh [4].) It is unfortunate (mainly from an aesthetic point of view) that, unlike the generalized Adams formulas, the GBDF(I) do not become exact when r = 1. It is easy to remedy this though. One way to accomplish this is to define a second form of generalized BDF, which we will call GBDF(II). 
One of the nice features of these formulas is that they tend to the conventional BDF as r tends to 0, so that for small r their stability regions are very nearly the same as the stability regions of the conventional BDF, which are suitable for solving stiff problems for k ≤ 6. Another desirable feature is the simple polynomial derivation, which makes it easy to envision convenient data representations for implementing these formulas with variable stepsize and order. (For example, it is easy to generalize the representation used by Byrne and Hindmarsh [4].)

It is unfortunate (mainly from an aesthetic point of view) that, unlike the generalized Adams formulas, the GBDF(I) do not become exact when r = 1. It is easy to remedy this, though. One way to accomplish this is to define a second form of generalized BDF, which we will call GBDF(II). The GBDF(II) of order k will be given as a convex combination of the generalized Adams-Bashforth method (GAB) of order k (Adams-Moulton would do as well) and the GBDF(I) of order k, where the coefficient of the GAB formula approaches 0 as r = T/H tends to 0. For example, we could take

    GBDF(II) = r (GAB) + (1 - r) (GBDF(I)).

Since both formulas are of order k, it follows that GBDF(II) is of order k. (It is easy to verify that the k-step GBDF(I) is of order k.) As r tends to 0, GBDF(II) tends to the corresponding BDF formula, so for small r it has nice stability regions. It is exact for r = 1. An example of one of these formulas is the first order method given by

    z_n = z_{n-1} + H(r g_{n-1} + (1 - r) g_n).

Both forms of generalized backward differentiation formulas are uniformly stable. Thus the global error of the k-step formula is of order k, by Theorem 5.1. To verify the uniform stability, note that the coefficients of the ρ polynomials of the GBDF's depend continuously upon r, and they reduce to the coefficients of the ρ polynomial for the conventional BDF when r = 0. It is easily demonstrated that all of the ρ polynomials have a root at ξ = 1. Since the other roots of ρ at r = 0 are all strictly inside the unit circle (that is, the BDF are strongly stable), it follows from Hurwitz's theorem [19] that for sufficiently small r, the roots (except ξ = 1) will all be contained inside the unit circle.
The main difficulty in applying generalized backward differentiation formulas to stiff oscillating problems is solving the corrector equation (the implicit equation (7.3.1)). Convergence of functional iteration for solving the corrector equation depends on the size of ||H ∂g/∂z||. For stiff problems using moderate stepsizes, ||H ∂g/∂z|| is quite large, and simple functional iteration will not converge. Thus we must find another way to solve the implicit corrector equation.

In conventional stiff problems, the corrector equation is usually solved by Newton's method. This requires some approximation to the Jacobian ∂f/∂y (for the problem y' = f(y, t)). In the oscillating case, the 'Jacobian' ∂g/∂z is much more complicated and expensive to evaluate. Instead of using Newton's method we could try to develop some other type of functional iteration which would converge, but this too seems to require some type of estimate of ∂g/∂z.

In implementing a procedure to find the periodic steady state of an oscillating system, Aprille and Trick [2] used an observation which could possibly be helpful here. First, recall that g is given by

    g(z_n, t_n) = [y(t_n + T, t_n) - y(t_n, t_n)] / T,

where

    (d/ds) y(t_n + s, t_n) = f(y(t_n + s, t_n), t_n + s),    y(t_n, t_n) = z(t_n).

Thus

    ∂g/∂z = [∂y(t_n + T, t_n)/∂z - I] / T.

Let G = ∂y(t_n + T, t_n)/∂z. Then the problem is to find G. It is noted in [2] that G is the state transition matrix of the time-varying system

(7.3.2)    z' = (∂f/∂y) z,    z(0) = I,

where ∂f/∂y is the Jacobian of f evaluated along the trajectory y(t + s, t).

We could consider estimating G numerically, using values of the Jacobian of the original system. For example, in using the backward Euler method to integrate the original system over one period, with stepsize h (h = T/K), we have

    y_j = y_{j-1} + h f(y_j, jh).

If this is solved with Newton's method, the iteration is

    y_j^{(i+1)} = y_j^{(i)} - [I - hF(y_j^{(i)})]⁻¹ [y_j^{(i)} - y_{j-1} - h f(y_j^{(i)}, jh)],

where F = ∂f/∂y. If we apply the backward Euler method to (7.3.2), we have

    z_j = z_{j-1} + h F(y_j) z_j,

or

    z_j = [I - hF(y_j)]⁻¹ z_{j-1},

so that

    z(T) ≈ z_K = Π_{i=1}^{K} [I - hF(y_{K-i+1})]⁻¹ z_0.

Thus

    G ≈ Π_{i=1}^{K} [I - hF(y_{K-i+1})]⁻¹,

where the y's are numbered starting with 0 at the beginning of each period. Now G can be used to find ∂g/∂z ≈ (G - I)/T.

Unfortunately, no matter how the whole process is arranged, the Newton iteration for solving the GBDF seems always to involve at least one matrix inversion each time ∂g/∂z is computed. We can, however, avoid re-evaluating ∂g/∂z unless it is absolutely necessary, and it may be that many systems of oscillatory type are of relatively small dimension, in which case the expense of computing the Jacobian ∂g/∂z may not be overwhelming. Another possibility is to simply approximate ∂g/∂z directly by perturbing the elements of z and differencing. This involves n integrations of a system of dimension n over one period each time ∂g/∂z is re-evaluated. At the present time there does not seem to be an easy way to solve the implicit corrector equation for stiff oscillating problems, though for some problems (particularly ones of small dimension) the above alternatives may be efficient.
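A minimal sketch of the product accumulation just described follows. FJAC, a user routine returning the Jacobian F(y) of f, is hypothetical; only DECOMP and SOLVE, the LU routines named in the Appendix, are taken from the program, and it is assumed that SOLVE overwrites the right-hand side with the solution. G is formed column by column by repeated linear solves rather than by explicit inversion.

      SUBROUTINE GACCUM(N, K, H, YSTEP, G, A, IP, FJAC)
C        ON EXIT G APPROXIMATES THE STATE TRANSITION MATRIX OVER
C        ONE PERIOD, G = PROD [I - H*F(Y(J))]**(-1); THEN DG/DZ
C        MAY BE ESTIMATED AS (G - I)/T.
      INTEGER N, K, IP(N), I, J, L
      DOUBLE PRECISION H, YSTEP(N,K), G(N,N), A(N,N)
      EXTERNAL FJAC
C     G = I, THE INITIAL CONDITION IN (7.3.2)
      DO 20 I = 1, N
         DO 10 J = 1, N
            G(I,J) = 0.D0
   10    CONTINUE
         G(I,I) = 1.D0
   20 CONTINUE
C     ONE BACKWARD EULER STEP PER INNER MESH POINT
      DO 60 L = 1, K
         CALL FJAC(YSTEP(1,L), A)
C        FORM A = I - H*F(Y(L))
         DO 40 I = 1, N
            DO 30 J = 1, N
               A(I,J) = -H*A(I,J)
   30       CONTINUE
            A(I,I) = 1.D0 + A(I,I)
   40    CONTINUE
C        OVERWRITE EACH COLUMN OF G WITH THE SOLUTION OF A*X = COLUMN
         CALL DECOMP(N, N, A, IP)
         DO 50 J = 1, N
            CALL SOLVE(N, N, A, G(1,J), IP)
   50    CONTINUE
   60 CONTINUE
      RETURN
      END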
It seems evident that future efforts should lead to significantly faster codes, especially if the problems have a special form (for example, almost linear problems). We have defined what is meant by a stiff oscillating system, and have discussed some of the difficulties associated with solving these problems. Generalized backward differentiation formulas were found to have desirable stability properties in connection with solving these problems. It seems likely that these methods could efficiently solve stiff oscillating problems, if an efficient means of estimating the Jacobian of the smooth solution were found. 106 LIST OF REFERENCES [1] Amdursky, V. and A. Ziv, The numerical treatment of linear highly- oscillatory ODE systems by reduction to non-oscillatory type, IBM Israel Scientific Center Report No. 39 , March 1976. [2] Aprille, T. J. and T. N. Trick, Steady-state analysis of nonlinear circuits with period inputs, Proceedings of the IEEE 60 (1), January 1972. [3] Bogoliubov, N. N. and Y. A. Mitropolsky, ASYMPTOTIC METHODS IN THE THEORY OF NONLINEAR OSCILLATIONS, Gorden and Breach Science Publishers, Inc., New York, 1961. [4] Byrne, G. D. and A. C. Hindmarsh, A polyalgorithm for the numerical solution of ordinary differential equations, ACM Transactions on Mathematical Software 1 (1), March 1975, 71-96. [5] Gautschi, W. , Numerical integration of ordinary differential equations based on trigonometric polynomials, Numerische Mathemetik 3, 1961, 381-397. [6] Gear, C. W. , NUMERICAL INITIAL VALUE PROBLEMS IN ORDINARY DIFFERENTIAL EQUATIONS, Prentice-Hall, Inc., 1971. [7] Gear, C. W. and L. F. Shampine, A user's view of solving stiff ordinary differential equations, Department of Computer Science Report UIUCDCS-R-76-829 , University of Illinois, Urbana, Illinois, 1976. [8] Graff, 0. F. , Methods of orbit computation with multirevolution steps, Applied Mechanics Research Laboratory Report AMRL 1063 , University of Texas, Austin, Texas, 1973. [9] Graff, 0. F. and D. G. Bettis, Modified multirevolution integration methods for satellite orbit computation, Celestial Mechanics 11 , 1975, 443-448. [10] Henrici, P., DISCRETE VARIABLE METHODS IN ORDINARY DIFFERENTIAL EQUATIONS, John Wiley and Sons, 1963. [11] Jensen, P. S., Stiffly-stable methods for undamped second order equations of motion, SIAM Journal on Numerical Analysis 13 (4), September 1976. [12] Kreiss, H. 0., Problems with different time scales for ordinary differential equations, Department of Computer Science Report No. 68 , Uppsala University, Uppsala, Sturegaten 4, Sweden, October 1977. [13] Lambert, J. D. and I. A. Watson, Symmetric multistep methods for periodic initial value problems, Department of Mathematics Report No. 12 , University of Dundee, Dundee, Scotland, September 1975. 107 [14] Levinson, N. and R. M. Redheffer, COMPLEX VARIABLES, Holden-Dav Inc. , 1970. ay ' [15] Mace D. and L. H. Thomas, An extrapolation method for stepping the calculations of the orbit of an artificial satellite ^r r ^oor° 1Uti ° nS ahead at a time ' Astronomical Journal 65 (5), #1280, June 1960. ~ [16] Minorsky, N. , NONLINEAR OSCILLATIONS, Robert E. Krieger Publishing Company, Huntington, New York, 1974, 390-415. [17] Miranker, W. L. and M. van Veldhuizen, The method of envelopes IBM Researc h Report RC 6391 . #27537, February 1977. [18] Miranker, W. L. and G. Wahba, An averaging method for the stiff highly oscillatory problem, Mathematics of Co mpute -ion 30 1976, 383-399. " ' [19] Saks, S. and A. 
Zygmund, ANALYTIC FUNCTIONS, Elsevier Publishing Company, 1971. [20] Taratynova, G. P., Numerical solution of equations of finite dxfferences and their application to the calculation of orbits of artificial earth satellites, AES Journal supplement, translated from Artificial Earth Satellites 4. 1960, 56-85. [21] Trick T. N., Colon, F. R. and S. P. Fan, Computation of capacitor voltage and inductor current sensitivities with respect to initial conditions for the steady-state analysis of nonlinear ™c X ?wc? irCUitS ' IEEE Transact ions on Circuits and Systems CAS- 2 2 (5), May 1975^ l [22] Velez, C. E. , Numerical integration of orbits in multirevolution steps ' j jAgA Technical Note D-5915 . Goddard Space Flight Center Greenbelt, Maryland. 108 APPENDIX LISTING OF CODE FOR GENERALIZED ADAMS METHODS 109 IMPLICIT DOUBLE PRECISION (A-H,Q-Z) C* DRIVING PROGRAM * COMMON/COUNT/NDIFF,NPER,NDFSB,NFNCT,NCNT COMMON TSCALE, BETA, DELTA, KPFLG DOUBLE PRECISION Y(13,2) ,SAVEI (13,2) ,YMAX(2) DOUBLE PRECISION ERRORI (2) ,CSAVEI(2,3) DIMENSION PWI(2,2),IPI(2) DOUBLE PRECISION TIME(400) , YSAVE(400,2) ,YMID(13 2) YAVGC2) DOUBLE PRECISION Z(12,3) ,ZMAX(3) ,SAVE(12,3) ,ERROR(3) ,CSAVE(3 3) DIMENSION PW(3, 3), IP(3) '' U,J; C SET COUNTERS TO SEE KOW OFTEN ROUTINES ARE CALLED NDIFF = NPER = NDFSB = NFNCT = NCNT = NPROB = 12 C* DEFINE PARAMETERS FOR INNER INTEGRATION * C*************************^^ N = 2 HI = l.D-6 EPSI = l.D-6 HMINI - l.D-14 HMAXI = 1.D0 MFI = MAXDEI = 10 c*********************************^^^^^^^^^^^^^^^^^^^^^^^^^^ a ^ c* define parameters for period-finder * c initialize period TN = 6.28D-3 MAXL = 400 MAXCNT = 4 KSKIP = 1 IF INTFLG IS ONE, WE WILL ONLY STEP OVER AN INTEGRAL C NUMBER OF PERIODS INTFLG = 1 C**********************^^ C* DEFINE PARAMETERS FOR OUTER INTEGRATION * T = 0.D0 H = l.D-3 EPS = 5.D-3 HMAX = 5. DO TEND « 20. DO MF = MAXDER = 10 110 p* *************** ************************************************ ****** C* CALCULATE OTHER QUANTITITIES THAT WE NEED TO START * c ************************************* ********************************* JSTART = NP1 = N + 1 C INITIALIZE Z CALL INIT(Y,T,N) DO 10 I = 1,N 10 Z(1,I) = Y(l, I) Z(1,NP1) = T C INITIALIZE ZMAX TO 1.D0 DO 20 I = 1.NP1 20 ZMAX(I) = 1.D0 TSCALE = 2*TN HMIN = 5*TN ' IF(H .LT. HMIN) H = HMIN IF(INTFLG .EQ. 0) GO TO 30 Nil = H/ TSCALE H = TSCALE*NH 30 CONTINUE C c ****&***************************************************************** C* WRITE OUT PARAMETERS FOR METHOD * r^********************************************************************* WRITE (6, 950) NPROB 950 FORMAT (' PROBLEM =' ,15) WRITE(6,951) 951 FORMAT (' PARAMETERS FOR INNER INTEGRATION') WRITE(6,952) EPSI,11I,HMINI,HMAXI FORMAT (' EPS =' ,D15 . 7,5X, 'H =',D15.7/' HMIN =',D15.7, 952 953 954 960 955 956 957 958 959 1 5X,'HMAX =',D15.7) WRITE (6 FORMAT ( WRITE(6 FORMAT ( WRITE (6 WRITE (6 WRITE (6 FORMAT ( WRITE (6 FORMAT ( WRITE (6 FORMAT ( 'MAXL WRITE (6 FORMAT ( WRITE (6 FORMAT ( WRITE (6 FORMAT ( 953) MFI.MAXDEI MF =',I5,5X, 'MAXDER =',I5) 954) PARAMETERS FOR OUTER INTEGRATION') 952) EPS, H, HMIN, UMAX 953) MF, MAXDER 960) INTFLG INTFLG =',15/' ') 955) PARAMETERS FOR PERIOD FINDER' ) 956) TN,T SCALE, MAXL,MAXCNT,KSKIP TN = ',D15. 
  956 FORMAT(' TN =',D15.7,5X,'TSCALE =',D15.7/' ',
     1   'MAXL =',I5,5X,'MAXCNT =',I5,5X,'KSKIP =',I5/' ')
      WRITE(6,957)
  957 FORMAT(' INITIAL VALUES:')
      WRITE(6,958) (Z(1,I),I = 1,NP1)
  958 FORMAT(' ',7D15.7)
      WRITE(6,959)
  959 FORMAT(' '/' ')
C***********************************************************************
C*                          CALL DIFF                                  *
C***********************************************************************
C
  100 CONTINUE
      CALL DIFF(NP1,T,Z,SAVE,CSAVE,H,HMIN,HMAX,EPS,MF,ZMAX,
     1     ERROR,KFLAG,JSTART,MAXDER,PW,IP,INTFLG,
     2     N,Y,SAVEI,YMAX,MAXL,KSKIP,TIME,YSAVE,YAVG,
     3     YMID,TN,MAXCNT,
     4     CSAVEI,HI,HMINI,HMAXI,EPSI,MFI,
     5     ERRORI,MAXDEI,PWI,IPI)
      IF(KFLAG .LT. 0) GO TO 800
      IF(KPFLG .LT. 0) STOP
      K = JSTART + 1
C***********************************************************************
C*            PRINT OUT RESULTS FROM ONE LARGE STEP                    *
C***********************************************************************
      WRITE(6,900) T,H
  900 FORMAT(' ','T = ',5X,D15.7,10X,'H = ',D15.7)
      WRITE(6,901) TN,DELTA
  901 FORMAT(' ','PERIOD =',D15.7,10X,'DELTA =',D15.7)
      WRITE(6,902) K
  902 FORMAT(' ','ORDER =',I5)
      WRITE(6,903)
  903 FORMAT(' Z =')
      DO 210 I = 1,K
  210 WRITE(6,904) (Z(I,J),J = 1,NP1)
  904 FORMAT(' ',7D15.7)
      WRITE(6,905)
  905 FORMAT(' YAVG =')
      WRITE(6,904) (YAVG(I),I = 1,N)
      WRITE(6,906) NDIFF,NPER,NDFSB
  906 FORMAT(' NDIFF =',I5,5X,'NPER =',I5,5X,'NDFSB =',I5)
      WRITE(6,908) NFNCT,NCNT
  908 FORMAT(' NFNCT =',I7,3X,'NCNT =',I5)
      WRITE(6,907)
  907 FORMAT(' ',/' ')
C     Z(1,NP1) IS THE CURRENT TIME
      IF(Z(1,NP1) .LT. TEND) GO TO 100
      STOP
C***********************************************************************
C*                        ERROR MESSAGES                               *
C***********************************************************************
  800 WRITE(6,981) KFLAG
  981 FORMAT(' ERROR IN OUTER INTEGRATION.  KFLAG =',I5)
      STOP
      END

      SUBROUTINE DIFF(N,T,Y,SAVE,CSAVE,H,HMIN,HMAX,EPS,MF,YMAX,
     1     ERROR,KFLAG,JSTART,MAXDER,PW,IP,INTFLG,
     2     NI,YI,SAVEI,YMAXI,MAXL,KSKIP,TIME,YSAVE,YAVG,
     3     YMID,TN,MAXCNT,
     4     CSAVEI,HI,HMINI,HMAXI,EPSI,MFI,
     5     ERRORI,MAXDEI,PWI,IPI)
C***********************************************************************
C* THIS SUBROUTINE SOLVES THE SYSTEM OF N + 1 DIFFERENCE EQUATIONS     *
C* WHICH ARISE FROM OSCILLATING PROBLEMS, OVER ONE STEP OF LENGTH      *
C* H AT EACH CALL, USING GENERALIZED ADAMS FORMULAS.  AN EXPLANATION   *
C* OF HOW IT WORKS IS IN CHAPTER 6.  H CAN BE SPECIFIED BY THE USER,   *
C* BUT IT MAY BE INCREASED OR DECREASED WITHIN THE RANGE HMIN TO       *
C* HMAX IN ORDER TO ACHIEVE AS LARGE A STEP AS POSSIBLE WHILE NOT      *
C* COMMITTING A SINGLE-STEP ERROR WHICH IS LARGER THAN EPS IN THE      *
C* L2-NORM WHERE EACH COMPONENT OF THE ERROR IS DIVIDED BY THE         *
C* COMPONENTS OF YMAX.  (THIS IS REFERRING TO THE ERRORS COMMITTED     *
C* BY THE METHODS OF THIS ROUTINE ALONE.  ERRORS MADE BY THE           *
C* INNER INTEGRATION ROUTINE ARE CONTROLLED SEPARATELY.)               *
C*                                                                     *
C* THE PROGRAM REQUIRES A SUBROUTINE NAMED PERIOD, WHICH COMPUTES THE  *
C* PERIOD OF THE OSCILLATION.  TO ACCOMPLISH THIS, PERIOD CALLS        *
C* AN INNER INTEGRATION ROUTINE, DIFSUB, TO INTEGRATE THE DIFFERENTIAL *
C* EQUATION OVER ONE PERIOD OF THE OSCILLATION.  SUBROUTINES REQUIRED  *
C* BY DIFF, PERIOD, AND DIFSUB ARE:                                    *
C*      DECOMP(N,M,PW,IP)                                              *
C*      SOLVE(N,M,PW,CSAVE(1,1),IP)                                    *
C*      INTERP(T1,Y,NQ,N,T2,H,SAVE)                                    *
C* DECOMP AND SOLVE ARE STANDARD LU DECOMPOSITION AND BACK-            *
C* SUBSTITUTION ROUTINES.  INTERP IS LISTED HERE.  IT INTERPOLATES     *
C* THE ARRAY Y (OF Y AND ITS SCALED DERIVATIVES) FROM THE POINT        *
C* T1 TO THE POINT T2, AND STORES THE RESULTS IN SAVE.  DECOMP         *
C* AND SOLVE ARE ONLY CALLED IF MFI = 1 OR 2 (STIFF METHODS IN THE     *
C* INNER INTEGRATION ROUTINE).  A LISTING OF DIFSUB CAN BE FOUND       *
C* IN GEAR [6], PP. 158-166.  IT IS VERY SIMILAR TO THE ROUTINE DIFF   *
C* GIVEN HERE.                                                         *
C*                                                                     *
C* TWO ROUTINES MUST BE SUPPLIED BY THE USER.  THESE ARE USED BY       *
C* DIFSUB.  THEY ARE:                                                  *
C*      DIFFUN(T,Y,DY)                                                 *
C*      PEDERV(T,Y,PW,M)                                               *
C* DIFFUN EVALUATES THE DERIVATIVES OF THE DEPENDENT VARIABLES (OF THE *
C* ORIGINAL DIFFERENTIAL EQUATION) STORED IN Y(1,I), FOR I = 1 TO N,   *
C* AND STORES THE DERIVATIVES IN THE ARRAY DY.
PEDERV IS USED OiML* C* IF MFI IS ONE, AND COMPUTES THE PARTIAL DERIVATIVES OF THE * C* ORIGINAL DIFFERENTIAL EQUATION (Tr.AT IS, IT COMPUTES THE JACOB IAN) . * C* * C* THIS PROGRAM USES DOUBLE PRECISION FOR ALL FLOATING POINT VARIABLES* C* EXCEPT THOSE STARTING WITH P. * C* * PARAMETERS TO THL SUBROUTINE DIFF HAVE THE FOLLOWING MEANINGS: L * 'RECEEDED BY AN I ARE INPUT PARAMETERS ONLY. PARAMETERS* C * ) BY AN ARE OUTPUT PARAMETERS ONLY. PARAMETERS PRECEEDED* * 113 C* BY AN 10 ARE BOTH INPUT AND OUTPUT PARAMETERS. PARAMETERS * C* PRECEEDED BY A W ARE WORK ARRAYS. * C* I. PARAMETERS USED MAINLY BY THE ROUTINE DIFF: * C* I N THE NUMBER OF DIFFERENCE EQUATIONS TO BE SOLVED * C* (THIS IS ONE PLUS THE NUMBER OF EQUATIONS IN THE * C* ORIGINAL OSCILLATING SYSTEM) * C* 10 T THE INDEPENDENT VARIABLE. NOTE THIS IS NOT TIME. * C* IT IS THE INDEPENDENT VARIABLE WHICH MAKES THE * C* PERIOD OF THE OSCILLATION CONSTANT. * C* 10 Y A 12 BY N ARRAY CONTAINING THE DEPENDENT VARIABLES * C* (THESE DEFINE THE SMOOTH SOLUTION. THEY ARE CALLED * C* Z IN THE DRIVING PROGRAM.), AND SCALED DERIVATIVES * C* OF THE FORWARD DIFFERENCE OF Z OVER AN INTERVAL THE * C* LENGTH OF ONE PERIOD. ONLY Y(1,I) NEED BE PROVIDED * C* BY THE CALLING PROGRAM ON THE FIRST ENTRY, AND IT * C* IS THE INITIAL VALUE OF THE DIFFERENTIAL EQUATION(I) . * C* Y(1,N) IS THE INITIAL VALUE OF THE TIME. * C* W SAVE A 12 BY N ARRAY USED FOR STORAGE BY DIFF * C* W CSAVE A 3*N VECTOR USED FOR STORAGE BY DIFF * C* 10 H THE STEPSIZE TO BE ATTEMPTED BY DIFF ON THE NEXT STEP* C* I HMIN THE MINIMUM STEPSIZE THAT WILL BE TAKEN BY DIFF. * C* HMIN MUST 3E LARGER THAN TWICE THE PERIOD, OR ELSE * C* THIS ROUTINE IS NO LONGER MORE EFFICIENT THAN THE * C* SMALL-STEP METHOD IT USES (DIFSUB) . * C* I HMAX THE MAXIMUM SIZE TO WHICH THE STEPSIZE IN DIFF 'WILL * C* BE INCREASED. * C* I EPS THE ERROR TEST CONSTANT IN DIFF. SINGLE STEP ERROR * C* ESTIMATES (OF THE OUTER INTEGRATION ROUTINE) MUST BE * C* LESS THAN EPS IN THE EUCLIDIAN NORM. THE STEP AND/OR* C* ORDER ARE ADJUSTED TO ACHIEVE THIS. * C* I MF THE METHOD INDICATOR FOR DIFF. SINCE WE ONLY * C* CONSIDER NON-STIFF OSCILLATING PROBLEMS, MF WILL * C* ALWAYS BE 0. * C* W ERROR AN ARRAY OF N ELEMENTS WHICH CONTAINS THE ESTIMATED * C* ONE-STEP ERROR IN EACH COMPONENT (IN DIFF) * C* KFLAG A COMPLETION CODE FOR DIFF WITH THE FOLLOWING MEANING* C* +1 THE STEP WAS SUCCESSFUL * C* -1 THE STEP WAS TAKEN WITH H = HMIN, BUT THE * C* REQUESTED ERROR WAS NOT ACHIEVED. * C* -2 THE MAXIMUM ORDER SPECIFIED WAS FOUND TO * C* BE TOO LARGE * C* -3 CORRECTOR CONVERGENCE COULD NOT BE ACHIEVED * C* FOR H .GT. HMIN * C* -4 THE REQUESTED ERROR IS SMALLER THAN CAN BE * C* HANDLED WITH THIS PROGRAM * C* 10 JSTART AN INPUT INDICATOR FOR DIFF WITH THE FOLLOWING * C* MEANINGS : * C* -1 REPEAT THE LAST STEP WITH A NEW H * C* PERFORM THE FIRST STEP. THE FIRST STEP MUST* C* BE DONE WITH THIS VALUE OF JSTART SO THAT * C* THE ROUTINE CAN INITIALIZE ITSELF * C* +1 TAKE A NEW STEP CONTINUING FROM THE LAST. * 114 c* c* C* I MAXDER c* C* I INTFLG c* c* c* c* w PW c* w IP c* C* II. PARAMETERS C* I NI c* c* C* 10 YI c* c* c* c* w SAVEI c* c* c* w YMAXI c* c* C* I MAXL c* c* c* C* I KSKIP c* c* c* w TIME c* w YSAVE C* YAVG c* c* w YMID c* C* 10 TN C* I MAXCNT c* c* / C* III . PARAMETERS c* CSAVEI C* 10 ill C* I IiMINI C* I I1MAXI C* I EPS I C* I MFI c* ERROR! C* J MAXDE] c* w PWI * k * * * * * k * * * JSTART IS SET TO NQ, THE CURRENT ORDER OF THE METHOD, AT EXIT. A PARAMETER MUCH RESTRICTS THE ORDER. IT MUST BE LESS THAN 12 FOR GENERALIZED ADAMS METHODS. 
THIS PARAMETER DETERMINES WHETHER OR NOT STEPSIZES IN DIFF WILL BE RESTRICTED TO BE INTEGRAL NUMBERS OF PERIODS. IF INTFLG IS 1, WE WILL STEP OVER INTEGRAL NUMBERS OF PERIODS. A BLOCK OF H**2 FLOATING POINT LOCATIONS A BLOCK OF N INTEGER LOCATIONS USED MAINLY BY PERIOD N - 1 (THE ORDER OF THE ORIGINAL SYSTEM OF DIFFERENTIAL EQUATIONS) . THIS IS CALLED N IN- PERIOD AND DIFSUB. A 13 BY HI ARRAY CONTAINING THE DEPENDENT VARIABLES OF THE ORIGINAL OSCILLATING SYSTEM AND THEIR DERIVATIVES. ONLY YI(1,I) NEED BE INITIALIZED BY THE CALLING ROUTINE A 13 BY NI ARRAY USED FOR STORAGE BY PERIOD AND DIFSUB. THIS IS CALLED SAVE IN PERIOD AND DIFSUB. A VECTOR OF LENGTH NI HOLDING THE MAXIMUM OF YI SEEN SO FAR. THIS IS CALLED YMAX IN PERIOD AND DIFSUB. THE MAXIMUM NUMBER OF INTERVALS IN THE INTEGRATION (FOR THE NEWTON ITERATION TO FIND THE PERIOD). NOTE THAT IT IS THE MAXIMUM DIMENSION OF TIME AND YSAVE THE NUMBER OF INTERVALS TO BE GROUPED TOGETHER TO OCCUPY ONE POSITION IN THE ARRAY YSAVE. SET KSKIP = 1 UNLESS STORAGE IS VERY SCARCE. A VECTOR OF TIMES OF LENGTH MAXL USED BY PERIOD YSAVE (I, J) CONTAINS Y(1,J) AT T = TIME (I) A VECTOR CONTAINING THE AVERAGE OF Y OVER ONE CYCLE OF LENGTH TN AN ARRAY CONTAINING THE VALUES OF Y AT THE MIDDLE OF THE INTERVAL (BETWEEN TWO ESTIMATED PERIODS) ESTIMATE FOR THE PERIOD. MAXIMUM NUMBER OF NEWTON ITERATIONS THAT WILL BE ALLOWED TO FIND THE PERIOD. USED BY DIFSUB THIS IS CALLED CSAVE IN DIFSUB. THESE PARAMETERS ARE USED EXACTLY AS DESCRIBED IN THE COMMENTS WITH DIFSUB, SO THEY WILL NOT BE DESCRIBED HERE III DETAIL. THEY CORRESPOND TO PARAMETERS WITH THE SAME NAMES IN DIFSUB, EXCEPT THE NAMES HERE HAVE AH I AT THE END OF THEM, TO DISTINGUISH THEM FROM THE PARAMETERS OF THE OUTER INTEGRATION ROUTINE. DIMENSION CSAVEI (NI, 3) ,ERRORI(NI) ,PWI (NI ,NI) ,IPI(NI in,:, SHOULD BE SET AS THOUGH DIFSUB WERE DOING * * k k k k * k k ft * k k * k k k ft * k k * 115 C* W IPI THE ENTIRE INTEGRATION. * IMPLICIT DOUBLE PRECISION (A-h,Q-Z) COMMON TSCALE, BETA, DELTA, KPFLG COMMON/ COUNT/NDIFF,NPER,NDFSB,NFNCT,NCNT DIMENSION Y( 12, N), YMAX(N) , SAVE(12,N), ERROR ( N) , PW(N,N), 1 A(12) ,PERTST(12,2,3) ,CSAVE(N,3) ,IP(N) DIMENSION YI(13,NI) ,SAVEI(13,NI) ,YMAXI(NI) .TIME(MAXL) DIMENSION YSAVE(MAXL,NI) DIMENSION YAVG(NI),YMID(13,NI),CSAVEI(NI,3),ERRORI(NI) DIMENSION PWI (NI ,NI) , IPI (NI) DOUBLE PRECISION ALPHA(9) C* THE COEFFICIENTS IN PERTST ARE USED IN SELECTING THE STEP AND * C* ORDER, THEREFORE ONLY ABOUT ONE PERCENT ACCURACY IS NEEDED. * C***************************************^ DATA PERTST/2.,4.5,7.333,10.42,13.7,17.15,1.,1.,1.,1.,1.,1., 1 2., 12., 24., 37. 89, 53. 33, 70. 08, 87. 97, 100. 9, 126. 7, 147. 3, 168.8,191.4,3.,6.,9.167,12.5,15.98,1.,1.,1.,1.,1.,1., 3 1., 12. ,24. ,37. 89, 53. 33, 70. 08, 87. 97, 100. 9, 126. 7, 147. 3, 4 168.9, 191. 4, 211. ,1.,1.,. 5, .1667, .04133, .008267, 5 l.,l.,l.,l.,l., 1.,1.,1.,2.,1.,.3157, .07407, .0139, 6 .0021818,. 00029, .000035, .0000037, .00000035 / IROUT = NDIFF = NDIFF + 1 IRET = 1 KFLAG = 1 IF (JSTART.LE.O) GO TO 140 C*************************************^ C* BEGIN BY SAVING INFORMATION FOR POSSIBLE RESTARTS AND CHANGING * C* H BY THE FACTOR R IF THE CALLER HAS CHANGED H. ALL VARIABLES * C* DEPENDENT ON H MUST ALSO BE CHANGED. * C* E IS A COMPARISON FOR ERRORS OF THE CURRENT ORDER NQ. EUP IS * C* TO TEST FOR INCREASING THE ORDER, EDWN FOR DECREASING THE ORDER. * C* KNEW IS THE STEP SIZE THAT WAS USED ON THE LAST CALL. 
* NQ = JSTART K = NQ + 1 100 DO 110 I = 1,N DO 110 J = 1,K 110 SAVE(J,I) = Y(J,I) HOLD = HNEW IF (H.EQ.HOLD) GO TO 130 120 IRET1 = 1 GO TO 750 130 NQOLD = NQ TOLD = T IF (JSTART. GT.O) GO TO 250 GO TO 170 140 IF ( JSTART. EO.-l) GO TO 160 C****************************************^ C* ON THE FIRST CALL, THE ORDER IS SET TO 1 AND THE INITIAL * 116 C* DERIVATIVES ARE CALCULATED. * BETA = TSCALE/U Y(2,N) = 2*TN/BETA BR = 1.0 NQ = 1 N3 = N N4 = N**2 CALL PERIOD (Y,CSAVE(1, 3) ,H,NI,YI,SAVEI,KI, 1 HMINI , IiMAXI , EP S I , MFI , YMAXI , ERRORI , MAXDE I , 2 PWI,IPI,CSAVEI,MAXL,KSKIP,TIME,YSAVE,YAVG,YMID,TN, 3 MAXCNT.EPS) IF(KPFLG .LT. 0) RETURN DO 150 I = 1,N 150 Y(2,I) = CSAVE(I,3)*K HNEW = II 'K - 2 GO TO 100 C* REPEAT LAST STEP BY RESTORING SAVED INFORMATION. * 160 IF (NQ.EQ.NQOLD) JSTART = 1 T = TOLD NQ = NQOLD K = NQ + 1 GO TO 120 C* SET THE COEFFICIENTS THAT DETERMINE THE ORDER AND THE METHOD * C* TYPE. CHECK FOR EXCESSIVE ORDER. THE LAST TWO STATEMENTS OF * C* THIS SECTION SET IWEVAL .GT.O IF FN IS TO BE RE-EVALUATED * C* BECAUSE OF THE ORDER CHANGE, AND THEN REPEAT THE INTEGRATION * C* STEP IF IT HAS NOT YET BEEN DONE (IRET = 1) OR SKIP TO A FINAL C* SCALING BEFORE EXIT IF IT HAS BEEN COMPLETED (IRET =2). * 170 CONTINUE A(2) = -1.D0 BETA = TSCALE/H BETA2 = BETA*BETA BETA4 = EETA2*BETA2 BETA6 = BETA4*BETA2 BETAS = BETA4*3ETA4 BETA10 = BETA8*BETA2 C COMPUTE THE ARRAY ALPHA ALPHA (1) = - BETA ALPHA(2) - - 1.5D0*BETA + 0.5D0*BETA2 ALFrlA(3) = - 2*BETA + BETA2 ALPHA(4) = (-15*BETA + 10*BETA2 - BETA4)/6.D0 ALPHA(5) - -3*BETA +(5*BETA2-BETA4)/2.D0 LPHA(6) = (-21*bETA+21*BETA2-7*BETA4+BETA6)/6.D0 HA(7) - (-12*BETA+14*BLT/.2-7*BETA4+2*BETA6)/3.D0 AL?HA(8) = (-45*BETA+60*bETA2-42*BETA4+20*BETA6-3*BETA8) /10.D0 ALPLA(9) ■ (-10*BETA+15*BETA2-14*3ETA4+10*EETA6-3*BETA8)/2.D0 117 IF (MF.EQ.O) GO TO 180 IF (NQ.GT.6) GO TO 190 GO TO (221, 222,223, 224, 225, 226), NQ 180 IF(NQ.GT.ll) GO TO 190 GO TO (205,206,207,208,209,210,211,212,213,214,215), NQ 190 KFLAG = -2 RETURN C* THE FOLLOWING COEFFICIENTS SHOULD BE DEFINED TO THE MAXIMUM * C* ACCURACY PERMITTED BY THE MACHINE. THEY ARE, IN THE ORDER USED * C* (WHEN BETA IS ZERO) : ' * C* * C * - 1 * C* -1/2,-1/2 * C* -5/12,-3/4,-1/6 * C* -3/8,-11/12,-1/3,-1/24 * C* -251/720,-25/24,-35/72,-5/48,-1/120 * C* -95/288,-137/120,-5/8,-17/96,-1/40,-1/720 * C* -1S0&7/604S0, -49/40, -203/270, -49/192, -7/144, -7/1440, -1/5040 * C* -5257/17280, -363/280, -469/540, -967/2880, -7/90, -23/216C.-1/126C, * C* -1/40320 * C* -1070017/3628800,-761/560,-29531/30240,-267/640,-1069/9600,-3/160 * C* -13/6720,-1/8960,-1/362880 * C* -4165506/14515200,-7129/5040,-6515/6048,-4523/9072,-19/128 * C* -3013/1036800,-5/1344,-29/96768,-1/72576,-1/3628800 * C* -134211265/4 79001600, -7381/5040, -177133/151200, -8409 V145152, * C* -341693/1814400,-8591/207360,-7513/1209600,-121/193536, * £* -11/272160,-11/7257600,-1/39916800 * C* c* C* -1 C* -2/3,-1/3 C* -12/25,-7/10,-1/5,-1/50 * C* -120/274,-225/274,-85/274,-15/274,-1/274 * C* -180/441,-58/63,-15/36,-25/232,-3/252,-1/1764 * C***************** ******************** * ********************************* 205 A(l) = -1.0D0 GO TO 230 206 A(l) - -(1.D0 - BETA} /2. DO A(3) = -0.500000000DO GO TO 230 207 A(l) = -(5. DC - 6*EETA + BE1A2)/12.D0 A(3) = -0.750000COODO A(4) = -0. 166666666666666 7D0 GO TO 230 208 A(l) = -(9. DO - 12*BETA + 3*EFTA2)/24.B0 A(3) - -0.9165666666666667D0 A(4) = -0.3333333333333333D0 A(5) = -0.04166666666666667D0 GO TO 230 209 A(l) = -(251. 
DO - 360*BETA + 110*BETA2 - BETA4; /720.D0 A(3) = -1.0416666666666667D0 118 A(4) = -0.4861111111111111U0 A (5) = -0.1041666666666667D0 A(6) = -0.008333333333333333D0 GO TO 230 210 A(l) = -(475 - 720*BETA + 250*BETA2 - 5*3ETA4) /1440.D0 A(3) = -1.14166666666666G7D0 A(4) = -0.625000000D0 A (5) = -0.1770833333333333D0 A(6) = -0.0250000000DO A(7) = -0.001388888888888889D0 GO TO 230 211 A(l) = - (19087- 30240*BETA+11508*BETA2-357*BETA4+2*3ETA6)/60480. DO A(3) = -1.225000000D0 A(4) = -0.7518518518518519D0 A (5) = -0.2552083333333333D0 A(6) = -0.04861111111111111D0 A (7) = -0.004861111111111111D0 A (8) = -0.0001984126984126984D0 GO TO 230 212 A(l) = -(36799-60480*BETA+24696*BETA2-1029*3ETA4+14*3ETA6)/120690.D0 A(3) = -1.296428571428485D0 A(4) = -0.8685185185185185D0 A (5) = -0.3357638888888888D0 A (6) = -0.07777777777777777D0 A (7) = -0.01064814814814815D0 A(8) - -0.0007936507936507937D0 A(9) = -0. 00002480158730 15873D0 GO TO 230 213 A(l) = -(1070017-1814400*BETA+784080*BETA2-40614*EETA4 1 +425*3ETA6-3*BETA8)/3628800.D0 A(3) = -1.35892857142857D0 A(4) = -0.976554232804233D0 A(5) = -0.41718750000D0 A(6) = -0.1113541666666667D0 A(7) - -0.0187500000D0 A(8) = -0.001934523809523809D0 A(9) = -0.000111607142857143D0 A(10) = -0.00000275573192239859D0 GO TO 230 214 A(l) = -(2082753-3628300*BETA+1643760*BETA2-100926*BETA4 1 +2250*3ETA6-27*BETA8)/7257600.D0 A(3) = -1.41448412698413D0 A(4) - -1.0772156O846561D0 A(5) = -0.498567019400353D0 A(6) = -0.148437500000D0 A(7) = -0.0290605709876543DO A(8) = -0.0037202380952331D0 A(9) = -0.000299685846560847D0 A(10) - -0.0000137786596119929D0 A(ll) = -0.00000027557319223986D0 GO TO 230 215 A(l) = -(134211265-239500800*LLTA+112923360*BETA2-7960480*BETA4 119 221 222 223 224 225 226 230 1 +266090*BETA6-4785*BETA8+10*BETA10) /479001600 . DO A(3) = -1.46448412698413D0 A(4) = -1.17151455026455D0 A(5) = -0.579358190035273D0 A(6) = -0.188322861552028D0 A(7) = -0.0414303626543210D0 A(8) = -0.0062111447989418D0 A(9) = -0.000625206679894180D0 A(10) = -0.0000404174015285126D0 A(ll) = -0.00000151565255731922D0 A(12) = -0.0000000250521083854417D0 GO TO 230 A(l) - -l.OOOOOOOOODO GO TO 230 240 A(l) = > -0.6666666666666667D0 A(3) = ■ -0.3333333333333333D0 GO TO 230 A(l) = ■ - 0.5454545454545455D0 A(3) = : A(l) A(4) = ■ -0.09090909090909091D0 GO TO 230 A(l) = ■ -0.480000000D0 A(3) = ■ -0.700000000D0 A(4) = • -0.200000000D0 A(5) = -0.020000000D0 GO TO 230 A(l) = -0.437956204379562D0 A(3) = -0.8211678832116788D0 A(4) = -0. 310218978102189 8D0 A(5) = -0.05474452554744526D0 A(6) = -0.0036496350364963504D0 GO TO 230 A(l) = -0. 408163265 3061225D0 A(3) = -0. 9206349206 349206D0 A(4) = -0.4166666666666667D0 A(5) = -0.0992063492063492D0 A(6) = -0.0119047619047619D0 A(7) = -0. 00056689 3424036282D0 K = NQ+1 IDOUB = K MTYP = (4 - MF)/2 ENQ2 = .5 /FLOAT (NQ + 1) ENQ3 = .5 /FLOAT (NQ + 2) ENQ1 = 0.5 /FLOAT (NQ) PEPSH = EPS EUP = (PERTST(NQ,MTYP,2)*?EPSH)**2 E = (PERTST(NQ,MTYP,l)*PEPSh)**2 EDWN = (PERTST(NQ ,MTYP , 3) *PEPSH) **2 IF (EDWN.EQ.O) GO TO 780 BND = (EPS*ENQ3)**2 IVJEVAL = MF GO TO ( 250 , 680 ),IRET 120 p********************************************************************^ C* THIS SECTION COMPUTES THE PREDICTED VALUES BY EFFECTIVELY * C* MULTIPLYING THE SAVED INFORMATION BY THE PASCAL TRIANGLh * C* MATRIX, PERTURBED SO THAT THE COEFFICIENTS CORRESPOND TO * C* SOLVING THE DIFFERENCE EQUATION * 250 T = T + H IF(K .LT. 
3) GO TO 256 DO 255 J = 3,K JM2 = J - 2 DO 255 I = 1,N 255 Y(1,I) = Y(1,I) + ALPHA(JM2)*Y(J,I) 256 CONTINUE DO 260 J = 2,K DO 260 Jl = J,K J2=K-J1+J-1 DO 260 I = 1,N 260 Y(J2,I) = Y(J2,I) + Y(J2+1,I) DO 261 I = 1,K c *********************************************^ C* UP TO 3 CORRECTOR ITERATIONS ARE TAKEN. CONVERGENCE IS TESTED BY * C* REQUIRING THE L2 NORM OF CHANGES TO BE LESS THAN END WHICH IS C* DEPENDENT ON THE ERROR TEST CONSTANT. * DO 270 I = 1,N 270 ERROR(I) =0.0 DO 430 L=l,3 CALL PERIOD(Y,CSAVE(I,3),N,NI,YI,SAVEI,HI, 1 HMINI ,HMAXI ,EPSI ,MFI ,YMAXI ,ERRORI ,MAXDEI , PWI , IPI , CSAVEI ,MAXL,kSKIP,TIME ,YSAVE ,YAVG,YMID,TN, 3 MAXCNT,EPS) IF(KPFLG .LT. 0) RETURN C* IF THERE HAS BEEN A CHANGE OF ORDER OR THERE HAS BEEN TROUBLE C* WITH CONVERGENCE, PW IS RE-EVALUATED PRIOR TO STARTING THE C* CORRECTOR ITERATION IN THE CASE OF STIFF METHODS. IWEVAL IS C* THEN SET TO -1 AS AN INDICATOR THAT IT HAS BEEN DONE. P*********************************************************************^ IF (IWEVAL. LT.l) GO TO 350 IF (MF.EQ.2) GO TO 310 CALL PEDERV ( T , i' , P W ,N3) r - A(1)*H DO 2S0 J = 1,N DO 280 I = 1,N 200 PW(I.J) = PW(I,J)*R C* ADD T] [DENTITY !IATRIX TO THE JAC03IAN AND DECOMPOSE INTO LU = PW * ^*********************************A************************A*''«** 5 ' c ***** :,1c; ' { 290 DO 300 I - L,N 300 (1,1) - PW(I,I) 1- 1.0 I ..HVAL = -1 CALL DECOMI J (H,N3,PW,IP) * 121 IF (IP(N) .NE. 0) GO TO 350 GO TO 440 C****************************^^ C* EVALUATE THE JACOBIAM INTO PW BY NUMERICAL DIFFERENCING. R IS THE * C* CHANGE MADE TO THE ELEMENT OF Y. IT IS EPS RELATIVE TO Y WITH * C* A MINIMUM OF EPS**2. F STORES THE UNCHANGED VALUE OF Y. * C******************************^^ 310 DO 340 J = 1,N F = Y(1,J) R = EPS *DMAX1 (EPS, DABS (F)) Y(1,J) = Y(1,J) + R D = A(1)*H/R CALL PERIOD(Y,CSAVE(l,l),N,NI,YI,SAVEI,HI, 1 HMINI, HMAXI.EPS I, MFI,YMAXI,ERRORI ,MAXDEI, 2 PWI,IPI,CSAVEI,MAXL,KSKIP,TIME,YSAVE,YAVG,YMID,TN, 3 MAXCNT,EPS) IF(KPFLG .LT. 0) RETURN DO 330 I = 1,N 330 PW(I,J) = (CSAVE(I,1) - CSAVE(I,3)) * D 340 Y(1,J) = F GO TO 290 350 DO 360 1=1, N 360 CSAVE(I.l) = Y(2,I) - CSAVE(I,3)*H IF(MF.EQ.O) GO TO 410 CALL SOLVE(N,N3,PW,CSAVE(l,l) ,IP) C* CORRECT AND COMPARE DEL, THE L2 NORM OF CHANGE /YMAX, WITH BND. * C* ESTIMATE THE VALUE OF THE L2 NORM OF THE NEXT CORRECTION BY * C* ER*2*DEL AND COMPARE WITH BND. IF EITHER IS LESS, THE CORRECTOR * C* IS SAID TO HAVE CONVERGED. * 410 DEL = 0.0D0 DO 420 I = 1,N Y(l, I) = Y(1,I) + A(1)*CSAVE(I,1) Y(2,I) = Y(2,I) - CSAVE(I,1) ERROR(I) = ERROR(I) + CSAVE(I,1) DEL = DEL + (CSAVE(I,1)/YMAX(I))**2 420 CONTINUE DO 421 I = 1,K 426 IF(L.GE.2) BR = DMAX1(.9*BR, DEL/DELI) DELI = DEL IF(DMIN1(DEL,3R*DEL*2.0) .LE. BND) GO TO 490 430 CONTINUE C* THE CORRECTOR ITERATION FAILED TO CONVERGE IN 2 TRIES. VARIOUS * C* POSSIBILITIES ARE CHECKED FOR. IF H IS ALREADY HMIN AND * C* THIS IS EITHER ADAMS METHOD OR THE STIFF METHOD IN WHICH THE * C* MATRIX PW HAS ALREADY BEEN RE-EVALUATED, A NO CONVERGENCE EXIT * C* IS TAKEN. OTHERWISE THE MATRIX PW IS RE-EVALUATED AND/OR THE * C* STEP IS REDUCED TO TRY AND GET CONVERGENCE. * C*****************************^^ 440 T = TOLD 122 IF (DABS (H) .LE.HMIN*1. 00001. MD. (IWEVAL-MTYP) .LT.-l)GO TO 4b0 IF((MF.EQ.0).OR. (IWEVAL.NE.O)) H = K*0.25D0 IF(INTFLG .EQ. 0) GO TO 445 Nil = H/TSCALE II = NH*TSCALE 445 CONTINUE IWEVAL = MF IRET1 = 2 IF(IROUT .EQ. 0) IRET1 = 4 GO TO 750 450 CONTINUE I RET = 1 GO TO 170 460 KFLAG = -3 NQ = MQOLD 470 DO 480 I = 1,N ■ DO 480 J = 1,K 480 Y(J,I) = SAVE (J, I) H = HOLD JSTART = NQ RETURN c ,,, * ... * * * ,.. 
* * * ,-c * * * * * * * * * * * * * * * * * * * * * * * * * * * * * ******* * * * * * ******** * ********** C* TEE CORRECTOR CONVERGED AND CONTROL IS PASSED TO STATEMENT 520 C* IF THE ERROR TEST IS O.K., AND TO 540 OTHERWISE. C* IF TEE STEP IS O.K. IT IS ACCEPTED. IF IDOUB HAS BEEN REDUCED C* TO ONE, A TEST IS MADE TO SEE IF THE STEP CM BE INCREASED C* AT TEE CURRENT ORDER OR BY GOING TO ONE HIGHER OR ONE LOWER. C* SUCH A CHANGE IS ONLY MADE IF THE STEP CAN BE INCREASED BY AT C* LEAST 1.1. IF NO CHANGE IS POSSIBLE IDOUE IS SET TO 8 TO C* PREVENT FUTHER TESTING FOR 8 STEPS. C* IF A CuANGE IS POSSIBLE, IT IS MADE and IDOUE is set to C* NQ + 1 TO PREVENT FURTHER TESIHG FOR THAT NUMBER OF STEPS. C* IF THE ERROR WAS TOO LARGE, THE OPTIMUM STEP SIZE FOR THIS OR C* LONER ORDER IS COMPUTED, MD THE STEP RETRIED. IF IT SHOULD C* FAIL TWICE MORE IT IS AN INDICATION THAT THE DERIVATIVES THAT C* HAVE ACCUMULATED IN THE Y ARRAY HAVE ERRORS OF THE WRONG ORDER C* SO TINE FIRST DERIVATIVES ARE RECOMPUTED MD THE ORDER IS SET * C* TO 1. c * * . k ..< * * ... ... * * * ; V**yc**sBr* * * * * * * * ***** * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 490 J = 0.0 DO 500 I = 1,1! 500 D = D + ,(ERROR(I)/YMAX(I))**2 . i fEVAL = IF (D.GT.E) GO TO 540 IF (K.LT.3) GO TO 520 ; CORRECTION OF THE HIGHER ORDER DERIVATIVES AFTER A * * * * * * -k c r-.p * C* SUCCESFUL STEP. _ * n*************** * * **** * * * * * * * * **** * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 510 J = 3, . DO 510 I = 1,N ,l\j,i.) = Y(J,I) + A(J)*ERR0R(I) 123 520 KFLAG = +1 HNEW = H IF (ID0U3.LE.1) GO TO 550 IDOUB = IDOUB - 1 IF (IDOUB. GT.l) GO TO 700 DO 530 I = 1,N 530 CSAVE(I,2) = ERROR(I) GO TO 700 C* REDUCE THE FAILURE FLAG COUNT TO CHECK FOR MULTIPLE FAILURES * C* RESTORE T TO ITS ORIGINAL VALUE AND TRY AGAIN UNLESS THERE HAVE * C* THREE FAILURES. IN THAT CASE THE DERIVATIVES ARE ASSUMED TO HAVE * C* ACCUMULATED ERRORS SO A RESTART FROM THE CURRENT VALUES OF Y IS * C* TRIED. THIS IS CONTINUED UNTIL SUCCESS OR H = HMIN. * 540 KFLAG = KFLAG - 2 IF (DABS (H).LE.(HMIN*1. 00001)) GO TO 740 T = TOLD IF (KFLAG. LE. -5) GO TO 720 C******************************^^ C* PR1, PR2, AND PR3 WILL CONTAIN THE AMOUNTS BY WHICH THE STEP SIZE * C* SHOULD BE DIVIDED AT ORDER ONE LOWER, AT THIS ORDER, AND AT ORDER * C* ONE HIGHER RESPECTIVELY. * C**********************^ 550 PR2 = (D/E)**ENQ2*1.2 PR3 = l.E+20 IF ( (NQ.GE.MAXDER). OR. (KFLAG. LE.-l)) GO TO 570 D = 0.0 DO 560 I = 1,N 560 D = D + ((ERROR(I) - CSAVE(I,2))/YMAX(I))**2 PR3 = (D/EUP)**ENQ3*1.4 570 PR1 = l.E+20 IF (NQ.LE.l) GO TO 590 D = 0.0 DO 580 I = 1,N 580 D = D + (Y(K,I)/YMAX(I))**2 PR1 = (D/EDWN)**ENQ1*1.3 590 CONTINUE IF (PR2.LE.PR3) GO TO 650 IF (PR3.LT. PR1) GO TO 660 600 R = 1.0/AMAXl(PRl,l.E-4) NEWQ = NQ - 1 610 IDOUB = 8 IF ((KFLAG.EQ.l).AND.(R.LT.(l.l))) GO TO 700 IF (NEWQ. LE. NO) GO TO 630 C* COMPUTE ONE ADDITIONAL SCALED DERIVATIVE IF ORDER IS INCREASED * DO 620 I = 1,N DFK = K 620 Y(NEWQ+1,I) = ERROR(I)*A(K)/DFK 630 K = NEWQ + 1 124 IF (KFLAG.EQ.l) GO TO 670 HSAVE = H H = E*R IF(INTFLG .EQ. 0) GO TO 635 NH = H/T SCALE H = NH*TSCALE ?, = il/HSAVE 635 CONTINUE IRET1 = 3 GO TO 750 640 IF((NEWQ .EQ. NQ) .AND. (IROUT .EQ. 1)) GO TO 250 NQ = NEWQ GO TO 170 650 IF (PR2.GT.PR1) GO TO 600 NEWQ = NQ R = 1.0/AlIAXl(PR2,l.E-4) ■ GO TO 610 660 R = 1.0/AMAXl(PR3,l.E-4) NEWQ = NQ + 1 GO TO 610 670 IRET = 2 R = DMIN1(R,HMAX/DABS(H)) ilSAVE = E H = II*R IF(INTFLG .EQ. 
0) GO TO 675 NH = H/TSCALE E = NE*T SCALE R = E/ESAVE C75 CONTINUE tlNEW = E IF((NQ .2Ci. NEWQ) .AND. (IROUT .EQ. 1)) GO TO 680 NQ = NEWQ CO TO 170 680 Rl = 1.0 DO 690 J = 2,iv Rl = R1*R DO 690 I ■ 1,N 690 Y(J,I) = Y(J,I)*Rl IDOUE = K 700 DO 710 I = 1,N 710 YMAX(I) = DMAX1(YMAX(I),DAES(Y(1,I))) JSTART = IIQ ElETURN 720 IF(NQ.GT.l) GO TO 725 IF(DADS(U).LE.2.D0*HMIN) GO TO 780 GO TO 550 725 R = E/EOLD DO 730 I = 1,N Y(1,I) = SAVE(l.I) 2,1) = SAVE(2,I)*R - 1 iJTLAG = 1 125 GO TO 170 740 KFLAG = -1 KNEW = H JSTART = NQ RETURN C*******************************^^ C* THIS SECTION SCALES ALL VARIABLES CONNECTED WITH H AND RETURNS * C* TO THE ENTERING SECTION. * 750 H = DSIGN(DMAX1(HMIN,DMIN1(DABS(H),HMAX)),H) Rl = 1.0 DO 760 J = 2,K R = H/HOLD Rl = R1*R DO 760 I = 1,N 760 Y(J,I) = SAVE(J,I)*R1 DO 770 I = 1,N 770 Y(1,I) = SAVE(1,I) IDOUB = K GO TO ( 130 , 250 , 640 , 450),IRET1 780 KFLAG = -4 GO TO 470 END 126 * C* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* c* SUBROUTINE PERIOD(Z,G,NPl,N,Y, SAVE, H.HMIN, UMAX, EPS, MF.YMAX, ERROR, MAXDER,PW, IP, CSAVE, MAXL , KSKIP ,TIME , YS AVE , YAVG , YMID , TN ,MAXCNT , EPS OUT) IMPLICIT DOUBLE PRECISION(A-H,Q-Z) **************************************************************** * * * * * * * * THIS ROUTINE FINDS THE PERIOD OF THE OSCILLATION BY SOLVING A MINIMIZATION PROBLEM USING NEWTON'S METHOD. THE PARAMETERS HAVE THE FOLLOWING USES: NP1 SAVE CSAVE H HMIN HMAX EPS MF YMAX ERROR MAXDER PW IP YSAVE TIME KSKIP THE VECTOR OF VALUES WHICH CONTAINS THE SMOOTH SOLUTION AND ITS SCALED DERIVATIVES. ONLY Z(l,*) AND Z(2,NP1) WILL BE USED IN THIS ROUTINE. Z(1,NP1) CONTAINS THE CURRENT ESTIMATE OF TIME. THE VECTOR OF FORWARD DIFFERENCES OF Z, OVER AN INTERVAL OF LENGTH TSCALE. G IS OUTPUT OF THE ROUTINE* PERIOD. * THE NUMBER OF SIMULTANEOUS EQUATIONS IN THE ORIGINAL SYSTEM. N + 1 THE VECTOR OF VALUES WHICH WILL CONTAIN SOLUTIONS TO THE ORIGINAL EQUATIONS, WITH INITIAL CONDIIONS GIVEN BY THE VALUES OF Z(1,J) , J = 1 TO N. AN ARRAY USED FOR TEMPORARY STORAGE IN DIFSUB. AN ARRAY USED FOR TEMPORARY STORAGE IN DIFSUB. THE STEPSIZE OF THE NEXT STEP OF THE INNER INTEGRATION* THE MINIMUM STEPSIZE FOR THE INNER INTEGRATION. THIS IS SET BY THE USER. THE MAXIMUM STEPSIZE TO BE USED ON THE INNER INTEGRATION. THIS WILL BE ADJUSTED TO TN IF IT IS LARGER THAN TN. RELATIVE ERROR/UNIT STEP IS CONTROLLED BY DIFSUB TO BE LESS THAN OR EQUAL TO EPS. METHOD TYPE FOR INNER INTEGRATION ADAMS PREDICTOR-CORRECTOR 1 STIFF, SUPPLY JACOBIAN STIFF, DIFSUB COMPUTES JACOBIAN BY DIFFERENCING LENGTH N HOLDING THE MAXIMUM OF Y 5EEN N HOLDING THE ERROR ESTIMATES IN THIS MUST BE VECTOR OF SO FAR. VECTOR OF LENGTH DIFSUb. MAXIMUM ORDER TO BE USED IN DIFSUB. LESS THAN 13. USER SETS THIS. SINGLE PRECISION ARRAY (N,N) HOLDING JAC03IAN OF THE SYSTEM VECTOR OF LENGTH N OF INTEGERS USED BY DECOMP AND SOLVE. YSAVE(I,J) CONTAINS Y(1,J) AT T-TIME(I) A VECTOR OF TIMES OF LENGTH MAXL THE NUMBER OF INTERVALS TO BE LUMPED TOGETHER TO OCCUPY ONE POSITION IN THE ARRAY YSAVE. NORMALLY KSKIP = 1. 127 C* MAXL THE MAXIMUM NUMBER OF INTERVALS IN THE INTEGRATION. * C * NOTE THAT THEREFORE IT IS THE MAXIMUM DIMENSION * C* OF THE ARRAYS TIME AND YSAVE. * C* YAVG A VECTOR CONTAINING THE AVERAGE OF Y OVER ONE CYCLE * C* OF LENGTH TN * C* YMID AN ARRAY CONTAINING THE VALUES Y AT THE MIDDLE * C* OF THE INTERVAL (BETWEEN TWO ESTIMATED PERIODS). * C* TN ESTIMATE FOR THE PERIOD. 
* C* MAXCNT MAXIMUM NUMBER OF NEWTON ITERATIONS OF PERIOD- * C* FINDER THAT WILL BE ALLOWED * C* EPSOUT THE VALUE OF EPS FOR THE OUTER INTEGRATION * C * ROUTINE. THIS IS USED TO COMPUTE DELTA. * DOUBLE PRECISION Z(12 ,NP1) ,G(NP1) DOUBLE PRECISION Y(13,N) ,YMAX(N) ,SAVE(13,N) ,CSAVE(N,3) ,ERROR(N) DIMENSION PW(N,N),IP(N) DOUBLE PRECISION TIME(MAXL) , YSAVE (MAXL, N) ,YMID(13,N) ,YAVG(N) COMMON/COUNT/NDIFF,NPER,NDFSB,NFNCT,NCNT COMMON TSCALE, BETA, DELTA, KPFLG NPER = NPER + 1 KPFLG = 1 I COUNT = C BETA TIMES Z(2,NP1) IS AN ESTIMATE FOR THE LENGTH OF TWO C PERIODS TN = BETA*Z(2,NP1)*0.5D0 IF(HMAX .GT. TN) HMAX = TN C TIME IS CARRIED AS THE LAST COMPONENT OF Z T = Z(1,NP1) TEND = T + TN JSTART = TINIT = T IRET = 1 C INITIALIZE ELFMENTS OF YMAX TO 1 DO 10 I = 1,N 10 YMAX(I) = 1.D0 C***********************^^ C* ACCUMULATE THE ARRAYS TIME AND YSAVE BY STORING EVERY KSKIP * C* TIME AND Y. L WILL BE THE NUMBER OF ELEMENTS OF THE ARRAYS * C* FIRST INITIALIZE Y TO THE ELEMENTS OF Z. * C**************************^^ DO 20 I = 1,N 20 Y(l, I) = Z(1,I) KOUNT - L = 1 30 CONTINUE IF ((KOUNT/ KSKIP ) *KSKIP .NE. KOUNT) GO TO 50 C ADD THIS ELEMENT TO THE ARRAY TIME(L) = T DO 40 J = 1,N 40 YSAVE(L,J) = Y(1,J) IF(L .GT. MAXL) GO TO 800 L = L + 1 128 50 CONTINUE CALL DIFSUB(N,T,Y, SAVE, CSAVE,H,1MIN,HMAX, EPS, MF, * YMAX, ERROR, KFLAG , JSTART , MAXDER, PW , IP , * HLAST,HNEW,IDOUB) IF(KFLAG .LT. 0) GO TO 810 KOUNT = KOUNT + 1 IF(T .LE. TEND) GO TO 30 L = L - 1 C STORE THE ELEMENTS OF T AND Y AT THE LAST CALL TO DIFSUB IN C THE ARRAY YMID NCNT = NCNT + 1 HMID = H HMIDN = HNEW TMID = T IMID = IDOUB JMID = JSTART JSTP1 = JSTART + 1 DO 60 I = 1.JSTP1 DO 60 J = 1,N 60 YMID (I, J) = Y(I,J) WRITE(6,956) H,HLAST,(ERROR(I),I = 1,N) 956 FORMAT (' ',5D15.7) DO 70 I = 1,H 70 YMID (3, I) = -ERROR(I)*((H/HLAST)**2)/2.D0 c ********************************************************************** C* IN THIS SECTION, THE VALUES 111 THE NEXT PERIOD WILL C* BE COMPARED AGAINST THE ONES WE HAVE STORED, AND A CORRECTION * C* WILL BE OBTAINED FOR THE VALUE OF THE PERIOD. * 80 . CONTINUE K = 1 SUM1 = 0.D0 3UM2 = 0.D0 DO 90 I = 1,H 90 YAVG(I) = 0.D0 100 CONTINUE C SINCE THE FIRST THIRTEEN ROWS OF SAVE ARE TEMPORARY STORAGE FOR C DIFSUB, USE THESE AS TEMPORARY STORAGE FOR THE INTERPOLATION IF((TIME(K) + TN) .GT. T) GO TO 200 C INTERPOLATE THE CURRENT VALUE OF Y BACK TO TIME(K) + TN TD = TIME(K) + TN CALL INTERP(T,Y, JSTART, N,TD,H, SAVE) R = TD - T IF(K .LT. L) GO TO 110 HINT = TEND - TIME(L) GO TO 120 110 HINT = TIME(K+1) - TIME(K) 120 CONTINUE IF (PC .LT. L) GO TO 128 C FIND DELTA, THE TOLERANCE WITH WHICH TO COMPUTE THE PERIOD BMAX = 0.D0 DO 125 I = 1,H 129 ABSB = DABS (SAVE (2, I)) IF(ABSB .GT. BMAX) BMAX = ABSB 125 CONTINUE BMAX = BMAX/R DELTA = DABS(EPSOUT*BETA/BMAX) 128 SI = 0.D0 52 = 0.D0 53 = O.DO DO 150 I = 1,N A = YSAVE(K,I) - SAVE (1,1) B = -SAVE(2,I)/R IF((JSTART .LE. 1) .AND. (T .NE. TMID)) GO TO 130 C = -2.D0*SAVE(3,I)/(R*R) GO TO 140 C IF THE ORDER WAS ONE, WE NEED TO COMPUTE THE SECOND DERIVATIVE C DIFFERENTLY 130 C = ERROR(I)/(HLAST**2) 140 CONTINUE 51 = SI + A*B 52 = S2 + C*A 53 = S3 + B*B YAVG(I) = YAVG(I) + HINT*YSAVE(K, I) 150 CONTINUE SUM1 = SUM1 + HINT*S1 SUM2 = SUM2 + HINT*(S2 + S3) K = K + 1 IF(K .GT. L) GO TO 300 GO TO 100 C********************************^^ C* DO ONE STEP OF THE INTEGRATION * C*********************************^^ 200 CALL DIFSUB(N,T,Y, SAVE, CSAVE,H,HMIN,HMAX, EPS, MF, * YMAX, ERROR, KFLAG, JSTART,MAXDER,PW, IP, * HLAST,HNEW,IDOUB) IF (KFLAG .LT. 
 0) GO TO 810
      GO TO 100
C***********************************************************************
C* THE INTEGRATION IS ALL DONE (ALL INTEGRALS FOR NEWTON'S             *
C* METHOD ARE NOW COMPUTED).  COMPUTE THE NEWTON CORRECTION AND        *
C* SEE IF ANY MORE ITERATIONS ARE NECESSARY.                           *
C***********************************************************************
  300 CONTINUE
      NCNT = NCNT + 1
      TNOLD = TN
      TN = TN - (SUM1/SUM2)
      IF(DABS(TNOLD - TN) .LE. DELTA) GO TO 400
C     WE WILL HAVE TO DO ANOTHER NEWTON ITERATION
      IF((TINIT + TN) .GT. TMID) GO TO 330
      JSTP1 = JMID + 1
  320 DO 310 I = 1,JSTP1
      DO 310 J = 1,N
  310 Y(I,J) = YMID(I,J)
      H = HMID
      HNEW = HMIDN
      JSTART = JMID
      IDOUB = IMID
      T = TMID
      ICOUNT = ICOUNT + 1
      IF(ICOUNT .LT. MAXCNT) GO TO 80
      WRITE(6,892) MAXCNT
  892 FORMAT(' PERIOD FAILED.  CONVERGENCE COULD NOT BE ACHIEVED IN',
     1   I5,' ITERATIONS')
      KPFLG = -1
      RETURN
  330 CONTINUE
      TD = TINIT + TN
  335 CALL DIFSUB(N,TMID,YMID,SAVE,CSAVE,HMID,HMIN,HMAX,EPS,MF,
     1     YMAX,ERROR,KFLAG,JMID,MAXDER,PW,IP,HLAST,HMIDN,IMID)
      IF(KFLAG .LT. 0) GO TO 810
      IF(TMID .LT. TD) GO TO 335
      JSTP1 = JMID + 1
      GO TO 320
C***********************************************************************
C* THE NEWTON ITERATION HAS CONVERGED, AND WE NEED TO COMPUTE          *
C* G FOR OUTPUT.  SO INTERPOLATE THE CURRENT Y BACK TO                 *
C* TINIT + 2*TN, AND COMPUTE G.                                        *
C***********************************************************************
  400 CONTINUE
      TD = TINIT + 2*TN
      CALL INTERP(T,Y,JSTART,N,TD,H,SAVE)
      DO 410 I = 1,N
  410 G(I) = (SAVE(1,I) - Z(1,I))/TSCALE
      G(NP1) = 2*TN/TSCALE
      RETURN
C***********************************************************************
C*                        ERROR RETURNS                                *
C***********************************************************************
  800 WRITE(6,890)
  890 FORMAT(' L IS BIGGER THAN MAXL.  NEED TO PROVIDE MORE STORAGE')
      KPFLG = -1
      RETURN
  810 WRITE(6,891) KFLAG
  891 FORMAT(' ERROR IN INNER INTEGRATION.  KFLAG =',I5)
      KPFLG = -1
      RETURN
      END

      SUBROUTINE INTERP(T1,Y,NQ,N,T2,H,SAVE)
      IMPLICIT DOUBLE PRECISION (A-H,Q-Z)
C***********************************************************************
C* THIS SUBROUTINE INTERPOLATES THE ARRAY Y FROM THE POINT T1          *
C* TO THE POINT T2, AND STORES THE RESULTS IN SAVE.  IN THE            *
C* PROCESS, IT RESCALES THE VARIABLES BY THE STEPSIZE                  *
C* R = T2 - T1, ASSUMING THEY WERE PREVIOUSLY SCALED BY                *
C* THE VARIABLE H.                                                     *
C***********************************************************************
      DOUBLE PRECISION Y(13,N),T1,T2,H,SAVE(13,N)
      R = (T2 - T1)/H
      K = NQ + 1
      R1 = 1.D0
      DO 60 J = 1,K
      DO 50 I = 1,N
      SAVE(J,I) = Y(J,I)*R1
   50 CONTINUE
   60 R1 = R1*R
C     MULTIPLY Y BY THE PASCAL TRIANGLE MATRIX
      DO 100 J = 2,K
      DO 100 J1 = J,K
      J2 = K - J1 + J - 1
      DO 100 I = 1,N
  100 SAVE(J2,I) = SAVE(J2,I) + SAVE(J2+1,I)
      RETURN
      END
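The listing above refers to, but does not include, the user-supplied routines DIFFUN and PEDERV. The following is a minimal sketch of one possible pair, assuming the simple linear test oscillator y1' = -q*y2, y2' = q*y1; this problem and the constant Q are illustrative assumptions, not the test problem of Chapter 6. Only the calling sequences DIFFUN(T,Y,DY) and PEDERV(T,Y,PW,M) are taken from the comments in DIFF.

      SUBROUTINE DIFFUN(T,Y,DY)
C     SKETCH ONLY (ASSUMED TEST PROBLEM).  EVALUATES THE DERIVATIVES
C     OF THE LINEAR OSCILLATOR  Y1' = -Q*Y2,  Y2' = Q*Y1.
C     Y IS THE 13 BY N HISTORY ARRAY OF DIFSUB; ONLY Y(1,*) IS READ.
      IMPLICIT DOUBLE PRECISION (A-H,Q-Z)
      DOUBLE PRECISION Y(13,2),DY(2)
      DATA Q /1.D3/
      DY(1) = -Q*Y(1,2)
      DY(2) =  Q*Y(1,1)
      RETURN
      END

      SUBROUTINE PEDERV(T,Y,PW,M)
C     SKETCH ONLY (SAME ASSUMED PROBLEM).  STORES THE JACOBIAN IN THE
C     SINGLE PRECISION M BY M ARRAY PW.  DIFSUB CALLS THIS ROUTINE
C     ONLY WHEN MFI = 1 (STIFF METHOD WITH USER-SUPPLIED JACOBIAN).
      IMPLICIT DOUBLE PRECISION (A-H,Q-Z)
      DOUBLE PRECISION Y(13,M)
      DIMENSION PW(M,M)
      DATA Q /1.D3/
      PW(1,1) = 0.0
      PW(1,2) = -Q
      PW(2,1) = Q
      PW(2,2) = 0.0
      RETURN
      END

Since the driving program sets MFI = 0 (Adams formulas in the inner integration), only DIFFUN is actually called; PEDERV may then be an empty routine.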
BIBLIOGRAPHIC DATA SHEET

Report No.: UIUCDCS-R-78-933
Title and Subtitle: An Efficient Numerical Method for Highly Oscillatory Ordinary Differential Equations
Author: Linda Ruth Petzold
Performing Organization: Department of Computer Science, University of Illinois, Urbana, Illinois 61801
Sponsoring Organization: Department of Energy, 9800 South Cass Avenue, Argonne, Illinois 60439
Report Date: August 1978
Contract/Grant No.: ENERGY/EY-76-S-02-2383
Type of Report: Thesis

Abstract: A "quasi-envelope" of the solution of highly oscillatory differential equations is defined. For many problems this is a smooth function which can be integrated using much larger steps than are possible for the original problem. Since the definition of the quasi-envelope is a differential equation involving an integral of the original oscillatory problem, it is necessary to integrate the original problem over a cycle of the oscillation (to average the effects of a full cycle). This information can then be extrapolated over a long (giant!) time step. Unless the period is known a priori, it is also necessary to estimate it, either early in the integration (if it is fixed) or periodically (if it is slowly varying). Error propagation properties of this technique are investigated, and an automatic program is presented. Numerical results indicate that this technique is much more efficient than conventional ODE methods for many oscillating problems.

Descriptors: oscillating differential equations; stiff equations; envelopes
Security Class: UNCLASSIFIED
No. of Pages: 132
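For reference, the calculation carried out by PERIOD in the appendix can be summarized as follows. This is a compact restatement of the least-squares period definition of Chapter 3 as it appears in the code; the symbols D, T_k, t_0, and g are chosen for this summary and do not appear in the listing. PERIOD applies Newton's method to minimize the mismatch over one cycle,

\[
D(T) \;=\; \int_{t_0}^{t_0+T_0} \bigl\| y(s+T) - y(s) \bigr\|^2 \, ds ,
\qquad
T_{k+1} \;=\; T_k \;-\; \frac{D'(T_k)}{D''(T_k)} ,
\]

where the two derivatives, accumulated in SUM1 and SUM2 with the interval weights HINT (the common factor of 2 cancels in the quotient), are

\[
D'(T) \;=\; 2 \int_{t_0}^{t_0+T_0} \bigl( y(s+T) - y(s) \bigr) \cdot y'(s+T) \, ds ,
\]
\[
D''(T) \;=\; 2 \int_{t_0}^{t_0+T_0} \Bigl( y'(s+T) \cdot y'(s+T) \;+\; \bigl( y(s+T) - y(s) \bigr) \cdot y''(s+T) \Bigr) \, ds .
\]

The iteration stops when |T_{k+1} - T_k| is at most DELTA. After convergence, statements 400-410 of PERIOD return the quasi-envelope slope

\[
g_i \;=\; \frac{ y_i(t_0 + 2T) - z_i(t_0) }{ \mathrm{TSCALE} } ,
\]

with TSCALE fixed in the driving program at twice the initial period estimate, and DIFF then advances the smooth solution z with the generalized Adams formulas.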