f_n(x) = \sum_{i=1}^{n} q_i \phi_i(x)

where the functions \phi_i are some fixed set. The problem is thus converted to the finite one of determining the set {q_i}. Define the functional E by

E(f) = || x_f - \bar{x} ||^2

where x_f is the solution of the differential equation as a function of f and \bar{x} is the observed data. The solution method is then to find that f which minimizes E. We replace the problem of minimizing E(f) by the finite problem of minimizing E(f_n) = E(q), say, the initial condition x(0) again being included in f and in the q_i's.

The only requirement on the \phi_i's is that they be linearly independent. However, it is convenient in practice to require that they be orthonormal, i.e.

< \phi_i , \phi_j > = \delta_{ij}

where < g, h > = \int_a^b g(x) h(x) dx.

This method differs from that used for simple parameter estimation in that here the number of parameters, n, used to approximate the function f can be varied. It can in fact be altered during the calculation, starting with n "small" and increasing it during later iterations. Since the amount of computation is proportional to n (or to n^2, depending on the algorithm being used), substantial savings in time can be gained during the earlier iterations.

Of course, a natural requirement is that we be able to represent the minimizing function f as accurately as necessary. We therefore require that, for a given continuous function f and for every \epsilon > 0, there exist an n and an f_n such that || f - f_n || < \epsilon.

In any computational method requiring discretization it is desirable to be able to show that if the discretization is sufficiently refined, a solution arbitrarily close to the theoretically exact one may be obtained. In the special case of identifying f in

dx/dt = f(x(t)),   x(0) = x_0,

whose solution (on the finite interval [0, T]) we denote by x_f(t), it is possible to get such a convergence result.

Lemma. For any functions g, h: x_g -> x_h
as g -> h, provided g satisfies a Lipschitz condition.

This follows immediately from the following theorem (Coddington and Levinson [8], p. 8). If g satisfies the Lipschitz condition |g(x) - g(y)| <= k |x - y|, if |x_g(0) - x_h(0)| <= \delta, and if |g(x) - h(x)| <= \epsilon, then for all 0 <= t <= T,

|x_g(t) - x_h(t)| <= \delta e^{kt} + (\epsilon / k)(e^{kt} - 1).

Theorem. If f* minimizes E(f) = || x_f - \bar{x} ||^2 uniquely, and f_n* minimizes E(f_n), then f_n* -> f* as n -> \infty.

Proof. For any \epsilon > 0, there is a g_n such that || f* - g_n || < \epsilon, and g_n is of the form \sum_{i=1}^n q_i \phi_i, where the \phi_i satisfy

< \phi_i , \phi_j > = \delta_{ij},   (2)

\delta_{ij} being the Kronecker delta and the inner product being given by < g, h > = \int_a^b g(x) h(x) dx.

The choice of functions used to form the orthonormal basis {\phi_i}_{i=1}^n is to a certain extent arbitrary. Polynomials, trigonometric functions or spline functions may be chosen. In this case spline functions were selected, since these can represent arbitrary functions to medium accuracy easily. A spline function S(x) is defined by

S(x) = \pi_n(x) + \sum_{i=1}^{k} a_i (x - \xi_i)_+^n   (3)

where

(x - \xi_i)_+^n = (x - \xi_i)^n for x >= \xi_i, and 0 for x < \xi_i,   (4)

and \pi_n(x) is a polynomial of degree n. In other words, S is a piecewise polynomial of degree n, and at each of the points {\xi_i}_{i=1}^k its first n - 1 derivatives are continuous. The points {\xi_i}_{i=1}^k are usually called the knot points. The two simplest splines are step functions for n = 0 and connected straight-line segments for n = 1.

One point should be noted when using splines in differential equations. In solving the equation dx/dt = f(x), if f is a spline with discontinuities in the n-th derivative, then x will have discontinuities in its (n+1)-st derivative. If we are to solve this numerically, then we must use a method whose order is at most n, since continuous n-th derivatives are required.

To obtain an orthonormal basis, one may start with an arbitrary basis {\psi_i}_{i=1}^m such as

\psi_i(x) = x^{i-1},   i = 1, 2, ...
, n
\psi_i(x) = (x - \xi_{i-n})_+^n,   i = n+1, n+2, ..., m   (5)

and the Gram-Schmidt orthonormalization procedure applied, giving, for i <= n,

\phi_i(x) = P_i(x),   (6)

the Legendre polynomial of degree i on [a, b], and for i > n,

\phi_i(x) = \pi_i(x) + \sum_{j=n+1}^{i} a_{ij} (x - \xi_{j-n})_+^n   (7)

for appropriate polynomials \pi_i(x) and constants a_{ij}.

In applications where splines are being used to form an approximation f_n to some known function f, the knot points {\xi_i} may be chosen so as to minimize the norm || f - f_n ||. In our case, however, the functions f being approximated are completely unknown, so the knots are selected arbitrarily: they are equally spaced along the interval [a, b].

In actual practice it is better to consider splines of the form

S(x) = \pi_n(x) + \sum_{i=1}^{k+n-1} a_i (x - \xi_i)_+^{2n-1}   (8)

with the added requirement that the spline be a polynomial of degree n beyond the last knot as well as for x < \xi_1, i.e. the natural splines of degree 2n - 1. For a given n and k, these require the same number of parameters for their representation, the extra n - 1 knots compensating for the n - 1 constraints imposed by requiring the (n+1)-st to (2n-1)-st derivatives to be zero beyond the last knot.

Natural splines have the following property. Let the pseudo-norm of a function f be defined by

|| f ||^2 = \int_a^b [f^{(n)}(x)]^2 dx;   (9)

then of all functions f through a given set of points {\xi_i}_{i=1}^{k+1}, a = \xi_1 < \xi_2 < ... < \xi_{k+1} = b, the natural spline through these points minimizes this pseudo-norm (for a proof see Ahlberg, Nilson and Walsh [1], §5.4).

The problem of finding a basis {\psi_i}_{i=1}^m now becomes slightly more complicated. We set

\psi_i(x) = x^{i-1},   i = 1, 2, ..., n

\psi_i(x) = (x - \xi_{i-n})_+^{2n-1} + \sum_{k=m+1}^{m+n-1} c_{ik} (x - \xi_{k-n})_+^{2n-1},   i = n+1, n+2, ..., m   (10)

where the last n - 1 coefficients c_{ik} are chosen to satisfy the requirement that \psi_i(x) be a polynomial of degree n for x > \xi_{m-1}. For x >= \xi_{m-1},
(x - \xi_{i-n})^{2n-1} + \sum_{k=m+1}^{m+n-1} c_{ik} (x - \xi_{k-n})^{2n-1}   (11)

must be of degree at most n. Hence equating the coefficient of x^q to zero, for q going from n+1 to 2n-1, yields the equations

\xi_{i-n}^{2n-1-q} + \sum_{k=m+1}^{m+n-1} c_{ik} \xi_{k-n}^{2n-1-q} = 0   (12)

or, putting r = 2n-1-q,

\sum_{k=m+1}^{m+n-1} c_{ik} \xi_{k-n}^r = - \xi_{i-n}^r   (13)

for r running from 0 to n-2. Since these equations are of a special form, the c_{ik} may be evaluated using Cramer's rule. If D(x_1, x_2, ..., x_n) represents the determinant of an n x n matrix whose (i, j) element is x_j^{i-1}, then

c_{ik} = - D(\xi_{m-n+1}, \xi_{m-n+2}, ..., \xi_{i-n}, ..., \xi_{m-1}) / D(\xi_{m-n+1}, \xi_{m-n+2}, ..., \xi_{k-n}, ..., \xi_{m-1})   (14)

where in the numerator \xi_{i-n} replaces \xi_{k-n}. Both numerator and denominator are Vandermonde determinants, whose values are products of differences (\xi_p - \xi_q), p > q.   (15)

Cancelling common factors gives

c_{ik} = - \prod_{q=m-n+1, q != k-n}^{m-1} (\xi_{i-n} - \xi_q) / \prod_{q=m-n+1, q != k-n}^{m-1} (\xi_{k-n} - \xi_q).   (16)

This gives an initial basis of natural splines to which the Gram-Schmidt orthogonalization procedure may be applied.

The algorithm for converting an arbitrary basis {\psi_i}_{i=1}^m into an orthonormal set {\phi_i}_{i=1}^m is

\phi_i' = \psi_i - \sum_{j=1}^{i-1} < \psi_i , \phi_j > \phi_j,
\phi_i = \phi_i' / || \phi_i' ||,   i = 1, 2, ..., m.   (17)

An equivalent algorithm which is far more stable is described by Rice [19]. Equations (17) are replaced by the following calculations at the i-th step:

\phi_i = \psi_i^{(i)} / || \psi_i^{(i)} ||,
\psi_j^{(i+1)} = \psi_j^{(i)} - < \psi_j^{(i)} , \phi_i > \phi_i,   j = i+1, i+2, ..., m.   (18)

Both methods yield the same system {\phi_i}_{i=1}^m and require the same amount of computation. The difference is that in the modified method a pivoting scheme is possible: at the i-th step the vector selected to form the next orthonormal basis component may (by a suitable change of subscripts) be chosen from any of \psi_i^{(i)}, \psi_{i+1}^{(i)}, ..., \psi_m^{(i)}. We wish to choose for the i-th vector that one which makes as large as possible an angle with the plane spanned by the previous i-1 vectors.
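The stabilized recurrence (18) is short to state in code. The following is a minimal sketch (not the thesis's Algol program): it orthonormalizes sampled basis vectors, using a plain Euclidean dot product of the sample values as a stand-in for the integral inner product.

```python
import numpy as np

def modified_gram_schmidt(psi):
    """Modified Gram-Schmidt, recurrence (18): at step i, normalize
    the i-th vector, then remove its component from every later one."""
    phi = psi.astype(float).copy()
    m = phi.shape[0]
    for i in range(m):
        phi[i] /= np.linalg.norm(phi[i])
        for j in range(i + 1, m):
            phi[j] -= np.dot(phi[j], phi[i]) * phi[i]
    return phi

# Example: sampled monomial rows become an orthonormal set.
x = np.linspace(0.0, 1.0, 200)
psi = np.vstack([np.ones_like(x), x, x**2, x**3])
phi = modified_gram_schmidt(psi)
assert np.allclose(phi @ phi.T, np.eye(4), atol=1e-10)
```

A pivoted variant, as described in the text, would at each step swap in the remaining row with the smallest accumulated projection s_j of (20) before normalizing.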
The angle \theta between two vectors a and b is defined by

cos \theta = < a, b > / (|| a || || b ||).   (19)

The pivoting algorithm thus initially normalizes the \psi_j so that || \psi_j || = 1, j = 1, 2, ..., m. At the i-th step, after i-1 orthonormal vectors have been found, compute for each remaining vector

s_j = \sum_{k=1}^{i-1} < \psi_j^{(i)} , \phi_k >^2,   j = i, i+1, ..., m   (20)

and choose that value of j which minimizes s_j, thus selecting the vector making the largest angle with the span of the components already found.

We have now shown how to construct an orthonormal basis {\phi_i}. The coefficients appearing in the representation of the \phi_i need not be small, since (for j > n) they are proportional to the discontinuities in the (n+1)-st derivatives. However, their sum of squares is a constant for a given set of knot points: writing the orthonormality conditions < \phi_p , \phi_q > = \delta_{pq} in terms of the inner products of the initial basis functions, and putting \Phi for the matrix of coefficients and C for the Gram matrix of the initial basis, we have in matrix notation

\Phi C \Phi^T = I   (26)

so that

\Phi^T \Phi = C^{-1}.   (27)

Summing the diagonal elements of each side gives

\sum_i \sum_j \Phi_{ij}^2 = \sum_i (C^{-1})_{ii},   (28)

indicating that the Euclidean norm of \Phi is a constant for a given set of knots {\xi_i}. It remains an open question how to minimize this norm as a function of the knot points \xi_i; it is for this reason that the knots are chosen equally spaced on the interval [a, b].

Despite the use of the modified Gram-Schmidt method, the calculations are numerically unstable and should be computed using double precision arithmetic throughout. A representation which requires less computation and which is more stable is

\phi_i(x) = \sum_{j=0}^{2n-1} \phi_{ij}^{(r)} (x - \xi_r)^j   for \xi_r <= x < \xi_{r+1},   i = 1, 2, ..., m   (29)

where \xi_0 is taken to be a, i.e. each \phi_i is stored as a local polynomial on each knot interval. Despite the fact that this representation contains redundant parameters, it is far more stable than (23), whose parameters {\phi_{ij}} are large and alternating in sign, causing severe cancellation errors. This was the representation used for the test problems.

5.
BASIC SOLUTION TECHNIQUES

Let

x = x(x_0, f)   (1)

be the solution of

F(x, x^{(1)}, ..., x^{(l)}, f_1, f_2, ..., f_n) = 0.   (2)

The continuous problem we wish to solve is: minimize the error functional

E(x_0, f) = || x(x_0, f) - \bar{x} ||   (3)

where

|| y ||^2 = \sum_{i=1}^{l} \int_0^T w(t) [y_i(t)]^2 dt   (4)

for some weighting function w(t); usually w(t) = 1 for all t.

Discretization of both the functions f and the integral \int_0^T ... dt produces the problem: minimize

E(q) = || x(q) - \bar{x} ||   (5)

where q represents both the initial conditions x_0 and the discretization of the functions f. The integral \int_0^T w(t) z(t) dt is approximated by \sum_{k=0}^{T/\delta t} w_k z(k \delta t), where \delta t is the interval between measurements of the elements of the observed functions \bar{x}_i and the w_k are chosen from some numerical integration formula.

The solution algorithm consists of finding a q such that \nabla E(q) = 0. This will minimize E provided only that E is a convex functional. We discuss in chapter 8 how to alter E to ensure its convexity, but for the moment we simply assume it. Differentiating E(q) with respect to q_j,

\partial E / \partial q_j = 2 \sum_i \sum_{k=0}^{T/\delta t} w_k [x_i(q, k \delta t) - \bar{x}_i(k \delta t)] \partial x_i / \partial q_j (q, k \delta t).   (6)

The \partial x_i / \partial q_j must be computed for all i and j. If x_i is given by the equation

F(x, x^{(1)}, f) = 0   (7)

then an equation for \partial x_i / \partial q_j can be derived from

\partial F / \partial q_j (x, x^{(1)}, f) = 0.   (8)

Two possible cases arise: each q_j may be either an initial condition or a parameter in one of the f's. By way of example, consider the very simple case

dx/dt = f(x),   x(0) = x_0.   (9)

We approximate f by

f_n(x) = \sum_{i=1}^n q_i \phi_i(x).   (10)

If we also identify x_0 with q_0, then the problem reduces to one of finding the n+1 parameters {q_i}_{i=0}^n. Differentiating with respect to q_0,

d/dt (\partial x / \partial q_0) = (\partial f / \partial x)(\partial x / \partial q_0) + \partial f / \partial q_0.   (11)

But \partial f / \partial q_0 = 0, since q_0 is simply the initial condition x_0. The initial condition for this equation at t = 0 is \partial x(0) / \partial q_0 = \partial x_0 / \partial x_0 = 1. For i = 1, 2, ..., n,

d/dt (\partial x / \partial q_i) = (\partial f / \partial x)(\partial x / \partial q_i) + \partial f / \partial q_i.   (12)
But

\partial f / \partial q_i = \partial / \partial q_i \sum_j q_j \phi_j(x) = \phi_i(x)   (13)

and the initial condition is \partial x / \partial q_i (at t = 0) = 0. (We assume the order of differentiation can be changed.)

If we abbreviate \partial x / \partial q_i by \delta_i x (i = 0, 1, ..., n), then (11) and (12) are combined as

d/dt \delta_i x = (\partial f / \partial x) \delta_i x + \phi_i(x)   (14)

where \phi_0(x) = 0, with initial conditions

\delta_0 x(0) = 1,   \delta_i x(0) = 0,   i = 1, 2, ..., n.   (15)

As mentioned previously, equations containing derivatives higher than the first can be converted to a system of first-order equations in the usual way. We therefore need only consider the solution of a set of first-order equations

F(\delta x', \delta x, x) = 0.   (16)

In general these equations will contain \delta x' implicitly. Algorithms are available for the solution of such systems (Altman [2], Verner [21]); however, an implicit equation can be readily converted to an explicit one (Collatz [9]). Consider the implicit equation

F(x'(t), x(t)) = 0.   (17)

Differentiating with respect to t gives

(\partial F / \partial x') x'' + (\partial F / \partial x) x' = 0   (18)

(since by hypothesis the equations are time invariant, there is no term \partial F / \partial t). Hence

x'' = - (\partial F / \partial x)(\partial F / \partial x')^{-1} x'   (19)

provided \partial F / \partial x' != 0. Setting y = x', this can be split into two explicit first-order equations

y' = - (\partial F / \partial x)(\partial F / \partial y)^{-1} y,   x' = y.   (20)

In this work all equations are converted to an explicit form (if necessary).

For certain algorithms, e.g. Newton's method, the second derivatives also have to be calculated:

\partial^2 E / \partial q_i \partial q_j = 2 \sum_k \sum_{l=0}^{T/\delta t} w_l (\partial x_k / \partial q_i)(\partial x_k / \partial q_j) + 2 \sum_k \sum_{l=0}^{T/\delta t} w_l [x_k(q, l \delta t) - \bar{x}_k(l \delta t)] \partial^2 x_k / \partial q_i \partial q_j.   (21)

For some algorithms we assume that the second term, containing the difference [x_k(q, l \delta t) - \bar{x}_k(l \delta t)], is negligible compared to the first. If this is not the case, then the second derivatives \partial^2 x / \partial q_i \partial q_j must be computed from

\partial^2 F / \partial q_i \partial q_j (x, x^{(1)}, f) = 0.   (22)

A PL1-Formac program to calculate the necessary derivatives is described in chapter 10. The Formac language [20] is designed to manipulate algebraic expressions and to perform differentiation.
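The variational system (14)-(15) is integrated alongside the state itself. The sketch below is illustrative only: it uses the two-term basis \phi_1(x) = 1, \phi_2(x) = x (an assumption for the example, not the spline basis of chapter 4) and a classical Runge-Kutta step, and checks \delta_0 x against a finite difference.

```python
import numpy as np

# Sensitivity system (14)-(15) for dx/dt = f(x) = q1*phi1(x) + q2*phi2(x),
# with the illustrative basis phi1(x) = 1, phi2(x) = x.
def rhs(t, y, q):
    x, d0, d1, d2 = y          # x, dx/dq0, dx/dq1, dx/dq2
    fx = q[0] + q[1] * x       # f(x)
    dfdx = q[1]                # df/dx
    return np.array([fx,
                     dfdx * d0,        # (14) with phi_0 = 0
                     dfdx * d1 + 1.0,  # (14) with phi_1(x) = 1
                     dfdx * d2 + x])   # (14) with phi_2(x) = x

def integrate(q, x0, T=1.0, n=2000):
    """Classical RK4 on the combined (x, delta x) system, with the
    initial conditions (15): delta_0 x(0) = 1, others zero."""
    y = np.array([x0, 1.0, 0.0, 0.0])
    h = T / n
    t = 0.0
    for _ in range(n):
        k1 = rhs(t, y, q)
        k2 = rhs(t + h/2, y + h/2 * k1, q)
        k3 = rhs(t + h/2, y + h/2 * k2, q)
        k4 = rhs(t + h, y + h * k3, q)
        y = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += h
    return y

q = np.array([0.5, -1.0])
y = integrate(q, x0=2.0)
# Check dx/dx0 against a central finite difference.
eps = 1e-6
fd = (integrate(q, 2.0 + eps)[0] - integrate(q, 2.0 - eps)[0]) / (2 * eps)
assert abs(y[1] - fd) < 1e-8
```

For this linear test equation the sensitivities are known in closed form (\delta_0 x(t) = e^{q_2 t}), which makes the sketch easy to verify.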
The program performs the following steps:

1) Manipulation of the original equations to put them in a form acceptable to Formac.

2) Checking to see if any of the equations are implicit in the highest derivative and, if so, converting them to explicit form.

3) Calculation of the first and second derivatives \partial F_i / \partial q_j and \partial^2 F_i / \partial q_j \partial q_k.

The printed expressions from this program were manually transcribed to the main Algol program.

6. SOLUTION ALGORITHMS

We now turn our attention to the study of the different algorithms which may be used in the minimization of E(q) or, equivalently, in solving the equations \nabla E(q) = 0. The subject of minimization of functionals has been extensively studied, often in an abstract setting; Daniel [10] gives some 190 references. The subject is an old one, going back to Euler and to Newton. Here we consider the adaptation of these methods to our particular problem, taking into account factors such as accuracy, reliability and speed of convergence.

The simplest such method is the "steepest descent" method: in minimizing E(q), E decreases most rapidly in the direction -\nabla E(q). This suggests an iterative scheme of the form

q_{k+1} = q_k + c_k p_k   where   p_k = - \nabla E(q_k).   (1)

This method is typical of all those considered: we first choose a direction in which to move and then minimize in that direction, i.e. we choose c_k such that

E(q_k + c p_k),   c > 0   (2)

is minimized. From (2),

\partial E / \partial c (q_k + c p_k) = 0.   (3)

This may be solved by Newton's method,

c^{(i+1)} = c^{(i)} - [\partial E / \partial c (q_k + c^{(i)} p_k)] / [\partial^2 E / \partial c^2 (q_k + c^{(i)} p_k)].   (4)

To calculate \partial E / \partial c (q + c p) and \partial^2 E / \partial c^2 (q + c p) at c = 0, we have

E(q + c p) = E(q) + c < p, \nabla E(q) > + (c^2 / 2) < p, \nabla^2 E(q) p > + ...

where \nabla^2 E(q) is the Hessian matrix with elements \partial^2 E / \partial q_i \partial q_j (q).
Therefore

\partial E / \partial c (q + c p) |_{c=0} = < p, \nabla E(q) >   (5)

and

\partial^2 E / \partial c^2 (q + c p) |_{c=0} = < p, \nabla^2 E(q) p >.   (6)

Since < p, \nabla E(q) > and < p, \nabla^2 E(q) p > may each be calculated by solving only one differential equation, the amount of work required to find c_k is negligible compared to that for p_k. The steepest descent method requires the solution of only n differential equations per iteration; however, convergence is slow, and so we turn our attention to other methods.

Newton's Method

The solution of \nabla E(q) = 0 is found by the iterative procedure

q_{k+1} = q_k + c_k p_k   where   p_k = - [\nabla^2 E(q_k)]^{-1} \nabla E(q_k).   (7)

Since

(\nabla E)_i = \partial E / \partial q_i = 2 \sum_{j=0}^{T/\delta t} w_j [x(q, j \delta t) - \bar{x}(j \delta t)] \partial x / \partial q_i (q, j \delta t)   (8)

and

(\nabla^2 E)_{ij} = \partial^2 E / \partial q_i \partial q_j = 2 \sum_{k=0}^{T/\delta t} w_k (\partial x / \partial q_i)(\partial x / \partial q_j) + 2 \sum_{k=0}^{T/\delta t} w_k [x(q, k \delta t) - \bar{x}(k \delta t)] \partial^2 x / \partial q_i \partial q_j (q, k \delta t),   (9)

it is necessary to compute both first and second derivatives of x, requiring the solution of about n^2 / 2 differential equations. The method is, however, quadratically convergent.

A simplification of this is the Newton-Gauss method, in which the second term of (9), containing the presumed small difference [x(q, k \delta t) - \bar{x}(k \delta t)], is ignored. This avoids the need to calculate second derivatives at all. If the equation can be identified exactly, so that the computed solution x is the same as the observed one \bar{x}, then the Newton-Gauss method will in the limit also be quadratically convergent. If this is not the case, however, only linear convergence can be proved, and in practice convergence may be slow.

The question arises as to whether one can achieve a better convergence rate than that obtained from calculating the n first derivatives. In this light we consider the conjugate gradient method. Before discussing this algorithm, consider a special case used in solving linear equations. To solve Aq = b, a set of n equations, for some positive definite matrix A, we minimize over q

H(q) = < A(h - q), h - q >   (10)

where h = A^{-1} b.
(This material may be found in a number of places, e.g. Beckman [5], but is included here for completeness.) This is achieved by minimizing H along each of a set of n "A-conjugate" vectors {p_i}, i.e. vectors with < A p_i , p_j > = 0 for i != j. At most n steps are thus required to effect the minimization (in practice more may be required because of round-off errors). Minimizing H(q) over all q of the form q = q_k + c p_k, we have

H(q_k + c p_k) = H(q_k) - 2c < r_k , p_k > + c^2 < A p_k , p_k >   (11)

where r_k is the residual -(A q_k - b). By differentiating (11) with respect to c, H(q_k + c p_k) is minimized when c_k = < p_k , r_k > / < A p_k , p_k >. The algorithm is

p_0 = r_0 = -(A q_0 - b)
q_{k+1} = q_k + c_k p_k
r_{k+1} = r_k - c_k A p_k   (12)
p_{k+1} = r_{k+1} + b_k p_k

where

b_k = - < r_{k+1} , A p_k > / < p_k , A p_k >   and   c_k = < p_k , r_k > / < p_k , A p_k >.

It is necessary to show that the requirement that the set {p_i} be A-conjugate leads to a construction of this form. Note first that an A-conjugate set {t_i} can be constructed from an arbitrary linearly independent set {v_i} by the formula

t_{i+1} = v_{i+1} - \sum_{k=1}^{i} (< A v_{i+1} , t_k > / < A t_k , t_k >) t_k.   (13)

Also, from (13), v_i is a linear combination of t_1, t_2, ..., t_i, and hence

< v_i , A t_k > = 0,   k > i.   (14)

Two orthogonalization procedures are performed in parallel. One forms the {r_i} by performing an orthogonalization procedure on the vectors r_0, A p_0, A p_1, ..., A p_{n-2}, and the other forms the {p_i} by producing a set of A-conjugate vectors from r_0, r_1, ..., r_{n-1}. This gives the relations

< r_i , r_j > = 0,   i != j,   and   < A p_i , p_j > = 0,   i != j.   (15)

In the orthogonalization for the {r_i} from the set r_0, A p_0, A p_1, ..., A p_{n-2}, equation (14) gives (substituting the identity matrix for A, A p_{i+1} for v_i, and r_k for t_k)
< A p_i , r_k > = < A r_k , p_i > = 0,   k > i.   (16)

Applying equation (13) to the forming of {p_i} from r_0, r_1, ..., r_{n-1} gives

p_{k+1} = r_{k+1} - \sum_{j=0}^{k} (< A r_{k+1} , p_j > / < A p_j , p_j >) p_j   (17)

and, using (16),

p_{k+1} = r_{k+1} - (< A r_{k+1} , p_k > / < A p_k , p_k >) p_k = r_{k+1} + b_k p_k.   (18)

Similarly, it may be shown that

r_{k+1} = r_k - (< p_k , r_k > / < A p_k , p_k >) A p_k = r_k - c_k A p_k.   (19)

In the more general case, we assume the function being minimized, E(q), approximates a quadratic. The algorithm can then be defined thus. Let J(q) = \nabla E(q) and J_k = J(q_k); then the conjugate gradient method is

p_0 = r_0 = -J_0
q_{k+1} = q_k + c_k p_k   (20)
r_{k+1} = -J_{k+1}
p_{k+1} = r_{k+1} + b_k p_k

where

b_k = - < r_{k+1} , J'_{k+1} p_k > / < p_k , J'_{k+1} p_k >.

J corresponds to Aq - b and J' = \nabla^2 E to A; c_k is now calculated by minimizing along q_k + c p_k as described earlier. We still have to compute second derivatives for the term J'_{k+1} p_k, but only those in the direction p_k; this reduces the number of equations to be solved from n^2 / 2 to n.

To see how this works in practice, consider first the special case

dx/dt = f(x)   (21)

where, as before, we obtain equations for the first derivatives

d/dt \delta x_i = (\partial f / \partial x) \delta x_i + \phi_i(x)   (22)

and the second derivatives

d/dt \delta^2 x_{ij} = (\partial^2 f / \partial x^2) \delta x_i \delta x_j + (\partial f / \partial x) \delta^2 x_{ij} + \phi_i'(x) \delta x_j + \phi_j'(x) \delta x_i   (23)

where \delta x_i = \partial x(t) / \partial q_i and \delta^2 x_{ij} = \partial^2 x(t) / \partial q_i \partial q_j. Since we require only J'p and not J' explicitly, we need only calculate the components

\psi_i = \sum_j \delta^2 x_{ij} p_j.   (24)

Multiplying (23) by p_j and summing over j gives

d/dt \psi_i = (\partial^2 f / \partial x^2) \delta x_i \sum_j \delta x_j p_j + (\partial f / \partial x) \psi_i + \phi_i'(x) \sum_j \delta x_j p_j + \delta x_i \sum_j \phi_j'(x) p_j

(summation being over all components of q and hence of p). Using (9) we obtain

< J'p >_i = 2 \sum_{k=0}^{T/\delta t} w_k (\partial x / \partial q_i) \sum_j (\partial x / \partial q_j) p_j + 2 \sum_{k=0}^{T/\delta t} w_k [x(q, k \delta t) - \bar{x}(k \delta t)] \psi_i(k \delta t).   (25)

Since a system of equations can always be expressed in the form

x' = F(x, f),   (26)

such summation is always possible.
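Algorithm (12) for the linear, positive definite case takes only a few lines. A minimal sketch follows, with the formulas for c_k and b_k exactly as above; the small symmetric positive definite test matrix is, of course, only an illustration.

```python
import numpy as np

def conjugate_gradient(A, b, q0, tol=1e-10, max_iter=50):
    """Linear conjugate gradient, algorithm (12):
    c_k = <p_k, r_k>/<Ap_k, p_k>, b_k = -<r_{k+1}, Ap_k>/<p_k, Ap_k>."""
    q = q0.astype(float).copy()
    r = b - A @ q          # residual -(Aq - b)
    p = r.copy()
    iters = 0
    while np.linalg.norm(r) > tol and iters < max_iter:
        Ap = A @ p
        c = np.dot(p, r) / np.dot(p, Ap)
        q = q + c * p
        r = r - c * Ap
        bk = -np.dot(r, Ap) / np.dot(p, Ap)   # enforces <p_{k+1}, Ap_k> = 0
        p = r + bk * p
        iters += 1
    return q, iters

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
q, iters = conjugate_gradient(A, b, np.zeros(3))
assert np.allclose(A @ q, b, atol=1e-8)
assert iters <= A.shape[0] + 1   # n steps, up to round-off
```

The choice of b_k here is the conjugacy form used in (12); it is algebraically equivalent to the more familiar ratio of residual norms for the quadratic case.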
A further refinement is to apply the conjugate gradient technique not to the simple gradient \nabla E(q), but rather to the direction obtained from the Newton-Gauss method, [\nabla^2 E(q)]^{-1} \nabla E(q) (the first term only of \nabla^2 E(q) being used), giving rise to the following algorithm:

p_0 = r_0 = -(J''_0)^{-1} J_0
q_{k+1} = q_k + c_k p_k
r_{k+1} = -(J''_{k+1})^{-1} J_{k+1}   (27)
p_{k+1} = r_{k+1} + d_k p_k

where

d_k = - < r_{k+1} , J'_{k+1} p_k > / < p_k , J'_{k+1} p_k >

(J'' is just the first term of J' = \nabla^2 E). This algorithm has proved to be very rapidly convergent in practice and is recommended whenever the second derivative can be calculated fairly easily. Its speed of convergence does not depend on the residual being small, which is desirable whenever the equations cannot fit the observed data exactly. If the equation from which the second derivatives are derived is too complicated to use in practice, or if the equations can be identified exactly, then the Newton-Gauss method is probably best.

7. RATES OF CONVERGENCE

In this chapter we consider the local convergence rates of the various algorithms already discussed. When considered in connection with the amount of work required to perform each iteration, some idea of the efficiency of each algorithm may be obtained. Of course, only a bound on the convergence rate can be given, but this usually reflects the actual rate fairly accurately. The analysis makes the assumption that no round-off error occurs. While this is obviously not the case, its effect should not be noticeable except close to the solution.

Some notation. Suppose the functions we are identifying are approximated by a vector q of n components. Let q* be that q which minimizes E(q), and let {q_k} be the approximations to q* obtained by successive iterations of the algorithm, with

J_k = J(q_k) = \nabla E(q) |_{q = q_k}.

Also define J'_k = J'(q_k) to be the Hessian matrix with components \partial^2 E / \partial q_i \partial q_j. (Here, of course, q_i and q_j are elements of the vector q.)
Further define J* = J(q*) and J'* = J'(q*). Then J* = 0 and, as a necessary condition for convexity, J'* is a positive definite matrix. We denote the error at each step by

\epsilon_k = q_k - q*.   (1)

Since we will have to expand J and J' about the point q* using a Taylor series, we use the notation J*, J'*, J''*, J'''*, ... to denote the multilinear operators with components

\partial E / \partial q_i,   \partial^2 E / \partial q_i \partial q_j,   \partial^3 E / \partial q_i \partial q_j \partial q_k,   \partial^4 E / \partial q_i \partial q_j \partial q_k \partial q_l, ...

evaluated at q = q*. An n-linear operator operates on an n-tuple of vectors, producing a vector; we denote this operation by juxtaposition. Thus for a bilinear operator B and vectors x and y we have (Bxy)_i = B_{ijk} x_j y_k, with summation over repeated subscripts. Also Bx^2 = Bxx.

The simplest method to analyse is of course Newton's method. We have

q_{k+1} = q_k - (J'_k)^{-1} J_k.   (2)

Expanding J and J' about q* gives

J_k = J(q_k) = J(q* + \epsilon_k) = J* + J'* \epsilon_k + (1/2) J''* \epsilon_k^2 + ...   (3)

and

J'_k = J'(q_k) = J'(q* + \epsilon_k) = J'* + J''* \epsilon_k + (1/2) J'''* \epsilon_k^2 + ...   (4)

From (2),

\epsilon_{k+1} = \epsilon_k - (J'_k)^{-1} J_k   (5)

so that

J'_k \epsilon_{k+1} = J'_k \epsilon_k - J_k
= [J'* + J''* \epsilon_k + (1/2) J'''* \epsilon_k^2 + ...] \epsilon_k - [J* + J'* \epsilon_k + (1/2) J''* \epsilon_k^2 + (1/6) J'''* \epsilon_k^3 + ...]
= (1/2) J''* \epsilon_k^2 + (1/3) J'''* \epsilon_k^3 + ...   (6)

Therefore

\epsilon_{k+1} = (1/2) (J'_k)^{-1} J''* \epsilon_k^2 + O(\epsilon_k^3).   (7)

(We use the notation O(\epsilon^i) for any vector such that O(\epsilon_k^i) -> C \epsilon_k^i as \epsilon_k -> 0   (8)

for some bounded i-linear operator C.) Hence the method is quadratically convergent. The disadvantage is that J'_k must be calculated; this involves the solution of some (n+1)n/2 differential equations (the factor of 1/2 coming from the fact that J'_k is symmetric). Since the other methods require only n or 2n differential equations to be solved per iteration, the number of iterations must be reduced by a factor of n/2 or n/4 to be competitive. (This makes the simplifying assumption that all the equations take about the same time to solve; in practice the ones involving second derivatives may well be more complicated.)

We turn now to the conjugate gradient algorithm.
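Before doing so, the quadratic convergence (7) is easy to observe numerically. A minimal scalar sketch, applying iteration (2) to E(q) = cosh q - q (a hypothetical convex example chosen only because its minimizer q* = arcsinh 1 is known in closed form):

```python
import math

# Newton's method (2) on J(q) = E'(q) = sinh(q) - 1, J'(q) = cosh(q).
qstar = math.asinh(1.0)
q = 2.0
errors = []
for _ in range(5):
    q = q - (math.sinh(q) - 1.0) / math.cosh(q)
    errors.append(abs(q - qstar))

# Quadratic convergence: each error is O(previous error squared),
# with constant roughly |J''*| / (2 J'*) as in (7).
for e_prev, e_next in zip(errors, errors[1:]):
    if e_prev > 1e-12:
        assert e_next < 5.0 * e_prev**2
```

After five iterations the error has fallen from about 1.1 to round-off level, each step roughly squaring the previous error.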
p_0 = r_0 = -J_0
q_{k+1} = q_k + c_k p_k
r_{k+1} = -J_{k+1}   (9)
p_{k+1} = r_{k+1} + b_k p_k

where

b_k = - < r_{k+1} , J'_{k+1} p_k > / < p_k , J'_{k+1} p_k >

and c_k is found by minimizing E(q_k + c p_k) over c > 0. We make use of the following results, proved in Daniel [10], §5.6:

< p_k , J'_k p_{k-1} > = 0,   < r_{k+1} , p_k > = 0,   (10)
|| p_k ||^2 = || r_k ||^2 + b_{k-1}^2 || p_{k-1} ||^2.

We will also assume that, for some positive constants \alpha, \beta,

\alpha I <= J'(q) <= \beta I   (11)

(i.e. J'(q) - \alpha I and \beta I - J'(q) are both non-negative definite matrices).

Each iteration involves minimizing in one direction, so the value E(q_k) decreases at each iteration. Since E is assumed convex, the error \epsilon_k is also decreasing. We can therefore replace an error term O(\epsilon_i) by a larger one O(\epsilon_j), j < i, if necessary; where an error term would involve several \epsilon's, we can replace it by the one with the lowest index for simplicity.

In order to perform the analysis we compare it with the analogous problem of minimizing the quadratic function H(q) = < A(h - q), h - q >, where h = A^{-1} b. This converges to the correct solution in exactly n steps (neglecting round-off errors). What we show in the more general case is that after n steps we achieve a quadratic reduction in the error, i.e.

\epsilon_{k+n} = O(\epsilon_k^2).   (12)

In the quadratic case we had the algorithm

p_0 = r_0 = -(A q_0 - b)
q_{k+1} = q_k + c_k p_k
r_{k+1} = r_k - c_k A p_k   (13)
p_{k+1} = r_{k+1} + b_k p_k

where

b_k = - < r_{k+1} , A p_k > / < p_k , A p_k >   and   c_k = < p_k , r_k > / < p_k , A p_k >.

This leads to the identities

< p_i , A p_j > = 0,   j < i   (14)
< r_i , p_j > = 0,   j < i   (15)
< r_i , A p_j > = 0,   j < i - 1   (16)
< r_i , r_j > = 0,   j < i.   (17)

What we now show is that in the general case the corresponding quantities < p_i , J'* p_j > (for j < i), < r_i , p_j > (for j < i), < r_i , J'* p_j > (for j < i - 1) and < r_i , r_j > (for j < i) all contain only second-order error terms. Arguing by induction on i, we assume that

< p_i , J'* p_j > = O(\epsilon_j^2),   j < i   (18)
< r_i , p_j > = O(\epsilon_j^2),   j < i   (19)
< r_i , J'* p_j > = O(\epsilon_j^2),   j < i - 1   (20)
< r_i , r_j > = O(\epsilon_j^2),   j < i.   (21)

Then

< p_{i+1} , J'* p_i > = < r_{i+1} , J'* p_i > + b_i < p_i , J'* p_i >.   (22)

Substituting for b_i gives
< p_{i+1} , J'* p_i > < p_i , J'_{i+1} p_i > = < r_{i+1} , J'* p_i > < p_i , J'_{i+1} p_i > - < r_{i+1} , J'_{i+1} p_i > < p_i , J'* p_i > = < r_{i+1} , z >, say,   (23)

where

z = < p_i , J'_{i+1} p_i > J'* p_i - < p_i , J'* p_i > J'_{i+1} p_i
= < p_i , (J'* + J''* \epsilon_{i+1} + ...) p_i > J'* p_i - < p_i , J'* p_i > (J'* + J''* \epsilon_{i+1} + ...) p_i
= O(\epsilon_{i+1}).   (24)

But

r_{i+1} = -J(q_{i+1}) = -J(q* + \epsilon_{i+1}) = -J'* \epsilon_{i+1} - (1/2) J''* \epsilon_{i+1}^2 - ... = O(\epsilon_{i+1}).

Hence

< r_{i+1} , z > = O(\epsilon_{i+1}^2)   (25)

and therefore

< p_{i+1} , J'* p_i > = O(\epsilon_{i+1}^2).   (26)

Corresponding to the quadratic-case result r_{i+1} = r_i - c_i A p_i, we show that

r_{i+1} = r_i - c_i J'* p_i + O(\epsilon_i^2).   (27)

Indeed,

r_{i+1} = -J(q_{i+1}) = -J(q* + \epsilon_{i+1}) = -J* - J'* \epsilon_{i+1} - (1/2) J''* \epsilon_{i+1}^2 - ...
= -J'* \epsilon_{i+1} + O(\epsilon_{i+1}^2)
= -J'* [\epsilon_i + c_i p_i] + O(\epsilon_{i+1}^2)
= r_i - c_i J'* p_i + O(\epsilon_i^2).   (28)

From (10), < r_{i+1} , p_i > = 0,   (29)

and from (28),

< r_{i+1} , p_j > = < r_i , p_j > - c_i < J'* p_i , p_j > + O(\epsilon_i^2),   j < i.   (30)

Using the induction hypotheses (18) and (19),

< r_{i+1} , p_j > = O(\epsilon_j^2),   j < i.   (31)

Combining (29) and (31),

< r_{i+1} , p_j > = O(\epsilon_j^2),   j < i + 1.   (32)

Rearranging (9) gives

r_j = p_j - b_{j-1} p_{j-1}.   (33)

Therefore

< r_{i+1} , r_j > = < r_{i+1} , p_j > - b_{j-1} < r_{i+1} , p_{j-1} >   (34)

and, since both < r_{i+1} , p_j > and < r_{i+1} , p_{j-1} > are of second order,

< r_{i+1} , r_j > = O(\epsilon_{j-1}^2).   (35)

From (28),

r_{j+1} = r_j - c_j J'* p_j + O(\epsilon_j^2).   (36)

Therefore

< r_{i+1} , r_{j+1} > = < r_{i+1} , r_j > - c_j < J'* p_j , r_{i+1} > + O(\epsilon_j^2),   (37)

so that

c_j < J'* p_j , r_{i+1} > = O(\epsilon_j^2).   (38)

It may be shown (Daniel [10], §5.6) that if \alpha I <= J'* <= \beta I, then

c_j >= \alpha / \beta^2.   (39)

We can therefore divide equation (38) by c_j without affecting the order of the approximation. Hence

< J'* p_j , r_{i+1} > = O(\epsilon_j^2),   j <= i.   (40)

Finally,

< J'* p_j , p_{i+1} > = < J'* p_j , r_{i+1} > + b_i < J'* p_j , p_i >   (41)

and since, from (10),

|| p_{i+1} ||^2 = || r_{i+1} ||^2 + b_i^2 || p_i ||^2,   (42)

we have

|b_i| <= || p_{i+1} || / || p_i ||,   (43)

so the b_i are bounded above.
Therefore

< J'* p_j , p_{i+1} > = O(\epsilon_j^2),   j <= i.   (44)

This completes the induction step of the proof; the case i = 0 can be obtained from the same argument, using only those steps involving i and i+1 which do not require the induction hypotheses.

We have proved that, for all j < n,

< J'* p_j , p_n > = O(\epsilon_0^2);   (45)

in other words, p_n and the p_j are "nearly J'*-orthogonal". Since p_n may be expressed as a linear sum of the {p_j}_{j=0}^{n-1}, it follows that

p_n = O(\epsilon_0^2).   (46)

Also, since

r_n = -J(q_n) = -J* - J'* \epsilon_n - (1/2) J''* \epsilon_n^2 - ...,   (47)

we have

\epsilon_n = -(J'*)^{-1} r_n + O(\epsilon_n^2).   (48)

From

|| p_n ||^2 = || r_n ||^2 + b_{n-1}^2 || p_{n-1} ||^2   (49)

we have

|| r_n || <= || p_n ||,   (50)

from which we may conclude that

\epsilon_n = O(\epsilon_0^2).   (51)

Each n steps will thus produce a quadratic factor in the convergence, so a quadratic improvement can be guaranteed by solving 2n differential equations per iteration over n iterations, while a similar bound can be obtained from one iteration of Newton's method solving about n^2 / 2 equations. In practice the conjugate gradient method seems to perform better than Newton's method, often taking only about n iterations to converge. If the modification is made in which the gradient \nabla E(q) is replaced by that obtained from the Newton-Gauss method, even faster convergence rates may be obtained.

8. SMOOTHING AND CONVEXITY

If it is desired to smooth the functions being identified in order to reduce the effect of errors in the data, then we may add to the error functional terms involving norms of f. For simplicity, we restrict this analysis to the simplest case of identifying f where

dx/dt = f(x),   x(0) = x_0,   (1)

given some approximation \bar{x} to x_f, the solution of (1). We minimize the functional

E_\alpha(f) = || x_f - \bar{x} ||^2 + \alpha W[f]

where

W[f] = C_0 || f ||^2 + C_1 || f' ||^2   and   || f ||^2 = \int_a^b f^2(x) dx,   (2)

with \alpha, C_0 and C_1 non-negative constants. This procedure has two additional beneficial effects: for \alpha large enough the functional E_\alpha(f) may be shown to be convex, and in certain cases there is a unique solution for positive \alpha but not for \alpha = 0.
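A discretized version of E_\alpha illustrates the effect of the penalty term. The sketch below is an assumption-laden stand-in: it fits noisy sampled data with a monomial basis (not the spline basis) by solving the regularized normal equations, and checks that a larger \alpha reduces the roughness seminorm corresponding to || f' ||^2.

```python
import numpy as np

# Discretized smoothed functional (2):
#   E_alpha(q) = ||Phi q - xbar||^2 + alpha * (C0 ||q||^2 + C1 q^T G q)
# with Phi the sampled basis and G_ij an approximation of <phi_i', phi_j'>.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 101)
Phi = np.vander(x, 8, increasing=True)              # phi_j(x) = x^j
dPhi = np.hstack([np.zeros((101, 1)),
                  Phi[:, :-1] * np.arange(1, 8)])   # phi_j'(x) = j x^{j-1}
h = x[1] - x[0]
G = dPhi.T @ dPhi * h                               # Riemann sum for the seminorm
xbar = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)

def solve(alpha, C0=1.0, C1=1.0):
    # Minimizer of the quadratic E_alpha: regularized normal equations.
    A = Phi.T @ Phi + alpha * (C0 * np.eye(8) + C1 * G)
    return np.linalg.solve(A, Phi.T @ xbar)

q_small, q_big = solve(1e-8), solve(10.0)
# Heavier smoothing gives a smaller roughness seminorm q^T G q.
assert q_big @ G @ q_big < q_small @ G @ q_small
```

The penalty also makes the coefficient matrix positive definite for any alpha > 0, which is the discrete counterpart of the convexity argument that follows.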
The addition of the extra term will alter the f's obtained, and so we must consider the following questions:

(i) If f_\alpha minimizes E_\alpha(f), what, if anything, can be said about the bound on || f - f_\alpha ||?

(ii) What value of \alpha is required to ensure that E_\alpha(f) is a convex functional?

Define F to be the set of functions {f} and Y to be the set of all solutions to (1). Let X be the mapping X: F -> Y, where x = Xf is the solution to (1). (More precisely, we should write x = X(f, x_0), since the solution depends on the initial condition, but it is convenient to consider f as representing the pair (f, x_0).) Suppose we have an approximation x_\delta (= \bar{x}) to x such that

|| x - x_\delta || = \delta.   (3)

We assume the existence of an inverse operator R_\alpha: Y -> F defined by R_\alpha(x) = f*, f* being that f which minimizes || Xf - x ||^2 + \alpha W[f]. Let

f_\alpha = R_\alpha(x),   x_\alpha = X f_\alpha,
f_{\alpha\delta} = R_\alpha(x_\delta),   x_{\alpha\delta} = X f_{\alpha\delta}.

(This analysis is based on an analysis of another inverse problem by Cabayan and Belford [7].)

Since f_{\alpha\delta} minimizes || Xf - x_\delta ||^2 + \alpha W[f],

|| X f_{\alpha\delta} - x_\delta ||^2 + \alpha W[f_{\alpha\delta}] <= || Xf - x_\delta ||^2 + \alpha W[f].   (4)

Since Xf = x, the right-hand side is at most \delta^2 + \alpha W[f], so

|| x_{\alpha\delta} - x_\delta ||^2 <= \delta^2 (1 + (\alpha / \delta^2) W[f])   (5)

and, using the triangle inequality,

|| x_{\alpha\delta} - x || <= || x_{\alpha\delta} - x_\delta || + || x_\delta - x || <= \delta [1 + (1 + (\alpha / \delta^2) W[f])^{1/2}].

Taking \delta = 0,

|| x_\alpha - x ||^2 <= \alpha W[f].   (6)

Therefore x_\alpha -> x (in norm) as \alpha -> 0. It does not seem possible to give a bound on || f - f_\alpha ||, and we have to be content (as in a backward error analysis) with the bound on X f_\alpha - X f = x_\alpha - x given by equation (6). Making the assumption that the inverse operator R_0 exists, however, we do have f_\alpha -> f as \alpha -> 0.

To ensure that a global minimum is obtained for E_\alpha(f), we may require it to be convex. Clearly for \alpha sufficiently large convexity can be assured, since \alpha W[f] will be the predominant term. To get a bound on \alpha, we may proceed as follows.
We require that for all f_1, f_2:

    E_a((f_1 + f_2)/2) <= (1/2) E_a(f_1) + (1/2) E_a(f_2),                (7)

where, as before, E_a(f) = ||Xf - x_d||^2 + a W[f] and x = Xf solves dx/dt = f(x), x(0) = x_0.

Let f = (f_1 + f_2)/2, x = Xf, x_1 = X f_1, x_2 = X f_2. Then

    (1/2) E_a(f_1) + (1/2) E_a(f_2) - E_a((f_1 + f_2)/2)
      = (1/2) INT (x_1 - x_d)^2 dt + (1/2) INT (x_2 - x_d)^2 dt - INT (x - x_d)^2 dt
          + a [(1/2) W[f_1] + (1/2) W[f_2] - W[(f_1 + f_2)/2]]
      = INT_0^T ((1/2) x_1^2 + (1/2) x_2^2 - x^2) dt
          - 2 INT_0^T ((1/2) x_1 + (1/2) x_2 - x) x_d dt
          + (a/4) W[f_1 - f_2].                                           (8)

A sufficient condition for this to be positive, and hence for E_a(f) to be convex, is

    (a/4) W[f_1 - f_2] >= 2 INT_0^T ((1/2) x_1 + (1/2) x_2 - x) x_d dt
                            - INT_0^T ((1/2) x_1^2 + (1/2) x_2^2 - x^2) dt,

i.e.,

    (a/4) W[f_1 - f_2] >= (1/2) | INT_0^T (x_1 - x)(x_1 + x - 2 x_d) dt
                                 + INT_0^T (x_2 - x)(x_2 + x - 2 x_d) dt |,   (9)

and a sufficient condition for (9) to hold is:

    a >= 2 [ ||x_1 - x|| ||x_1 + x - 2 x_d|| + ||x_2 - x|| ||x_2 + x - 2 x_d|| ] / W[f_1 - f_2].   (10)

In order to bound ||x_1 - x|| in terms of ||f_1 - f||, we require the following theorem: if ||f_1 - f_2|| = max_x |f_1(x) - f_2(x)| = e, |y_1(0) - y_2(0)| <= d, and f_1 satisfies a Lipschitz condition with constant k, then for 0 <= t <= T,

    |y_1(t) - y_2(t)| <= d e^{kt} + (e/k)(e^{kt} - 1).

With f = SUM_i q_i phi_i expressed in the orthonormal basis {phi_i},

    ||f||^2 = SUM_i q_i^2   and   ||f'||^2 = SUM_i SUM_j q_i q_j <phi'_i, phi'_j>.   (20)

Hence

    dW[f]/dq_j = 2 C_0 q_j + 2 C_1 SUM_i q_i <phi'_i, phi'_j>,            (21)

and

    d^2 W[f]/dq_i dq_j = 2 C_0 delta_ij + 2 C_1 <phi'_i, phi'_j>.         (22)

It has been found useful in practice to vary the value of a during the calculation, starting at (say) a = 10^-1 and at each iteration reducing it by a factor of 10. In this way convergence to a local (but not global) minimum caused by choosing a too small can sometimes be avoided, while at the same time avoiding large errors due to excessive smoothing.

9. STABILITY AND ERROR ANALYSIS OF NUMERICAL METHODS FOR SOLVING DIFFERENTIAL EQUATIONS

In this chapter we consider the effect of truncation and roundoff error introduced by the numerical method used for solving the differential equations. The usual analysis gives error bounds and stability results for the difference between the computed solution X and the exact solution x.
Since we are concerned with finding a function f rather than a solution x, we want to bound the difference between f and some function F that, when substituted for f and the equation solved, would yield the computed solution X.

Consider as usual the equation dx/dt = f(x(t)) and suppose that some multistep method of the form

    SUM_{j=0}^{k} [A_j x_{i-j} + h B_j f(x_{i-j})] = 0                    (1)

is being used. We will make the simplifying assumption that equation (1) represents a corrector and that sufficient iterations of the corrector are made so that (1) holds exactly except for roundoff error. We assume that (1) is a single equation, although the results could readily be extended to systems.

If X is the computed solution, then

    SUM_{j=0}^{k} [A_j X_{i-j} + h B_j f(X_{i-j})] = R_i                  (2)

where R_i is the roundoff error produced at the i-th step in evaluating f(x) and in computing the vector dot product. Since x is the exact solution, we can insert a truncation error term into (1), giving

    SUM_{j=0}^{k} [A_j x_{i-j} + h B_j f(x_{i-j})] = T*_i                 (3)

where T*_i = C h^{q+1} x^{(q+1)} + O(h^{q+2}).

If the computed solution is satisfied by some equation X' = F(X), then

    SUM_{j=0}^{k} [A_j X_{i-j} + h B_j F(X_{i-j})] = T_i                  (4)

where T_i = C h^{q+1} X^{(q+1)} + O(h^{q+2}). If we define dF_i = dF(X_i) = F(X_i) - f(X_i), subtracting equation (2) from equation (4) gives

    SUM_{j=0}^{k} h B_j dF_{i-j} = T_i - R_i.                             (5)

Since X is only defined at discrete intervals by the X_i, we cannot hope to bound dF(X), but must restrict the analysis to the set {dF(X_i)}. Any function X may be chosen as long as it passes through the points {X_i}. The choice of X is by no means unimportant, since it affects the truncation error term T_i = C h^{q+1} X^{(q+1)} + O(h^{q+2}).

The general solution of the difference equation (5) may be found by taking any particular solution and adding to it any linear combination of solutions of the corresponding homogeneous equation

    SUM_{j=0}^{k} h B_j dF_{i-j} = 0.                                     (6)

A particular solution of (5) may be obtained in terms of solutions of the homogeneous equation (Henrici [14] §5-2). Let the sequence {y_i}, i >= 0, satisfy y_i = 0 for i = 0, 1, ..., k-1 and

    SUM_{j=0}^{k} h B_j y_{i-j} = h   for i = k,
                                = 0   for i > k.                          (7)

Then y_k = 1/B_0, and a particular solution of (5) is

    dF_i = (1/h) SUM_{j=k}^{i} (T_j - R_j) y_{i-j+k},   i >= k;
    dF_i = 0,                                            i < k.           (8)

The most general solution of (5) is therefore

    dF_i = SUM_{j=1}^{k} c_j xi_{i,j} + (1/h) SUM_{j=k}^{i} (T_j - R_j) y_{i-j+k},

where the sequences {xi_{i,j}} (j = 1, 2, ..., k) are a set of linearly independent solutions of the homogeneous equation. These solutions are given by

    xi_{i,j} = r_j^i                                                      (9)

where r_j is a root of the equation

    sigma(r) = SUM_{j=0}^{k} B_j r^{k-j} = 0.                             (10)

(Or, in the case where sigma(r) = 0 has a root of multiplicity m, then r_j^i, i r_j^i, i^2 r_j^i, ..., i^{m-1} r_j^i will be solutions.)

If r_j is a simple root of (10), then xi_{i,j} remains bounded if and only if |r_j| <= 1. If r_j is a root of multiplicity m > 1, then the corresponding xi's (= r_j^i, i r_j^i, ..., i^{m-1} r_j^i) remain bounded if and only if |r_j| < 1.

Since y_i satisfies the homogeneous difference equation for i > k, y_i is a linear combination of the xi's for i >= k. Hence, if all the roots of (10) lie within the unit circle or are simple on the unit circle, then the sequence {y_i} is bounded; i.e., there exists a gamma < infinity such that

    |y_i| <= gamma   for all i.                                           (11)

Similarly, there exists a kappa < infinity such that

    |SUM_{j=1}^{k} c_j xi_{i,j}| <= kappa   for all i.                    (12)

Therefore

    |dF_i| <= kappa + (gamma/h) SUM_{j=k}^{i} |T_j - R_j|.                (13)

If a bound can be placed on the truncation and roundoff errors,

    |T_i| <= T,   |R_i| <= R   for all i,                                 (14)

then

    |dF_i| <= kappa + (i/h)(T + R) gamma.                                 (15)

Any method used must of course be stable, and hence satisfy the condition that the roots of the equation

    rho(r) = SUM_{j=0}^{k} A_j r^{k-j} = 0                                (16)

be inside the unit circle or be simple on the unit circle. To be useful for this work, a method must therefore both be stable and produce a bound on dF.

For the three-step methods

    SUM_{j=0}^{3} [A_j x_{i-j} + h B_j f(x_{i-j})] = 0,                   (17)

it may be shown that for a fourth order method there are two free parameters (B_0, B_3 say) and that (Gear [12] §8.4)

    B_1 = 1/2 + B_0 - 2 B_3,
    B_2 = 1/2 - 2 B_0 + B_3.

Equation (10) becomes

    B_0 r^3 + (1/2 + B_0 - 2 B_3) r^2 + (1/2 - 2 B_0 + B_3) r + B_3 = 0.  (18)

As shown in Appendix 2, a necessary and sufficient condition for equation (18) to have all its roots within the unit circle or simple on the unit circle is

    B_0 = B_3 > 1/8.

In this case the A_j, B_j have the following values:

    j      0                1               2               3
    A_j    -1/12 - 2B_0     -3/4 + 6B_0     3/4 - 6B_0      1/12 + 2B_0
    B_j    B_0              1/2 - B_0       1/2 - B_0       B_0           (19)

The equation rho(r) = 0 becomes

    (r - 1) [(1/12 + 2B_0) r^2 + (5/6 - 4B_0) r + (1/12 + 2B_0)] = 0.     (20)

Since the coefficients of r^2 and 1 in the quadratic factor are equal, and the discriminant

    (5/6 - 4B_0)^2 - 4 (1/12 + 2B_0)^2 = 2/3 - 8 B_0

is negative for B_0 > 1/12, the roots of (20) are one and a complex conjugate pair on the unit circle. The method is therefore only weakly stable.

If we set B_0 = 1, B_1 = B_2 = B_3 = 0, then all the roots of sigma(r) = 0 are zero and we get (directly from equation (5))

    dF_i = (T_i - R_i)/h.                                                 (21)

This yields the third order three-step method

    j      0        1    2       3
    A_j    -11/6    3    -3/2    1/3
    B_j    1        0    0       0

for which the roots of rho(r) = 0 are 1 and the complex conjugate pair (7 ± i sqrt(39))/22, of modulus sqrt(88)/22 ~ 0.43. The method is therefore strongly stable.

10. THE FORMAC DIFFERENTIATION PROGRAM

The purpose of this program is to take the differential equations containing unknown functions and to generate from them the necessary equations required by the various solution algorithms used. It makes use of Formac, a system for carrying out formal manipulations on mathematical expressions; a full description of the Formac language may be found in [20].

The program consists of five routines. The first, EDIT_INPUT, accepts card input of the equations to be identified and converts them into a form acceptable to Formac. These are given to REORDER, which sorts the equations so that the i-th equation holds the highest derivative of x_i. This is merely to provide a suitable ordering scheme.
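REORDER's bookkeeping can be sketched in a few lines. This is a hypothetical Python reimplementation of its selection pass, not the PL/I original; order[i][j] plays the role of the program's ORDER2(I, J) table:

```python
def reorder(order):
    # order[i][j]: highest derivative of x_j occurring in equation i.
    # Selection pass: give each variable j the equation holding its
    # highest derivative, mirroring REORDER's DO-loop swap.
    n = len(order)
    perm = list(range(n))
    for j in range(n):
        best = max(range(j, n), key=lambda i: order[perm[i]][j])
        perm[j], perm[best] = perm[best], perm[j]
    return perm   # perm[j] = index of the equation assigned to x_j

# two equations: equation 0 holds x2'' (and x1'), equation 1 holds x1''
print(reorder([[1, 2], [2, 0]]))   # [1, 0]
```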
The procedure EXPLICIT examines each equation in turn and determines whether the i-th equation contains the highest derivative of x_i (x_i^(h), say) explicitly. If not, it obtains an explicit representation of x_i^(h). The i-th equation is then of the form

    x_i^(h) = F_i

where F_i is independent of x_i^(h) and higher derivatives of x_i. The remaining two routines, FIRST_DERIVATIVE and SECOND_DERIVATIVE, then differentiate the {F_i} with respect to each of the {q_j} representing the set of functions to be identified. Two cases can arise: either a particular q_j represents an initial condition, or some subset of the {q_j} represents one of the functions being identified. The two cases are treated separately; in the latter case, since all the equations are of the same form, only one representative need be formed. A detailed discussion of each procedure follows.

EDIT_INPUT: This is a simple editing program, written in PL/I, to convert the equations input. Variables are written X1, X2, ... and unknown functions F1, F2, ... . Derivatives are indicated by primes, and all equations are terminated by semicolons. Thus an equation such as

    dx_1/dt = (x_2 + f_1(x_3)) f_2(x_2)

would be input as

    X1' = (X2 + F1(X3)) * F2(X2);

For each x we need to record its index and which derivative it is, and, since it is a function of all the {q_j}, we transform it into the form X.(i, d, QS), the QS being provided to allow differentiation with respect to any of the q_j. Similarly the function Fi(s) is transformed into F.(Q(i), {transformation of s}), and the equation E_1 = E_2 becomes the zero expression (E_1) - (E_2). Thus the above equation would become

    (X.(1, 1, QS)) - (X.(2, 0, QS) + F.(Q(1), X.(3, 0, QS)) * F.(Q(2), X.(2, 0, QS))).

EXPLICIT: To determine whether the i-th equation (G_i = 0, say) can be made explicit in the highest derivative of x_i (= x_i^(h), say), differentiate with respect to x_i^(h) and see if the resultant expression is free of x_i^(h) (as determined by differentiating again and checking whether the resulting expression is zero).
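The test EXPLICIT performs symbolically can be mimicked numerically: G is explicit-solvable in the highest derivative v exactly when its second derivative in v vanishes identically. A hedged sketch (the sample points and tolerance are arbitrary; the real program works on Formac expressions, not floating-point samples):

```python
# G(v) is linear in v (G = r*v + s, hence v = -s/r) iff d2G/dv2 == 0;
# emulate EXPLICIT's double-differentiation test with second differences
# at a few arbitrary sample points.
def linear_in_v(G, h=1e-3, tol=1e-6, samples=(0.3, 1.1, 2.4)):
    return all(abs(G(v + h) - 2.0 * G(v) + G(v - h)) / h**2 < tol
               for v in samples)

print(linear_in_v(lambda v: 3.0 * v + 2.0))   # True:  explicit form available
print(linear_in_v(lambda v: v * v + 1.0))     # False: not linear in v
```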
The equation is then of the form

    G_i = x_i^(h) r_i + s_i = 0,

where r_i and s_i do not contain x_i^(h), and x_i^(h) = -s_i/r_i. If not, x_i^(h+1) is calculated explicitly, by differentiating G_i = 0 with respect to t and solving for x_i^(h+1), as described in Chapter 4. In either case, a set of equations in a form suitable for numerical methods is obtained.

FIRST_DERIVATIVE: From each equation x_i^(h) = H_i (say), we require a representative equation obtained by differentiating with respect to one q_k of the {q_k} parameterizing each of the unknown functions; call this Q(J). Since each of the x's is a function of all the {q_k}, we first replace the parameter QS by QCHAIN = Q(1) + Q(2) + ... + Q(S), thus ensuring differentiation of each x by Q(J). Note that the derivative of f with respect to a parameter q_i is the basis function phi_i(x) (denoted by PHI), and that of f' is phi'_i(x) (denoted by PHI1). To obtain the equations for those {q_k} representing initial conditions, we may simply set phi(x) (and phi'(x)) to zero. Since we also require to minimize in a particular direction p, we need to form the equations for SUM_i (dx/dq_i) p_i (called DXP in the program). The expressions SUM_i phi_i(x) p_i and SUM_i phi'_i(x) p_i (which may be calculated once p is known) are designated PHIP and PHIP1 respectively.

SECOND_DERIVATIVE: This proceeds in a similar manner to the previous routine, except that only second derivatives of the form p'J(q)p are required. The equations for the {q_k} representing initial conditions are obtained separately, as are the second derivatives needed for minimizing in the direction p. In each case the equations are formed by summing the derivatives with respect to QS (representing the initial conditions) and each Q(K) (representing each unknown function f_k).

Table 1 lists the symbols used in the Formac program and their meanings.

    Symbol             Meaning
    DXI.(r, s, QS)     d(d^s x_r / dt^s)/dq_i
    DXP.(r, s, QS)     SUM_i p_i d(d^s x_r / dt^s)/dq_i
    F.(Q(r), s)        f_r(s)
    F1.(Q(r), s)       df_r/ds
    F2.(Q(r), s)       d^2 f_r/ds^2
    PHI.(s)            phi_i(s)
    PHI1.(s)           phi'_i(s)
    PHIP.(s)           SUM_i phi_i(s) p_i
    PHIP1.(s)          SUM_i phi'_i(s) p_i

    TABLE 1

The equations are EQ1(I, J) for the i-th equation differentiated with respect to Q(J) representing f_j, EQIC1(I) for the initial conditions, and EQP1(I) in the direction p. For the second derivatives these are EQ2(I, J), EQIC2(I) and EQP2(I) respectively. Although the output from the differentiation program is at present manually transcribed to the numerical program, there is no reason in principle why this could not be automated (the transcription process is very tedious and error prone). An algebraic simplification procedure would be a most desirable attribute of such a program.

11. TEST PROBLEMS

Two examples were run, in each case comparing the conjugate gradient method with the Newton-Gauss method. In each case runs were made with and without "noise" added to the input data, to observe its effect not only on the solution obtained but also on the rates of convergence of the two algorithms.

Example 1

This is the simplest possible example, taking dx/dt = f(x) and input data of the form

    x^(t) = -1/(t + 2)

specified on the interval [0, 10] at discrete points a distance 0.05 apart. This will give the solution f(x) = x^2 on the interval (determined from the range of the observed data) [-1/2, -1/12]. A smoothing term

    a [c_0 ||f||^2 + c_1 ||f'||^2]

was introduced with c_0 = 0, c_1 = 1, and a initially set to 10^-2 and reduced by a factor of 10 at each iteration until a minimum of 10^-9 was reached. The function f was approximated on the interval [-1/2, 0] by a spline with 8 equally spaced knots. At each iteration the value of the error functional and the L2 norm of its gradient were produced. The programs were rerun after applying random noise with zero mean and a variance of 0.01 to the input data. The results are shown in Tables 1-4. In the noise-free case both methods behave similarly, exhibiting second order convergence near the solution as expected.
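The data of Example 1 can be checked directly: x^(t) = -1/(t+2) satisfies dx/dt = x^2 exactly, which is why f(x) = x^2 is the expected identification. A small sketch (the finite-difference step is illustrative):

```python
# Example 1 data: xhat(t) = -1/(t+2) on [0, 10] at spacing 0.05 satisfies
# dx/dt = x^2, so the identified f should approach f(x) = x^2 on [-1/2, -1/12].
def xhat(t):
    return -1.0 / (t + 2.0)

h = 1.0e-5
ok = all(
    abs((xhat(t + h) - xhat(t - h)) / (2.0 * h) - xhat(t) ** 2) < 1.0e-8
    for t in (0.05 * i for i in range(201))
)
print(ok)   # True
```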
Under such circumstances the Newton-Gauss method would be preferable to the conjugate gradient, since the amount of calculation required per step is considerably less. When noise was present, the conjugate gradient method converged somewhat faster. It may be noted that in all cases the function was better identified in the middle of the range, where more data points were available, than at the edges.

Table 1. Newton-Gauss Method - no noise.

    Iteration    ||grad E||    E
        1        6.77E 00     9.99E-02
        2        3.39E 00     5.92E-02
        3        4.51E 00     2.90E-02
        4        2.16E 00     4.70E-03
        5        2.13E-01     1.61E-04
        6        1.27E-02     1.31E-06
        7        2.82E-04     4.77E-10
        8        2.80E-04     4.73E-10

       x       f(x) computed    exact
     -0.50     2.49986E-01     2.50000E-01
     -0.45     2.02501E-01     2.02500E-01
     -0.40     1.60000E-01     1.60000E-01
     -0.35     1.22500E-01     1.22500E-01
     -0.30     9.00001E-02     9.00000E-02
     -0.25     6.24999E-02     6.25000E-02
     -0.20     4.00000E-02     4.00000E-02
     -0.15     2.25000E-02     2.25000E-02
     -0.10     9.99997E-03     1.00000E-02
     -0.05     2.50361E-03     2.50000E-03
      0.00     3.58338E-05     0.00000E 00

Table 2. Conjugate Gradient Method - no noise.

    Iteration    ||grad E||    E
        1        6.77E 00     9.99E-02
        2        5.59E 00     5.92E-02
        3        2.29E 00     1.60E-02
        4        4.47E-01     5.90E-04
        5        1.10E-01     1.77E-05
        6        7.23E-04     1.54E-08
        7        7.20E-04     1.40E-08
        8        7.21E-04     1.40E-08
        9        3.13E-05     2.50E-10

       x       f(x) computed    exact
     -0.50     2.49985E-01     2.50000E-01
     -0.45     2.02501E-01     2.02500E-01
     -0.40     1.60000E-01     1.60000E-01
     -0.35     1.22500E-01     1.22500E-01
     -0.30     9.00001E-02     9.00000E-02
     -0.25     6.24999E-02     6.25000E-02
     -0.20     4.00001E-02     4.00000E-02
     -0.15     2.25000E-02     2.25000E-02
     -0.10     9.99997E-03     1.00000E-02
     -0.05     2.50390E-03     2.50000E-03
      0.00     3.82108E-05     0.00000E 00

Table 3. Newton-Gauss Method - with noise.

    Iteration    ||grad E||    E
        1        6.86E 00     1.01E-01
        2        5.53E 00     5.87E-02
        3        4.29E 00     2.72E-02
        4        2.11E 00     5.18E-03
        5        1.64E-01     1.21E-03
        6        2.41E-01     9.90E-04
        7        4.32E-02     9.38E-04
        8        6.52E-03     9.32E-04
        9        1.69E-03     9.32E-04

       x       f(x) computed    exact
     -0.50     3.93796E-01     2.50000E-01
     -0.45     1.89431E-01     2.02500E-01
     -0.40     1.84714E-01     1.60000E-01
     -0.35     1.03462E-01     1.22500E-01
     -0.30     8.22265E-02     9.00000E-02
     -0.25     7.12669E-02     6.25000E-02
     -0.20     4.00412E-02     4.00000E-02
     -0.15     2.53864E-02     2.25000E-02
     -0.10     1.01355E-02     1.00000E-02
     -0.05    -5.73504E-02     2.50000E-03
     -0.00    -8.29363E-01     0.00000E 00

Table 4. Conjugate Gradient Method - with noise.

    Iteration    ||grad E||    E
        1        6.86E 00     1.01E-01
        2        5.53E 00     5.87E-02
        3        2.01E 00     1.30E-02
        4        7.96E-02     1.05E-03
        5        3.82E-02     9.55E-04
        6        1.44E-02     9.35E-04
        7        6.51E-03     9.32E-04
        8        4.04E-04     9.32E-04
        9        5.38E-05     9.32E-04

       x       f(x) computed    exact
     -0.50     3.93990E-01     2.50000E-01
     -0.45     1.89365E-01     2.02500E-01
     -0.40     1.84783E-01     1.60000E-01
     -0.35     1.03447E-01     1.22500E-01
     -0.30     8.22342E-02     9.00000E-02
     -0.25     7.12708E-02     6.25000E-02
     -0.20     4.00391E-02     4.00000E-02
     -0.15     2.53827E-02     2.25000E-02
     -0.10     1.01508E-02     1.00000E-02
     -0.05    -5.85401E-02     2.50000E-03
     -0.00    -8.40439E-01     0.00000E 00

Example 2

This example uses a system of two equations with two unknown functions. Starting with the vector equation

    d^2 x/dt^2 = f(x)

and splitting it into horizontal and vertical components x_1 and x_2, we try the equations

    d^2 x_1/dt^2 = f_1(x_1) f_2(x_1^2 + x_2^2),
    d^2 x_2/dt^2 = f_1(x_2) f_2(x_1^2 + x_2^2).

If the original equation represents an inverse square law, then we should obtain for the functions f_1 and f_2

    f_1(x) = k x,   f_2(x) = (1/k) x^(-3/2),

where k is an arbitrary constant. The non-uniqueness of f_1 and f_2 is handled by introducing a smoothing term

    a SUM_{i=1,2} [c_0 ||f_i||^2 + c_1 ||f_i'||^2]

into the error functional, with c_0 = c_1 = 1. Again a was allowed to vary from 10^-1 downward, to a smaller minimum in the absence of noise than when noise was added to the data. The results are shown in Tables 5-8. As in the first example, for noise-free data both methods gave similar results (although the conjugate gradient method required about twice as much time per iteration).
When noise was present, however, the conjugate gradient method showed a definite superiority. Appendix 1 gives a computer generated list of the equations required for this problem.

Table 5. Newton-Gauss Method - no noise.

    Iteration    ||grad E||    E
        1        5.72E 02     7.19E-01
        2        5.66E-01     1.70E-02
        3        1.96E-01     9.77E-04
        4        1.24E-02     1.91E-04
        5        3.29E-02     1.37E-04
        6        3.73E-02     1.01E-04
        7        3.74E-02     6.66E-05
        8        3.29E-02     4.00E-05
        9        2.33E-04     1.02E-07
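The non-uniqueness that the smoothing term resolves can be seen concretely: the pair f_1(x) = kx, f_2(s) = s^(-3/2)/k yields the same right-hand sides for every k. A sketch (the sample point and k values are arbitrary):

```python
# Example 2's factorization is non-unique: f1(x) = k*x and f2(s) = s**-1.5 / k
# give the same right-hand sides f1(xi)*f2(x1^2 + x2^2) = xi / r^3 for every k
# (k is the arbitrary constant of the text).
def rhs(x1, x2, k):
    s = x1 * x1 + x2 * x2            # s = x1^2 + x2^2 = r^2
    f1 = lambda x: k * x             # f1(x) = k x
    f2 = lambda u: u ** -1.5 / k     # f2(u) = u^(-3/2) / k
    return f1(x1) * f2(s), f1(x2) * f2(s)

a = rhs(3.0, 4.0, k=1.0)
b = rhs(3.0, 4.0, k=7.5)
print(a, b)   # identical pairs: the smoothing term is what selects a particular k
```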
APPENDIX 1. COMPUTER GENERATED EQUATIONS

[The appendix reproduces the EQNGEN program - the PL/I-Formac procedures EDIT_INPUT, REORDER, EXPLICIT, FIRST_DERIVATIVE and SECOND_DERIVATIVE - followed by its output for the input equations of Example 2,

    X1'' = F1(X1) * F2(X1**2 + X2**2) ;
    X2'' = F1(X2) * F2(X1**2 + X2**2) ;

namely the generated first and second derivative expressions EQN(I), EQ1(I,J), EQIC1(I), EQP1(I), EQ2(I,J), EQIC2(I) and EQP2(I).]
APPENDIX 2: The condition that the roots of a certain cubic lie on or within the unit circle.

Let r = (1 + z)/(1 - z). The interior of |r| <= 1 corresponds to the left half plane Re(z) < 0. Consider the region in the B_0 - B_3 plane where the roots of (9.18) are all inside |r| <= 1. It is bounded by the locus of points where one or more roots are one in magnitude, that is, where z = ix, x real. Substituting for r in (9.18), we get

    B_0 ((1+z)/(1-z))^3 + (1/2 + B_0 - 2 B_3)((1+z)/(1-z))^2 + (1/2 - 2 B_0 + B_3)((1+z)/(1-z)) + B_3 = 0

or

    (1 - z)^(-3) [1 + 6(B_0 - B_3) z + (4(B_0 + B_3) - 1) z^2 - 2(B_0 - B_3) z^3] = 0.

If z ≠ 1 (r ≠ ∞), then for z = ix we get

    Real part:       1 = (4(B_0 + B_3) - 1) x^2            (1)
    Imaginary part:  2(B_0 - B_3)(x^2 + 3) x = 0           (2)

From (1), x ≠ 0.
x^2 + 3 ≠ 0 because x is real; hence B_0 = B_3 from (2), and 4(B_0 + B_3) > 1. Therefore B_0 = B_3 > 1/8 is the required locus. Since one root r is outside the unit circle for small B_0, and since this locus does not divide the B_0 - B_3 plane into two or more regions, all points of the plane except possibly those on B_0 = B_3 have roots r outside the unit circle. On B_0 = B_3 > 1/8 the three roots are z = ∞ and z = ±i(8B_0 - 1)^(-1/2), corresponding to r = -1 and a complex conjugate pair on the unit circle.

UNCLASSIFIED
Security Classification

DOCUMENT CONTROL DATA - R&D
(Security classification of title, body of abstract and indexing annotation must be entered when the overall report is classified)

1. ORIGINATING ACTIVITY: Center for Advanced Computation, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801
2a. REPORT SECURITY CLASSIFICATION: UNCLASSIFIED
3. REPORT TITLE: Numerical Methods for the Identification of Differential Equations
4. DESCRIPTIVE NOTES (Type of report and inclusive dates): Doctoral Dissertation
5. AUTHOR (First name, middle initial, last name): Raymond Jonathan Lermit
6. REPORT DATE: June 12, 1972
7a. TOTAL NO. OF PAGES: 92        7b. NO. OF REFS: 21
8a. CONTRACT OR GRANT NO.: DAHC04-72-C-0001
8b. PROJECT NO.: ARPA Order No. 1899
9a. ORIGINATOR'S REPORT NUMBER: CAC Document No. 49
10. DISTRIBUTION STATEMENT: Copies may be requested from the address given in (1) above.
12. SPONSORING MILITARY ACTIVITY: U.S. Army Research Office-Durham, Duke Station, Durham, North Carolina
13. ABSTRACT: This dissertation considers computational methods designed to aid in mathematical model building. Specifically, it discusses methods of determining ordinary differential equations given their solution in the form of observed data.
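The boundary locus derived in Appendix 2 can be checked numerically. The sketch below is an illustration and not part of the original dissertation's computations: the function name r_roots and the test values B_0 = B_3 = 0.2 and (B_0, B_3) = (0.2, 0.1) are arbitrary choices. It forms the transformed polynomial 1 + 6(B_0 - B_3)z + (4(B_0 + B_3) - 1)z^2 - 2(B_0 - B_3)z^3, finds its roots, and maps them back to the r-plane through r = (1 + z)/(1 - z):

```python
import numpy as np

def r_roots(B0, B3):
    """Roots of the transformed cubic, mapped back to the r-plane.

    The z-polynomial comes from substituting r = (1+z)/(1-z) into the
    cubic (9.18) and clearing the factor (1-z)^3.  numpy.roots expects
    coefficients from the highest power down: [z^3, z^2, z^1, z^0].
    """
    coeffs = np.array([-2.0 * (B0 - B3),
                       4.0 * (B0 + B3) - 1.0,
                       6.0 * (B0 - B3),
                       1.0])
    # On the locus B0 = B3 the leading coefficient vanishes (one root
    # escapes to z = infinity, i.e. r = -1); trim it before root-finding.
    z = np.roots(np.trim_zeros(coeffs, 'f'))
    return (1.0 + z) / (1.0 - z)

# On the locus B0 = B3 > 1/8 the finite roots are z = +-i(8*B0 - 1)**(-1/2),
# so the corresponding r's lie on the unit circle.
print(np.abs(r_roots(0.2, 0.2)))   # both magnitudes are 1, up to rounding

# Off the locus, at least one root r falls outside the unit circle.
print(np.max(np.abs(r_roots(0.2, 0.1))))
```

Since |r| = |1 + ix| / |1 - ix| = 1 whenever z = ix is purely imaginary, the on-locus check reduces to confirming that the computed z-roots have negligible real part.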
DD FORM 1473
UNCLASSIFIED
Security Classification

14. KEY WORDS: Ordinary and Partial Differential Equations; Interpolation; Functional Approximation

BIBLIOGRAPHIC DATA SHEET
1. Report No.: UIUCDCS-R-72-552
4. Title and Subtitle: Numerical Methods for the Identification of Differential Equations
5. Report Date: June 12, 1972
7. Author(s): Raymond Jonathan Lermit
9. Performing Organization Name and Address: Center for Advanced Computation, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801
11. Contract/Grant No.: DAHC04-72-C-0001
12. Sponsoring Organization Name and Address: U.S. Army Research Office-Durham, Duke Station, Durham, North Carolina
13. Type of Report & Period Covered: Doctoral Dissertation
15. Supplementary Notes: None
16. Abstract: This dissertation considers computational methods designed to aid in mathematical model building. Specifically, it discusses methods of determining ordinary differential equations given their solution in the form of observed data.
17. Key Words and Document Analysis: Ordinary and Partial Differential Equations; Interpolation; Functional Approximation
18. Availability Statement: Copies may be obtained from the address in (9) above. Distribution unlimited.
19. Security Class (This Report): UNCLASSIFIED
21. No. of Pages: 92