...and the matrix

    B = [x_0 : Φx_0 : ... : Φ^{n-1} x_0]                                    (3)

are both of rank n. As we will see later, one important implication of the above condition is that the matrix Φ must be nonderogatory, i.e., similar to the companion matrix of its characteristic polynomial [23].

A. Estimation of a canonical form of the transition matrix

From (1) the observation vectors z_k can be expressed as

    z_k = H Φ^k x_0 + H Σ_{i=0}^{k-1} Φ^{k-1-i} w_i + v_k                   (4)

Let the mn-vector r_k be given by

    r_k^t = [z_{k-n+1}^t : z_{k-n+2}^t : ... : z_k^t]                       (5)

hence, from (4) and (5), we get

    r_k = Θ Φ^{k-n+1} x_0 + H̄ ω_{k-1} + ν_k                                 (6)

where Θ is as defined by (2),

    ν_k^t     = [v_{k-n+1}^t : v_{k-n+2}^t : ... : v_k^t]
    ω_{k-1}^t = [(Σ_{i=0}^{k-n} Φ^i w_{k-n-i})^t : w_{k-n+1}^t : ... : w_{k-1}^t]

and H̄ is the block lower-triangular matrix

    H̄ = [ H
          HΦ        H
          HΦ²       HΦ        H
          ..................................
          HΦ^{n-1}  HΦ^{n-2}  HΦ^{n-3} ... H ]                              (7)

Constructing the mn × n matrices

    B_k     = [r_{k-n+1} : r_{k-n+2} : ... : r_k]
    B_{k+1} = [r_{k-n+2} : r_{k-n+3} : ... : r_{k+1}]

we obtain from (6)

    B_k     = Θ Φ^{k-2n+2} B + H̄ Ω_{k-1} + N_k                              (8)
    B_{k+1} = Θ Φ^{k-2n+3} B + H̄ Ω_k + N_{k+1}

where the matrices Ω and N are given by

    Ω_{k-1} = [ω_{k-n} : ω_{k-n+1} : ... : ω_{k-1}],
    N_k     = [ν_{k-n+1} : ν_{k-n+2} : ... : ν_k]                           (9)

Since the system is identifiable, B^{-1} exists. Denoting the matrix B^{-1}ΦB by Ψ, then from (8) and the fact that ΘΦ^{j+1}B = (ΘΦ^{j}B)Ψ, it can easily be shown that

    (B_{k+1} − H̄ Ω_k − N_{k+1}) = (B_k − H̄ Ω_{k-1} − N_k) Ψ                 (10)

Let the rows of B^{-1} be b_0^t, b_1^t, ..., b_{n-1}^t; from the fact that B^{-1}B = I we obtain b_i^t (Φ^j x_0) = δ_{ij} for all i, j.
Consequently the matrix Ψ = B^{-1}ΦB is of the companion form

    Ψ = [ 0  0  ...  0  ψ_1
          1  0  ...  0  ψ_2
          0  1  ...  0  ψ_3
          ..................
          0  0  ...  1  ψ_n ]                                               (11)

Thus the problem of estimating the n² elements of the transition matrix of an equivalent dynamic system has been reduced to that of estimating only the n elements ψ_i of the vector ψ. Now from (6), (10), and (11) we get

    z_{k+n+1} = Σ_{i=1}^n ψ_i z_{k+i} + u_{k+n+1}                           (12)

where

    u_{k+n+1} = [v_{k+n+1} − Σ_{i=1}^n ψ_i v_{k+i}]
              + [μ_n − Σ_{i=1}^n ψ_i μ_{i-1}]
              + H [Φ^{n+1} − Σ_{i=1}^n ψ_i Φ^i] x_k                         (13)

However, since Φ satisfies its characteristic polynomial, i.e.,

    Φ^n − Σ_{i=1}^n ψ_i Φ^{i-1} = 0                                         (14)

the last bracket in (13) vanishes and (13) reduces to

    u_{k+n+1} = [v_{k+n+1} − Σ_{i=1}^n ψ_i v_{k+i}] + [μ_n − Σ_{i=1}^n ψ_i μ_{i-1}]     (15)

where

    μ_j = Σ_{i=0}^{j} H Φ^i w_{k+j-i},      j = 0, 1, ..., n                (16)

Let γ_k and Γ_k be the n-vector and n × n matrix defined by

    γ_k^t = [z_k^t z_{k+n+1}, z_k^t z_{k+n+2}, ..., z_k^t z_{k+2n}]         (17)

    Γ_k = [ z_k^t z_{k+1}   z_k^t z_{k+2}     ...  z_k^t z_{k+n}
            z_k^t z_{k+2}   z_k^t z_{k+3}     ...  z_k^t z_{k+n+1}
            ..................................................
            z_k^t z_{k+n}   z_k^t z_{k+n+1}   ...  z_k^t z_{k+2n-1} ]       (18)

Therefore, from (12) we obtain

    γ_k = Γ_k ψ + η_k                                                       (19)

where

    η_k^t = [z_k^t u_{k+n+1}, z_k^t u_{k+n+2}, ..., z_k^t u_{k+2n}]         (20)

Consider now the expected value of the inner product z_k^t u_{k+j} for j ≥ n+1. From the assumption that x_0; v_0, v_1, ...; and w_0, w_1, ... are independent random variables, it can easily be shown that E[z_k^t u_{k+j}] = 0 for all j ≥ n+1, i.e.,

    E[η_k] = 0,     and     E[γ_k] = (E[Γ_k]) ψ                             (21)

Denoting E[z_k^t z_{k+j}] by c_j, then from (21)

    c_{n+j} = Σ_{i=1}^n ψ_i c_{i+j-1},      j = 1, 2, ..., n                (22)

hence the Hankel matrix E[Γ_k] is of rank n [24], and ψ is obtained by solving the system of linear equations (21). Motivated by (21), we will estimate ψ by obtaining the least-squares solution

    ψ̂_ℓ = [Σ_{k=1}^ℓ Γ_k]^+ [Σ_{k=1}^ℓ γ_k]                                 (23)

where A^+ is the pseudo-inverse of the matrix A [25]. Now we introduce the following theorem.

Theorem. If Φ is a matrix of spectral radius ρ(Φ) < 1, and if W is a positive definite matrix, then ψ̂_ℓ → ψ a.s. (with probability one) as ℓ → ∞.

Proof. The proof of this theorem is similar to that of Theorem 2.1 of [19].
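Before turning to the proof, the estimator (17)–(23) can be sketched in a few lines of NumPy. The fourth-order companion system below matches the setup of Example 1 in Section IV, but the simulation itself (random seed, sample size, and the convergence threshold) is an illustrative assumption rather than a reproduction of the report's runs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fourth-order companion-form system (the setup of Example 1, Section IV);
# psi holds the characteristic coefficients: Phi^4 = psi_1 I + ... + psi_4 Phi^3.
n = 4
psi_true = np.array([-0.656, 0.784, -0.180, 1.000])
Phi = np.zeros((n, n))
Phi[1:, : n - 1] = np.eye(n - 1)   # subdiagonal identity of (11)
Phi[:, -1] = psi_true              # last column carries psi
H = np.array([1.0, 0.0, 0.0, 0.0])
r = np.array([0.0, 1.0, 0.0, 1.0])

# Simulate (1): x_{k+1} = Phi x_k + r w_k,  z_k = H x_k + v_k
ell = 5000
x = np.zeros(n)
z = np.empty(ell + 2 * n)
for k in range(ell + 2 * n):
    z[k] = H @ x + 0.5 * rng.standard_normal()   # v_k, variance 0.25
    x = Phi @ x + r * rng.standard_normal()      # w_k, variance 1.0

# Accumulate gamma_k (17) and the Hankel matrix Gamma_k (18), then solve (23)
Gam = np.zeros((n, n))
gam = np.zeros(n)
for k in range(ell):
    tau = z[k] * z[k + 1 : k + 2 * n + 1]        # tau_j = z_k z_{k+j}, j = 1..2n
    Gam += np.array([tau[i : i + n] for i in range(n)])
    gam += tau[n:]
psi_hat = np.linalg.pinv(Gam) @ gam              # least-squares solution (23)
print(psi_hat)
```

With a sample size of this order the relative error ‖ψ − ψ̂‖/‖ψ‖ comes out at the few-percent level, consistent with the magnitudes reported in Section IV.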
To prove the above theorem we have to show that

    (1/ℓ) Σ_{k=1}^ℓ z_k^t u_{k+j} → 0   a.s.   for j ≥ n+1                  (24)

and

    (1/ℓ) Σ_{k=1}^ℓ Γ_k → Γ̄   a.s.                                          (25)

where Γ̄ is of rank n.

First we establish a bound on the covariance matrix of z_k. From (4),

    E[z_k] = H Φ^k E[x_0]                                                   (26)

    E[z_k z_k^t] = H P_k H^t + V                                            (27)

where

    P_k = Φ^k P_0 (Φ^k)^t + Σ_{i=0}^{k-1} Φ^i W (Φ^i)^t

Since ρ(Φ) < 1, then as k → ∞

    E[z_k z_k^t] → H P H^t + V < ∞                                          (28)

in which

    P = Σ_{i=0}^{∞} Φ^i W (Φ^i)^t                                           (29)

From (15) we have

    Σ_{k=1}^ℓ z_k^t u_{k+n+1} = Σ_{k=1}^ℓ z_k^t (v_{k+n+1} + μ_n)
                              − Σ_{i=1}^n ψ_i Σ_{k=1}^ℓ z_k^t (v_{k+i} + μ_{i-1})       (30)

Let us define, for j ≥ 1,

    S_{ℓ,1} = Σ_{k=1}^ℓ (1/k) z_k^t v_{k+j}                                 (31)

    S_{ℓ,2} = Σ_{k=1}^ℓ (1/k) z_k^t H Φ^i w_{k+j-i}       (0 ≤ i ≤ j fixed)

Also let F_j be the smallest σ-algebra with respect to which x_0; v_1, v_2, ..., v_j; w_1, w_2, ..., w_j (j ≥ 1) are measurable. Since

    lim_{ℓ→∞} E[|S_{ℓ,1}|] < ∞                                              (32)

and

    E[S_{ℓ+1,1} | F_{ℓ+j}] = S_{ℓ,1} + (1/(ℓ+1)) E[z_{ℓ+1}^t v_{ℓ+1+j} | F_{ℓ+j}] = S_{ℓ,1}     (33)

where we have used the fact that z_{ℓ+1} is measurable w.r.t. F_{ℓ+j}, j ≥ 1, and that v_{ℓ+1+j} is independent of F_{ℓ+j}, {S_{ℓ,1}; F_{ℓ+j}} is a martingale [26]. Similarly, it can be shown that {S_{ℓ,2}; F_{ℓ+j}} is also a martingale. Moreover,

    E[S²_{ℓ,1}] = Σ_{k=1}^ℓ k^{-2} tr[V (H P_k H^t + V)]
    E[S²_{ℓ,2}] = Σ_{k=1}^ℓ k^{-2} tr[(H P_k H^t + V) H Φ^i W (Φ^i)^t H^t]              (34)

are both bounded by virtue of (28). Thus by the martingale convergence theorem [26], lim_{ℓ→∞} S_{ℓ,i} exists and is finite a.s. (i = 1, 2). Also from the Kronecker lemma [27] we obtain

    (1/ℓ) Σ_{k=1}^ℓ z_k^t v_{k+j} → 0   and   (1/ℓ) Σ_{k=1}^ℓ z_k^t H Φ^i w_{k+j-i} → 0   a.s.

Hence from (16) and (30) it is clear that (24) is satisfied.

Let w′_0, w′_1, ... be a sequence of independent random vectors which have the same distribution as the w_k's and are mutually independent of x_0, w_0, w_1, ...; v_0, v_1, ...; such a sequence may always be found by enlarging the probability space [26].
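The steady-state covariance P of (29) is exactly the solution of the discrete Lyapunov equation P = ΦPΦ^t + W, which gives a quick numerical check of (27)–(29). The sketch below assumes SciPy's `solve_discrete_lyapunov`; the system matrices are illustrative values, not taken from the report.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative stable system with rho(Phi) < 1 (not the report's example).
Phi = np.array([[0.5, 0.2],
                [-0.3, 0.4]])
H = np.array([[1.0, 0.0]])
W = 0.1 * np.eye(2)      # plant-noise covariance
V = np.array([[0.25]])   # observation-noise covariance

# P of (29) as the fixed point of P = Phi P Phi^t + W ...
P = solve_discrete_lyapunov(Phi, W)

# ... and as the (truncated) series  sum_i Phi^i W (Phi^i)^t
P_series = np.zeros((2, 2))
M = np.eye(2)
for _ in range(200):
    P_series += M @ W @ M.T
    M = M @ Phi

# Limiting output covariance (28): E[z_k z_k^t] -> H P H^t + V
C0 = H @ P @ H.T + V
print(P, C0)
```

The two computations of P agree to machine precision once ρ(Φ) < 1, which is precisely the hypothesis of the theorem.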
Now (4) could be written as

    z_k = ū_k + q_k                                                         (35)

where

    ū_k = Σ_{i=0}^{∞} H Φ^i w_{k-1-i} + v_k,
    q_k = H Φ^k x_0 − Σ_{i=k}^{∞} H Φ^i w_{k-1-i}

and, for negative indices, w_j denotes the auxiliary variables w′ introduced above. Since ρ(Φ) < 1, by the Three Series Theorem [26] it can be proved that ū_k and q_k are well-defined random variables. Furthermore, lim_{k→∞} ‖q_k‖ = 0 a.s. and in the mean square, and Σ_{i=0}^{∞} ‖HΦ^i‖ < ∞. Consequently ū_k is a moving average of the w_i's and v_i's; therefore ū_k and ū_k^t ū_{k+j} (j ≥ 0) are metrically transitive strictly stationary processes. From (35), since ρ(Φ) < 1, lim_{k→∞} ‖q_k‖ = 0 a.s., and

    sup_ℓ  ℓ^{-1} Σ_{k=1}^ℓ ‖ū_k‖ < ∞   a.s.,

we see that

    lim_{ℓ→∞} (1/ℓ) Σ_{k=1}^ℓ z_k^t z_{k+j} = lim_{ℓ→∞} (1/ℓ) Σ_{k=1}^ℓ ū_k^t ū_{k+j}   a.s.     (36)

Furthermore, from (35) and (29) we get, for j ≥ 1,

    E[ū_k^t ū_{k+j}] = tr[H Φ^j P H^t] < ∞                                  (37)

Thus from the ergodic theorem, (36), and (37), we have for j ≥ 0

    lim_{ℓ→∞} (1/ℓ) Σ_{k=1}^ℓ z_k^t z_{k+j} = E[ū_1^t ū_{1+j}] = lim_{k→∞} E[z_k^t z_{k+j}] < ∞   a.s.     (38)

As a result,

    lim_{ℓ→∞} (1/ℓ) Σ_{k=1}^ℓ Γ_k = Γ̄ = E[Γ_k] < ∞   a.s.                   (39)

and from (22) we see that Γ̄ is of rank n a.s. This completes the proof of the theorem.

B. The Ho-Kalman approach

If accurate determination of the eigenvalues of Φ is too sensitive to small perturbations of the coefficients of its characteristic polynomial (the elements of ψ), or if we desire to completely identify an equivalent dynamic system

    y_{k+1} = Φ* y_k + w*_k
    z_k     = H* y_k + v_k                                                  (40)

where y_k = S^{-1} x_k (in which S is an n × n nonsingular matrix), we use the identification scheme described in [20, 21]. By complete identification we mean obtaining strongly consistent estimates of Φ* = S^{-1}ΦS, H* = HS, V, and the state vectors y_k. Let the m × m matrix C_j be defined by

    C_j = E[z_{k+j} z_k^t]                                                  (41)

Assuming that the identification scheme is started after the system has reached the steady state, from (4) we obtain

    C_j = H Φ^j P H^t + V δ_{j,0}                                           (42)

where P is the n × n positive definite matrix given by (29). Using (41) we now construct the generalized mn × mn Hankel matrices G_{2n-1} and G_{2n}:
    G_{2n-1} = [ C_1      C_2      ...  C_n
                 C_2      C_3      ...  C_{n+1}
                 ...............................
                 C_n      C_{n+1}  ...  C_{2n-1} ],

    G_{2n}   = [ C_2      C_3      ...  C_{n+1}
                 C_3      C_4      ...  C_{n+2}
                 ...............................
                 C_{n+1}  C_{n+2}  ...  C_{2n} ]                            (43)

From (42) we see that

    G_{2n-1} = Θ Φ M,     G_{2n} = Θ Φ² M                                   (44)

where M is the n × mn matrix given by

    M = [P H^t : Φ P H^t : ... : Φ^{n-1} P H^t]                             (45)

which is of rank n due to the observability assumption. Consequently G_{2n-1} and G_{2n} are also of rank n. Therefore there exist an orthogonal matrix R and a permutation matrix π such that

    R G_{2n-1} π = [ L_1  L_2 ]
                   [ 0    0   ]                                             (46)

where L_1 is an n × n nonsingular upper triangular matrix. In practice R is chosen as the product of n elementary Hermitian matrices [23]. Define the mn × mn matrix Q by

    Q = [ L_1^{-1}   −L_1^{-1} L_2 ]
        [ 0           I            ]

then we have

    R G_{2n-1} π Q = [ I_n  0 ]
                     [ 0    0 ]                                             (47)

From (44) the above equation can be written as

    U Φ S = I_n                                                             (48)

where U and S are nonsingular n × n matrices given by

    U = E_n R Θ,     S = M π Q E_n^t                                        (49)

in which E_q = [I_q : 0] is a q × mn matrix. The matrices Φ*, H*, and P*H*^t of the system (40) can be determined as follows:

    Φ* = E_n R G_{2n} π Q E_n^t                                             (50)
       = E_n R Θ Φ² M π Q E_n^t
       = U Φ² S = S^{-1} Φ S,

    H* = (E_m G_{2n-1} π Q E_n^t) Φ*^{-1}                                   (51)
       = (E_m Θ Φ M π Q E_n^t)(S^{-1} Φ^{-1} S)
       = HS,

and

    P* H*^t = E_n R G_{2n-1} E_m^t                                          (52)
            = E_n R Θ Φ M E_m^t
            = (S^{-1} P S^{-t})(S^t H^t)

Finally, the observation covariance matrix V can be readily determined from

    V = C_0 − H* P* H*^t                                                    (53)

Using recursive filtering [28], the above information (50)–(53) is then sufficient to estimate the optimal gain matrix of the Kalman filter, which is used to estimate the state vectors y_k.

Similar to the proof of the previous theorem, we can show that a strongly consistent identification scheme is obtained from (43)–(53) when the m × m submatrices C_j = E[z_{k+j} z_k^t], j = 1, 2, ..., 2n, in the generalized Hankel matrices (43) are replaced by the sample covariances

    Ĉ_j(ℓ) = (1/ℓ) Σ_{k=1}^{ℓ} z_{k+j} z_k^t,      j = 1, 2, ..., 2n        (54)

where ℓ is the size of the sample.
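A minimal sketch of the construction (43)–(53), using exact covariances C_j in place of the sample covariances (54). The system matrices are illustrative assumptions, and SciPy's Householder QR with column pivoting plays the role of the orthogonal matrix R and permutation π of (46).

```python
import numpy as np
from scipy.linalg import qr, solve_discrete_lyapunov

# Illustrative observable system: n = 3 states, m = 2 outputs.
n, m = 3, 2
Phi = np.array([[0.6, 0.2, 0.0],
                [-0.2, 0.5, 0.1],
                [0.0, -0.1, 0.4]])
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])
W = 0.2 * np.eye(n)
V = 0.1 * np.eye(m)

# Steady-state P of (29) and the exact covariances C_j of (42)
P = solve_discrete_lyapunov(Phi, W)
C = [H @ np.linalg.matrix_power(Phi, j) @ P @ H.T for j in range(2 * n + 1)]
C[0] = C[0] + V

# Generalized Hankel matrices (43)
G1 = np.block([[C[i + j + 1] for j in range(n)] for i in range(n)])
G2 = np.block([[C[i + j + 2] for j in range(n)] for i in range(n)])

# (46)-(47): Householder QR with column pivoting gives R G1 pi = [[L1, L2], [0, 0]]
Qf, Rf, piv = qr(G1, pivoting=True)
R = Qf.T                               # orthogonal matrix R of (46)
pi = np.eye(m * n)[:, piv]             # permutation matrix
L1, L2 = Rf[:n, :n], Rf[:n, n:]
Q = np.eye(m * n)
Q[:n, :n] = np.linalg.inv(L1)
Q[:n, n:] = -np.linalg.inv(L1) @ L2    # now R G1 pi Q = [[I_n, 0], [0, 0]]

En = np.eye(m * n)[:n]                 # E_n = [I_n : 0]
Em = np.eye(m * n)[:m]                 # E_m = [I_m : 0]

# (50)-(53): recover Phi* = S^-1 Phi S, H* = H S, and V
Phi_star = En @ R @ G2 @ pi @ Q @ En.T
H_star = (Em @ G1 @ pi @ Q @ En.T) @ np.linalg.inv(Phi_star)
K = En @ R @ G1 @ Em.T                 # K = P* H*^t of (52)
V_hat = C[0] - H_star @ K
print(np.linalg.eigvals(Phi_star), V_hat)
```

The recovered Φ* has the same eigenvalues as Φ, and V is recovered exactly; replacing the exact C_j by the sample covariances (54) turns this into the strongly consistent scheme described above.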
If desired, one could also obtain estimates of the covariance matrices P* = S^{-1} P S^{-t} and W* = S^{-1} W S^{-t} from the matrix equations P* H*^t = K* and P* = Φ* P* Φ*^t + W* [29], where

    K* = E_n R G_{2n-1} E_m^t

III. PERIODIC SYSTEMS

Consider the following periodic system of order n,

    ẋ(t) = A(t) x(t)
    z(t) = H(t) x(t)                                                        (55)

where A(t) is an n × n periodic matrix of period p, and H(t) is an m × n matrix which is either time-invariant or periodic with the same period. We also assume that the system (55) is completely observable [30] and asymptotically stable [31]. Let Φ(t) be the fundamental matrix of the system (55); hence

    Φ̇(t) = A(t) Φ(t)                                                        (56)

and

    Φ(t+p) = Φ(t) D                                                         (57)

where D is a unique nonsingular constant matrix called the monodromy matrix of Φ(t) [32], which can always be expressed as

    D = e^{pL}                                                              (58)

in which L is a constant matrix. Since the system (55) is asymptotically stable, the eigenvalues λ_i (i = 1, 2, ..., n) of D are of modulus less than unity, |λ_i| < 1. The eigenvalues of D are called the invariants of the monodromy group of the periodic system, or the multipliers associated with A(t). Let us now introduce the Liapunov transformation [24]

    x(t) = Φ(t) e^{-tL} y(t)                                                (59)

Substituting (59) in (55) we obtain the system

    ẏ(t) = L y(t)
    z(t) = H̃(t) y(t)                                                        (60)

where

    H̃(t) = H(t) Φ(t) e^{-tL}                                                (61)

which can easily be shown to be periodic of period p. Let p = N Δt, where Δt is a fairly small time increment. Assuming that the observations z(t) are measured starting at time t_0 and that p is known, the identification of the periodic system (55) is reduced to the identification of the N time-invariant systems

    y(t_k + p) = D y(t_k)
    z(t_k)     = H̃(t_k) y(t_k),         k = 0, 1, ..., N-1                  (62)

where t_k = t_0 + k Δt and D = e^{pL} is the time-invariant transition matrix of each of these sampled systems.

Constructing the matrices

    B_{n-1}(t_k) = [y(t_k) : y(t_k+p) : ... : y(t_k+(n-1)p)]
    B_n(t_k)     = [y(t_k+p) : y(t_k+2p) : ... : y(t_k+np)]                 (63)

we have

    B_n(t_k) = D B_{n-1}(t_k)                                               (64)

We then say that the periodic system (55) is identifiable if B_{n-1}(t) is nonsingular, i.e., for any t_k, D is uniquely determined by D = B_n(t_k) B_{n-1}^{-1}(t_k). Assuming that this identifiability condition holds, we have

    B_{n-1}^{-1}(t_k) D B_{n-1}(t_k) = B_{n-1}^{-1}(t_k) B_n(t_k)           (65)

which, as in (11), is the companion matrix whose last column is the vector φ of the characteristic-polynomial coefficients of D.

Define the mn-vector

    r_{n-1}(t_k) = [z^t(t_k) : z^t(t_k+p) : ... : z^t(t_k+(n-1)p)]^t

then from (62) we have

    r_{n-1}(t_k) = [ H̃(t_k)
                     H̃(t_k+p) D
                     ....................
                     H̃(t_k+(n-1)p) D^{n-1} ] y(t_k) = Θ(t_k) y(t_k)         (66)

Since the periodic system (55) is completely observable, we can easily verify that the mn × n matrix Θ(t_k) is, for all k, of rank n. Assuming the existence of both plant and observation noise in system (55), we then deal with the identification of the N time-invariant systems (under a time step of p)

    y(t_k + p) = D y(t_k) + w(t_k)
    z(t_k)     = H̃(t_k) y(t_k) + v(t_k),        k = 0, 1, ..., N-1          (67)

where the noise vectors w(t_k) and v(t_k) have the same statistical properties as in (1). Provided that the period p is known, then similar to Section II we can either obtain the companion matrix of D, by estimating only the n elements of the vector φ in (65), and hence the invariants of the monodromy group (the eigenvalues of D); or we can use the Ho-Kalman approach together with recursive filtering to completely identify the N systems in (67). It remains to describe a method for estimating the period p. We will see that an algorithm for estimating p also gives an estimate of φ. Similar to (12) we obtain the expression

    z(t_k+(n+1)p) = Σ_{i=1}^n φ_i z(t_k+ip) + u(t_k+(n+1)p)                  (68)
where E[u(t_k+jp)] = 0 and E[z^t(t_k) u(t_k+jp)] = 0 for j ≥ n+1. Defining the n-dimensional vector γ(t_k) and the n × n matrix Γ(t_k) by

    γ^t(t_k) = [τ_{n+1}(t_k), τ_{n+2}(t_k), ..., τ_{2n}(t_k)]

    Γ(t_k) = [ τ_1(t_k)  τ_2(t_k)      ...  τ_n(t_k)
               τ_2(t_k)  τ_3(t_k)      ...  τ_{n+1}(t_k)
               ..........................................
               τ_n(t_k)  τ_{n+1}(t_k)  ...  τ_{2n-1}(t_k) ]                 (69)

where

    τ_j(t_k) = z^t(t_k) z(t_k+jp)                                           (70)

then, similar to (23), we estimate the n-dimensional vector φ by

    φ̂ = [Σ_i Γ(t_k+ip)]^+ [Σ_i γ(t_k+ip)]                                   (71)

Choosing various values of p, i.e., p = aΔt (a = 1, 2, 3, ...), we obtain various estimates φ̂(a). If we obtain two identical estimates φ̂(a_1) and φ̂(a_2) (i.e., ‖φ̂(a_1) − φ̂(a_2)‖ < ε, where ε is a small number) with a_2 = 2a_1, then p = a_1 Δt. A parallel computer such as the Illiac IV [33, 34] is an ideal tool for fast estimation of the period and the identification of the N time-invariant systems (67).

IV. NUMERICAL EXAMPLES

Example 1

We considered the following fourth-order single-output system,

    x_{k+1} = Φ x_k + r w_k
    z_k     = H x_k + v_k

where r is a known 4-dimensional vector given by (0, 1, 0, 1)^t, and the sequences w_k and v_k are independent and normally distributed with zero mean and variances 1.0 and 0.25, respectively. The system (Φ, H) is observable, where

    Φ = [ 0  0  0  -0.656
          1  0  0   0.784
          0  1  0  -0.180
          0  0  1   1.000 ],        H = [1  0  0  0]

and the eigenvalues of Φ are 0.9 ± 0.1i and -0.4 ± 0.8i. Using only the output observations z_k, the following estimates were obtained for sample sizes ℓ = 1000 and 3000.

(i) ℓ = 1000

    ψ̂ = [-0.655, 0.756, -0.200, 1.060]^t,
    ‖ψ̃‖_2 / ‖ψ‖_2 = 0.048,    where ψ̃ = ψ − ψ̂ and ‖ψ‖_2 = (Σ_i ψ_i²)^{1/2}.

The eigenvalues obtained from the estimated companion matrix are

    λ(Ψ̂): 0.917 ± 0.092i,  -0.387 ± 0.788i.

The Ho-Kalman approach gave

    λ(Φ̂*): 0.917 ± 0.094i,  -0.387 ± 0.788i,
    Ĥ* = [-0.534, -0.773, 1.370, -1.899],
    V̂ = 0.100;    |V̂ − V| / V = 0.60.
(ii) ℓ = 3000

    ψ̂ = [-0.663, 0.757, -0.175, 1.029]^t,
    ‖ψ̃‖_2 / ‖ψ‖_2 = 0.028,
    λ(Ψ̂): 0.913 ± 0.112i,  -0.398 ± 0.791i,
    λ(Φ̂*): 0.913 ± 0.111i,  -0.398 ± 0.791i,
    Ĥ* = [-0.539, -0.926, 1.243, -1.836],
    V̂ = 0.225;    |V̂ − V| / V = 0.10.

Example 2

Consider the fourth-order multi-output system

    x_{k+1} = Φ x_k + w_k
    z_k     = x_k + v_k

where the covariance matrices of w_k and v_k are given by 0.1 I and 0.5 I respectively (I being the identity matrix), and the transition matrix Φ is

    Φ = [  1.512   0.738   1.137   0.426
           1.728  -0.978   2.603  -0.416
          -0.840  -1.160  -0.840  -0.660
          -3.056   2.056  -3.556   1.306 ]

which was obtained by a similarity transformation on that of Example 1. For a sample size ℓ = 5000 we obtained the following estimates:

    ψ̂ = [-0.673, 0.827, -0.216, 1.014]^t,
    ‖ψ̃‖_2 / ‖ψ‖_2 = 0.042,
    λ(Ψ̂): 0.902 ± 0.094i,  -0.395 ± 0.814i,

    Φ̂* = [  1.123  -0.431   0.384   0.201
            0.366  -0.010  -0.092  -0.745
            0.299  -0.709   0.126  -0.554
           -0.047  -0.280   1.072  -0.226 ],

    λ(Φ̂*): 0.898 ± 0.090i,  -0.392 ± 0.813i,

    Ĥ* = [ -0.154   0.364  -0.148   0.035
           -0.059  -0.516   0.782  -0.341
            0.102  -0.417   0.090  -0.096
            0.175   1.092  -1.284   0.222 ],

    ‖V̂ − V‖_F / ‖V‖_F = 0.128    (where ‖·‖_F is the Frobenius norm, ‖V‖_F² = Σ_{i,j} v_ij²),
    ‖S^{-1}ΦS − Φ̂*‖_F / ‖Φ‖_F = 0.039.

Example 3

Same as Example 2 except that W = V = 0.5 I. For ℓ = 5000 the estimates are:

    ψ̂ = [-0.651, 0.785, -0.145, 0.958]^t,
    ‖ψ̃‖_2 / ‖ψ‖_2 = 0.038,
    λ(Ψ̂): 0.891 ± 0.094i,  -0.412 ± 0.801i,

    Φ̂* = [  1.115  -0.390   0.374   0.249
            0.395  -0.056  -0.063  -0.744
            0.319  -0.717   0.117  -0.503
           -0.060  -0.270   1.060  -0.199 ],

    ‖Φ̂* − Φ̂*_{Ex.2}‖_F = 0.110,
    λ(Φ̂*): 0.893 ± 0.090i,  -0.405 ± 0.800i,

    Ĥ* = [ -0.161   0.371  -0.158   0.013
           -0.060  -0.510   0.796  -0.318
            0.113  -0.432   0.106  -0.091
            0.172   1.083  -1.311   0.183 ],

    ‖S^{-1}ΦS − Φ̂*‖_F / ‖Φ‖_F = 0.004,
    ‖Ĥ* − Ĥ*_{Ex.2}‖_F = 0.059,
    ‖V̂ − V‖_F / ‖V‖_F = 0.132.

Example 4

Same as Example 3 except that z_k = H x_k + v_k, where H is a given 2 × 4 time-invariant output matrix. For ℓ = 5500 the estimates are:

    ψ̂ = [-0.687, 0.819, -0.158, 0.971]^t,
    ‖ψ̃‖_2 / ‖ψ‖_2 = 0.041,
    λ(Ψ̂): 0.901 ± 0.104i,  -0.416 ± 0.813i,
    λ(Φ̂*): 0.902 ± 0.104i,  -0.409 ± 0.809i,
    ‖V̂ − V‖_F / ‖V‖_F = 0.151.

Example 5

In order to illustrate the method for finding the period of a periodic system which is both observable and identifiable, we consider, for the sake of simplicity, the deterministic system

    dx/dt = A(t) x(t)

where

    A(t) = [diag(-0.95, -0.90, -0.85)] sin ωt

in which ω = π/10. Starting with the initial condition x(0) = (1, 1, 1)^t, we generated 50 observation vectors x(t) with Δt = 1 sec. Applying algorithm (71), with t_0 = 0, we plotted log ‖φ̂(a) − φ̂(2a)‖ versus a. The period is clearly 20 sec, and

    φ̂ = (0.088, -0.464, 1.076)^t                                            (72)

The invariants of the monodromy group of this periodic system are the eigenvalues of (65) with φ̂ as in (72):

    λ_{1,2} = 0.288 ± 0.305i,    λ_3 = 0.500.

V. CONCLUSION

The assumptions of observability and identifiability of linear time-invariant multi-output dynamic systems have been effectively used to estimate a canonical form of the transition matrix, for qualitative analysis or for evaluating the invariants of the system, using only output data. The computational algorithm is time-saving, since we estimate only the n elements of ψ rather than the n² elements of the transition matrix, either of the system itself or of an equivalent one.
The results show that these coefficients can be determined fairly accurately, and the eigenvalues of the companion matrix agree reasonably well with those of the actual transition matrix. Again using only output observations, the Ho-Kalman approach was implemented to obtain consistent estimates of Φ* and H* of an equivalent dynamic system, together with the observation-noise covariance matrix V. Adaptive filtering could then be used for direct estimation of the optimal gain of the Kalman filter, which we may use to estimate the state vectors of the equivalent dynamic system. Finally, we showed that under certain assumptions the problem of identification of a high-frequency periodic system of unknown period can be reduced to the identification of several time-invariant dynamic systems together with estimation of the period.

REFERENCES

[1] R. E. Kalman, "Design of a self-optimizing control system", Trans. ASME, J. Basic Engr., Ser. D, Vol. 80, pp. 468-478, 1958.
[2] M. J. Levin, "Estimation of a system pulse transfer function in the presence of noise", IEEE Trans. Automatic Control, Vol. AC-9, pp. 229-235, July 1964.
[3] K. Steiglitz and L. E. McBride, "A technique for the identification of linear systems", IEEE Trans. Automatic Control (Short Papers), Vol. AC-10, pp. 461-464, October 1965.
[4] P. Joseph, J. Lewis, and J. Tou, "Plant identification in the presence of disturbances and application to digital adaptive systems", AIEE Trans. (Applications and Industry), Vol. 80, pp. 18-24, March 1961.
[5] K. Y. Wong and E. Polak, "Identification of linear discrete time systems using the instrumental variable method", IEEE Trans. Automatic Control, Vol. AC-12, pp. 707-718, December 1967.
[6] H. Akaike, "Some problems in the application of the cross-spectral method", in Spectral Analysis of Time Series, B. Harris, Ed., Wiley, 1967.
[7] P. C. Young, "An instrumental variable method for real-time identification of a noisy process", Automatica, Vol. 6, pp. 271-287, 1970.
[8] Y.
C. Ho and R. C. K. Lee, "Identification of linear dynamic systems", Inform. and Control, Vol. 8, pp. 93-110, February 1965.
[9] D. J. Sakrison, "The use of stochastic approximation to solve the system identification problem", IEEE Trans. Automatic Control, Vol. AC-12, pp. 563-567, October 1967.
[10] K. G. Oza and E. I. Jury, "System identification and the principle of random contraction mapping", SIAM J. Control, Vol. 6, No. 2, pp. 244-251, 1968.
[11] G. N. Saridis and G. Stein, "Stochastic approximation algorithms for linear discrete-time system identification", IEEE Trans. Automatic Control, Vol. AC-13, pp. 515-523 and pp. 592-594, October 1968.
[12] J. K. Holmes, "System identification from noise-corrupted measurements", J. Optimization Theory and Applications, Vol. 2, No. 2, pp. 102-116, 1968.
[13] J. K. Holmes, "Two stochastic approximation procedures for identifying linear systems", IEEE Trans. Automatic Control (Short Papers), pp. 292-295, June 1969.
[14] K. J. Astrom and T. Bohlin, "Numerical identification of linear dynamic systems from normal operating records", Proc. 2nd IFAC Symp. Theory of Self-Adaptive Systems, pp. 76-111, 1965.
[15] R. L. Kashyap, "Maximum likelihood identification of stochastic linear dynamic systems", IEEE Trans. Automatic Control, pp. 25-34, February 1970.
[16] R. K. Mehra, "Identification of stochastic linear dynamic systems using Kalman filter representation", AIAA Journal, Vol. 9, No. 1, pp. 28-31, January 1971.
[17] P. D. Abramson, Jr., "Simultaneous estimation of the state and noise statistics in linear dynamic systems", MIT TE-25, May 10, 1968.
[18] K. J. Astrom, "On the accuracy achievable in identification problems", IFAC Symposium, Prague, Czechoslovakia, June 1967.
[19] W. N. Anderson et al., "Consistent estimates of the parameters of a linear system", Annals of Math. Stat., Vol. 40, No. 6, pp. 2064-2075, December 1969.
[20] R. K.
Mehra, "On-line identification of linear dynamic systems with applications to Kalman filtering", IEEE Trans. Automatic Control, Vol. AC-16, No. 1, pp. 12-21, February 1971.
[21] B. L. Ho and R. E. Kalman, "Effective construction of linear state-variable models from input/output functions", Regelungstechnik, Vol. 14, pp. 545-548, 1966.
[22] R. C. K. Lee, Optimal Estimation, Identification and Control, Research Monograph No. 28, The M.I.T. Press, Cambridge, Mass., 1964.
[23] J. H. Wilkinson, The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965.
[24] F. R. Gantmacher, Applications of the Theory of Matrices, Interscience Publishers, Inc., New York, 1959.
[25] R. Penrose, "A generalized inverse for matrices", Proc. Cambridge Phil. Soc., Vol. 51, pp. 406-413, 1955.
[26] J. L. Doob, Stochastic Processes, Wiley, New York, 1967.
[27] M. Loeve, Probability Theory, 3rd edition, Van Nostrand, New York, 1963.
[28] R. K. Mehra, "On the identification of variances and adaptive Kalman filtering", IEEE Trans. Automatic Control, Vol. AC-15, No. 2, pp. 175-184, April 1970.
[29] R. K. Mehra, "An algorithm to solve the matrix equations PH^t = G and P = ΦPΦ^t + ΓΓ^t", IEEE Trans. Automatic Control (Corresp.), p. 600, October 1970.
[30] R. S. Bucy and P. D. Joseph, Filtering for Stochastic Processes with Applications to Guidance, Interscience Publishers, New York, 1968.
[31] E. A. Coddington and N. Levinson, Theory of Ordinary Differential Equations, McGraw-Hill, New York, 1955.
[32] A. Wintner, The Analytical Foundations of Celestial Mechanics, Princeton University Press, 1941.
[33] G. H. Barnes, R. M. Brown, M. Kato, D. Kuck, D. L. Slotnick, and R. A. Stokes, "The Illiac IV Computer", IEEE Trans. Comp., Vol. C-17, pp. 746-757, August 1968.
[34] D. J. Kuck, "Illiac IV software and applications programming", IEEE Trans. Comp., Vol. C-17, No. 8, pp. 758-770, August 1968.
DOCUMENT CONTROL DATA — R&D (UNCLASSIFIED)

Originating activity: Center for Advanced Computation, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801.
Report title: On the Identification of Multi-Output Linear Time-Invariant and Periodic Dynamic Systems.
Descriptive notes: Research Report.
Authors: Ahmed H. Sameh and Walter L. Heimerdinger.
Report date: November 1, 1972. Total no. of pages: 36. No. of refs: 34.
Contract: DAHC04-72-C-0001. Project: ARPA Order No. 1899.
Originator's report number: CAC Document No. 53.
Distribution statement: Copies may be requested from the address given above.
Sponsoring military activity: U.S. Army Research Office-Durham, Duke Station, Durham, North Carolina.

Abstract: In this paper we describe an efficient computational algorithm for estimating the coefficients of the characteristic polynomial of a linear time-invariant multi-output dynamic system, using only output observations, for qualitative analysis of the transition matrix or for evaluating its eigenvalues. We also give some computational results on the identification of these systems using the Ho-Kalman approach. Furthermore, an identification scheme for high-frequency periodic systems of unknown period is described in detail.

Key words: Mathematics of Computation; Matrix Algebra; Linear Algebra; Mathematical Statistics; Probability.