THE UNIVERSITY OF CHICAGO SCIENCE SERIES

Editorial Committee
ELIAKIM HASTINGS MOORE, Chairman
JOHN MERLE COULTER
PRESTON KYES

THE UNIVERSITY OF CHICAGO SCIENCE SERIES, established by the Trustees of the University, owes its origin to a belief that there should be a medium of publication occupying a position between the technical journals with their short articles and the elaborate treatises which attempt to cover several or all aspects of a wide field. The volumes of the series will differ from the discussions generally appearing in technical journals in that they will present the complete results of an experiment or series of investigations which previously have appeared only in scattered articles, if published at all. On the other hand, they will differ from detailed treatises by confining themselves to specific problems of current interest, and in presenting the subject in as summary a manner and with as little technical detail as is consistent with sound method. They will be written not only for the specialist but for the educated layman.

ALGEBRAS AND THEIR ARITHMETICS

THE UNIVERSITY OF CHICAGO PRESS, CHICAGO, ILLINOIS
THE BAKER AND TAYLOR COMPANY, NEW YORK
THE CAMBRIDGE UNIVERSITY PRESS, LONDON
THE MARUZEN-KABUSHIKI-KAISHA, TOKYO, OSAKA, KYOTO, FUKUOKA, SENDAI
THE MISSION BOOK COMPANY, SHANGHAI

ALGEBRAS AND THEIR ARITHMETICS

By LEONARD EUGENE DICKSON
Professor of Mathematics, University of Chicago

THE UNIVERSITY OF CHICAGO PRESS, CHICAGO, ILLINOIS

Copyright 1923 by The University of Chicago. All Rights Reserved. Published July 1923.
Composed and printed by The University of Chicago Press, Chicago, Illinois, U.S.A.

PREFACE

The chief purpose of this book is the development for the first time of a general theory of the arithmetics of algebras, which furnishes a direct generalization of the classic theory of algebraic numbers. The book should appeal not merely to those interested in either algebra or the theory of numbers, but also to those interested in the foundations of mathematics. Just as the final stage in the evolution of number was reached with the introduction of hypercomplex numbers (which make up a linear algebra), so also in arithmetic, which began with integers and was greatly enriched by the introduction of integral algebraic numbers, the final stage of its development is reached in the present new theory of arithmetics of linear algebras.
Since the book has interest for wide classes of readers, no effort has been spared in making the presentation clear and strictly elementary, requiring on the part of the reader merely an acquaintance with the simpler parts of a first course in the theory of equations. Each definition is illustrated by a simple example. Each chapter has an appropriate introduction and summary.

The author's earlier brief book, Linear Algebras (Cambridge University Press, 1914), restricted attention to complex algebras. But the new theory of arithmetics of algebras is based on the theory of algebras over a general field. The latter theory was first presented by Wedderburn in his memoir in the Proceedings of the London Mathematical Society for 1907. The proofs of some of his leading theorems were exceedingly complicated and obscured by the identification of algebras having the same units but with co-ordinates in different fields. Scorza in his book, Corpi Numerici e Algebre (Messina, 1921, ix+462 pp.), gave a simpler proof of the theorem on the structure of simple algebras, but omitted the most important results on division algebras as well as the principal theorem on linear algebras. An outline of a new simpler proof of that theorem was placed at the disposal of the author by Wedderburn, with whom the author has been in constant correspondence while writing this book, and who made numerous valuable suggestions after reading the part of the manuscript which deals with the algebraic theory. However, many of the proofs due essentially to Wedderburn have been recast materially.

Known theorems on the rank equations of complex algebras have been extended by the author to algebras over any field. The division algebras discovered by him in 1906 are treated more simply than heretofore. Scorza's book has been of material assistance to the author, although the present exposition of the algebraic part differs in many important respects from that by Scorza and from that in the author's earlier book. But the chief obligations of the author are due to Wedderburn, both for his invention of the general theory of algebras and for his cordial co-operation in the present attempt to perfect and simplify that theory and to render it readily accessible to general readers.

The theory of arithmetics of algebras has been surprisingly slow in its evolution. Quite naturally the arithmetic of quaternions received attention first; the initial theory presented by Lipschitz in his book of 1886 was extremely complicated, while a successful theory was first obtained by Hurwitz in his memoir of 1896 (and book of 1919). Du Pasquier, a pupil of Hurwitz, has proposed in numerous memoirs a definition of integral elements of any rational algebra which is either vacuous or leads to insurmountable difficulties discussed in this book. Adopting a new definition, the author develops at length a far-reaching general theory whose richness and simplicity mark it as the proper generalization of the theory of algebraic numbers to the arithmetic of any rational algebra.

Acknowledgments are due to Professor Moore, the chairman of the Editorial Committee of the University of Chicago Science Series, for valuable suggestions both on the manuscript and on the proofsheets of the chapter on arithmetics.

L. E. DICKSON
UNIVERSITY OF CHICAGO
June, 1923

TABLE OF CONTENTS

CHAPTER
I. INTRODUCTION, DEFINITIONS OF ALGEBRAS, ILLUSTRATIONS
   Fields. Linear transformations. Matrices.
   Linear dependence. Order, basal units, modulus. Quaternions. Equivalent and reciprocal algebras.
II. LINEAR SETS OF ELEMENTS OF AN ALGEBRA
   Basis, order, intersection, sum, supplementary, product.
III. INVARIANT SUB-ALGEBRAS, DIRECT SUM, REDUCIBILITY, DIFFERENCE ALGEBRAS
IV. NILPOTENT AND SEMI-SIMPLE ALGEBRAS; IDEMPOTENT ELEMENTS
   Index. Properly nilpotent. Decomposition relative to an idempotent element. Principal and primitive idempotent elements. Semi-simple algebras.
V. DIVISION ALGEBRAS
   Criteria for a division algebra. Real division algebras. Division algebras of order $n^2$ and 9.
VI. STRUCTURE OF ALGEBRAS
   Direct product. Simple algebras. Idempotent elements of a difference algebra. Condition for a simple matric sub-algebra.
VII. CHARACTERISTIC MATRICES, DETERMINANTS, AND EQUATIONS; MINIMUM AND RANK EQUATIONS
   Every algebra is equivalent to a matric algebra. Transformation of units. Traces. Properly nilpotent.
VIII. THE PRINCIPAL THEOREM ON ALGEBRAS
   Direct product of simple matric algebras. Division algebras as direct sums of simple matric algebras. Complex algebras.
IX. INTEGRAL ALGEBRAIC NUMBERS
   Quadratic numbers. Reducible polynomials. Normal form of integral algebraic numbers. Basis.
X. THE ARITHMETIC OF AN ALGEBRA
   Case of algebraic numbers. Units and associated elements. Failure of earlier definitions. Arithmetic of quaternions. Arithmetic of a direct sum. Existence of a basis for the integral elements of any rational semi-simple algebra. Integral elements of any simple algebra. Arithmetic of certain simple algebras. Equivalent matrices. The fundamental theorem on arithmetics of algebras. Normalized basal units of a nilpotent algebra. The two categories of complex algebras. Arithmetic of any rational algebra. Generalized quaternions. Application to Diophantine equations.
XI. FIELDS
   Indeterminates. Laws of divisibility of polynomials. Algebraic extension of any field. Congruences. Galois fields.

APPENDIX
I. DIVISION ALGEBRAS OF ORDER $n^2$
II. DETERMINATION OF ALL DIVISION ALGEBRAS OF ORDER 9; MISCELLANEOUS GENERAL THEOREMS ON DIVISION ALGEBRAS
III. STATEMENT OF FURTHER RESULTS AND UNSOLVED PROBLEMS

INDEX

CHAPTER I
INTRODUCTION, DEFINITIONS OF ALGEBRAS, ILLUSTRATIONS

The co-ordinates of the numbers of an algebra may be ordinary complex numbers, real numbers, rational numbers, or numbers of any field. By employing a general field of reference, we shall be able to treat together complex algebras, real algebras, rational algebras, etc., which were discussed separately in the early literature. We shall give a brief introduction to matrices, partly to provide an excellent example of algebras, but mainly because matrices play a specially important rôle in the theory of algebras.

1. Fields of complex numbers. If $a$ and $b$ are real numbers and if $i$ denotes $\sqrt{-1}$, then $a+bi$ is called a complex number. A set of complex numbers will be called a field if the sum, difference, product, and quotient (the divisor not being zero) of any two equal or distinct numbers of the set are themselves numbers belonging to the set. For example, all complex numbers form a field $C$. Again, all real numbers form a field $\Re$. Likewise, the set of all rational numbers is a field $R$. But the set of all integers (i.e., positive and negative whole numbers and zero) is not a field, since the quotient of two integers is not always an integer.
Next, let $\alpha$ be an algebraic number, i.e., a root of an algebraic equation whose coefficients are all rational numbers. Then the set of all rational functions of $\alpha$ with rational coefficients evidently satisfies all the requirements made in the foregoing definition of a field, and is called an algebraic number field. The latter field is denoted by $R(\alpha)$ and is said to be an extension of the field $R$ of all rational numbers by the adjunction of $\alpha$. It has $R$ as a sub-field. Similarly, the field $C$ of all complex numbers is the extension $\Re(i)$ of the field $\Re$ of all real numbers by the adjunction of $i$.

All of the fields mentioned above are sub-fields of $C$. For such fields the reader is familiar with the algebraic theorems which will be needed in the development of the theory of linear algebras. However, that theory will be so formulated that it is valid not merely for a sub-field of $C$, but also for an arbitrary field (occasionally with a restriction expressly stated). Mature readers who desire to interpret the theory of algebras as applying to an arbitrary field are advised to read first chapter xi, which presents the necessary material concerning general fields.

2. Linear transformations. The pair of equations

$$t:\quad x=a\xi+b\eta,\qquad y=c\xi+d\eta,\qquad D=\begin{vmatrix}a&b\\ c&d\end{vmatrix}\neq 0,$$

with coefficients in any field $F$, is said to define a linear transformation $t$, of determinant $D$, from the initial independent variables $x$, $y$ to the new independent variables $\xi$, $\eta$. Consider a second linear transformation

$$\tau:\quad \xi=\alpha X+\beta Y,\qquad \eta=\gamma X+\delta Y,\qquad \Delta=\begin{vmatrix}\alpha&\beta\\ \gamma&\delta\end{vmatrix}\neq 0,$$

from the variables $\xi$, $\eta$ to the final independent variables $X$, $Y$. If we eliminate $\xi$ and $\eta$ between our four equations, we obtain the equations

$$t_1:\quad x=a_1X+b_1Y,\qquad y=c_1X+d_1Y,$$

in which we have employed the following abbreviations:

(1) $\quad a_1=a\alpha+b\gamma,\quad b_1=a\beta+b\delta,\quad c_1=c\alpha+d\gamma,\quad d_1=c\beta+d\delta,$

whence

(2) $\quad \begin{vmatrix}a_1&b_1\\ c_1&d_1\end{vmatrix}=D\Delta\neq 0.$

Instead of passing from the initial variables $x$, $y$ to the intermediate variables $\xi$, $\eta$ by means of transformation $t$, and afterward passing from $\xi$, $\eta$ to the final variables $X$, $Y$ by means of transformation $\tau$, we may evidently pass directly from the initial variables $x$, $y$ to the final variables $X$, $Y$ by means of the single transformation $t_1$. We shall call $t_1$ the product of $t$ and $\tau$ taken in that order and write $t_1=t\tau$. This technical term "product" has the sense of resultant or compound. Similarly, we may travel from a point $A$ to a point $B$, and later from $B$ to $C$, or we may make the through journey from $A$ to $C$ without stopping at $B$.

By solving the equations which define $t$, we get

$$t:\quad \xi=\frac{d}{D}\,x-\frac{b}{D}\,y,\qquad \eta=-\frac{c}{D}\,x+\frac{a}{D}\,y.$$

If we continue to regard $x$, $y$ as the initial variables and $\xi$, $\eta$ as the new variables, we still have the same transformation $t$ expressed in another form. But if we regard $\xi$, $\eta$ as the initial variables and $x$, $y$ as the new variables, we obtain another transformation called the inverse of $t$ and denoted by $t^{-1}$. It will prove convenient to write $X$, $Y$ for $x$, $y$; then

$$t^{-1}:\quad \xi=\frac{d}{D}\,X-\frac{b}{D}\,Y,\qquad \eta=-\frac{c}{D}\,X+\frac{a}{D}\,Y.$$

Eliminating $\xi$ and $\eta$ between the four equations defining $t$ and $t^{-1}$, we find that the product $tt^{-1}$ is

$$I:\quad x=X,\qquad y=Y,$$

which is called the identity transformation $I$. As would be anticipated, also $t^{-1}t=I$.

While $tt^{-1}=t^{-1}t$, usually two transformations $t$ and $\tau$ are not commutative, $t\tau\neq\tau t$, since the sums in (1) are usually altered when the Roman and Greek letters are interchanged.
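The composition rule (1) is easy to verify numerically. The following Python sketch is an illustrative aside, not part of the original text; the particular coefficients are hypothetical.

```python
# Illustrative sketch: the coefficients a, b, c, d of a transformation
# x = a*xi + b*eta, y = c*xi + d*eta are stored as a 4-tuple.

def compose(t, tau):
    """Coefficients (1) of the product t*tau, obtained by substituting tau into t."""
    a, b, c, d = t
    al, be, ga, de = tau
    return (a*al + b*ga, a*be + b*de, c*al + d*ga, c*be + d*de)

def det(t):
    a, b, c, d = t
    return a*d - b*c

t, tau = (1, 2, 3, 4), (0, 1, 1, 1)                 # hypothetical coefficients
assert det(compose(t, tau)) == det(t) * det(tau)    # relation (2): determinants multiply
print(compose(t, tau))   # (2, 3, 4, 7)
print(compose(tau, t))   # (3, 4, 4, 6): the two orders of composition differ
```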
However, the associative law $(t\tau)T=t(\tau T)$ holds for any three transformations, so that we may write $t\tau T$ without ambiguity. For, if we employ the foregoing general transformations $t$ and $\tau$, and

$$T:\quad X=Au+Bv,\qquad Y=Cu+Dv,$$

we see that $(t\tau)T$ is found by eliminating first $\xi$, $\eta$ and then $X$, $Y$ between the six equations for $t$, $\tau$, $T$, while $t(\tau T)$ is obtained by eliminating first $X$, $Y$ and then $\xi$, $\eta$ between the same equations. Since the same four variables are eliminated in each case, we must evidently obtain the same final two equations expressing $x$ and $y$ in terms of $u$ and $v$.

The foregoing definitions and proofs apply at once to linear transformations on any number $p$ of variables:

$$x_1=a_{11}\xi_1+a_{12}\xi_2+\cdots+a_{1p}\xi_p,\quad\ldots,\quad x_p=a_{p1}\xi_1+a_{p2}\xi_2+\cdots+a_{pp}\xi_p,$$

except that the equations of the inverse transformation are now more complicated (§ 3).

3. Matrices. A linear transformation is fully defined by its coefficients, while it is immaterial what letters are used for the initial and the final variables. For example, when we wrote the equations for $t^{-1}$ in § 2, we replaced the letters $x$, $y$ which were first employed to designate the new variables by other letters $X$, $Y$. Hence the transformations $t$ and $\tau$ of § 2, and the foregoing transformation on $p$ variables, are fully determined by their matrices:

$$m=\begin{pmatrix}a&b\\ c&d\end{pmatrix},\qquad \mu=\begin{pmatrix}\alpha&\beta\\ \gamma&\delta\end{pmatrix},\qquad \begin{pmatrix}a_{11}&a_{12}&\cdots&a_{1p}\\ \cdots&\cdots&\cdots&\cdots\\ a_{p1}&a_{p2}&\cdots&a_{pp}\end{pmatrix},$$

the last having $p$ rows with $p$ elements in each row. Such a $p$-rowed square matrix is an ordered set of $p^2$ elements each occupying its proper position in the symbol of the matrix. The idea is the same as in the notation for a point $(x, y)$ of a plane or for a point $(x, y, z)$ in space, except that these one-rowed matrices are not square matrices.

The matrix

$$\begin{pmatrix}a\alpha+b\gamma&a\beta+b\delta\\ c\alpha+d\gamma&c\beta+d\delta\end{pmatrix}$$

of the transformation $t_1=t\tau$ is called the product $m\mu$ of the matrices $m$ and $\mu$ of the transformations $t$ and $\tau$. Hence the element in the $i$th row and $j$th column of the product of two matrices is the sum of the products of the successive elements of the $i$th row of the first matrix by the corresponding elements of the $j$th column of the second matrix.
If the determinant D of m is not zero, m has the inverse m--( d/D —b/D ) // T1/1 = ????!! Tº = —c/D aſ D /? g The corresponding matrix without the denominators D is called the adjoint of m and designated by “adj. m.” If m is a p-rowed square matrix, the element in the ith row and jth column of its adjoint is the cofactor (signed minor) of the element in the jth row and ith column of the determinant D of m. In case Džo, the element in the ith row and jth column of the inverse mTº of m is the quotient of that cofactor by D. Given two matrices m and p such that the determi- nant m) of m is not zero, we can find one and only one matrix & = m^*p, such that mac = p, and also one and only one matrix y = p.m.T. Such that ym = p. But if |m|=o, there is no matrix a for which m3 = I, since this would imply o-2-1. Likewise there is no matrix y for which ym = I. * Hence each of the two kinds of division by m is always possible and unique if and only if |m| 3 S.S. =San * Hence there is evidently a one-to-one correspondence between the scalar matrices S. and the numbers e of the field F such that this correspondence is preserved under both addition and multiplication. In other words, the set of all scalar matrices is a field simply isomorphic with F. Moreover, ea eb _ / a b sm-ms-(i. ed ) m=(. }) Hence from any relation between matrices, some of which are scalar, we obtain a true relation if we replace each scalar matrix S. by the number e and make the following definitions: a+e b ) ea eb ec ed on-me-( C d–He ) e+m-m-i-e= ( The first relation defines the scalar product of a number e and a matrix m to be the matrix each of whose elements is the product of e by the corresponding element of m. In particular, eI = Ie =S. Use is rarely made of the notation e-Him, which is generally written eI+m. §4) DEFINITION OF ALGEBRAS 9 If m is a matrix whose determinant D is not zero; then adj. m= Dm7 by the foregoing definitions. Hence the product of m and adj. m in either order is DI. This result holds true also if D = O. Important theorems on matrices are proved in chap- ter vii. 4. Definition of an algebra over any field. According to the definition to be given, the set of all complex numbers a-i-bi is an algebra over the field of all real numbers. Again, the set of all p-rowed Square matrices with elements in any field F is an algebra over F ($8). In this algebra, multiplication is usually not commuta- tive, while division may fail. The foregoing discussion of matrices and operations on them provides an excellent concrete introduction to the following abstract definition of algebras. - The elements of an algebra will be denoted by small Roman letters, while the numbers of a field F will be denoted by small Greek letters. An algebra A over a field F is a system consisting of a set S of two or more elements a, b, c, . . . . and three operations (B, G), and O, of the types specified below, which satisfy postulates I–V. The operation (B, called addition, and the operation G), called multiplica- tion, may be performed upon any two (equal or distinct) elements a and b of S, taken in that order, to produce unique elements a Gb and a G)b of S, which are called the sum and product of a and b, respectively. The operation O, called scalar multiplication, may be per- formed upon any number a of F and any element a of S, or upon a and a, to produce a unique element a Oa or aOa of S, called a scalar product. IO INTRODUCTION, DEFINITIONS chap. 
4. Definition of an algebra over any field. According to the definition to be given, the set of all complex numbers $a+bi$ is an algebra over the field of all real numbers. Again, the set of all $p$-rowed square matrices with elements in any field $F$ is an algebra over $F$ (§ 8). In this algebra, multiplication is usually not commutative, while division may fail. The foregoing discussion of matrices and operations on them provides an excellent concrete introduction to the following abstract definition of algebras.

The elements of an algebra will be denoted by small Roman letters, while the numbers of a field $F$ will be denoted by small Greek letters.

An algebra $A$ over a field $F$ is a system consisting of a set $S$ of two or more elements $a$, $b$, $c$, . . . . and three operations $\oplus$, $\odot$, and $\circ$, of the types specified below, which satisfy postulates I–V. The operation $\oplus$, called addition, and the operation $\odot$, called multiplication, may be performed upon any two (equal or distinct) elements $a$ and $b$ of $S$, taken in that order, to produce unique elements $a\oplus b$ and $a\odot b$ of $S$, which are called the sum and product of $a$ and $b$, respectively. The operation $\circ$, called scalar multiplication, may be performed upon any number $\alpha$ of $F$ and any element $a$ of $S$, or upon $a$ and $\alpha$, to produce a unique element $\alpha\circ a$ or $a\circ\alpha$ of $S$, called a scalar product.

For simplicity we shall write $a+b$ for $a\oplus b$, $ab$ for $a\odot b$, $\alpha a$ for $\alpha\circ a$, and $a\alpha$ for $a\circ\alpha$, and we shall speak of the elements of $S$ as elements of $A$.

We assume that addition is commutative and associative:

I. $a+b=b+a$, $(a+b)+c=a+(b+c)$,

whence the sum $a_1+\cdots+a_t$ of $a_1,\ldots,a_t$ is defined without ambiguity.

For scalar multiplication, we assume that

II. $\alpha a=a\alpha$, $\alpha(\beta a)=(\alpha\beta)a$, $(\alpha a)(\beta b)=(\alpha\beta)(ab)$,

III. $(\alpha+\beta)a=\alpha a+\beta a$, $\alpha(a+b)=\alpha a+\alpha b$.

Multiplication is assumed to be distributive with respect to addition:

IV. $(a+b)c=ac+bc$, $c(a+b)=ca+cb$.

But multiplication need not be either commutative or associative. However, beginning with chapter iv, we shall assume the associative law $(ab)c=a(bc)$, and then call the algebra associative.

The final assumption serves to exclude algebras of infinite order:

V. The algebra $A$ has a finite basis.

This shall mean that $A$ contains a finite number of elements $v_1,\ldots,v_m$ such that every element of $A$ can be expressed as a sum $\alpha_1v_1+\cdots+\alpha_mv_m$ of scalar products of $v_1,\ldots,v_m$ by numbers $\alpha_1,\ldots,\alpha_m$ of $F$.

The reader who desires to avoid technical discussions may omit the proof below that postulates I–V imply property VI, and at once assume VI instead of V.

VI. The algebra $A$ contains elements $u_1,\ldots,u_n$ such that every element $x$ of $A$ can be expressed in one and only one way in the form

(3) $\quad x=\xi_1u_1+\cdots+\xi_nu_n,$

where $\xi_1,\ldots,\xi_n$ are numbers of the field $F$.

This implies that if $x$ is equal to

(4) $\quad y=\eta_1u_1+\cdots+\eta_nu_n,$

then $\xi_1=\eta_1,\ldots,\xi_n=\eta_n$. Adding the $n$ terms of $x$ to those of $y$, and applying I and III, we get

(5) $\quad x+y=(\xi_1+\eta_1)u_1+\cdots+(\xi_n+\eta_n)u_n.$

An element $z$ such that $x+z=x$ for every $x$ in $A$ is called a zero element of $A$. Comparing (3) with (5), we see that $x+y=x$ if and only if $\eta_1=0,\ldots,\eta_n=0$. Hence the unique zero element is $z=0\,u_1+\cdots+0\,u_n$. It will be denoted by $0$ in the later sections.

We shall now deduce certain results from I–V which will enable us to prove VI.

We first prove that $1x=x$ for every $x$ in $A$. By V, $x=\sum\xi_iv_i$. Then, by III and II,

$$1x=\sum 1(\xi_iv_i)=\sum(1\,\xi_i)v_i=\sum\xi_iv_i=x.$$

Write $z_i=0\,v_i$ for $i=1,\ldots,m$, and $z=z_1+\cdots+z_m$. By III, for $\alpha=0$, $\beta=1$, we have $a=0\,a+a$. Take $a=\xi_iv_i$ and note that, by II,

(6) $\quad 0(\xi_iv_i)=(0\,\xi_i)v_i=0\,v_i=z_i.$

Hence $\xi_iv_i=z_i+\xi_iv_i$. Summing for $i=1,\ldots,m$, we get $x=z+x$. Suppose that also $x=w+x$ for every $x$ in $A$, whence $z=w+z$. By the former result with $x=w$, we have $w=z+w$, whence $w=w+z$ by I. Hence $w=z$. Hence $A$ contains a unique zero element $z$ such that $x=z+x$ for every $x$ in $A$.

By summing (6) for $i=1,\ldots,m$, and applying III, we get $0\,x=z$ for every $x$ in $A$. Next, by II,

$$z_ix=(0\,v_i)x=(0\,v_i)(1\,x)=(0\cdot 1)(v_ix)=0(v_ix)=z.$$

Summing for $i=1,\ldots,m$, and noting that $z+z=z$, we get $zx=z$. Similarly, $xz_i=z$, whence $xz=z$. For any number $\rho$ in $F$,

$$\rho z_i=\rho(0\,v_i)=(\rho\,0)v_i=0\,v_i=z_i,\qquad \rho z=z.$$

Define $-x$ to be the scalar product of $-1$ by $x$. By III, for $\alpha=1$, $\beta=-1$, we get $z=x+(-x)$. Define $x-y$ to mean $x+(-y)$ and call it the result $d$ of subtracting $y$ from $x$. By adding $y$ to each member of $x-y=d$, and applying the preceding conclusion, we get

$$x-y+y=x+z=x=d+y.$$

Conversely, if $x=d+y$, add $-y$ to each member; then

$$x-y=d+y+(-y)=d+z=d.$$

Hence any term of one member of an equation may be carried to the other member after changing the sign of the term. We are now in a position to prove VI.
Either the $v_i$ in V will serve as the desired $u_j$, or there exists at least one relation $\sum\gamma_iv_i=\sum\delta_iv_i$ in which $\gamma_i\neq\delta_i$ for some value of $i$. Since we may permute the $v_i$, we may assume without loss of generality that $\gamma_m\neq\delta_m$. Then there exists a number $\rho$ of the field $F$ such that $\rho(\delta_m-\gamma_m)=1$. We transpose terms, apply III, multiply on the left by $\rho$, apply II, and get

$$\sum_{j=1}^{m-1}\rho(\gamma_j-\delta_j)\,v_j=v_m.$$

If $m\neq 1$,* we may therefore eliminate $v_m$ from $\sum\alpha_iv_i$ and obtain a linear function of $v_1,\ldots,v_{m-1}$ with coefficients $\delta_i$ in $F$. If two such linear functions are equal without being identical, a repetition of the argument shows that we may eliminate one of $v_1,\ldots,v_{m-1}$ from $\sum\delta_iv_i$. Evidently this process ultimately leads to a set of elements $u_1,\ldots,u_n$ having property VI.

* If $m=1$, we have proved that $v_1=z$. Hence, by V, every element of $A$ is of the form $\alpha_1v_1=\alpha_1z=z$, whereas $A$ was assumed to contain at least two elements. This contradiction shows that $v_1$ in V serves as $u_1$ in VI and that $n=1$.

This definition of an algebra, with V replaced by the much stronger assumption VI, is due to G. Scorza.† However, essentially the same definition of an algebra over the field of real numbers had been given in Encyclopédie des Sciences Mathématiques, Tome I, Volume I (1908), pages 369–78.

† Corpi Numerici e Algebre (Messina, 1921), p. 180; Rendiconti Circolo Matematico di Palermo, XLV (1921), 7.

5. Linear dependence with respect to a field. Elements $e_1,\ldots,e_s$ of an algebra $A$ over $F$ are said to be linearly dependent with respect to $F$ if there exist numbers $\alpha_1,\ldots,\alpha_s$, not all zero, of $F$ such that $\alpha_1e_1+\cdots+\alpha_se_s=0$. If no such numbers $\alpha_i$ exist, the $e_i$ are called linearly independent with respect to $F$. An example is given in § 8.

THEOREM. If $u_1,\ldots,u_n$ are linearly independent with respect to a field $F$, the $n$ linear functions

(7) $\quad l_i=\beta_{i1}u_1+\cdots+\beta_{in}u_n\qquad(i=1,\ldots,n),$

with coefficients in $F$, are linearly independent or dependent according as the determinant $\beta=|\beta_{ij}|$ is not zero or is zero in $F$.

For, if $\alpha_1,\ldots,\alpha_n$ are numbers of $F$,

$$\sum_{i=1}^n\alpha_il_i=\Bigl(\sum_{i=1}^n\beta_{i1}\alpha_i\Bigr)u_1+\cdots+\Bigl(\sum_{i=1}^n\beta_{in}\alpha_i\Bigr)u_n$$

is zero if and only if

(8) $\quad \sum_{i=1}^n\beta_{i1}\alpha_i=0,\ \ \ldots,\ \ \sum_{i=1}^n\beta_{in}\alpha_i=0.$

The determinant of the coefficients of $\alpha_1,\ldots,\alpha_n$ in equations (8) is $\beta$. Hence the ordinary rule for solving linear equations by determinants gives $\beta\alpha_1=0,\ldots,\beta\alpha_n=0$. If $\beta\neq 0$, then $\alpha_1,\ldots,\alpha_n$ are all zero, so that $l_1,\ldots,l_n$ are linearly independent. But if $\beta=0$, the $n$ linear homogeneous equations (8) have solutions* $\alpha_1,\ldots,\alpha_n$ not all zero, whence $l_1,\ldots,l_n$ are linearly dependent.

* Dickson, First Course in the Theory of Equations (1922), p. 119.

6. Order and basal units of an algebra. In view of VI in § 4, the algebra $A$ over $F$ is said to be of order $n$, and $u_1,\ldots,u_n$ are said to form a set of $n$ basal units of $A$. The last name is given also to any set of $n$ linearly independent linear functions (7) of $u_1,\ldots,u_n$ with coefficients in $F$. Then the determinant of those coefficients is not zero, and (7) can be solved for $u_1,\ldots,u_n$ in terms of $l_1,\ldots,l_n$. Hence every element $\sum\alpha_iu_i$ of $A$ can be expressed as a linear function of $l_1,\ldots,l_n$ with coefficients in $F$. This replacement of one set of basal units $u_1,\ldots,u_n$ by another set $l_1,\ldots,l_n$ is called a transformation of units. The work will be carried out in full detail in § 61.
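The determinant criterion of § 5 is what one applies in practice to test a proposed transformation of units. The following Python sketch is an illustrative aside, not part of the original text; the coefficients $\beta_{ij}$ are hypothetical.

```python
# Illustrative sketch for n = 3: the functions l_i = beta_i1*u_1 + beta_i2*u_2 + beta_i3*u_3
# are linearly independent exactly when the determinant |beta_ij| is not zero in the field.

def det3(b):
    return (b[0][0] * (b[1][1] * b[2][2] - b[1][2] * b[2][1])
            - b[0][1] * (b[1][0] * b[2][2] - b[1][2] * b[2][0])
            + b[0][2] * (b[1][0] * b[2][1] - b[1][1] * b[2][0]))

beta = [[1, 1, 0],
        [0, 1, 1],
        [1, 0, 1]]        # hypothetical rational coefficients beta_ij
print(det3(beta))         # 2, not zero: l_1, l_2, l_3 may serve as a new set of basal units

beta[2] = [1, 2, 1]       # now the third row is the sum of the first two, so l_3 = l_1 + l_2
print(det3(beta))         # 0: the l_i are linearly dependent
```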
THEOREM. Any $n+1$ elements of $A$ are linearly dependent with respect to $F$.

For, $l_1,\ldots,l_{n+1}$ are evidently dependent if $l_1,\ldots,l_n$ are. In the contrary case, we saw that $l_{n+1}$ can be expressed as a linear function of $l_1,\ldots,l_n$ with coefficients in $F$, so that $l_1,\ldots,l_{n+1}$ are dependent.

7. Modulus. An algebra $A$ may have an element $e$, called a modulus (or principal unit), such that $ex=xe=x$ for every element $x$ of $A$. For example, the unit matrix $I$ (§ 3) is a modulus for all square matrices having the same number of rows as $I$.

If there were a modulus $s$ other than $e$, then $se=e$, while $se=s$ by taking $x=s$ in the earlier relations. Hence $s=e$, so that there is at most one modulus. It is often designated by 1 since it plays the rôle of unity in multiplication.

If an algebra $A$ over $F$ has the modulus $e$, the totality of elements $\alpha e$, where $\alpha$ belongs to $F$, constitutes an algebra of order 1. Since

$$\alpha e+\alpha'e=(\alpha+\alpha')e,\qquad \alpha e\cdot\alpha'e=\alpha\alpha'e,$$

this algebra of order 1 is called simply isomorphic with the field $F$.

8. Examples of associative algebras. The totality of $p$-rowed square matrices with elements in any field $F$ is an associative algebra of order $p^2$ over $F$, when addition, multiplication, and scalar multiplication are defined as in § 3. We may choose as a set of $p^2$ basal units $e_{ij}$ ($i,j=1,\ldots,p$), where $e_{ij}$ denotes the matrix whose elements are all zero except that in the $i$th row and $j$th column, while that element is 1. For $p=2$,

$$e_{11}=\begin{pmatrix}1&0\\ 0&0\end{pmatrix},\quad e_{12}=\begin{pmatrix}0&1\\ 0&0\end{pmatrix},\quad e_{21}=\begin{pmatrix}0&0\\ 1&0\end{pmatrix},\quad e_{22}=\begin{pmatrix}0&0\\ 0&1\end{pmatrix}.$$

Then

$$\begin{pmatrix}\alpha&\beta\\ \gamma&\delta\end{pmatrix}=\alpha e_{11}+\beta e_{12}+\gamma e_{21}+\delta e_{22}$$

is zero only when $\alpha=\beta=\gamma=\delta=0$, whence the four $e_{ij}$ are linearly independent with respect to $F$ (cf. § 9, end).

Second, the field $C$ of all complex numbers $\xi+\eta i$ may be regarded as an algebra of order 2 with the basal units $u_1=1$, $u_2=i$, over the field $F$ of all real numbers. For, the assumptions I–IV are satisfied when the Roman letters denote any numbers of the field $C$ and the Greek letters denote any real numbers.

Third, any field $F$ may be regarded as an algebra, over $F$, of order 1, whose basal unit is 1 (or any chosen number $\neq 0$ of $F$).

9. An algebra in terms of its units. Choose any set of basal units $u_1,\ldots,u_n$ of an algebra $A$ of order $n$ over the field $F$. By VI, any elements $x$ and $y$ of $A$ can be expressed in one and but one way in the respective forms

(9) $\quad x=\sum_{i=1}^n\xi_iu_i,\qquad y=\sum_{i=1}^n\eta_iu_i,$

where $\xi_1,\ldots,\xi_n$ are numbers of $F$ called the co-ordinates of $x$ (with respect to the chosen units). By § 4,

(10) $\quad x+y=\sum(\xi_i+\eta_i)u_i,\qquad x-y=\sum(\xi_i-\eta_i)u_i.$

By IV and II, we have

(11) $\quad xy=\sum_{i,j=1}^n\xi_i\eta_j\,u_iu_j.$

By VI,

(12) $\quad u_iu_j=\sum_{k=1}^n\gamma_{ijk}u_k\qquad(i,j=1,\ldots,n),$

where the $n^3$ numbers $\gamma_{ijk}$ belong to $F$ and are called the constants of multiplication of the algebra $A$ (with respect to the units $u_1,\ldots,u_n$). The $n^2$ relations (12) are said to give the table* of multiplication of $A$ (with respect to the units $u_1,\ldots,u_n$). From (11) and (12), we get, by III and II,

(13) $\quad xy=\sum_{i,j,k}\xi_i\eta_j\gamma_{ijk}\,u_k.$

From $(9_1)$ we obtain, by III and II,

(14) $\quad \rho x=x\rho=\sum(\rho\xi_i)u_i\qquad(\rho\ \text{in}\ F).$

* We may use an actual table as in § 25.

The set of elements $(9_1)$ form an algebra $A$ over $F$ with respect to addition, multiplication, and scalar multiplication, defined by $(10_1)$, (13), and (14), respectively, since postulates I–V of § 4 are easily seen to be satisfied. Hence we may operate concretely on the elements of an algebra by the rules of this section without recourse to § 4.
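These rules lend themselves directly to computation. As an illustrative aside (not part of the original text), the following Python sketch stores an element by its co-ordinates and multiplies by formula (13); the constants of multiplication chosen are those of the second example of § 8, the complex numbers as an algebra of order 2 over the reals with $u_1=1$, $u_2=i$.

```python
# Illustrative sketch: an algebra given by its constants of multiplication gamma[i][j][k].
n = 2
gamma = [[[0] * n for _ in range(n)] for _ in range(n)]
gamma[0][0] = [1, 0]    # u_1 u_1 = u_1
gamma[0][1] = [0, 1]    # u_1 u_2 = u_2
gamma[1][0] = [0, 1]    # u_2 u_1 = u_2
gamma[1][1] = [-1, 0]   # u_2 u_2 = -u_1

def add(x, y):
    return [xi + yi for xi, yi in zip(x, y)]        # formula (10)

def scalar(rho, x):
    return [rho * xi for xi in x]                   # formula (14)

def mul(x, y):
    z = [0] * n                                     # formula (13)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                z[k] += x[i] * y[j] * gamma[i][j][k]
    return z

print(mul([1, 2], [3, 4]))   # (1+2i)(3+4i) = -5 + 10i, i.e. co-ordinates [-5, 10]
```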
To illustrate these rules for the algebra of all two-rowed square matrices with elements in $F$, we write the matrices $m$, $\mu$, $m+\mu$, and $m\mu$ of § 3 in terms of the basal units $e_{ij}$ defined in § 8 and obtain

$$m=ae_{11}+be_{12}+ce_{21}+de_{22},\qquad \mu=\alpha e_{11}+\beta e_{12}+\gamma e_{21}+\delta e_{22},$$
$$m+\mu=(a+\alpha)e_{11}+(b+\beta)e_{12}+(c+\gamma)e_{21}+(d+\delta)e_{22},$$
$$m\mu=(a\alpha+b\gamma)e_{11}+(a\beta+b\delta)e_{12}+(c\alpha+d\gamma)e_{21}+(c\beta+d\delta)e_{22}.$$

The last equation may also be verified by means of the following table of multiplication of the units:

(15) $\quad e_{ij}e_{jk}=e_{ik},\qquad e_{ij}e_{tk}=0\quad(t\neq j).$

10. New form of the foregoing matric algebra. Consider the complex matric algebra of all two-rowed square matrices whose elements are complex numbers. We employed above the set of basal units $e_{11}$, $e_{12}$, $e_{21}$, $e_{22}$. Then $e_{11}+e_{22}$ is the unit matrix or modulus, which will here be designated by 1.

We shall introduce the new set of basal units

(16) $\quad 1=e_{11}+e_{22},\quad u_1=\sqrt{-\alpha}\,(e_{11}-e_{22}),\quad u_2=e_{12}-\beta e_{21},\quad u_3=\sqrt{-\alpha}\,(e_{12}+\beta e_{21}),$

where $\alpha\neq 0$, $\beta\neq 0$. We have

(17) $\quad u_1=\begin{pmatrix}\sqrt{-\alpha}&0\\ 0&-\sqrt{-\alpha}\end{pmatrix},\quad u_2=\begin{pmatrix}0&1\\ -\beta&0\end{pmatrix},\quad u_3=\begin{pmatrix}0&\sqrt{-\alpha}\\ \beta\sqrt{-\alpha}&0\end{pmatrix}.$

By actual multiplication of matrices we readily get

$$u_1^2=\begin{pmatrix}-\alpha&0\\ 0&-\alpha\end{pmatrix}=-\alpha,\qquad u_2^2=-\beta,\qquad u_3^2=-\alpha\beta,\qquad u_1u_2=u_3.$$

Since matric multiplication is associative, we get $u_1u_3=u_1\cdot u_1u_2=-\alpha u_2$ and $u_3u_2=u_1u_2\cdot u_2=-\beta u_1$; the remaining products $u_2u_1=-u_3$, $u_3u_1=\alpha u_2$, $u_2u_3=\beta u_1$ are found similarly, or by direct multiplication of the matrices (17). Hence the multiplication table of the units $1$, $u_1$, $u_2$, $u_3$ is

(18) $\quad u_1^2=-\alpha,\quad u_2^2=-\beta,\quad u_3^2=-\alpha\beta,\quad u_1u_2=u_3,\quad u_2u_1=-u_3,\quad u_1u_3=-\alpha u_2,\quad u_3u_1=\alpha u_2,\quad u_2u_3=\beta u_1,\quad u_3u_2=-\beta u_1,\quad 1\cdot u_r=u_r\cdot 1=u_r\quad(r=1,2,3).$

The linear combinations of $1$, $u_1$, $u_2$, $u_3$ with complex coefficients constitute an algebra which is merely another form of the complex matric algebra with the units $e_{11}$, $e_{12}$, $e_{21}$, $e_{22}$. But if we restrict the co-ordinates of $\sigma\cdot 1+\xi u_1+\eta u_2+\zeta u_3$ to be numbers of any field $F$ which contains $\alpha$ and $\beta$, we obtain an associative algebra over $F$.

11. Quaternions. If in (18) we take $\alpha=\beta=1$ and write $i$, $j$, $k$ for $u_1$, $u_2$, $u_3$, we obtain the multiplication table

$$i^2=j^2=k^2=-1,\quad ij=k,\quad ji=-k,\quad ik=-j,\quad ki=j,\quad jk=i,\quad kj=-i,\quad 1\cdot i=i\cdot 1=i,\ \text{etc.},$$

of the basal units of quaternions $q=\sigma+\xi i+\eta j+\zeta k$. The totality of elements $q$ with $\sigma,\ldots,\zeta$ in any field $F$ is the associative algebra of quaternions over $F$. When $\sigma,\ldots,\zeta$ are all complex, real, or rational, $q$ is called a complex, real, or rational quaternion, respectively.

Define the conjugate $q'$ and norm $N(q)$ of $q$ to be

$$q'=\sigma-\xi i-\eta j-\zeta k,\qquad N(q)=qq'=q'q=\sigma^2+\xi^2+\eta^2+\zeta^2.$$

The conjugate of a product $qq_1$ is readily verified to be equal to the product $q_1'q'$ of the conjugates in reverse order. Thus $N(qq_1)=qq_1q_1'q'$. Since $q_1q_1'$ is a number $N(q_1)$ of $F$, it may be moved to the right of $q'$. Hence $N(qq_1)=N(q)\cdot N(q_1)$. In other words, the norm of a product of any two quaternions is equal to the product of their norms.

Let $F$ be a field composed only of real numbers. Then a sum of squares is zero only when each square is zero. Thus if $q\neq 0$, then $N(q)\neq 0$ and $q$ has the inverse

$$q^{-1}=\frac{1}{N(q)}\,q'.$$

Hence, if $q\neq 0$, $qx=q_1$ has the unique solution $x=q^{-1}q_1$, and $yq=q_1$ has the unique solution $y=q_1q^{-1}$. Thus each of the two kinds of division by $q\neq 0$ is always possible and unique in the algebra of quaternions over any real field. In particular, a product of two real quaternions is zero only when one factor is zero.
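The relation $N(qq_1)=N(q)\cdot N(q_1)$ is easily checked by direct computation. The following Python sketch (an illustrative aside with hypothetical rational quaternions, not part of the original text) multiplies quaternions by the table of this section and verifies the norm relation and $qq'=N(q)$.

```python
# Illustrative sketch: a quaternion sigma + xi*i + eta*j + zeta*k is stored as a 4-tuple.

def qmul(q, r):
    a, b, c, d = q
    e, f, g, h = r
    return (a*e - b*f - c*g - d*h,      # real part
            a*f + b*e + c*h - d*g,      # coefficient of i
            a*g - b*h + c*e + d*f,      # coefficient of j
            a*h + b*g - c*f + d*e)      # coefficient of k

def conj(q):
    a, b, c, d = q
    return (a, -b, -c, -d)

def norm(q):
    return sum(x * x for x in q)        # N(q) = sigma^2 + xi^2 + eta^2 + zeta^2

q, q1 = (1, 2, 3, 4), (2, 0, -1, 5)     # hypothetical rational quaternions
assert norm(qmul(q, q1)) == norm(q) * norm(q1)   # the norm of a product is the product of the norms
assert qmul(q, conj(q)) == (norm(q), 0, 0, 0)    # q q' = N(q)
```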
12. Equivalent and reciprocal algebras. Two algebras $A$ and $A'$ over the same field $F$ are called equivalent (or simply isomorphic) if it is possible to establish between their elements a (1, 1) correspondence such that if any elements $x$ and $y$ of $A$ correspond to the elements $x'$ and $y'$ of $A'$, also the elements $x+y$, $xy$, and $\alpha x$ of $A$ correspond to the elements $x'+y'$, $x'y'$, and $\alpha x'$ of $A'$, for every number $\alpha$ of $F$.

Equivalent algebras have the same order, and their elements zero correspond. If one of two equivalent algebras has a modulus, so does the other, and the moduli correspond.

Any algebra $A$ over $F$ is equivalent to itself under any linear transformation of units with coefficients in $F$ (§ 6).

For example, if we take $\alpha=\beta=1$ in § 10, we see that the algebra of all two-rowed matrices whose elements are complex numbers is equivalent, by means of the transformation (16) on the units, to the algebra of all complex quaternions. But since that transformation has imaginary coefficients, it does not set up a correspondence between real matrices and real quaternions. The two real algebras are in fact not equivalent; various products $e_{r1}e_{2s}$ ($r, s=1, 2$) of real matrices are zero, although each factor is not zero, while the product of two real quaternions, each not zero, is never zero.

Two algebras $A$ and $A'$ over $F$ are called reciprocal if it is possible to establish a (1, 1) correspondence between their elements such that $x+y$, $xy$, $\alpha x$ now correspond to $x'+y'$, $y'x'$, $\alpha x'$.

If in the multiplication table (12) of the units of an algebra $A$ over $F$ we replace each product $u_iu_j$ by $u_j'u_i'$, we obtain the multiplication table of the units $u_1',\ldots,u_n'$ of an algebra $A'$ over $F$ which is reciprocal to $A$. For example, from (15) we get

$$e_{jk}'\,e_{ij}'=e_{ik}',\qquad e_{tk}'\,e_{ij}'=0\quad(t\neq j).$$

From these relations we obtain again (15), aside from the lettering of the subscripts, if we write $e_{ij}'=e_{ji}$, i.e., if we interchange the rows and columns of our matrices. Hence the algebra of all $p$-rowed matrices over $F$ is self-reciprocal under the correspondence which interchanges the rows and columns of its matrices.

Two algebras which are either both equivalent or both reciprocal to the same algebra are equivalent to each other.
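The self-reciprocity of the matric algebra amounts to the familiar rule that transposition reverses the order of a matrix product. A brief Python sketch (an illustrative aside with hypothetical two-rowed matrices, not part of the original text) checks this.

```python
# Illustrative sketch: interchanging rows and columns reverses products, (xy)^T = y^T x^T.

def matmul(x, y):
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def transpose(x):
    n = len(x)
    return [[x[j][i] for j in range(n)] for i in range(n)]

x = [[1, 2], [3, 4]]     # hypothetical two-rowed matrices
y = [[0, 1], [5, 7]]
assert transpose(matmul(x, y)) == matmul(transpose(y), transpose(x))
```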
13. Second definition of an algebra. Each element $x=\sum\xi_iu_i$ of an algebra $A$ over $F$, defined in § 4, has a unique set of co-ordinates $\xi_1,\ldots,\xi_n$ in $F$ with respect to a chosen set of basal units $u_1,\ldots,u_n$. Hence with $x$ may be associated a unique $n$-tuple* $[\xi_1,\ldots,\xi_n]$ of $n$ ordered numbers of $F$. Using this $n$-tuple as a symbol for $x$, we may write equations $(10_1)$, (13), (14) in the following form:

(20) $\quad [\xi_1,\ldots,\xi_n]+[\eta_1,\ldots,\eta_n]=[\xi_1+\eta_1,\ \ldots,\ \xi_n+\eta_n],$

(21) $\quad [\xi_1,\ldots,\xi_n]\cdot[\eta_1,\ldots,\eta_n]=\Bigl[\sum_{i,j=1}^n\xi_i\eta_j\gamma_{ij1},\ \ldots,\ \sum_{i,j=1}^n\xi_i\eta_j\gamma_{ijn}\Bigr],$

(22) $\quad \rho[\xi_1,\ldots,\xi_n]=[\xi_1,\ldots,\xi_n]\rho=[\rho\xi_1,\ \ldots,\ \rho\xi_n],\qquad \rho\ \text{in}\ F.$

* For an algebra of two-rowed matrices, the numbers of each quadruple were written by twos in two rows.

These preliminaries suggest the following definition by W. R. Hamilton of an algebra $A$ over $F$: Choose any $n^3$ constants $\gamma_{ijk}$ of $F$, consider all $n$-tuples $[\xi_1,\ldots,\xi_n]$ of $n$ ordered numbers of $F$, and define addition and multiplication of $n$-tuples by formulas (20) and (21), and scalar multiplication of a number $\rho$ of $F$ and an $n$-tuple by formula (22).

To pass to the definition in § 4, employ the particular $n$-tuples

(23) $\quad u_1=[1,0,\ldots,0],\quad u_2=[0,1,0,\ldots,0],\quad\ldots,\quad u_n=[0,\ldots,0,1]$

as basal units. By (20) and (22), $[\xi_1,\ldots,\xi_n]=\xi_1u_1+\cdots+\xi_nu_n$. Then (20), (21), (22) take the form $(10_1)$, (13), (14), and, as noted in § 9, all of the assumptions made in § 4 are satisfied. Hence an algebra of $n$-tuples is an algebra according to § 4, and conversely.

Hence there exists an algebra over $F$ having as constants of multiplication any given $n^3$ numbers $\gamma_{ijk}$ of $F$. The algebra will be associative if the $\gamma$'s satisfy the conditions (§ 58) obtained from $(u_iu_j)u_k=u_i(u_ju_k)$.

14. Comparison of the two definitions of an algebra. Under the definition in § 4, an algebra over a field $F$ is a system consisting of a set of wholly undefined elements and three undefined operations which satisfy five postulates. Under Hamilton's definition in § 13, an algebra of order $n$ over $F$ is a system consisting of $n^3$ constants $\gamma_{ijk}$ of $F$, a set of partially* defined elements $[\xi_1,\ldots,\xi_n]$, and three defined operations, while no postulates are imposed on the system other than that which partially determines the elements. This definition really implies a definite set (23) of basal units. A transformation of units leads to a new algebra (equivalent to the initial algebra) with new values for the $n^3$ constants $\gamma_{ijk}$.

* Each element is an $n$-tuple of numbers of $F$. In particular, if $F$ is a finite field of order $p$, there are evidently exactly $p^n$ elements.

But under the definition in § 4, no specific set of basal units is implied,† and we obtain the same algebra (not merely an equivalent one) when we make a transformation of units with coefficients in $F$. That definition by wholly undefined elements is well adapted to the treatment of difference algebras (§ 25), which are abstract algebras whose elements are certain classes of things. The same definition without postulate V is convenient in the study of algebras of infinite order (not treated in this book), an example being the field of all real numbers regarded as an algebra over the field of rational numbers.

† To emphasize this point, we may understand postulate V (that a finite basis exists) to mean that there is an upper limit to the number of linearly independent elements which can be chosen in the algebra.

CHAPTER II
LINEAR SETS OF ELEMENTS OF AN ALGEBRA

In the later investigation of an algebra $A$, we shall often find it necessary to consider a "linear set" of its elements which is closed under both addition and scalar multiplication. Hence we shall develop here the calculus of linear sets, including their addition and multiplication.

15. Basis, order, and intersection of linear sets. If $x_1,\ldots,x_m$ are any elements of an algebra $A$ (not necessarily associative) over a field $F$, the totality of their linear combinations $\sum\lambda_ix_i$, whose coefficients $\lambda_i$ are numbers of $F$, is called the linear set* with the basis $x_1,\ldots,x_m$ and is designated by $(x_1,\ldots,x_m)$. The linear set with the basis 0 is composed only of the element 0 and is called the zero set and is designated by $(0)$ or $0$.

The order of a linear set $\neq 0$ is the maximum number of linearly independent† elements which can be chosen in the set. The zero set is said to be of order zero. Hence if $x_1,\ldots,x_m$ are linearly independent, the linear set $(x_1,\ldots,x_m)$ is of order $m$. The set $(x)$ is of order 1 or 0, according as $x\neq 0$ or $x=0$.

For example, let $A$ be the algebra of all real quaternions (§ 11).
The quaternions a +87, in which a and 6 range over all real numbers, form the linear set S = (I, i) * Called complex by Wedderburn and system by Scorza. i With respect to the field F, as will be understood throughout. Compare $5." 25 26 LINEAR SETS [CHAP. II Jº". of Order 2. The quaternions ai–H 8; form another linear set T= (i, j) of order 2. LEMMA I. If x, , . . . . , an are n linearly inde- pendent elements of a linear set S of order m(o-ºn-m), we can find elements an: , , . . . . , ºn of S such that S=(x,, . . . . , &m), where 3, . . . . , an are linearly findependent. For, S Contains elements e linearly independent of 3, . . . . , ºn ; Select any e as 3,4-1. Unless m = n+1, S Contains elements flinearly independent of ac, , . . . wn-Hº ; select any fas &n+2 ; etc. If a linear set T contains all of the elements of a linear set S, we shall write T-S, Ss T. If T contains S and also elements not in S, we shall write T-S, S * > *r- > *-o (Yi, oi, Tº in F), f = r j=l-HI k =l +I the element -27sts of T would be equal to the element 2);Ci-H20;S; of S and hence would be an element X6;C, of their intersection C. But, by the assumption on T, the is and c are linearly independent, whence each Tº = o and each 6; =o. Then the displayed equation becomes XY;c;--20;s; = 0, So that, by the hypothesis on S, each 'Y; =O and each of = 0. The result proved for S-HT shows that its order is m–H71 —l. , r I7. Linear sets supplementary in their sum. If, for r > 2, S: , . . . . , S, are linear sets of an algebra A, we define the sum S. + . . . . --S, by induction on r by means of (3) Si-H. . . . . --S, =(S,-- . . . . --S,-1)+S,. - Let m; denote the order of Si, and m the order of S,-- . . . . --S, . By the preceding theorem, ms m, H- . . . . --m, , and the equality sign holds if and only if zero is the only element in common with S.-- . . . . --S;- and S3 for j = 2, . . . . , r, and hence, by Theorem I, if and only if each element of S,-- . . . . --S, can be expressed in a single way in the form s, + . . . . --S,, where si is an element of S;. In this case m = m, + . . . ... + m, the linear Sets S., . . . . , S, are said to be supplementary in S,-- . . . . --Sr. In particular, S, and S, are supplementary in S.--S, if and only if S^S. =o. For example, (i, j) and (I, k) are supplementary in their sum (I, i, j, k). § 18] PRODUCT OF LINEAR SETS 29 LEMMA 2. If S and T are linear sets of an algebra A and if TsS, we can find a linear set X such that T and X are supplementary, i.e., S =T+X, T/SX=o. This follows from Lemma I by taking T=(, , , , , , ), X=(x+... . . . . . .) However, if T 3S, X is not uniquely determined by S and T since we may replace the foregoing special X by (341+b++, • • - 2 *-i-in), aft, are any m—n linearly independent elements of X. 18. Product of linear sets. If S and T are any linear sets of an algebra A, the linear set of minimum order, which contains all elements obtained by multi- plying each element of S by each element of T, is called the product of S by T, and is denoted by ST. Hence, in the notation (1), (4) (S, , . . . . , Sm)(# , . . . . , t). =(sil, , . . . . . sih, sat, , . . . . , Smt.), and the order of ST is smn. From (1), (5) (S+T) U =SU+TU, U(S+T)=US+UT. Usually STATS. When A is an associative algebra, (ST) U =S(TU). . Consider the special case in which S = (s) is composed of the scalar products of s by the various numbers of the field F. Then ST=(s)(#, , . . . . , i.)=(st, , . . . . , St.) 3O LINEAR SETS ſcHAP. 
II coincides with the products of s by the various elements XTit; of T. Hence we shall often write sſ in place of (s).T. By (5), P=[(x)+(y)|U=(x)'U+(y) U = xU+yU. Since the elements of (x+y) U. occur in P, (6) (x+y) Usa:U+yU. This becomes osa. U when y = -2, whence yū =&U. Hence in (6) the sign may be < and not =. LEMMA 3. If the order of sſ (or Ts) is less than that of T, there exists an element &#o of T such that sº -o (or &s=o), and conversely. - For, we may write T = (i., . . . . , t), where i., . . . . , tº are linearly independent with respect to F. If sºczo for every & =XTiti in which T, , . . . . , tº are numbers not all Zero of F, then St., . . . . , Sin are linearly independent, and ST is of the same Order n as T. Conversely, if sæ =o for at least one such 3, then st, . . . . , sin are linearly dependent, and ST is of order -s A & Hence A; has the invariant sub-algebra P which has a modulus. If it were a proper sub-algebra, As would be reducible” (§ 22), contrary to hypothesis. Hence P= A;. But P is invariant also in Bj, which is irreducible. As before, P=B}. Hence each algebra B; is identical with one of the A1, . . . . , Am. - For further theorems on reducible algebras, see Appendix III. 25. Difference algebra. This abstract concept is analogous to that of quotient-groups in the theory of finite groups. To provide a preliminary illustration, consider the (associative) algebra A over a field F with the multiplication table 141 1/2 1/3 la 7/1 ?/r 1/2 1/3 1/4 1/2 1/2 O O 1/2 1/3 1/3 O O 1/3 1/4 144 - 1/2 1/3 ºr The product usu; is found in the body of the table at the intersection of the line through the left-hand label u, and the column having the label us at its top. For example, usu, -u, uſu, --us. It has the invariant sub-algebra \ B= (u, us). .* . To each number & =&u,+ . . . . --ɺu, of A we make correspond the number a' =#v.--É', of the (associative) algebra \D=(U, , w): v: - VI, V,V4 = V4, Wºr–V4, wjav, , *By $ 23 with B=B}, by . sy=b;z • y. Let x and y be any elements of A, whose modulus is dº, Hence the foregoing formula gives by ag(xy) =b;(a,z) : y and hence also b;a; wy=(b;a;)3 y. But byg;=a;b; is the modulus of P. This proves the first formula (2) for algebra A;. The other two follow similarly. § 25] DIFFERENCE ALGEBRA 37 over the same field F. To the sum of any two numbers 3 and y = miſu, + . . . . --mºu, of A evidently corre- sponds the sum of their corresponding numbers a' and y’=m,0,--n,”, of D. To p3, where p is in F, corre- sponds pa'. To - - 3:y= (&mi-H &m)u;-H (&m2-H £2mi-H $274– &m2)u. + (&ms-H £3mi-H £3m4+ &m3)ws-H (&mi-H &m.) 1/4 corresponds . (&mi-H &m.)0,-H (&m,-- &m)", == &'y'. Hence our correspondence (which amounts to suppressing all scalar multiples of the units u, and us of B) is pre- served under addition, scalar multiplication, and multi- plication. The algebra D so determined by A and B is called their difference algebra and is designated by A — B. Next, let us employ, in place of B, the algebra S=(u, u,), over F, which is not invariant in A since tugu, -us is not in S. To a we now make correspond the number 2, -św,--àº, of the algebra * * 2 sºmº *Eº *-e 2 * Do– (ws 5 w) e w;=o, 703704–793, 70,793 =w3, w;=o, So that in effect we suppress all scalar multiples of the units u, and us of S. To acy now corresponds (&ms-Hämi-F#m-F#ms)ws-H(&m,-- $47)w, , . which is not equal to way, =(&m,--&m;)ws, so that the correspondence is not preserved under multiplication. Nor is Do an associative algebra since w;wi-o, 703-04 - 704–703-04 =ws. 
38 INVARIANT, REDUCIBLE ſcHAP. III To treat the general case, let B be a linear set of elements of an algebra A over a field F. Two elements 3, and y of A are called congruent or incongruent with respect to B as modulus (or briefly, modulo B), according as 3–y is or is not an element of B. In the respective cases, we write &=y (mod B), 3-Ey (mod B). If a =y and w=z (mod B), then y–3 = — (3–y), y—z=(x—z)–(3–y) are elements of B, so that y=x, y=z (mod B). The first shows that the members of a congruence may be interchanged. The second shows that all those elements of A which are congruent to a given element a modulo B are congruent to each other; they are said to form a class [x] modulo B. Hence all elements of A may be distributed into non-overlapping classes modulo B. If a is any number of F, and if x=y, ac'sy' (mod B), then ax=&a=ay=ya, 3+2' =y+y' (mod B). Hence the product, in either order, of a and any element y of the class [x] is in the class [a.æ]=|aca], while the sum of any element y of class [x] and any element y' of class [x] is in the class [x-Ha'). Accordingly, we define the scalar product aſz)=[x]a of the number a of F and the class [x] to be the class [a.æ], and define the sum of the classes [x] and [x] to be the class [x-Ha']. Hence the linear function Xaſzil of the classes [x] with coefficients a; in F is the class [Xa;&l. Let T be a linear set supplementary to B in A, so that T^B=o and every element a of A is expressible § 25] DIFFERENCE ALGEBRA 39 in one and only one way as a sum of an element b of B and an element l of T (§§ 16, 17). If also a, =b, +t, and if a=a. (mod B), then f=t, (mod B), and t—t, is common to B and T and hence is zero. Hence if a =b+t and a, =b, +t, are in the same class, t = ty. Thus there is a (I, I) correspondence between the classes of A modulo B and the elements of T. * { If a; =b;+ti, where bi is in B and t, is in T, the class Xa;[as] corresponds to 2ait; in T. The number of linearly independent tº is n–m if B is of order m and A is of order n. Hence we may select n–m classes of A modulo B such that every class of A modulo B is expres- sible in one and only one way as a linear function of those n–m classes with coefficients in F. We now assume that B is an invariant sub-algebra of A. Again let &=y, 3'Ey' (mod B), whence y=x-Hb, y’=ac'--b', where b and b' are elements of B. Then yy' = xx'+&b'+by'Exx' (mod B), since ab' and by’ are elements of the invariant sub- algebra B of A, whence their sum is in B. Hence the product of any element y of class [æ] by any element y' of class [x] is an element of the class [xx']. Accordingly, we define the product [x] | x' of the class [x] by the class [æ] to be the class [x,x]. g THEOREM I. If B is an invariant proper sub-algebra of order m of an algebra A of order n over a field F, the classes of A modulo B are the elements of an algebra of order n—m over F when addition, Scalar multiplication, and multiplication of classes [æ] are defined by [x]+[x']=[x-Ha'], aſz]=[x]a=[aw], [x]|x|=[xx'], a in F. 4O INVARIANT, REDUCIBLE |CHAP. III For, postulates I–IV of § 4 are seen to hold, and, as shown above, n–m classes serve as a finite basis. The resulting algebra of classes is called the difference algebra A – B, and also the algebra complementary to B in A. Evidently A – B is an associative algebra when A is one. Let T be any linear set supplementary to B in A. 
We saw above that the elements of T are in (I, I) correspondence with the classes of A modulo B, and this correspondence is preserved under addition and scalar multiplication, but not in general under multiplication since T need not be closed under multiplication. How- ever, we may regard the elements of T as the elements of an algebra T' in which addition and scalar multiplication are defined as in T, while the product in T’ of any two elements a, and y of T' (i.e., the same elements of T) is defined to be the element of T which belongs to the class modulo B containing the product in A of a; and y. This algebra T' is therefore equivalent to A – B and is said to be obtained by taking T modulo B. Since A = B+T, this amounts to taking A modulo B. In our introductory example, A = (u, u, u, u,), B=(us, us). Then T- (u, u,) is supplementary to B in A. By chance, T is itself an algebra and plays the rôle of T'. Thus A – B is equivalent to T, as is implied in the discussion of the example. As a generalization of this example, we have the following THEOREM 2. If A is the direct sum of algebras B and T, then T is equivalent to A–B, and B is equivalent to A–T. For, BA = B(B+T) = B’s B, AB = Bºs B, so that B (and similarly T) is an invariant sub-algebra of A. Moreover, the product (in A) of any two elements a, and § 26] INVARIANT SUB-ALGEBRAS 4T y of T is in the sub-algebra T, which therefore plays the rôle of T’ above. A better illustration of T' is furnished by the associa- tive algebra A = (u, u, us): wºu;=uiu; =ui (i = 1, 2, 3), 1/21/3 =u31/2=1}=o, tº-us. Then B = (us) is evidently invariant in A. The simplest T is (u, u,), which is not an algebra since u:=us. Then T' = (v, V.), where v,v)=vſö, -uj (j = 1, 2), ºff-o, the final equation replacing u% =us when we take T modulo B = (us). 26. Theorem. If B, and B, are invariant proper sub-algebras of A and if B. × B, then A — B, contains an invariant proper sub-algebra which is equivalent to B, - B2. For, B, is evidently invariant in B. Elements of B, congruent modulo B, are elements of A congruent modulo B, whence each class of B, modulo B, is con- tained in a unique class of A modulo B2. Hence those classes of A modulo B, which contain the various classes of B, modulo B, constitute (in A — B.) a proper sub- algebra S equivalent to B. - B2. To prove that S is invariant in A — B, let & and y be any elements of A and B, respectively. Then acy and ya! are elements of B, since it is invariant in A. Passing to the corresponding classes [x] and [y] of A modulo B, we see that [x] is an element of A – B, and that [y], [æ] [y], and [y] [æ] are elements of S, whence S is invariant in A — B2. 42 INVARIANT, REDUCIBLE |CHAP. III 27. We next prove the converse of the last theorem: THEOREM. If B, and S are invariant proper sub- algebras of A and A – B, respectively, then A has an £nvariant proper sub-algebra B, such that B, 3B, and B, -B, is equivalent to S. For, all those elements a of A which belong to classes [x], of A modulo B, giving elements of S constitute a sub-algebra B, of A. Since S is a proper sub-algebra of A – B2, B, 3A. Since [o]=B, we have B, 3B, . If [x] is an element of S, and [y] is an element of A – B, the invariance of S in A — B, shows that [acy] and [ya: are elements of S. Hence if a. is in B, and y is in A, then acy and ya; are in B, which is therefore invariant in A. - 28. Simple algebras. An algebra having no invari- ant proper sub-algebra is called simple. Every algebra of order I is simple since it has no proper sub-algebra. 
The theorem of § 26 evidently implies COROLLARY I. If B2 is an invariant proper sub- algebra of A and if A – B2 is simple, then B, is a maximal finvariant proper sub-algebra of A. We readily prove the converse: COROLLARY 2. If B, is a maximal invariant proper sub-algebra of A, then A — B, is simple. For, if A – B, were not simple, it would have an invari- ant proper sub-algebra S and, by the theorem of § 27, A would have an invariant proper sub-algebra B, X B2, whereas B. is maximal. CHAPTER IV NILPOTENT AND SEMI-SIMPLE ALGEBRAS; IDEMPOTENT ELEMENTS We shall develop here the properties of important special types of algebras which play leading rôles in the theory of general algebras. That theory depends also upon a knowledge of the properties of various kinds of idempotent elements each of which is equal to its own Square. 29. Index. If A is any associative* algebra, A*s A, whence A. A*s A. A., or A3s A*, and similarly A*s A* for every positive integer k. If the inequality sign held for every k, the orders of A, A*, A3, . . . would form an infinite series of decreasing positive integers. Hence there exists a least positive integer a such that A* = A", and therefore A >A*>A3}× . . . . »A*-*>A*, A* = A*(t): a). This a is called the index of A. For example, consider the associative algebra, A =(u, , u2): tuft- usua-uzu; = u = 674, over a field F containing 3. . If 3% o, A*=(u)=A3; if 3 =o, A*=o = A3. In either case, A > A*, and A is of index 2. 30. Nilpotent algebras. If A *=o, A is called nil- potent. In particular, if A*=o, A is called a zero algebra; the product of any two of its elements is zero. * Henceforth in the book, multiplication is assumed to be associative; unless the contrary is expressly stated. 43 44 NILPOTENT AND SEMI-SIMPLE (CHAP. Iv The algebra in the preceding example is nilpotent if and only if 3 =o. The algebra B = (V, , v.): ví-V, V,V2-v20. =w; -o is nilpotent and of index 3. THEOREM. If an algebra A has a maximal nilpotent invariant sub-algebra N, every nilpotent invariant sub- algebra N, of A is contained in N. For, by Theorem I of § 20, N--N, is an invariant sub-algebra of A. To prove that it is nilpotent, let N. denote the intersection of N and W., and let P be any product formed of two or more factors N. and N, but not a power of either. Since N is invariant in A and occurs as a factor of P, we have Ps N. Similarly, Ps W. Hence Ps N2. Thus (N+N)*: W*-i-N}+N, , C = 2 . > If a is the greater of the indices of the nilpotent algebras N and N., we have N*=N;=o, (W--N.)*: N, (N+N)*: Nºs Nº =o, so that N+N, is nilpotent. It was seen to be invariant in A. But N is a maximal nilpotent invariant sub- algebra of A. Hence N, I” . . . . For, if w is any element of I', we' – e'w=o by the definition of I'. Then o-we’ - e=w(e-Hu)e=we, o He e'w =ew, so that w is in I. Also, u is in I, but is not in I" since ue' =užo. 35. Lemma. If e is a principal idempotent element of A, every element zºo of I, L, and R in (4) is properly nilpotent. By (3), each element of LR is annihilated by e and hence belongs to I. Since e is a principal idempotent, I is o or nilpotent. Hence there exists a positive integer k such that (LR)*=o, (RL) +=R(LR)*L=o, so that also RL is o or nilpotent. Since R is composed of all those elements of A for which Re=o, we have AR e=o, whence ARs R, § 37] SEMI-SIMPLE ALGEBRAS 5I A RLs RL. Similarly, LA s L, RL . A skD. Hence RL is o.or a nilpotent invariant sub-algebra of A. By (5) and (3), AL= RL--A . eL=RL, RA = RL-H.Re - A =RL. 
Hence AL and RA, like RL, are o or nilpotent, so that each element of L and R is o or properly nilpotent. The same is true of their intersection I. Now ARs R implies ek's R. Similarly, Les L. This proves the COROLLARY. If e is a principal idempotent element, each element of the first three parts I, eR, Le of (4) is zero or properly nilpotent. If all are zero, A =eAe has the nodulus e. 36. Theorem. Every algebra without a modulus. has a nilpotent invariant Sub-algebra. Let A be an algebra which is not nilpotent. By § 31, A contains an idempotent element and hence, by § 34, contains a principal idempotent element e. By the preceding corollary, either e is a modulus for A, or A contains properly nilpotent elements and therefore (§ 32) has a nilpotent invariant Sub-algebra. 37. Semi-simple algebras. An algebra having no nilpotent invariant proper sub-algebra is called semi- simple. Hence (§ 28) a simple algebra is semi-simple. For example, a direct sum of two or more simple algebras Ai, no one being a Zero algebra of Order I, is not simple since each A; is invariant, but is semi-simple (§ 40). Consider a semi-simple algebra A which is nilpotent. If the index of A exceeds 2, then A > A*zºo, and A* is a nilpotent invariant proper Sub-algebra of A, whereas A 52 NILPOTENT AND SEMI-SIMPLE ſcHAP. IV is semi-simple. Hence A is a zero algebra (i.e., A*=o). Then any element a zºo of A determines a nilpotent invariant sub-algebra (a) of order I. Since the latter is not a proper sub-algebra, it coincides with A, which is therefore of order I. º THEOREM I. A semi-simple algebra is nilpotent if and only if it is a zero algebra of order I. Consider a semi-simple algebra A without a modulus. By $36, it has a nilpotent invariant sub-algebra, which is not proper and hence coincides with A. Hence the preceding theorem yields THEOREM 2. Any semi-simple algebra has a modulus tunless it is a zero algebra of order I. 38. Theorem. If an algebra A is neither semi-simple mor nilpotent, and if N is the maximal nilpotent invariant sub-algebra of A, then A – N is semi-simple and has a nodulus. º For, suppose A —N has a nilpotent invariant proper sub-algebra S of index 0. By $ 27 (with N in place of B.), A then has an invariant proper sub-algebra B, N such that B, -N is equivalent to S and hence is nilpotent and of index o'. We recall that the elements of A –W are the classes [x] modulo N, each determined by an element 3 of A. In particular, let b be an element of B. Then class [b] is in B, -N, whence [b]*=[b"|=|o], so that b% is in N. Let a be the index of the nilpotent algebra N. Then b”=o, and B, is nilpotent, contrary to the definition of N. If A —N has no modulus, it is a zero algebra Z of order I (§ 37), whence Z =o. Then, if a be any element of A, [æ]=[x]*=[o], so that wº and hence also a would be nilpotent, whereas A is not nilpotent. $37 SEMI-SIMPLE ALGEBRAS 53 39. Theorem. A semi-simple algebra A, which is not simple, is reducible. For, A has an invariant proper sub-algebra B and has a modulus by Theorem 2 of § 37. Hence AB = B = BA. Suppose that B has a nilpotent invariant Sub-algebra Is B 3A. Evidently BIB is invariant in A; it is a proper sub-algebra since BIBs IBs I. Thus BIB is o or nilpotent. But A is semi-simple and has no nilpotent invariant proper sub-algebra. Hence BIB =o. Since A has a modulus, AIA is not zero and is evi- dently invariant in A. Also, AIA s ABA = BA = B-3A. Thus - (AIA)3=AIA - I - AIA s BIB =o. Hence AIA is a nilpotent invariant proper sub-algebra of A, whereas A is semi-simple. 
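Before the argument of § 39 is completed below, the theorem of § 38 may be illustrated concretely. The sketch that follows is ours, and the choice of A as the algebra of upper triangular two-rowed matrices is merely an example: its maximal nilpotent invariant sub-algebra is N = (e₁₂), and modulo N the classes of e₁₁ and e₂₂ behave as two orthogonal idempotents, so that A − N is semi-simple and has a modulus.

```python
import numpy as np

# A = algebra of upper triangular 2x2 matrices, with basis e11, e12, e22.
# Its maximal nilpotent invariant sub-algebra is N = (e12).
e11 = np.array([[1, 0], [0, 0]])
e12 = np.array([[0, 1], [0, 0]])
e22 = np.array([[0, 0], [0, 1]])

# N is nilpotent of index 2 and invariant: a e12 and e12 a are multiples of e12.
assert np.all(e12 @ e12 == 0)
for a in (e11, e12, e22):
    for p in (a @ e12, e12 @ a):
        # a 2x2 matrix is a multiple of e12 exactly when column 0 and row 1 vanish
        assert np.all(p[:, 0] == 0) and np.all(p[1, :] == 0)

# Modulo N the classes [e11] and [e22] multiply like two orthogonal idempotents,
# so A - N is a direct sum of two fields and hence is semi-simple with a modulus.
def mod_N(m):
    m = m.copy(); m[0, 1] = 0; return m        # drop the e12-component

print(mod_N(e11 @ e11), mod_N(e22 @ e22), mod_N(e11 @ e22), sep="\n")
```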
This contradiction proves that B has no nilpotent invariant Sub-algebra and (§ 36) hence has a modulus. Our theorem now follows from $22. 40. Theorem. A semi-simple algebra A, which is not simple, is a direct sum of simple algebras no one a zero algebra of order I, and conversely. For, A has a modulus and by §§ 39, 24 is a direct sum of irreducible algebras As each having a modulus (and hence not a zero algebra of order I). By the proof in § 39 with B=Ai, Ai is semi-simple. Since A, is irreducible, it is simple (§ 39). Conversely, if each A; is simple and is not a zero algebra of order I, then A = A, (B.A. (B . . . . is semi- simple. For, if I is an invariant sub-algebra of A, then I = I, (BI 2GB . . . . , where I;:A; . Since AI = Arſ, HA aſ a+ . . . . sI, 54 NILPOTENT AND SEMI-SIMPLE |CHAP. IV we have Ajlis I; . Similarly, IgAſs I; . Hence I, is - invariant in the simple algebra A; and hence is zero or Aj. Let I be nilpotent and of index a. Then o-I*=XI. Hence each I; is nilpotent, while A3 is not. Thus I =o. 41. Theorem. If e is an idempotent element of a semi-simple algebra A, then eAe is semi-simple. Since (eAe)*=e AeA : eseAe, eAe is an algebra con- taining eee = e, which is a modulus of it. Suppose it is not semi-simple, but has a nilpotent invariant (proper) sub-algebra N. Since N is invariant in e4e, which has the modulus e, W eAe = W. Hence NAN=Ne . A • eN=NeAe . W=N*, NAN-N- . NAN=Nº. Since A has a modulus by Theorem 2 of § 37, A*=A. Thus (ANA)*=ANANA = AN*A , (ANA) =A N*AN : A = AN3A, and, by induction, (ANA)'-AN’A. Since N is nil- potent, we see that, for r sufficiently large, (ANA) =o. Since A has a modulus, ANA contains N and hence is not zero. Thus ANA is a nilpotent invariant sub- algebra of A. This is impossible, since A is semi-simple and not nilpotent. COROLLARY. If A is simple, also eAe is simple. For, if N is invariant in e4e, which has the modulus e, eANAe=eAe - N - eAe:N3eAe, ANA ‘A. Thus ANA is an invariant proper Sub-algebra of A, which is impossible since A is simple. § 42] PRIMITIVE IDEMPOTENT ELEMENTS 55 A 42. Primitive idempotent elements. An idempotent element e of an algebra A is called primitive if there exists in A no idempotent element u(uže) for which €14 = ?? = 746. LEMMA. An idempotent element e of A is primi- tive if and only if eAe contains no idempotent element zá6. For, if u = eaežeis idempotent, where a is in A, then eu = u = ue, so that e is not primitive for A. Conversely, if e is not primitive, so that A contains an idempotent element u že such that eu = ue = u, then eAe contains the idempotent element eue = u že. For example, let A = (u, u,), where u: = u, u}=u, tl, u, =o a uzur. If au, +6lla is idempotent, it is equal to its Square aºu, +8°u, whence a =O or 1, 3-o or I. Hence the only idempotent elements are u, u, and the modulus m = u +ua of A. Now m is not primitive, since A contains idempotent elements uszám having m as modulus (or since m/4m = A has idempotent elements uszím). But u, is primitive, since u, u, -ozºu, u,m= u, Žm ſor since u, Au, = (u,) has no idempotent except u.]. Similarly, u, is primitive. By $34, m is the only princi- pal idempotent. . THEOREM I. If an algebra A contains an idempotent element, it contains at least one primitive idempotent element. - For, if A contains an idempotent element e which is not primitive, the lemma shows that eAe contains an idempotent element u že. Since e is a modulus for eAe, eu = ue = u, whence MAu = e uAu es eAe. 56 NTLPOTENT AND SEMI-SIMPLE [CHAP. 
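As a small check (ours) on the example of § 42 above: an element αu₁ + βu₂ is idempotent only when α² = α and β² = β, so a short search over a few candidate coefficients already exhibits the complete list 0, u₁, u₂, and the modulus m = u₁ + u₂.

```python
# The example of § 42: A = (u1, u2) with u1^2 = u1, u2^2 = u2, u1 u2 = u2 u1 = 0.
def square(a, b):
    # (a u1 + b u2)^2 = a^2 u1 + b^2 u2, by the multiplication table above
    return (a * a, b * b)

candidates = [-1, 0, 1, 2]
idempotents = [(a, b) for a in candidates for b in candidates if square(a, b) == (a, b)]
print(idempotents)   # [(0, 0), (0, 1), (1, 0), (1, 1)]: 0, u2, u1, and m = u1 + u2
```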
Iv Here the equality sign is excluded since w(e–u)u = (u-u)u =o, u.Au-A, by Lemma 3 of § 18. Also, u.Auzo since u = u žo. Hence uAu is a proper sub-algebra of eAe. If the idempotent element u of A is not primitive, the lemma shows that uau contains an idempotent ele- ment vzºu such that (by the preceding argument) UAV is a proper sub-algebra of u Au. Since the orders of the algebras eAe, u.Au, VAv, . . . . form a series of decreasing positive integers, the process terminates and leads to a primitive idempotent element of A. In the preceding example, m is not primitive, but is the sum of two primitive idempotent elements u, and u2 such that u, u, =O = u-u. This illustrates the following THEOREM 2. A non-primitive idempotent element e of A is a sum of primitive idempotent elements whose products in pairs are all zero. - For, by the proof of Theorem I, P=eAe contains an idempotent element e, which is primitive for A, whence e, Že. Note that e? = e is in P and is a modulus for P. Thus d = e−e, is in P and de, -o, e.d-o. Since d” – (e–e)d-d, d is idempotent. Also, dAd-P by the proof of Theorem I with u replaced by d. If d (like e.) is primitive for A, the theorem is proved, since e=e, +d, e.d =de. =o. - But if d is not primitive for A, a repetition of the argument shows that dAd contains an idempotent element e, which is primitive for A, such that d, = d – e. is idempotent, drea = 0 = e2d., and d. Ad, 0, it would be the square of a real number ai, whence o-é-aš-(e-à)(ei+a;)=o, ei==Eal, § 45] REAL DIVISION ALGEBRAS 63 whereas the units I and e, are linearly independent. Hence e = – 6}, where 6; is real. Write E = ei/3. Then E} = — I. If n = 2, the algebra (I, E.) is equivalent to the field of all complex numbers. Henceforth, let n > 2, and denote the basal units by I, I, J., . . . . , where, (3) I*= – I, Jº- – I, . . . . . Since I-EJ is a root of a real quadratic equation, (I-HJ)*= —2+IJ-HJI = a(T-HJ)+8, (I–J)*= —2–IJ–JI = Y(I–J)+6, where a, 6, Y, 3 are real numbers. Adding, we get (a+y)I+(a–Y).J-H 8-H 6–H4=o. Thus a = y =o since I, J, I are linearly independent. Hence (4) IJ-HJI = 2é, (I-HJ)*=2é–2, (I–J)*= —2é–2, where e is a real number. As above, H–2e – 2 4. Then D contains a fifth unit l such that l’= –1 and, by the proof which led to (4), il-Hli =#, jl-Hlj=m, kl-Hlk = {, where É, m, are real numbers. Then * lk=li j=({-il); = {j-i(m—jl)={j-mi-H kl. Adding lº to each member, we get 2lk={j-ni-H; . Multiplying each term by k on the right, we get –2l={i+nj+ ſk, whereas l is linearly independent of I, i, j, k. THEOREM. The only division algebras over the field of all real numbers are that field, the field of all complex numbers, and the algebra of real quaternions. 46. Derivation of division algebras from known ones. For example, consider the field R(0) obtained by extend- ing the field R of all rational numbers by the adjunction of a root p of a quadratic equation whose coefficients belong to R and which is irreducible in R and has a real root p. Then the algebra of quaternions over the real $47] DIVISION ALGEBRAS OF ORDER 772 65 field R(p) is a division algebra which may be regarded as an algebra over R with the eight basal units I, p, i, ip-pi, j, jp-pj, k, kp=pk. In what precedes we may replace R by any sub-field S of the field of all real numbers for which there is a quadratic equation with coefficients in S, irreducible in S, and having a real root p. If that equation is of degree r, we obtain a division algebra over S whose 4r basal units are I, p, . . . . 
, p"T" and their products by i,j, and k. Similarly, from each division algebra of order nº obtained in the next section we may deduce division algebras of order rn”. - 47. Division algebras of order n°. We shall define a type of division algebras D of order nº over any field F such that they, together with those derived from them by the process of $46, give all known division algebras other than fields. - By way of introduction, note that if { is one root of a’-say-Hp = 0, the second root is 0(#)=s—É, since the Sum of the two roots is s. For the same reason, if we Subtract the second root from s, we get the first root, whence 6|6(£)]=0(s—É)=s—(s—É)=$. The first member is denoted by 6*($), a notation not to be confused with the square [6(3)} of 9(#). As a generalization of the quadratic equation, Con- sider an equation p(a)=o of degree n, with coefficients in a field F, having the roots (5) 8, 9(8), 6-(3), º&)=06 (6)), . . . . , 6-(3), 66 DIVISION ALGEBRAS ſcHAP. V where 9(8) is a polynomial with coefficients in F such that "(t)=#. Then if also d(0) is irreducible in F, we shall call Ó(a)=o a cyclic equation in F. The case n = 2 was discussed above. A numerical example for n = 3 is furnished by (15) below. Consider the algebra” D over F with the nº basal units (6) y'aj (i,j=o, I, . . . . , n–I), such that - () ()=0, 2001–0, . . . . , ſº-(x)=0, dº)=x, (8) &y=y0(x), y”= y (Y in F). First, let n = 2, and let F be a field not having the modulus 2. By adding to 3, a suitably chosen number of F, we may evidently assume that wº—6, where 6 is in F, but is not the square of a number of F. Then 6(a) = −a, andt (9) D=(1, x, y, ya): 3% = 6, &y=-ya, y' =y. The linear functions of a with coefficients in F form an algebra of order 2 equivalent to the field F(x). Hence the general element of D may be designated by z = u-Hyv, where u and v are in F(x). If v =o, u žo, 2 has the inverse u- in F(x). If v zºo, then z=wV, where w is of the form w =q+y, where q=a+63, with a and 6 in F. Write q' = a – 8%. Then º qy=yq', (y--q)(y-q')=y-qq'. Hence w has an inverse if Yºgg'. * * Discovered by the author and called a “Dickson algebra” by Wedderburn. f We may identify D with algebra (18) of § Io by taking a = -ó, 3= — ), u, = x, us=y, us=xy. Then u}=-aºy”--aff. We saw there that the associative law now yields the complete multiplication table (18). Conversely, since (18) is a matric algebra, it is associative. $47] DIVISION ALGEBRAS OF ORDER nº. 67 THEOREM I. For n = 2, D is a division algebra if y is not the norm qq' = a”–66% of a number q of F(x). This condition on Y and the foregoing condition that 6 is not the square of a number of F are evidently both satisfied when F is the field of all real numbers and ºy and 6 are both negative. In particular, if y = 6= — F, D is then the algebra of real quaternions and is a division algebra. For any n, the associative law and (8) imply &”y=&y0(x)=yſ6(x)}, . . . . , xy=yſ6(x)]". Multiplication by numbers of F and summation give (Io) f(x)y=yf{6(x)], for every polynomial f with coefficients in F. By induction, Go ſºy-yſºre). Hence, if f(x) and h(x) are any polynomials in 3 of degree 37, with coefficients in F, - (12) yf(x) . yſh(x)=y+f|0"(x)]h(x). Conversely, it is readily verified that the associative law holds for the algebra D over F for which multiplica- tion is defined by (12) under the agreement that y't' is to be replaced by Yyºtº-" if s--r=n, and that the final product f. 
is found as in ordinary algebra with a subse- quent reduction of the degree in a to n – I by use of the equation d(x)=o of degree n. In this sense, relations (7) and (8) define an associative algebra D over F with the nº units (6). 68 DIVISION ALGEBRAS [CHAP. V THEOREM 2. For n = 3, D is a division algebra over F if Y is not equal to the norm of any element of the cubic field F(x). $ Here the norm of f(x) means f(x)f(0)f(6*), where 6°–6(0(x)]. First, l = y)\(x)+1(x) has an inverse if it is not zero. For, if X(x)=o, u(x) is not zero and has an inverse in the field F(x). If X(w);40, it has an inverse. Write k(x) for –p)\-". Then l = (y-k)}\ will have an inverse if y—k has one. By (II) and (8), [y—k(x)|[y*-Hyk(0°)+k(0)k(0°)]= y—k(a)k(0)}(0°) is a number zºo of F, so that y—k has an inverse. Second, we are to prove that z=y+ya(x)+8(x) has an inverse. Write way—a (6). Then, by (II), (8), and 63(x)=x, wz=yp+ 0 , p = 3(x)—a(0°)a(x), ---.0%). If p = 0 =o, then y = a(6)a(5°)a(x) would be the norm of a(0). Hence yo-Ho is not zero and has an inverse v by the first case. Then v.w3=1, so that 2 has the inverse ww. 48. Division algebras of order 9. To show that there actually exist division algebras of order 9 of the foregoing type D, note that any seventh root z I of unity satisfies the equation (3) = -º-º-º-º-º-º-º-o: Dividing the terms by § and rearranging, we get ++a+++++++ = &#######1–0. § 48] DIVISION ALGEBRAS OF ORDER 9 69 Making the substitution (14) tº-s, tº-e-, *::=9–ss, we get (15) £3+ £4–23–I =o. If e is a root #1 of {7 = I, also e” and e” are roots of it and hence of (13). By (14), the roots of (15) are # =e-- 5 à-e-H-3–2, see-H-8-2, € 6 6 while à-e-H-8-2 . Hence, in accord with (5), the roots of (15) are # =#, £2 = 0(£)={*–2, #3 = 0(£)=6|6(8)]=6*(£), while 6(3) = 03(3) =#. Hence (15) will be a cyclic equation for the field R of all rational numbers if it is shown to be irreducible in R. But if the function (15) were reducible, it would have a linear factor £–r, where r is in R and hence is the quotient aſb of two integers without a common factor > I. Since r = aſb would be a root of (15) d;3 # = —a”-- 2ab-i-bº would be an integer. But as has no factor > 1 in common with b. Hence b = +1, r ==Ea. Since r is therefore an integral root of (15), r3+r°–2r = I, 70 DIVISION ALGEBRAS [CHAP. V so that r must divide I, whence r ===I. By trial, neither +I nor – I is a root. Hence (15) is irreducible in R. Our next step is to compute the norm N(f) of a poly- nomial f(£) with rational coefficients. Let m denote their positive least common denominator. Then f(£) is equal to the quotient of {(£)= pá-Hgći–Hr by m, where p, q, r, m are integers having no Common divisor > I. Thus g (16) m3N(f) =N(º) ={(8): (#2); (8) º The last product will be obtained from the Constant term of the cubic equation having the roots {(£), {(£), f(£). This cubic will be found by a simple device. When £ is any root of (15), we seek the cubic satisfied {=p?--q£-Hr. From £º we eliminate $3 by means of (15) and get {{=(q—p)?--(r-H2p){-|-p. Similarly, :*=(r-i-3p–g)?--(24–p)£-H4–p. Transposing the left members, we conclude that the determinant of the new coefficients of I, §, § is zero: r—& q p p r–H2p–3. q—p = O . q—p 24–p r–H3p–g-f $48] DIVISION ALGEBRAS OF ORDER 9 7I Its expansion is of the form – £3+ . . . . --N(t) =o. Hence N(, ) is the value of the preceding determinant for =o, whence - N(t)=p3–2p*q-H6pºr–pg?—pqr-H5prº-H43–24°r—qrº-Hrá. 
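Before the parity computation is resumed, a numerical aside (ours; the use of numpy and the helper names are not the author's) may be reassuring: the roots of the cyclic cubic (15) are ε + 1/ε = 2 cos (2kπ/7), they are permuted by θ(ξ) = ξ² − 2, and a few sample norms illustrate the parity behavior established just below.

```python
import numpy as np

# The cyclic cubic (15), xi^3 + xi^2 - 2 xi - 1 = 0, has the roots
# xi_k = eps^k + eps^(-k) = 2 cos(2 pi k / 7) for eps a primitive 7th root of unity.
roots = np.array([2 * np.cos(2 * np.pi * k / 7) for k in (1, 2, 3)])

# each root satisfies (15)
assert np.allclose(roots**3 + roots**2 - 2 * roots - 1, 0)
# theta(xi) = xi^2 - 2 permutes the three roots among themselves
assert np.allclose(np.sort(roots**2 - 2), np.sort(roots))

def norm(p, q, r):
    """The product l(xi_1) l(xi_2) l(xi_3) for l(xi) = p xi^2 + q xi + r."""
    return float(np.prod(p * roots**2 + q * roots + r))

# samples: the norm is an odd integer when p, q, r are not all even,
# and is divisible by 8 when they are
for p, q, r in [(1, 0, 0), (2, 1, 3), (0, 1, 1), (2, 2, 2), (4, 0, 2)]:
    print((p, q, r), int(round(norm(p, q, r))))
```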
Since –ps +p = p^H p3 (mod 2), etc., we have N(º)=p-Hpg-Hpqr-Hpr—Hg-Hgr-Hr =1+(p+1)(q+1)(--) (mod 2). Hence if any one of p, q, r is FI, then N(t) = I (mod 2). But if p, q, r are all even, and hence m is odd, N(t) is divisible by 8 since each of its terms is of the third degree in p, q, r. Hence, by (16), N(f) is never equal to an even integer not divisible by 8. THEOREM. If Y is an even integer not divisible by 8, the algebra over the field of rational numbers defined by 3:3+3:4–23–I =o, a y = y(x”–2), yā= y, is a division algebra of order 9. 49. Summary. We have obtained non-commutative division algebras of Orders 4, 8, and 9, each over appro- priate fields. It is proved in Appendix II that, besides these and fields, there are no further types of division algebras of order is 9. It is shown in Appendix I that the algebra defined by (7) and (8) is a division algebra for every n when Y is suitably restricted. CHAPTER VI STRUCTURE OF ALGEBRAS We shall prove Wedderburn's important theorem that every simple algebra is the direct, product of a division algebra and a simple matric algebra, and con- versely. Also general theorems on the structure of any algebra which are needed in particular for the proof of the principal theorem on algebras (chap. viii). 50. Direct product. If B and M are linear sets of an algebra such that every element of B is commutative with every element of M and such that the order of the product BM is equal to the product of the orders of B and M, then BM is called the direct product of B and M and designated by either BXM or MXB. We assume henceforth that B and M are algebras. Then BM. BM = BºMº's BM, whence BXM is an algebra. The elements of BXM can be expressed as linear combinations of the basal units of M whose coefficients are arbitrary elements of B, or vice versa. **. For example, the direct product of the algebra (I, i, j, k) of real quaternions and the real algebra (1, V-1) can be expressed as the algebra of complex quaternions. The foregoing assumption about orders implies that every element of the algebra A =BXM can be expressed in one and only one way as a product of an element of B by an element of M. Hence if A has a modulus, both B and M have moduli, and conversely. 72 § 51] STRUCTURE OF SIMPLE ALGEBRAS 73 As in the example, suppose that B and M are sub- algebras of A and have the moduli b and m, respectively. Then the latter coincide with the modulus a =bm of A. For, * * a—m = a(a–m)=bm(bm—m)=b°m”—bm” =bm—bm=o, whence m = a. Similarly, mb(mb–b)=o, whence b =a. 51. Structure of simple algebras. Let A be a simple algebra over a field F such that A is neither a division algebra nor a zero algebra of order I. By Theorem 2 of § 37, A has a modulus u. By Theorem 3 of $43, w is not a primitive idempotent element of A. Hence by Theorem 2 of $42, (I) w=u;-- . . . . --un (n = 2), where ur, . . . . , u, are primitive idempotent elements all of whose products in pairs are zero. For brevity, write Aij == tli/Aug. g () s Evidently Au;A is invariant in A and is not zero since it contains ui, and hence Coincides with the simple algebra A. Thus Aij Aji=ui g Au; A • ?/k =u34tle =Aik 3 (2) AğA# =O(jżh) 5 Aij4.jh =Aik tº Next, A =XA; since A. = uAus 2Aijs A is 74 STRUCTURE OF ALGEBRAS [CHAP. VI To prove that the linear sets Aij are supplementary in their sum A, suppose that Ars has an element zºo in Common with the sum of the remaining Ag: 1,374, −2.4%iju; (x, & in A), summed for i, j = 1, . . . . , n with [i,j]zºr, s]. 
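The direct product of § 50 can also be exhibited with matrices: when B and M are realized by square matrices, Kronecker products realize B × M, the factor kron(b, I) playing the part of an element of B and kron(I, m) that of an element of M. The sketch below is ours (the sample matrices are arbitrary); it verifies the commutativity required in § 50 before the argument of § 51 continues.

```python
import numpy as np

b = np.array([[1, 2],
              [0, -1]])                   # a sample element of B (2-rowed matrices)
m = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])                 # a sample element of M (3-rowed matrices)

B_elt = np.kron(b, np.eye(3, dtype=int))
M_elt = np.kron(np.eye(2, dtype=int), m)

# every element of B is commutative with every element of M, as § 50 requires,
assert np.array_equal(B_elt @ M_elt, M_elt @ B_elt)
# and their product is kron(b, m); the order of B x M is the product 4 * 9 = 36
assert np.array_equal(B_elt @ M_elt, np.kron(b, m))
print(np.kron(b, m).shape)                # (6, 6)
```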
Then, multiplying by u, on the left and by us on the right, we get u,37/s = O. By Theorem 3 of $43, Aii = u(Au; is a division algebra with the modulus us. Since A#Aji=Aizºo, each Ajzºo. For izºff, A* =o, so that Aij is a zero algebra. LEMMA I. If 3.5 is any element of A#, then P=&#Aji is zero or Ai. ! For, by (2), Ajāji=Aii, whence PsAii. Also, by (2), PA;=&# g Aft|Ai =&;Aji=P o If PZo, let p?o and a be any elements of P and Aii, respectively, whence pa is in PA; = P. If P&Aii, and if n is in Aii, but not in P, then p3 = n is not solvable for a contrary to the fact that Ağ is a division algebra. A similar proof gives LEMMA 2. If xj is any element of A#, then A;&# is zero or Aj. LEMMA 3. If &# and aft are elements zºo of Aij and Ajº, respectively, then a gºo. For, suppose that the product is zero. Then (3) 3%AAj=o, since otherwise aft Akj= A; by Lemma I, whence Ak; would contain an element ack; for which 3.j}%j =ll; , ozººij =&ſll;=&;%;#3%j=o e § 51] STRUCTURE OF SIMPLE ALGEBRAS 75 Let yº; be an arbitrary element zºo of Ay. By (3), a jºyº; = o, aft #o. Hence the argument just made shows that yºAji=o, whence Agaji=o. Then, by (2), Ass=o, contrary to an earlier result. From the three lemmas we evidently have LEMMA 4. If &# is any element zºo of A#, then (4) &#Aji=Aii, Ajiwij=Ajj. By (4) and Lemma 3 of § 18, Aji has the same order as either Ai: or Aj, since Lemma 3, with k = i, shows that no element wizo of A# makes agºji=o, and similarly no element yj; #o of Aji makes yj;&#=o. Since the A5 are supplementary in their sum, we have - LEMMA 5. The nº algebras A; all have the same order t, and A itself is of order th”. Write eii for us (i = 1, . . . . , n). Let era, . . . . . en be elements #o of Ara, . . . . , Arn, respectively. By (4) for i = I and & =ej, we have egAji=Art. Thus, if j- I, Aji Contains an element ej, such that (5) 61;6; Féir (j = I, • * * > y n), which holds also for j = I since err is idempotent. Define an element ep, of Aza by (6) 6pq = 6p1614 (p, q = 2, . . . . , 71; pzíq). Hence we now have nº elements eş(i, j = 1, . . . . , n). If jæh, AşAhs =o by (2), whence (7) €ijëhk =O (j;4 h) e Since u = 2ess is the modulus of A, and esser; =o for ki> I by (7), 76 STRUCTURE OF ALGEBRAS [CHAP. VI ( 8) -- €r;= 2ékker; = 6;1613 , 6;r = ei: Pekk = 6;rérr , which also follow from the definition of A; as eitàeg. By their definition above, es;zo, ei, zºo. By Lemma 3, eiteri is not zero; it is an element of A; by (22). By (5) and (8), (eitei)*=ei, • 6:6;r 61; Féir 61161; = 6;161; , whence eigeri is idempotent. Since A# is a division algebra having the modulus eit, we have ei.e.: =ed by Corollary 2 of $43. Combining this result with (6) and (8), we have (9) €ij= 6;161; (i,j=1, . . . . , n). We conclude from (9) and (5) that €ijéj} = 6; 6,3631 €ik=érénérh = 6;16th = 6; , (Io) €ij6% = 6;} . The nº elements et are linearly independent” since each is not zero and since they belong to n° algebras A5 which are supplementary in their sum. Since the ej satisfy relations (7) and (Io) and are line- arly independent, they are the basal units of an algebra M of order nº over F which is equivalent to the algebra of all n-rowed square matrices with elements in F (§ 8, § 9, end). Such an algebra M shall be called a simpleſ 'matric algebra of order n°. * Also since eph-Xage; eº-ahkem by (7) and (IO), for aj in F. f The word “simple” is justified by § 52, and is needed since there are further algebras whose elements are matrices. 
$51] STRUCTURE OF SIMPLE ALGEBRAS 77 To each element art of Art corresponds the element - º, (I I) b = X. 6:10.1161; G * = I Conversely, b uniquely determines ar, since, by (7) and (IO), ember =endrier, Hair y er, being the modulus of Art. This one-to-One Corre- spondence is evidently preserved under addition and Scalar multiplication, and also under multiplication since / / / (12) >eidiſed Xeiraſreli–Peñarſenated=>ei;(anaſ)éli. Hence when ar, ranges over Art, the totality of elements (II) form an algebra B equivalent to Ari. Hence B is a division algebra. If in (12) we take aſ, to be the modulus ea of Art, we see that the modulus Xeh-Xei.e.,ed of M is the modulus of B. Since each element (II) of B is commutative with each element eſs of M. Let aft', . . . . , a!? be a set of basal units of Art. By (II), they correspond to elements b%, . . . . , b% which evidently form a basis of B. Now A is of order in” by Lemma 5. It will follow that A has a , basis composed of the in” products b%; if we prove the latter are linearly independent. But, by (13), S ôjºbºeji= S ôjiejiaº'eri . i, j, k i, j, k 78 STRUCTURE OF ALGEBRAS [CHAP. VI If this sum is zero when the 6's are in F, we multiply it on the left by e.p and on the right by eat and get S ôwº-o, öin-o. * Hence A is the direct product of B and M. At the outset we assumed that A is not a division algebra. If it be such, we may evidently regard A as the direct product of A itself by the algebra M, of order I whose single unitis the modulus u of A. To each element. au of M, where a is in the field F, we make correspond the one-rowed matrix (a); hence M, is equivalent to the algebra of one-rowed matrices with elements in F. THEOREM. Any simple algebra A over a field F, not a zero algebra of order I, can be expressed” as the direct product of a division algebra B over F and a simple matric algebra M over F. The moduli of the sub-algebras B and M of A coin- cide with the modulus u of A. It may happen that either B or M is of order I, the single unit being u. When F is the field of real numbers, all division alge- bras were found in § 45. Hence we have the COROLLARY. Apart from a zero algebra of order I, every simple algebra over the field of all real numbers is a simple matric algebra, or the direct product of the latter by either the binary algebra equivalent to the field of all complex numbers or by the algebra of all real quaternions, and hence is of order n°, 2n”, or 4n”. * In a single way in the sense of equivalence. For, if also A = B; XM1, where B, is a division algebra and M, is a simple matric ahyebra, then B, is equivalent to B, and Mr with M. The proof communicated by Wedderburn to the author is too long to insert here. § 52] DIRECT PRODUCT IS SIMPLE 79 52. Converse theorem. If A is the direct product of a division algebra B over F and a simple matric algebra M over F, then A is a simple algebra over F, not a zero algebra of order I. For, M has a set of basal units eş satisfying relations (7) and (IO). Let D be any invariant sub-algebra of A, and d any element zºo of D. Then d =Xbjeij, where the bi; are elements of B. Let b denote the modulus of B. Since each element of B is commutative with each element of M, the invariant sub-algebra D contains been º d o bers= : bbijbergeijers=bareps º i,j Hence D contains by M. Since dzo, we may choose q and r so that barºo. Given any element b! of the division algebra B, we can find an element 3 of it such that acbar =b', whence Bb, + B. 
Since D is invariant in A and Contains bar M, it contains Bm by M = Bb, mM =BM = A, where m =Xei; is the modulus of M. Hence D = A, so that A is simple. Moreover, an element of 3 of A is commutative with every element of M if and only if x belongs to the sub- algebra Brm. For, a =Xbjeij, where each bij is in B. Then épgº – > * # = > *. &épg= > *. i,j t J These sums are equal for all values of p and q if and only if biº-bº (by the coefficients of eº) and baj=o(jæg), whence & = b, 2ei: = bººm. 8o STRUCTURE OF ALGEBRAS |CHAP. VI The special case B = (b) of the theorem and this sup- plement shows that M is simple and that an element of M which is commutative with every element of M is a scalar multiple of its modulus m. The special case M = (m) shows that any division algebra B is simple.* 53. Idempotent elements of a difference algebra. Let A be an algebra, over the field F, which is neither nilpotent nor semi-simple. Thus A has a maximal nilpotent invariant proper sub-algebra N. By $38, A – N is semi-simple and has a modulus. Write [3] for the class, containing w, of A modulo N. THEOREM I. If e is an idempotent element of A, then [e] is an idempotent class of A – N. For, ſelf-ſe’l-ſe] and [e];4|o] since e is not in N. - THEOREM 2. Every idempotent class [u] of A —N contains idempotent elements of A. For, ſo];4|u}=|u}}= . . . . =[uſ]. Hence u/zo for every positive integer r, so that u is not nilpotent. The linear set S = (u, u', . . . .) is evidently closed under multiplication and hence is an algebra. But S is not nilpotent since u is not, and hence contains an idempotent element e (§ 31). Thus e=a, u-Faºu?-- . . . . H-ahu” (a; in F), [e]= a,[u]+ . . . . --aſſu"|= alu), a = a-H . . . . --aft. aſu)=[e]=[e]*= a”[u]*= a”[u], a = a”. But a =o would imply [e]=|o] and hence that e is nil- potent, whereas it is idempotent. Hence a = 1, [e]=[u], so that e is an idempotent element of A belonging to [u]. *To give a direct proof, let b'zo and b, be any elements of B. There exists an element & of B such that wb' = b, . Hence if b' belongs to any invariant sub-algebra D, also wb' = b, belongs to D, whence D= B. r § 53] IDEMPOTENT ELEMENTS 8I THEOREM 3.* If u is a primitive idempotent element of A, then [u] is a primitive idempotent element of A –N. In view of the lemma in § 42, it suffices to prove that, if [v] is any idempotent element of [u] (A –N) [u], then [v] coincides with [u]. We have [v]=[u][x][u]=[uxul, where 3 is in A. By the proof of Theorem 2, the algebra Y=(y, y', . . . .), y=uwu, Contains an idempotent element w of A belonging to |y|. Since y is an element of u Au, the element w of Y is in uAu. By the hypothesis that u is primitive, w=u. Hence THEOREM 4. If e is a principal idempotent element of A, then [e] is a principal idempotent element of A – W and is identical with its modulus. - For, in the decomposition of A relative to e, A =I+ek+Le-He/Ae, each element of the first three parts is o or properly nil- potent by the corollary in §35, and hence is in N. Hence we obtain all classes [æ] of A – N by restricting & to eAe. Each element of A – N is therefore of the * We make no use of the converse that if u is an idempotent of A such that [u] is a primitive idempotent element of A-N, then u is a primitive of A. For, if w= u-u is an idempotent of u Au, [v] is one of [u] (A–N) [u] and coincides with the given primitive idempotent lu) of A —N. Thus u-v is in N. But u–w is equal to its square. Hence 71 - ?) = O. 82 STRUCTURE OF ALGEBRAS [CHAP. 
VI form [e] [a] [e], whence [e] is the modulus of A – N and therefore a principal idempotent of it (§ 34). 54. Condition for a simple matric sub-algebra. THEOREM. If A has the maximal nilpotent invariant sub-algebra N and if A – N contains a simple matric algebra M, then A contains a sub-algebra equivalent to M. By hypothesis, M has the basal units leg, each a class of A modulo N, such that (14) ſeigl left|=leil, ſeigl ſell=o (jżl; i, j, l, k=1, . . . . , n). The class ſeal contains an idempotent element e, of A by Theorem 2 of § 53 or by (18) with r = I. We shall prove that A contains idempotent elements err, . . . . , ean all of whose products in pairs are zero, and such that ei is in the class ſeii!. To prove this by induction on n, let A contain idem- potent elements eit, . . . . , er-, r-, whose products in pairs are zero and such that eit is in the class ſets. Let S denote the sum of these eii. Then (15) eis-ei=sei, sº-s (i = 1, . . . . , r-1). Select any element b, of class ſer, and write” & a, = (I–s)b,(I–s)=b,-sb,-b,s-Hsb,s. By (15), we evidently have (16) €i;0, FO = (1,6; (i = 1, . . . . , r-I). Since S and b, are in the classes [er]+ . . . . -- ſer—r, ,-] and [em], respectively, whose product in either Order is zero by (14), we see that ſa,]=[b, +[er]. Hence * The use of the abbreviation (I–s)b for b-sb does not imply that A has a modulus. § 54] SIMPLE MATRIC AILGEBRA 83 |a,] = [a, , so that aft—a, is an element z of N, whence 2* =o. Evidently z is commutative with ar. By (16), (17) 6;2 = O =26; (i = 1, . . . . . r—I). Employing series* which stop with the term in 2", write (18) •-º-º-º-º-º- . . . . ) Then eſſ, Her. By means of (16) and (17), we find that * ... 6;6, r = O = €rré; (i = 1, . . . . , r-1). Since a,z is in the invariant sub-algebra N, er, is in the class [a, =[er]. This completes the proof by induction of the foregoing italicized result. For pâq, choose any element tº of the class ſepal and write and for epºlºgen. Then (19) epºdrºw-aba, |apal-ſeppleballéal=|éºn], - [a,arl=learlſerl=ſen]+[er], [aria,]=[em], by (14), so that - cº-ºrts, , aridir-er-Hør, where 2, and 2, are in N. From (19), we get * By the binomial theorem the inverse of VH-42 is (1+42)73–1–3(42)+(–3)(–3–1)(42)2+.... = 1–22+122- . . . . . But if the field has the modulus 2, we replace (18) by err=ar-H2+2+24+2}+ . . . . . 84 STRUCTURE OF ALGEBRAS [CHAP. VI (20) €ppºpa = dpa , apqêqq = dpa. Thus endinar = airar. , andºn = arrar, whence (21) airar, -en (I-Hz,), draw-(I+3)er. By (20) and (21), H art airdr, Harr-Harigºr, dridir ar, Harr-H2 rari. Since these are equal by the associative law, (22) (lr1%ir = 22ndrr, drigºr-ºrari. If z is N, so that z*=o, the product of a(1--z) by (I-Hz)~| =1 —z-Hz”— . . . . --(–1)*-*zº- is a. Hence by (22), (23) ar:(1+z)^*=(1+z)~'art. For r > 1, write (24) €ir = dir; er-ar.(I-H2,)" . Then by (21.) and the case ena, = a, of (20), we get (25) ever, Hawari(I+3,...)"=en, 61161; F 61r . Now en of (24) is equal to the second member of (23). Hence by the case area = an of (20) and by (21.), we get (26) eren - (1+3*)T'area =er, , enter-(I+3)Tºaria, = €rr . Finally write en for ene, when p> 1, q> 1, p=q. This and (252) and (26) give 6;== (i, j = I, • * * * , n). By this and (25), we get 6;éj} = €16.5 - 6;16th = €ir éir €th = €161k = €ik. § 55] CASE A —N SIMPLE 85 Finally, if jzh, eijeh, Heireſ €hréis = 6; 6.jéjj : éhéh, exh-o, since ejeh-o. Hence the ei; are basal units of a simple matric sub-algebra of A. 55. Structure of any algebra. 
By $40, a semi-simple algebra is either simple or is a direct sum of simple algebras no one of which is a zero algebra of order I. The structure of each such simple algebra is known by § 51. Hence we know the structure of all semi-simple algebras. - - - THEOREM. Let A be an algebra over a field F. Such that A has a modulus a and is not semi-simple. Hence A has a maximal nilpotent invariant proper sub-algebra N. Sup- pose" that A, N is simple. Then A is the direct product of a simple matric algebraf M over F by an algebra B over F having a modulus, but no further idempotent element. By $ 51, A – N is a direct product [B]X[M], where [B] is a division algebra and [M] is a simple matric algebra, and their moduli coincide with the modulus [a] of A – N. By $ 54, A contains a sub-algebra M equivalent to [M]. Denote the basal units of M by eg. Write e=Xeş. Then e°= e, ea = e− de , (e—a)*= a – e. By induction, (27) (e—a)*=(-1)^++(e—a). * The general case is reduced to this in § 57. f Any two determinations of M are equivalent by the final footnote in § 51. 86 STRUCTURE OF ALGEBRAS |CHAP. VI This implies e = a since [e] = [a], so that e—a is in N and hence is nilpotent. - Let & be any element of A and write (28) - wba-26iºeli. Then (29) > %pg&pg= s €ip%64;&pg= > épp3Céam = €3.6 = a&a=%, p, q p, q, i p, q %pq6ij = €ip%3aj= €ijéjp%éaj = €3%pg , So that wºn and ej are commutative for all values of p, q, i, j. The proof of the second theorem in § 52 shows that w is commutative with every ej if and only if x=2…e. But e = a is the modulus of A. Hence the ar, are the elements of a sub-algebra B of A which is composed of all those elements of A which are commuta- tive with every element of M. Since every wºn is commutative with each unit es; of M, it belongs to B. Hence, by (29), every element of A is expressible in the form (30) 2bpºena (bºn in B). If two such sums are equal, they are identical. For, their difference can be expressed as such a sum. Hence let (30) be zero. Multiply it on the left by ei; and on the right by en, and note that bºg may be permuted with eg. We get birefi = 0. Summing as to i, and noting that e = a, we get bir =o for all values of j and r. Hence A =BXM. By the final remark in . § 5o, B and M have the same modulus a as A. Since [B] is a division algebra, it has no idempotent element other than § 56] COMBINED THEOREM AND CONVERSE 87 its modulus by Corollary 2 of $43. Hence if e is any idempotent element of B, [e]=|al, and we have (27) and therefore e=a. - 56. If A is semi-simple, its N is zero. Then if A – W is simple, also A is simple. Hence we may com- bine the preceding theorem with that in § 51 as follows: THEOREM. If A has a modulus and A – N is simple, where N is the maximal nilpotent invariant sub-algebra if it exists, but is zero in the contrary case, then A is the direct product of a sub-algebra B having a modulus, but no further idempotent element, by a simple matric sub- algebra M. - The converse is true. In the proof we may assume that B has a maximal nilpotent invariant sub-algebra N, since otherwise B is a division algebra by Theorem 2 of $43 and A is simple (§ 52), whence the converse holds with N = o. The N of A = BXM is N, XM. For, if a. is in W, also (28) is in the invariant algebra N and, being also in B, is in N, (§ 32). Conversely, if an is in N, and hence in W, then 2xpaepa is in N. - Hence A – N = (B-N.) XM. But B – W, is semi- simple and its single idempotent element is its modulus; hence it is a division algebra by Corollary I in § 43. 
Thus A – N is simple (§ 52). - 57. Let A be any algebra which is neither semi- simple nor nilpotent. Then A has a maximal nilpotent invariant proper sub-algebra N. By the corollary in § 42, A contains a principal idempotent element u which is either primitive (and we then write u = u,) or else is a sum of primitive idempotent elements ur, . . . . , un whose products in pairs are all Zero. 88 STRUCTURE OF ALGEBRAS [CHAP. VI The semi-simple algebra A – N is either a simple algebra (A –N), or a direct sum of simple algebras (31) (A–N), , . . . . , (A–N), . By $53, the idempotent element [u] of A – N is its modulus and is a sum of primitive idempotent elements [u], . . . . , ſun] of A-N whose products in pairs are all zero. Each [us] belongs to one of the algebras (31). For, if [up]=XV, where w; is in (A –N), then * w;w;=O(izºj), [up]=[up]*=XVá, vi-vi. Hence those of the vi which are not zero are idempotent. But if two or more of the vi are idempotent, [us] would not be primitive by the Remark in § 42. The subscripts I, . . . . , n may be chosen so that [u], . . . . . [up] belong to (A-N), , [up.4-il, . . . . . [up.4-pl belong to (A-N), , etc. Write e, -ul-H © tº a º +up, , e2=up-H-F tº e º 'º +º, +p., • * * * , et-up-H tº ſº º Q + ºn , where r = p, + . . . . --pº-r-ţ-I. Then er, . . . 6; are idempotent elements of A whose products in pairs are all zero and whose sum is u. Since [e], . . . . , ſell belong to the respective alge- bras (31) and since their sum is the modulus [u] of the direct sum A – N of those algebras, they are the moduli of those algebras ($ 21). Also, § 57] GENERAL CASE 89 t (32) [e](A–N)[e]=ſel) (A–N)Mel=0 (izj). k = I In the decomposition of A relative to u (§ 33): A = I-HuR-H Bu-HuAu, the first three linear sets belong to N by the corollary in §35, whence (33) A =N1+uAu, NišN. We shall employ the abbreviations * Au-Aeſ, Nu-We, N.-> Nº. By (32) and the fact that N is invariant in A, we have e^{e}:s N(izºj), so that every element p = eae; of Aij is in N, whence eipe; = p, and Aij = Ng (izºj). Hence (34) - wAu = 2A;=N2+2A; , (35) A = N'-H2A; , N’=N1+N2=s N. If an element aſ of A# is properly nilpotent for Aj, it is properly nilpotent also for A. For, by (35), each element 3 of A is of the form ac'-->3', where ac' is in N" and wi is in Ali. Since A#,A# = o(jżi), afte-aja'+ajaj. Since 3' is in the invariant sub-algebra N of A, aja' is in N. Hence [ajæ]=[ajæ;|. Since as is properly nil- potent for Aff, ajaj is nilpotent, and the same is therefore true of class [ajaj and hence of ſajaj. Thus powers of a;3 with sufficiently large exponents are elements of N, whence agº is nilpotent. Since a was arbitrary in A, this proves that as is properly nilpotent for A. 90 STRUCTURE OF ALGEBRAS [CHAP. VI *-*. The same argument” shows that if an element a of wAu is properly nilpotent for it, a is such for A. For, by (34), a =v-HXai, where v is in N, and as is in Aii. For ac, in Ai, 2.x, is in uAu, and a2a: = p +2a;&; is nil- potent, where p is in N. This sum differs from ax by an element of N. Hence [aw] and therefore aa is nilpotent, whence a is properly nilpotent for A. Let Nj denote o or the maximal nilpotent invariant Sub-algebra of A#, according as there is not or is such a sub-algebra. As proved above, N3=N. Next, if Ni; is not zero, it is a nilpotent invariant sub-algebra of A#. For, since N is invariant in A, Nijs V, Aj;N;=e; Ae;N ejse;Nejs Nij, and similarly NjA;s Nij. Moreover, Ajº N = Nij. For, if an element v of N is in Aij, so that v =ejaei, then ejve; = y, and v is in Nij. 
Hence Ni; is the foregoing maximal Wi. Similarly, uMu is the intersection of u Au and N, and is evidently invariant in u.A.u. Hence uMu is zero or the maximal nilpotent invariant sub-algebra of uAu, according as there is not or is such a sub-algebra. The distribution of the elements of A; into classes is the same modulo Ni; as modulo N. For, if a. and y are elements of A# belonging to the same class (or different classes) of A modulo N, then 3–y is in A; and is in *To give another proof, let I be any nilpotent invariant sub-algebra of uAu. Then IB =o for a certain positive integer 8. Hence (I–HN)8s N, since N is invariant in A. Thus I-HN is nilpotent. To prove it is invariant in A, use (33). Then A (I-HN) = (W,+uAu)(I-HN) su.Au-I-HNs I-H N. Similarly, (I–HN)A s I-H N. Since I+N is a nilpotent invariant sub- algebra of A, it is contained in N (§ 30). Hence Is N. § 57] GENERAL CASE QI (or not in) N and therefore is in (or not in) Nij, whence 3 and y belong to the same class (or different classes) of A; modulo Nij, and conversely. * The class of A modulo N which is determined by an element ejwe; of A# is (36) ſe;|[x][e;]. Now [a] is in A —N which is the direct sum of algebras (31). Also, ſe;|2(A–N);ſe;]=[eſ] (A–N);[eſ]=(A–N); . Hence (36) is an element of (A –N)}. Conversely, any element of the latter is of the form (36) with 3: in A, and hence is in a class of A modulo N determined by an element ejace; of Aff. Thus, by the preceding para- graph, (A -N); is equivalent to Aj-Nă, which is there- fore simple. Applying $56, with A replaced by Aj, we obtain the - THEOREM. Let A be any algebra which is neither semi-simple nor nilpotent and let N be its maximal nil- potent invariant sub-algebra. Then A —N is a direct sum of t simple algebras (t= 1), and A contains a principal fdempotent element u = e, -- . . . . --el, where the e: are idempotent elements whose products in pairs are all zero. Then A =N'--S, where N’s N and S is the direct sum of the t algebras eae (j = 1, . . . . , t) and each e;Ae; is the direct product of a simple matric algebra by an algebra having the modulus ej, but no further idempotent element. Moreover, ejAe; (or u Au) has the maximal nilpotent invariant sub-algebra e Ne; (or uſu) or no such sub-algebra, according as ejMej (or u/Wu) is not or is zero. Also, N = W’+2e;Ne;. CHAPTER VII CHARACTERISTIC MATRICES, DETERMINANTS, AND EQUATIONS; MINIMUM AND RANK EQUATIONS We shall prove that every associative algebra is equivalent to a matric algebra and apply this result to deduce important theorems on characteristic, minimum, and rank equations from related theorems on matrices. In § 66 we shall establish a criterion for a semi-simple algebra which will be applied both in the proof of the principal theorem on algebras (chap. viii) and in the study of the arithmetics of algebras. - 58. Every associative algebra is equivalent to a matric algebra. The essential point in the proof of this equivalence is brought out most naturally by explain- ing the correspondence, first noted by Poincaré, between the elements of any associative algebra A over a field F and the linear transformations of a certain set (group). Let the units ur, . . . . , un of A have the multiplica- tion table %, (I) Mill;= > * (i, j= I, • * * * * n). k = I Then A is associative if and only if u(uſu)=(usu)u, for all values of i, s, r, and hence, by (1), if and only if 7% 47, * (2) > *-> ºn (?, s, r, k=1, . . . . . n). 
j= I j= I 92 § 58] EQUIVALENCE TO MATRIC ALGEBRA 93 Let x be a fixed element and z, z' variable elements &=2< , 2–2; sus, 2' =2}}uj of A. By (1), z=az' is equivalent to the n equations (3) Tº: tº-X'irº (k=1, . . . . , n), i,j - which define a linear transformation T. from the initial variables {1, . . . . , ºn to the new variables {..., . . . . , ğ. The determinant of T, is 7, >. &Yijk * = I Given the numbers { and #(k, i=1, . . . . , n) of F such that A(x) #o, we can find unique solutions § of the n equations (3). In other words, there exists a unique element 2' of A such that 32' =z, when z and & are given and A(x) zºo. Similarly, the equation z' =yz" between the foregoing z' and y = 2m, us, 2" =X/u, , is equivalent to the n equations Ty: g-> nºt! (j= I, • * * * * n), r, S (4) A(x)= (j, k=1,. . . . , n). which define a transformation Ty from the variables {{, . . . . , § to the final variables {{", . . . . , §. By eliminating the j, we get the equations of the product (§ 2): TxTy: {k= X. §ms?ijºyºſ' (k=1, . . . . , n). 2, 7, 7, S 94 CHARACTERISTIC, RANK EQUATIONS (CHAP. VII This transformation will be proved to be identical with Ty, where p = xy. This becomes plausible by elimination of z' between z = x2' and 2' =yz", whence z=a. y2" = p2" by the associative law. To give a formal proof, note that to p = 21;u; corresponds the trans- formation * Tº: {k= Xºrºntº , Tj= >. Šimsºis; , ?, S j, r in which the value of T; was computed from p = 3cy by use of (1). Then T.Ty=T, since the coefficients of £im, ſº are the sums (2). Hence the correspondence (3) between any element ac of the associative algebra A and the transformation T, has the property that to the product acy of any two elements corresponds the product T, Ty of the corre- sponding transformations. Thus the set of these trans- formations is such that the product of any two of them is one of the set.* - There is a second correspondence between any ele- ment 3 of A and the transformation obtained from z=z'z: (5) tº: tº-> tº (k=1,. . . . , n). i,j * Such a set is called a group if it contains the identity transforma- tion I and the inverse of each Ty. If A has a modulus e, then Te=I since g=ez'-2' gives th={{{k=1,. . . . , n). If A(x) #o, we saw that there exists a unique element w of A such that ww=e. Then TxTw-I, so that Twis the inverse of Tx. Hence all the transformations Tº for which A(x);4o form a group. Then also Twſz = I and wa-e for a unique w, whence A'(x), defined below (5), is not zero. Conversely, A'(x) zºo implies A(x) #0 if A has a modulus. § 58] EQUIVALENCE TO MATRIC ALGEBRA 95 Similarly, from z' =z"y we obtain ty. Then z=z"q, q=y&. This makes it plausible that tºty=t. A formal proof follows from (2) as before. The determinant of (5) is denoted by A'(x). If it is not zero, there exists a unique element z' such that z'z =z. We shall denote the matrix of transformation (3) by R, and that of (5) by S., whence (6) R,-(pg), ow-X tº (k, j-I, . . . . , n), having the element psy in the kth row and jth column; (7) S.–(gº), ow->iºn (k, j=1, . . . . , n). We shall call R, and S, the first and second matrices of 3. (with respect to the chosen units ur, . . . . , u,). Since the matrix of a product of two transformations is equal to the product of their matrices (§ 3), we have (8) RºRy–Rºy, SxSy=Sy: º The determinants A(x) and A'(x) of R, and S. are called the first and second determinants of a (with respect to us, . . . . , un). Since R, is the matrix of transformation (3), R. 
=o implies that {} is zero identically in the Ö, and hence that o Haz' for every z' in A. Similarly, S. =o implies that o –2'a for every z' in A. In particular, THEOREM I. If A has a modulus, either R. =o or S. =o implies w = O. * 96 CHARACTERISTIC, RANK EQUATIONS (CHAP. VII / Since each element of R, or S, is linear and homo- geneous in the co-ordinates § of a by (6) or (7), we have (9) Raw := a R, 5 R,+R, = Rx-Hy 3 for every number a of F, and the similar equations in S. By (8,) and (9), the correspondence between elements w, y, . . . . of algebra A and matrices Rx, Ry, . . . . is Such that wy, aw, and 3:-Hy correspond to R,Ry, ak, and R,+Ry, respectively. Moreover, if A has a modulus, this correspondence is one-to-one. For, if R = Ry, then o = Rx-Ry= Rx-y, whence ac—y =o by Theorem I. Hence by § 12 we have - * THEOREM 2. Any associative algebra A with a modulus ſis equivalent to the algebra whose elements are the first matrices R. of the elements a of A, and is reciprocal to the algebra whose elements are the second matrices S. of the elements ac of A. For example, let A be the algebra of two-rowed matrices - * (l, b - 0. 8 _ ( 0.1 8, m=(. }) a-(, ..) wº-(. ..) Then p, = mp and pſ, = um lead to transformations Tn on the variables a, Y, 3, 6, and in on a, 3, Y, 6, having the matrices /a b o o (l, C O O R.-|** O O s__|b d o o - o o a b l’ * \o o a c | ? o o c d o o b d where Rn is with respect to the units err, ear, era, eas of $8, and Sn is with respect to eit, era, ear, eas. By inspec- § 58] EQUIVALENCE TO MATRIC ALGEBRA 97 tion A is equivalent to the algebra with the elements R, and is reciprocal to that with the elements Sn. If A does not have a modulus, we employ the associa- tive algebra A* over F with the set of basal units to, u, . . . . , un, where the annexed unit uo is such that (IO) u}=uo, usu;=u; =uiuo (?–1, . . . . , n), and hence is the modulus of A*. Write (II) w” =&uo-Ha!, z*={suo-Hz, z*={{uo-Hz', where w, z, z' are the elements of A displayed above (3). Then acºz”= {.{{uo-H3:...+3,2'-Haz'. Equating this to 2*, we obtain the transformation to-8.8%, tº-tº-tº-X'irº i,j (k=I, . . . . , n). (12) The matrix of the coefficients of {}, {{, . . . . , § is R. The latter are the elements of an algebra equivalent to A* by Theorem 2. Now acº is in A if £, =o. Hence the elements & of A are in one-to-one correspondence with the matrices (13) Rº-| }. Pu . . . . Pin 98 CHARACTERISTIC, RANK EQUATIONS (CHAP. VII Note that (13) is obtained by bordering matrix R, in (6) with a front Column of É's and then a top row of zeros. Write ac' -23;uj. Then ww'->psus, ps- > pºjš. j We verify at once that the product R. Riº is R., since it is obtained by bordering matrix Rºy = R,R, with a front column of p's and a top row of zeros. Again, (9) imply the corresponding equations in R*. THEOREM 3. Any associative algebra A (without a modulus) is equivalent to the algebra whose elements are the bordered first matrices (13) of the elements & of A, and is reciprocal to the algebra whose elements are the bordered second matrices S. of the elements & of A. Here Sí is obtained by bordering matrix S. with a front column of £'s and a top row of zeros, and hence may be derived from (13) by replacing each pº by ag. THEOREM 4. Every transformation T. is commutative with every transformation ty. Hence (14) R. Sy=SyF, for all elements & and y of A if and only if A is associative. For, if we apply first transformation z=az' and afterward transformation 2' =z"y, we obtain Tºty: z=& - 2"y. 
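As a computational aside on § 58 (a sketch of ours; the array and function names are not the author's): the first matrix R_x of (6) can be built directly from the structure constants, and for the two-rowed matrix algebra of the example above it reproduces the block form of R_m and satisfies relation (8), R_x R_y = R_xy.

```python
import numpy as np

# By (6), R_x has (k, j)-entry  sum_i xi_i * gamma[i][j][k],  where
# u_i u_j = sum_k gamma[i][j][k] u_k.
def first_matrix(xi, gamma):
    return np.einsum('i,ijk->kj', xi, gamma)

# structure constants of the algebra of two-rowed matrices, with the basal units
# taken in the order e11, e21, e12, e22 as in the text's example
units = [np.array(u) for u in ([[1,0],[0,0]], [[0,0],[1,0]], [[0,1],[0,0]], [[0,0],[0,1]])]
n = 4
gamma = np.zeros((n, n, n))
for i in range(n):
    for j in range(n):
        prod = units[i] @ units[j]
        for k in range(n):
            gamma[i, j, k] = np.sum(prod * units[k])   # coefficient of u_k in u_i u_j

def coords(m):
    return np.array([np.sum(np.array(m) * u) for u in units], dtype=float)

x, y = coords([[1, 2], [3, 4]]), coords([[0, 1], [-1, 2]])
xy = coords(np.array([[1, 2], [3, 4]]) @ np.array([[0, 1], [-1, 2]]))
assert np.allclose(first_matrix(x, gamma) @ first_matrix(y, gamma), first_matrix(xy, gamma))

# reproduces the block matrix R_m of the example, with a=1, b=2, c=3, d=4
print(first_matrix(coords([[1, 2], [3, 4]]), gamma))
```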
But if we apply first ty: z=z'y and afterward T. : z' = az", we get * t,T.: z=zz" y. The group of the transformations T, and the group of t, are said to be a pair of reciprocal groups in Lie's § 59] CHARACTERISTIC EQUATION 99 theory of continuous groups. This was the origin of the term “reciprocal algebras” (§ 12). 59. Characteristic determinant and equation of a matrix. Let x be an n-rowed square matrix with elements in a field F. Let Go be an indeterminate. Write (15) f(0)=|3–0|| for the determinant of matrix &– alſ. Thus f(a) is a polynomial of degree n in a with coefficients in F. It was proved at the end of § 3 that (16) (x-asſ)adj. (2–01)=f(a)I. Each member may be expressed as a polynomial in a whose coefficients are matrices independent of a). Hence the coefficients of like powers of a) are equal. Thus, if m is any matrix commutative with w, the corresponding polynomials obtained by replacing a by m are identical, and the same is true of the members of (16). But if we take m = 3; and replace @ by a in the left member of (16), we obtain the matrix o. Hence f(x)] =o. We shall call f(a) and f(a) =o the characteristic determinant and characteristic equation of matrix x. THEOREM. Any matrix & is a root of its characteristic equation. It is understood that when a is replaced by & the constant term c of f(a) is replaced by cI. 60. Characteristic matrices, determinants, and equa- tions of an element of an algebra. Let g(@) be any polynomial with coefficients in F which has a constant term cao only when the associative algebra A over F has a modulus e and then the corresponding polynomial g(x) in the element 3 of A has the term ce. Then the first and second matrices of g(x) are (17) Rzę)=g(R.), Sº-g(S.). ; Too CHARACTERISTIC, RANK EQUATIONS (CHAP. VII For, if k is any positive integer, (8) imply Ryk =R}, Sºk-S. Multiply each member by the coefficient of 6% in g(@), sum as to k, and apply (9) and the similar equations in S. We get (17). First, let A have a modulus. Choose in turn as g(x) the characteristic determinants 8(a) and 6'(a) of matrices R, and S., respectively. Then, by (17) and § 59, t RSG) =ö(R.) =o, Sãº) F ô'(S.) = O . Hence 5(x) = 0, 6'(x)=o by Theorem I of $ 58. Second, let A lack a modulus and extend it to an algebra A* with a modulus u, defined by (IO). Choose in turn as g(x) the characteristic determinants of matrices R. and S., which by (13) are evidently equal to — w8(a) and — w8'(a), respectively. By the facts used in the proof of Theorem 3 of $58, equations (17) hold when R and S are replaced by Rº and Sº, respectively. Hence (§ 59), Rºzsº FO y Stagº) F O . Since A* has a modulus, Theorem I of $ 58 shows that the subscripts are zero. THEOREM.” For every element 3 of any associative algebra A, x6(x)=0, w8'(x)=0. If A has a modulus, also ô(x)=o, ö'(x)=0. - : & * For another proof, with an extension to any non-associative algebra, see the author's Linear Algebras (Cambridge, 1914), pp. 16–19. That proof is based on the useful fact that if we express &uj as a linear function of ur, . . . . , un and transpose, we obtain n linear homo- geneous equations in ur, . . . . , un the determinant of whose co- efficients is 6(3). Similarlv, starting with uſa, we obtain 6'(x). Com- pare $95. {-, & {** {2 ** tº e g $61, TRANSFORMATION OF UNITS IOI Let & be an element of any algebra A which need not be associative nor have a modulus. 
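Returning for a moment to § 59, the theorem that every matrix is a root of its characteristic equation admits a quick numerical check. The sketch below is ours; the sample matrix is arbitrary, and numpy's poly routine supplies the coefficients of |ωI − x|, which has the same roots as f(ω) = |x − ωI|.

```python
import numpy as np

x = np.array([[2., 1., 0.],
              [0., 3., -1.],
              [4., 0., 1.]])
coeffs = np.poly(x)                       # [1, c1, c2, c3], highest power first
f_of_x = sum(c * np.linalg.matrix_power(x, k)
             for k, c in enumerate(coeffs[::-1]))
print(np.allclose(f_of_x, np.zeros((3, 3))))   # True: x is a root of its characteristic equation
```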
The matrices R,-oſ=(pig-wój), Sz-oſ = (ag-obj), in which 35 = 1, Ög=o(k2.j), are called the first and second characteristic matrices of 3, while their determinants 6(a) and 6'(a) are called the first and second characteristic determinants of 3:. Thus the first characteristic matrix of a is obtained by subtracting a from each diagonal element of the first matrix R, of 3. * When A is associative, 5(a)=o or @6(a)=0 and ô'(a)=o or wë'(a)=o are called the first and second characteristic equations of 3, according as A has or lacks a modulus. ... • These terms are all relative to the chosen set of basal units ur, . . . . , un of A. However, we shall next prove that 6(a) and 6'(a) are independent of the choice of the units. 61. Transformation of units. This concept was introduced in § 6. But we now need explicit formulae. Let ur, . . . . , un be a set of basal units of any algebra A, not necessarily associative, over a field F. We may introduce as new units any n linearly independ- ent elements of A: % (18) uſ-X rºu, (i = I, • * * * * n), j=1 where the Tij are numbers of F of determinant tº zo. Then equations (18) are solvable for the us; let the solu- tion be Io2 CHARACTERISTIC, RANK EQUATIONS (CHAp. VII (19) u-> Niu (t=I, . . . . , n), i = I where the Nui are numbers of F. Elimination of the w; between (18) and (19) gives * ... ſo iſ tº * — (20) >w-. if t = } (i,j=1, . . . . , n). By means of (19), any element & =X&ºut of A can be expressed in terms of the new units uſ as follows: (a) => x.uſ-X suſ, 8–XNº. t, i = 1 i = I t = I By (18) and (I), % % '...' — * tlitſ; = S TirTjsłł,!/s= S TirTjs'Yrsh!!h . 7, S = I r, s, h = I Replacing up by its expression from (19), we get - $7, /.../ – / / ! - — (22) uſu;= S ‘Yijkl} , Yij} = > TirTysºrshhhh, k = I 7, s, h = I which gives the multiplication table of the new units. 62. Characteristic determinants are invariants. Let R. and S. be the first and second matrices of a with respect to the new units uſ, . . . . , u, defined by (18). We seek the sum analogous to (6), but written in the accented letters £', 'Y' defined by (21) and (22): % / / , , / - pk;= > &nº=2\titirTysºrsh?\hkä i = I . § 63] MATRICES IO3 Summed for i, t, r, s, h = I, . . . . , n. Applying first (20) and afterward (6), we get p;= S Tjsºrsh)hkār- > Tjsphs}\hh . r, s, h s, h Write lift for Nº, and tº for tº. Let T be the matrix having tº as the element in the sth row and jth column. By (20), Xtill-o or I according as jæt or j = i. Hence T* is the matrix having lit as the element in the ith row and tth Column. Then pº =>liph tº gives R!-T-R.T., S. =T-S, T, the second being derived similarly by using (7) instead of (6). Thus, if a., is an indeterminate, R.—a I = T-(R.—aſ)T, S.–Goſ = T-" (S.–&I).T. Passing to determinants, we get | R.—wſ |=|R.—aſ |, |S.–01 |=|S.–01 |. THEOREM. Each characteristic determinant of an element 3 of an algebra, not necessarily associative, over a field F, is invariant under every linear transformation of imits with coefficients in F. The same is therefore true of their constant terms A(x) and A'(x). 63. Lemma on matrices. If ar, . . . . , an are the roots of the characteristic equation f(a)=o of an n-rowed square matrix m whose elements belong to a field F, and if g(@) is any polynomial with coefficients in F, then the roots of the characteristic equation of the matrix" g(m) are g(a), • * * * * g(an). * With the term c1 if the constant term of g(@) is c. IoA CHARACTERISTIC, RANK EQUATIONS (CHAP. 
VII By chapter xi, we may extend F to a field F in which f(a) g(@) decomposes into linear functions of w: ſº-º-o.... (-e), cº-de-B).... (e–). If I is the n-rowed unit matrix, we have in F' g(n)=p(m–5.1) . . . . (m–61). Passing to determinants, we get |g(m)|= 3"|m–3.I . . . . . m— 3,I = 8*f(8.) . . . . f(8). But, by the initial formulae, f(8;) = (a, -83) e - - tº (an–83), - g(a)= 8(a)-3.) . . . . (ak-3). Hence (23) g(n)=f(a) . . . . (a). Let : be a variable in the field F and write h(a), 3) for g(@)—É. Then h(m, 3) = g(m)—ÉI, so that the characteristic determinant of g(m) is the determinant of h(m, 8). Applying (23) with the polynomial g(m) replaced by h(m, É), we see that the determinant of the latter is equal to the product - h(al, 3) . . . . h(an, 8)=[g(a)-8] . . . . [g(an)–8]. Equating the latter to zero, we therefore obtain the char- acteristic equation of matrix g(m). Hence its roots are g(a), . . . . , g(g). - 64. Roots of the characteristic equation of g(x). THEOREM. Let g(@) be a polynomial of the type in § 60. Let F, be an extension of the field F such that the first (or second) characteristic equation of the element & of § 65] TRACE, NILPOTENT IO5 the algebra is solvable in F, and has the roots a, . . . . , an. Then the roots of the first (or second) characteristic equation of g(x) are g(a), . . . . , g(an). For, the first characteristic equation of 3 is |R.—wſ -o, which is the characteristic equation of matrix R, and has the roots ar, . . . . , an. Hence by § 63 with m = Rx, the roots of the characteristic equation of matrix g(R.) are g(a), . . . . , g(an). By (17) they are the roots of Ra(x)—wſ |=o, which is the first characteristic equation of g(x). COROLLARY”. An element & is nilpotent if and only if every root of either characteristic equation of 3 is zero. For, if x'=o and if there be a root p?o, the corre- sponding characteristic equation of a would have the root p’zo, whereas either characteristic equation of the element ois evidently a "=o. Conversely, if every root of either characteristic equation is zero, that equation is evidently a "=o, and by the theorem in § 60 x is a root of the latter or of its product by Go. t 65. Traces, properly nilpotent elements. The sum of the diagonal elements of the first matrix R, of w is called the (first) trace of ac, and is denoted by t. The first characteristic equation of w is |R.—wſ -(–1)"ſoft-toº--- . . . . ]=o. Hence t, is equal to the sum of the roots, and (§ 62) is independent of the choice of the basal units of the algebra. * This follows at once from the theorem in § 68. Ioé CHARACTERISTIC, RANK EQUATIONS (CHAP. VII In the proof of the next theorem it will be seen that we must exclude fields F having a modulus p, i.e., an integer p such that p3 =o for every x in F. When p is a prime, one such field is composed of the classes of residues of integers modulo p, as explained in detail in § IIo. Any sub-field of the field of all complex numbers has no modulus. THEOREM. An element 3 of an associative algebra A over a non-modular field F is zero or properly nilpotent if and only if tºy =o for every y in A. First, let a be zero or properly nilpotent, so that acy is nilpotent. Then all the roots of the first characteristic equation of acy are zero by the corollary in § 64, whence their sum tºy is Zero. Conversely, let try=o for every y in A. Since (xy) =&y, , y = (y1)"Tºy, t, =0, where z = (xy)", for every positive integer r. 
In the theorem of § 64 take g(@)=ay and replace a by acy; hence the roots of the first characteristic equation of 2 = (xy)' are the rth powers of the roots of that of acy. The sum is of the former roots was seen to be zero. Hence the sum s, of the rth powers of the roots of the first characteristic equation f(0)=a^+y,60°-4-H . . . . --yn =o of acy is zero for every positive integer r. For any field F, we have Newton’s identities, Sj-H YES3–1+Y2S3–2+ tº e º 'º + yj-,Sr-HjY;=o (j=1, . . . . , n). § 66] SEMI-SIMPLE ALGEBRAS Io'7 Since each s, =0, we have jºy; =o. Hence Y; =o, since F has no modulus. Thus f(0) = a "=o. Since every root of this characteristic equation of acy is zero, the corollary in § 64 shows that ay is nilpotent for every y, whence & is zero or properly nilpotent. But if F has a modulus the prime n, Y, need not be zero, although Y;= o(j ălţi 5 'y= > 71;ll; , 3:y= > &mjuill; e 7 j i,j Relations (9) evidently imply (24) tax= air 3. lz-Ey- a-Fly e Hence if the right trace of usu; is Tij, (25) to-> win. i,j = I Io8 CHARACTERISTIC, RANK EQUATIONS (CHAP. v1.1 This is zero for every y in A if and only if 7% (a6) > rift-o (j=1,. . . . , n). ? = I Hence & =X&uizo is properly nilpotent in A if and only if relations (26) hold (with £r, . . . . , ś, not all zero). THEOREM. Let the n-rowed square matrix (T5), in which Tij is the trace of usu;, be of rank” r. An algebra A over a non-modular field has no properly nilpotent elements (and hence is semi-simple) if and only if r =n. Also, A has a maximal nilpotent invariant sub-algebra N of order v if and only if u = n –r-o. The value of r depends solely tupon the constants of multiplication of A. - The reader is now in a position to follow the proof in chapter viii of the principal theorem on algebras. For an important application to the arithmetic of algebras, we shall need the explicit expression for Tj, which is the trace of usu; = 2)situ; and hence is the sum of the diagonal elements of the first matrix of the element obtained from & = 28tu; by replacing § by Yºji. A diagonal element of the first matrix of w is given by (6) with j = k. Hence \ 4% (27) Tsj= X. ^'sji'Yikk . ‘i, k = I * A matrix is said to be of rank r if at least one r-rowed minor is not zero, while every (r-HI)-rowed minor is zero. Then r of the #: in (26) are expressible uniquely in terms of the remaining n—r, which are arbitrary. See Dickson's First Course in the Theory of Equations (1922), p. II6. - § 67] MINIMUM EQUATION OF MATRIX IO9 67. Minimum equation of a matrix. Any square matrix m with elements in a field F is a root of its char- acteristic equation (§ 59) and hence is a root of a unique equation d(a)=o of lowest degree whose coefficients belong to F, the leading coefficient being unity. This equation is called the minimum (or reduced) equation of m. It is understood that when a is replaced by m, the constant term of p(a) is multiplied by the unit matrix I. LEMMA. If A(m) =o, where X(w) is a polynomial with coefficients in F, then X(a) is exactly divisible by q.(6). For, let g(@) and r(a) denote the quotient and re- mainder from the division of X(w) by p(a), where r(a) is either Zero identically or is of degree less than that of Ö(a). Then X(a)=q(a)(p(a)--r(a). Hence r(m)=o, so that r(a) is zero identically. THEOREM I. The minimum equation of an n-rowed square matrix m is q(q)=0, where q(a) is the quotient of the characteristic determinant f(a) of m by the greatest common divisor g(@) of its (n-1)-rowed minors. 
Denote the adjoint matrix (§ 3) of m—wl by (m-oſ), . Each of its elements is divisible by g(@). Hence (m—wſ),= g(@)M, where M is a matrix whose elements are polynomials in a without a common factor other than a number of F. Hence (16) with & = m becomes g(@)M(m—wſ)=f(a)IEg(@)g(@).I. IIo CHARACTERISTIC, RANK EQUATIONS (CHAP. vii We may delete the common factor g(@) from this identity in matrices since it is equivalent to nº equations between elements of the n-rowed matrices. Thus (28) M(m–01)=q(a)I. As in § 59 this identity holds true after w is replaced by any matrix commutative with m, say m itself. Hence q(m)=o. By the lemma, q(o) is divisible by 5(6). If p is another indeterminate, we have - q(q)-(b(p)=h(o), p)(p-o), where b(o, p) is a polynomial in a and p with coefficients in F. We may replace p by m and, since p(m) =o, obtain Ö(a)I=p(0), m)(m—wſ). From this and (28), we deduce q(a)V(2, m)(m–wſ)=(p(a)M(m–01). We may delete the common factor m—aſ whose deter- minant is not zero identically in a. Since the elements of M have no common factor, q(a) must divide (b(ac). Our two results show that q(a) and d(a) differ only by a factor belonging to the field F. Hence the theorem is proved. THEOREM 2. Every root of the characteristic equation f(0)=0 of a matrix is a root of its minimum equation q(q)=0, and conversely. For, if we pass from (28) to determinants, we have |M|.. f(a)={d}(a)]”. The converse is true by Theorem I. § 69] RANK EQUATION III 68. Minimum equation of an element of an algebra. Let & be an element of an associative algebra A over F. If A has a modulus, any polynomial g(@) with coefficients in F which vanishes when a = R. vanishes for a = x by (17) and Theorem I of $ 58, and conversely. Hence the minimum equation of R, is the minimum equation of 3. By the preceding Theorem 2, every root of the former is a root of the characteristic equation of R, which is the first characteristic equation 5(a)=o of a by § 60, and conversely. The same holds for S. and 6'(a)=0. If A lacks a modulus, we employ Rī instead of R, and note (§ 60) that (17) still hold. 3. THEOREM. Every root of the minimum equation of an element 3 of any associative algebra is a root of either characteristic equation of 3 and conversely. 69. Rank equation. By $ II the quaternion q= g-Hºi-Hinj-H ºk, in which g, ğ, m, are independent real variables, is a root of wº—2 gay-H (gº-H3+m2 +!?)=o, and is evidently not a root of an equation of the first degree. This quadratic equation is called the rank equation of the general real quaternion q since its coeffi- cients are polynomials in or, #, m, { and the coefficient of Gº is unity, and since q is not the root of an equation of lower degree whose coefficients have these properties. Consider any associative algebra A over a field F. Let ur, . . . . , un be a set of basal units of A. Let {1, . . . . , ś, be variables ranging independently over F. By $ 60, the element a =X&ui of A is a root of wö(a)=o, II2 CHARACTERISTIC, RANK EQUATIONS (CHAP. VII where 6(a) is the first characteristic determinant of a and is a polynomial in a whose coefficients are poly- nomials in #1, . . . . , ś, with coefficients in F. Hence there exists a least positive integer r such that 3, is a root of an equation of degree r, (29) coaſ-H clay-º-H . . . . =o, with or without a constant term according as A has or lacks a modulus, where each Ci is a polynomial in £1, * . , ś, with coefficients in F, while Co is not zero identically. --- When #1, . . . . , śn are indeterminates, co, c., . . . . 
have a greatest common divisor g by Theorem V of § II4. Write ci-gqi. Then (29) becomes gFOG)=o, where (30) R(w)=q,6'--q,ay-º-H . . . . . Here go, q1, . . . . have no common divisor other than a number of F, and go is not zero identically. These properties remain true when we interpret #1, . . . . , §n as independent variables of F, provided F be an infinite field as we shall assume henceforth.* By means of w =X&u; and the multiplication table (1) of the units us, we may express R(w) in the form 2f;ui, where fi is a polynomial in §, , . . . . , ś, with co- efficients in F. Since gR(x)=o, each gf;=o. By III of § II2, the corresponding function gf of indeterminates §, , . . . . , §n is Zero identically, so that one factor is * For, if f, g, h are polynomials in £r, . . . . . £, with coefficients in F, and if f=gh when the £’s are indeterminates, evidently fagh when the É's are independent variables in F. What we need is the converse, and it is true by III of § II2. § 69] RANK EQUATION II3 zero by the theorem in § III. Since g is not zero identi- cally, each fºo and R(x)=o. LEMMA. If X(x)=0, where X(w) is a polynomial in a whose coefficients are polynomials in §r, . . . . , §n with coefficients in F, then X(w) is exactly divisible by R(w) when $, . . . . , §n are indeterminates. For, let g(@) denote the greatest common divisor of X(w) and R(w). By V of § 114, there exist poly- nomials s(a) and t(a) whose coefficients are poly- nomials in $, . . . . , ś, with coefficients in F and a polynomial p in #1, . . . . , §n with coefficients in F such that s(2)N(0)+t(2)|R(0)=pg(@). Hence pg(x)=0. By the paragraph preceding the lemma, g(x)=0. Hence the degree of g(@) in a is not less than the degree of R(w) in view of the definition of the latter. But the degree of the divisor g(@) is not greater than that of the dividend R(w). Hence the degrees are equal. Then by IV of § II4 with p = I, K = 1, R(w) is the product of g(@) by an element of F. Since A(6) is divisible by g(@), it is divisible by R(w). As noted above, w8(a) is a polynomial having the properties assumed for X(w) in the lemma, and hence is divisible by R(w). Since the coefficient of the highest power of a in w8(a) is +1, we conclude that that of R(0) is a divisor of +I. Hence go is a number of F and may be made equal to be unity by dividing the terms of R(w) by it. THEOREM. Let A be any associative algebra over an infinite field F. If £r, . . . . , śa are independent vari- ables of F, the element & =2< is a root of a uniquely II4 CHARACTERISTIC, RANK EQUATIONS (CHAP. VII determined rank equation R(0) =o in which the coefficient of the highest power a)' is unity, while the remaining coefficients are polynomials in #1, . . . . , ś, with coeffi- cients in F. Also, & is not a root of any equation of degree us. Either characteristic determinant is A= (3,-6)(3,-4) (§-6) e Evidently every element 3 of A is a root of a = 0. NOW A=(0–0°)(a)--I+£,--É.-Hä)+p (mod 2), where p=S0-81328; 2 S = I-H 281-H+2.É.82. Thus sw-šić,ése=o for every x in A. Another such linear equation satisfied by a is 0.3 =o where q = (1–3) (I-82) (I-83). 70. Let & be an element of A whose co-ordinates £r, . . . . , §n are independent variables in F. As in § 68, the rank equation R(w) =o of a is the minimum equation of matrices R. and S. (or of R: and Sº if A has no modulus). The discussion” in § 67 is seen to hold * An indirect proof of the lemma consists in seeing that it is a trans- lation of that in § 69. 
§ 71] SIMPLE MATRIC AI, GEBRA II5 when m is interpreted as one of the preceding four matrices, say R., since the leading coefficient of Ö(a)= R(w) is unity, while the remaining coefficients are now polynomials in §r, . . . . , śn with coefficients in F. THEOREM. The distinct factors irreducible in an infinite field F of the left member of either characteristic equation of a coincide with the distinct irreducible factors of the rank function R(w). 7I. Rank equation of a simple matric algebra. By § 59, any n-rowed square matrix & = (xj) with elements in F is a root of (31) R(0)=(-1)*|aj-650 |=o, öä - I, 6;=O(izºj). Let the sº be nº independent variables of an infinite field F. We shall prove that R(0)=0 is the rank equation. This will follow from the lemma in § 69 if we prove that R(w) is irreducible in F. It suffices to prove that its constant term ==|ag| is irreducible in F. In view of the footnote in § 69, this follows from the LEMMA. The determinant |x} of nº indeterminates *(i,j=1, . . . . , n) is a polynomial f(xi, º, . . . acan) which is irreducible in every field F. Suppose that f is a product of two polynomials g and h with coefficients in F. Since f is of degree I in each indeterminate, we may assume that g is of degree O and h of degree I in a.... No term of the expansion f of |x; contains the product of wr, by an element ºr of the first column. Hence g is of degree o in art, since otherwise acrºc., would occur in a term of gh =f. Thus h is of degree I in wrº. Since crºr, does not occur in a term of gh = f; g is of degree o in every wro. • ? II6 CHARACTERISTIC, RANK EQUATIONS (CHAP. VII THEOREM. The rank equation of the algebra of all n-rowed square matrices (xã) with elements in any infinite field is its characteristic equation (31). Hence by § 70 the characteristic determinant of a is the nth power of R(w) apart from sign. 72. Rank equation of a direct sum. If an associative algebra A with the modulus" e over an infinitef field F is a direct sum of algebras A1, . . . . , Al, and if R(a)=o is the rank equation of A, and R. (a)=o is that of Ai, then R(0)=R,(2) . . . . R(0). The co-ordinates § (j = 1, . . . . , ni) of the general element w; of A; are independent variables in F. The general element w = 23:; of A has as co-ordinates the independent variables #5 (j = 1, . . . . , ni ; i = I, . , t) in F. If also y=Xyi, then &y =2&yi, whence &*=X&# , oa R(x) =XR(xi). Hence each R(x)=0. By the lemma and the footnote in § 69, R(w) is divisible by the R;(0) and hence by their least common multiple L(a) when the £5 are indetermi- nates. Write L(a)=Riſø)0;(0). Then L(x)=0, whence L(x) =XL(x)=0, so that L(a) is divisible by R(w) by the same lemma. The two results show that R(0) is the least common multiple of the R. (a), - The theorem will therefore follow if we prove that no two of the R(w) have a common divisor of degree - o. Suppose that R, (2) and R.(a) have a common divisor D(@) of degree - o. Since R.(a) is of degree o in the * The theorem may failif there is no modulus since the rank equation of a zero algebra is always (Jºao. f The theorem fails for the algebra (u.) G (u.) GP (us), u}=u, over the field of order 2, since its rank equation is linear (end of § 69), while that of (u) is co-É.-o. § 73] RANK EQUATION II 7 £3, and R. (a) is of degree o in the £3, D(@) is of degree o in both sets and hence involves the single indeterminate a). But R. (a) = &º-Hc,0"T"-H . . . . , where c, . . . . are homogeneous polynomials in the #3 and hence vanish when each & =o. 
Hence D(@) is a divisor a " of ay". This is impossible since A, has a modulus and hence R.(6) has a constant term not zero identically by the corollary in § 69. 73. Rank equation unaltered by any transformation of units. For an associative algebra A with the con- stants of multiplication Yiji, let R(o; #, Yiji)=o be the rank equation which is satisfied by a =w, where &=X&u; is the general element of A. Under a trans- formation of units (§ 61), let x become 3' =>{{u}, and let R become p(a); #, Yºji). For a = x', both p and R(0; §, Yºjº) are zero; unless they are identical, their differ- ence is zero for a = x'. Passing back to the initial units, we obtain a function of degree ‘r which is zero for a = x, contrary to the definition of r. Hence the rank equation is independent of the choice of basal units.” * Another proof follows from the theorems of §§ 62 and 70 and the fact that each irreducible factor of an invariant is an invariant Com- pare Bôcher, Introduction to Higher Algebra (1907), p. 218. CHAPTER VIII THE PRINCIPAL THEOREM ON ALGEBRAS 74. Introduction. We shall prove that any associ- ative algebra over a non-modular field F is either semi- simple or the sum of its maximal nilpotent invariant sub-algebra and a semi-simple algebra, each over F. For the special case in which F is the field of all complex numbers, a more elementary proof is given in § 79. - We shall need to employ extensions of the given field F. In this connection, note that the theorem of § 66 implies the COROLLARY. Let A be an algebra over a non-modular field F. Let F, denote any field containing F as a sub- field. Denote by A, the algebra over F, which has the same basal units” (and hence the same constants of multiplica- tion) as algebra A over F. Then A, is semi-simple if and only if A is semi-simple. But if A has a maximal nil- potent invariant sub-algebra N, that of A, is the algebra over F, which has the same basal units as N. * 75. Direct product of simple matric algebras. Let A be a simple matric algebra over F with the mº basal units as such that (§ 51) (1) aga, -o (jær), aijajs-dis (i,j, r, s–I, . . . . , m). Let B be a simple matric algebra over F with nº basal units b,(r, s = 1, . . . . , n), Satisfying relations of * They may be assumed to be linearly independent with respect to FI by § I3. - - f II.8 $75, DIRECT PRODUCT OF MATRIC ALGEBRAS 119 type (1), such that each b, is commutative with every a; and such that the mºnº products aijb, are linearly independent with respect to F. Then those products are the basal units of the direct product A X B (§ 50). Take them as the elements of a matrix (en) which is exhibited compactly as the com- pound matrix - (i. (ai;)bia . . . . ...) (aij)bni (ai;)bha . . . . (dii)ban in which the entries themselves are matrices: diſbrº diabrs . . . . dimºrs (3) (ağ)bºs – * amibrs anzörs & Cº G → ammºrs From our two notations for the same element, we have P=aijörs Féi-i-m(r-1), j+m(s—1) Q=dubiu-es-Hºm(-), l-Hºm(u-I) • Evidently PQ =o unless k =j, t =s, and then PQ=diºn-elimº-), Hina-º. But k =j, i=s imply j+m(s–1) = k– –m(t–1) and con- versely, since j and k are positive integers sm. Hence the e's satisfy relations of type (1) and are therefore the basal units of a simple matric algebra. THEOREM. The direct product of two simple matric algebras of orders m” and nº is a simple matric algebra of order mºn”. I2O PRINCIPAL THEOREM ON ALGEBRAS [CHAP. VIII 76. Division algebras as direct sums of simple matric algebras. THEOREM. 
If D is a division algebra over a non- modular field F, there exist a finite number of roots of equa- tions with coefficients in F whose adjunction to F gives a field F, such that the algebra D, over F, which has the same basal units as D, is a direct sum of simple matric algebras over Fr. Select any element 3 of D not the product of the modulus e by a number of F. By $ 60, a is a root of either characteristic equation, and hence of a certain equation d(a)=o of minimum degree s- I having coefficients in F. Let F be the field obtained by adjoining to Fall the roots \,, . . . . , \, of 5(a)=0. Let D’be the algebra over F having the same basal units as D. Then (x-Me) . . . . (3,-\,e)=(p(x)=o in D'. Since & is not the product of e by a number X: of F (footnote in § 74), no one of the 2–Xie is zero, and yet their product is zero. Hence D' is not a division algebra by Theorem 4 of $43. - The division algebra D is simple (§ 52). Hence by § 74 D' is semi-simple and (§ 40) is either simple or a direct sum of simple algebras over F. Each such simple algebra is the direct product of a division algebra D. by a simple matric algebra, each over F' (§ 51). The order of each D, is less than that of D'; this is evident for the second case in which D' was a direct sum, and also for the first case in which D' was simple, provided the matric factor is of order × I; but the remaining case i = I, D' =D., is excluded since D" is not a division algebra. § 76] DIVISION ALGEBRA AS DIRECT SUM I 2 I If each D, is of order I, our theorem holds for F, = F'. In the contrary case, we employ an extension F" of F' such that the algebra over F", having the same n(n-1) basal units as D., is not a division algebra. To it we apply the argument just made for D'. Since the division algebras introduced at any stage are all of orders less than those of the preceding stage, the process terminates, so that we reach a final stage in which the division algebras are all of order I. Each division algebra of the prior stage is therefore a direct sum of simple matric algebras. Our theorem now follows from that in § 75. 77. Theorem.* If A is an algebra having a single idempotent element e over a non-modular field F, then A can be expressed in the form A = B+N, where B is a division algebra and N is zero or the maximal nilpotent invariant sub-algebra of A. The theorem is obvious when A is of order I, since then A = A +o and A is a division algebra. To prove the theorem by induction, assume it for all algebras of type A which are of orders less than the order of A. We first show that we may take N*=o. Let N*zo and write (4) A = B^+N, B' /\N=o, N=N1+N2, N, AN2=o. Since AN*=AN . Wis N. N and N*A*N*, Nº is an in- variant sub-algebra of A. The classes; (3) of A modulo N* are the elements of A – N*. In particular, the classes (n.), each uniquely * In § 79 there is a far simpler proof for the case of algebras A over the field of all complex numbers. f The notation (3) marks the distinction from classes [x] modulo N. I 2.2 PRINCIPAL THEOREM ON ALGEBRAS. [CHAP. VIII determined by an element n, of W., form the maximal nilpotent invariant sub-algebra (N.)=N-N* of A – Nº. Let (B') denote the set of classes modulo N* determined by the elements of B'. Then, by (4), A–N*=(B)+(N). Since N°zo, the order of A – N* is less than that of A and hence, by the hypothesis for the induction, we can choose a division sub-algebra (B") of A – Nº such that * A —N*=(B")+(N.). Write C= B^+N. Then, by (4), A =C+N*, C/NN*=o. 
Those elements c of C, for which classes (c) modulo N* belong to (B"), form a linear set B" of A. But we saw that, when either (B') or (B") is added to (W.), we get A – N*, whence (B")=(B') modulo (N.). Hence B" =B' modulo N, so that A = B^+N by (4). We had (B")*=(B") in A — Nº. Hence B^*= B'' modulo N* in A. Since Nº is invariant in A, (B^+N2)*: Bº'+N2. Hence A'EB"-HN* is an algebra. It is a proper sub- algebra of A, since A' * (i, k=1, . . . . , 6). t = I We may express (6) in the form (9) a;=&;+V; (i = I, • * * * * c), where v; is in N. Since W, is invariant in 4. - dids=2;(º-Fhis, where nig and nºt below are in N. Hence, by (8) and (9), C C - / ! — -A (I;(lh = S 'Yiktatº-nik, 71;} = ??ik — > T/##!'; . t = I , t = I But the product agai of two elements of A can be expressed in one and only one way as a linear combina- tion, with coefficients in F, of the basal units of A, which are composed of those of N and ar, . . . . , ac. Hence the Yist are numbers of F. But F, was derived from F by the adjunction of a finite number of roots of equations with coefficients in F. Hence F = F(#1, #2, . . . .), where I, §, #2, . . . . are linearly independent with respect to F. We may therefore write -vj = vio-Hviićr-Hviz$2+ • * * * * where the vij are in N. Write Ži-ai-Hvio, B=(z, %2, . . . . ; 2.), § 78] PRINCIPAL THEOREM I25 where z is in A and B is a linear set of elements of A over F. Hence A = B+N. Using also (9), we get w;=zi-Hºn; , n;=-vi-vio=Viré-Hviz$2+ . . . . . Substituting in (8), we get (z+ni) (25+m}) = >. Yiu (3,-Hº). t= I Since mink =o by Ní =o, the left member is the sum of zzº (which is in A and hence is free of $1, $2, . . . .) and the linear homogeneous function zºni Hnize of $1, $2, . . . . . Equating the parts free of #1, #2, . . . . , we have - C Žižk = > 'Yitz, , B*= B. t = I Hence A is the sum of the algebras B and N. It was noted above that A – N = B is a division algebra. 78. Principal theorem. Any associative algebra A over a non-modular field F, which is neither semi-simple nor nilpotent, can be expressed as the sum of its maximal nilpotent invariant sub-algebra N and a semi-simple sub- algebra K over F, which is not a zero algebra of order I. While K is not unique, any two determinations of it are equivalent. By $57, A has a principal idempotent element u and A =N1+uAu, N, is N, while if there is a maximal nilpotentinvariant sub-algebra of u Au, it is contained in W. Hence our theorem will follow for A if proved for u.Au, which has the modulus u. I26 PRINCIPAL THEOREM ON ALGEBRAS [CHAP. viii It remains to prove the theorem for algebras A having a modulus. By $38, A – N is semi-simple and has a modulus. First, let A –N be simple. By $ 55, A =MXB, where M is a simple matric algebra and B is an algebra having a modulus, but no further idempotent element. By $77, B = D+N, where D is a division algebra and N, is zero or the maximal nilpotent invariant sub- algebra of B. By $56, N= MXN. By $ 52, MXD is simple and is not a Zero algebra of order I. Hence A =M ×(D+N) is the sum of the simple algebra MXD and W. Second, let A – N be semi-simple, but not simple. By $57, A = N'-HS, where N’s N and S is the direct Sum of algebras Ar, . . . . , At, where each A; is of the type MXB just discussed and hence is the sum of a simple algebra K. and Ni, where N, is zero or the maximal nilpotent invariant sub-algebra of A; if it exists. More- over, N=N'-->Nº. Hence A = K+N, where K =XK. 
is a direct sum of simple algebras, no one a zero algebra of order I, and hence is semi-simple and not a zero algebra of order I (§ 40). - 79. Complex algebras. Any algebra over the field C of all complex numbers a-Hbi is called complex. A complex division algebra D is of order I and is generated by its modulus. For, if f(0)=o is the equation of lowest degree satisfied by an element & of D, f(a) is not a product of polynomials f(a) and f(a) each of degree = 1, since f(x)f,(x)=o implies that one of f;(x) and f,(x) is zero in the division algebra D. But if f(a) is of degree - I, it is a product of two or more linear § 79] COMPLEX ALGEBRAS I 27 factors in C. Hence f(a) is of degree I and 3 is the product of the modulus by a complex number. Every complex simple algebra, not a zero algebra of order I, is a simple matric algebra. For, by § 51, it is the direct product of a division algebra (here of order 1) by a simple matric algebra. A complex semi-simple algebra which is not simple is a direct sum of simple matric algebras (§ 40). The characteristic and rank equations of any semi- simple complex algebra are known by §§ 71, 72. We are now in a position to give an elementary proof of the principal theorem that every complex algebra with a modulus is either semi-simple or is the sum of its maxi- mal nilpotent invariant sub-algebra and a semi-simple sub-algebra. In the proof in § 78 of a more general theorem, use was made of the theorem in § 77 which may be proved far more simply for a complex algebra A. We may assume that the order of A is r_> I. Then A is not simple since a simple matric algebra of order r: I contains idempotent elements eit other than its modulus 2e;. In a semi-simple algebra which is not simple, the modulus of each component simple algebra is idempotent. Since A is not semi-simple, it has a maximal nilpotent invariant sub-algebra N. But A – N is a complex division algebra (middle of $ 77), which is therefore of Order I. Thus N is of Order r—I. Hence A is the sum of N and the division algebra generated by the modulus of A. For normalized basal units of any complex algebra, See chapter X. CHAPTER IX INTEGRAL ALGEBRAIC NUMBERS 80. Purpose of the chapter. We shall develop those properties of algebraic numbers which are essential in providing an adequate background for the theory of the arithmetic of any rational algebra to be presented in the next chapter. The latter theory will there be seen to be a direct generalization of the theory of algebraic numbers. In order to make our presentation elementary and Concrete, we shall develop the theory of quadratic numbers before taking up algebraic numbers in general. 8I. Quadratic numbers. Let d be an integer, other than +1, which is not divisible by the square of any integer > 1. As explained in § 1, the field R(Vd) is composed of all rational functions of Vd with rational coefficients. Such a function can evidently be given the form --- _e +f Vd Q g+h/d 2 where e, f, g, h are rational numbers, and g and h are not both zero. Multiplying both numerator and denomina- tor by g—hiſ d, in order to rationalize the denominator, we obtain q=a+b/d, where a and b are rational. Evi- dently q and a–bºd are the roots of (1) a 4–2d2+ (a”—db°)=o, whose coefficients are rational. For this reason, q is called a quadratic algebraic number. I 28 § 81] QUADRATIC NUMBERS I29 We shall assume that the coefficients of (I) are in- tegers, and in that case Call the root q a quadratic integer. Then 2a and 4(a”—db”) are integers. Thus 4db” is an integer. 
But d is an integer not divisible by a perfect square > 1. Hence 4b* has unity as its denomina- tor, so that it and 2b are integers. Thus a =#a, b =#8, where a and 6 are integers. Since a” – db” shall be an integer, a”–d6° must be a multiple of 4. If d is even, a” must be even and hence a multiple of 4. Thus also d6° must be a multiple of 4. But d is not divisible by the square 4. Hence 6° is even. Thus a and 8 are both even. Hence, if d is even, q is a quad- ratic integer if and only if a and b are both integers. If d is of the form 4k+3, then at-d6 and hence also a”-- 6° must have the remainder zero on division by 4. According as an integer is even or Odd, its square has the remainder o or I. Hence a and 6 are both even. If d is of the form 4k+I, then a”–d6°, and hence also a”– 6°, must have the remainder zero on division by 4, so that a and 3 are both even or both odd. Hence q=a+b/d is now a quadratic integer if and only if a and b are both integers or both halves of odd integers. These two cases may be combined by expressing q in terms of the quadratic integer 6 defined by (2) - 0=}(t+/d), d=4|k+1, instead of in terms of Vd itself. First, if a. and b are integers, then x = a – b and y = 2b are integers and q = x+y}. Second, if a =# (2n+1) and b =# (2s +1) are halves of odd integers, then & = r—s and y = 2s-HI are integers and q = x+yff. I3O INTEGRAL ALGEBRAIC NUMBERS (CHAP. IX THEOREM I. If d is an integer #4 I, not divisible by a square > 1, all quadratic integers of the field R(Vd) are given by 3-Hyff, where w and y are rational integers and 0= V d when d is of one of the forms 4k+2, 4k--3, while 0 is defined by (2) when d is of the form 4k+1. - The quadratic integers of R(Vd) are said to have the basis I, 6 since they are all linear combinations of I and 6 with integral coefficients ac, y. Note that every number of the field is expressible as a linear combination r , I-H st with rational coefficients r, s. THEOREM 2. The sum, difference, or product of any two quadratic integers of the field R(Vd) is a quadratic ſinteger. For, if a., y, z, w are all integers, the sum of q = x+y6 and t =z-Hwë is r-Hst, where r = x+z and s =y+w are inte- gers. Likewise, q—t is a quadratic integer. Finally, the product qt is the sum of az-H(xw-Hyz)0 and yu,0°, and, by the previous result, will be a quadratic integer if 6*, and hence also yºff", is one. The latter is evident if 0 = V/d, and is true also for case (2) since then 6–0+k, where k =}(d−1) is an integer. - 82. Algebraic numbers. We shall generalize the preceding concepts and theorems. When the coefficients of an algebraic equation are all rational numbers, the roots are called algebraic numbers. For an equation (3) 3:”—Harº”T*-H . . . . --an =o with integral coefficients, that of the highest power of & being unity, the roots are called integral algebraic numbers. § 82] ALGEBRAIC NUMBERS I31 Note that any integer a is the root of the equation 2–a =o of type (3) and hence is an integral algebraic number. THEOREM 3. If an integral algebraic number a is a rational number, it is an integer. For, if a =b/d, where b and d are integers without a common factor > 1, and if a is a root of (3), then, by multiplying its terms by d"T", we get |--º-º-adiº-- . . . . — and "T". Since the right member is an integer, we conclude that d=== I. Hence a = + b is an integer. We have the following generalization of Theorem 2: THEOREM 4. Any polynomial f(a, 3, . . . . , k) with integral coefficients in any integral algebraic numbers a, 8, . 
, k is itself an integral algebraic number. For, let a be a root of equation A (a)=o of degree a, 8 a root of B(8)=o of degree b, . . . . , and k a root of K(K)=o of degree k, where each equation has integral coefficients, and the leading coefficient is unity. Write n = ab . . . . k and denote by Q, . . . . , an the n numbers a"3". . . . . k” (a, -o, I, . . . . , a -1; bi =0, I, . . . . , b–I; . . . . ), arranged in any fixed Order. By means of A@ =o, we can express a”,a", . . . . as polynomials in a of degree * (i,j=1, . . . . , n), k = I t where each Y is an integer. The field R(0) is therefore an algebra of order n over the field R of all rational numbers with the Set of basal units ur, . . . . , u, and multiplication table (2). § 87] CASE OF ALGEBRAIC NUMBERS I43 By $ 60, & is a root of the first characteristic equation ô(a)=o of degree n. When the co-ordinates § of 3. in (1) are arbitrary rational numbers, 6(a) has rational coefficients and is irreducible in R. For, if reducible, it would continue to be reducible when we give to the #: the values of the co-ordinates of 6, whereas 6 was assumed to satisfy an equation of degree n irreducible in R and hence, by Theorem 7 of $84, 6 satisfies no equation of degree 37, with rational coefficients. This proves that the rank equation is (–1)*6(a)=o. The coefficients of 6(a) are polynomials in the £, and the Yiji with integral coefficients and hence are integers when the £i are all integers, i.e., when win (1) is an integral algebraic number. Hence the set S of all integral algebraic numbers of any algebraic field R(0) has property R. It has property U since u, = I. It has property C by Theorem 4 of § 82. Next, any set of numbers 3 of the field R(0) which has properties R, C, U is either S or a sub-set of it. For, by R, the coefficients of the rank equation of a are integers and the coefficient of the highest power of the unknown is unity (§ 69). Hence & is an integral algebraic number. Thus S is the unique maximal set. THEOREM. If an algebra is an algebraic field, its unique maximal set of integral elements is composed of all the integral algebraic numbers of the field. 88. Units, associated elements, and arithmetics. Two integral elements of an algebra A whose product is the modulus I are called units of A. Any product of units is a unit. For, ult, =vV, =ww, - I imply uUw w, v, u = I. I44. ARITHMETIC OF AN ALGEBRA [CHAP. X If w is an integral element and if u is a unit, then acu and u2 are called right and left associates of a., respectively. If also u' is a unit, & is said to be associated with uzu'. ASSociated elements play equivalent rôles in questions of divisibility. For instance, if also v and w are units whose product is I, a =yz implies uscu' =uyv wºu'. For example, if i=1/–1, the field R(t) is a rational algebra of order 2 whose integral elements are a = a + bi, where a and b are integers ($81). Then a is a unit if its product by a -bi is unity. There are exactly four units, viz., +I, +7. The four associates of 3 are +3. and +ix ==F(b–ai). If in an algebra A the integral elements whose determinant* is not zero may be associated in the fore- going sense with the various integral elements of a sub- algebra, we shall say that the latter elements form an arithmetic associated with the arithmetic of A. 89. Example. Consider the rational algebra A with two basal units I and e, where e”=o. The rank equation of a = a +be is, (x-a)*=o, whose coefficients are integers if and only if a is integral. 
The unique maximal set of elements having properties R, C, U is evidently composed of the ac =a+be in which a is integral and b is rational. Every such as is therefore an integral element of A. For any rational k, u = I-H ke is a unit since its product by another integral element I —ke is I. Let a zºo and take k = –b/a. Then acu =a. Hence if the determinant a” of a is not zero, 3 is associated with the integer a. Thus a can be decomposed into primes in only one way apart from unit factors. * Either A(x) or A'(x) may be understood since both are simultane- ously not zero or both zero by the footnote in § 58. $ 90 FAILURE OF EARLIER DEFINITIONS I45 Hence the arithmetic of algebra A is associated with the ordinary arithmetic of integers. This result illustrates the fundamental theorem (§ 104) that the arithmetic of A is associated with that of the sub-algebra whose elements are derived by Sup- pressing the components (here be) which belong to the maximal nilpotent invariant Sub-algebra of A. 90. Failure of earlier definitions of arithmetics. Du Pasquierº defined a set of integral elements of a rational algebra A to be one having properties C, U, M, and (in place of R) B. The set has a finite basis (i.e., it contains elements q,, . . . . , qi, such that every element of the Set is expres- sible in the form 2Ciqi, where each C; is an integer. We shall test this definition by the special algebra in § 89. Then any set having properties B, C, U is readily seen to have a basis I, q=r-Hse, where r and S are fixed rational numbers and Szío. Since g” is in the set by property C, we must have q’=a+bg, where a and b are integers. This equation is equivalent to r” =a+br, 2rs=bs. Hence 2r = b, r* = −a. If the rational number r were not integral, its Square would not be equal to the integer —a. Since r is integral, the basis I, q may be replaced by I, g–r. Hence every set has a basis of the form I, se, where S is rational and zºo. This set, designated by (I, se), is evidently contained in the larger set (1, #se), which in turn is contained in the still larger set (1, #se), etc. Hence there is no maximal * Vierteljahrsschrift Naturf. Gesell. Zürich, LIV (1909), II6–48; L'enseignement math., XVII (1915), 340–43; XVIII (1916), 201-60. I46 ARITHMETIC OF AN ALGEBRA |CHAP. X set. In other words, the algebra does not possess integral elements, Suppose we omit the requirement M and define the integral elements of our algebra to be those of any chosen one of the infinitude of non-maximal sets. It has been proved by the author” that factorization into indecom- posable integral elements is not unique and cannot be made unique by the introduction of ideals however defined. . The same insurmountable difficulties arise for sets having properties B, C, U", M, where f U' requires that the set shall contain all the basal units, one of which is the modulus (I and e in our example). This definition was employed by A. Hurwitz for the arithmetic of quater- nions (§ 91), Since now e shall occur in the set (I, se), s must be the reciprocal of an integer. Then also #s, #s, . . are reciprocals of integers. Hence (1, #se) is a set containing (I, se), and as before there is no max- imal set. Note that the aggregate of the elements in the infini- tude of sets (I, se) obtained by the definition given by either Du Pasquier or Hurwitz is the set of integral elements obtained in § 89 by the new definition. This suitable enlargement of each of their sets enabled us to overcome their serious difficulties. 
This is analogous to the gain by each of the successive enlargements of the primitive set of positive integers to the set of positive * Journal de Mathématiques, Series 9, Vol. II (1923). Also that similar insurmountable difficulties arise for many other algebras under the definition by Du Pasquier. t f Unlike properties R., C, U, B, property U' is not preserved under every transformation of the basal units. Hence U' is not a desirable assumption. § 91] ARITHMETIC OF QUATERNIONS I47 and negative integers, then to the field of all rational numbers, then to the field of all real numbers, and finally to the field of all complex numbers. - 91. Arithmetic of quaternions.” By $ II, q= o'--Éi-Hºmj-i-ſk and its conjugate q' = a –Ši-mj- (k are the roots of - (3) aft—2 gaſ-HN(q)=0, N(q)=qq'- 0°-H3+nº-Hº. Since the coefficients of the rank equation (3) are integers when o', £, m, are integers, the Set I of all quaternions having integral co-ordinates has the proper- ties R., C, U. We seek every set S of rational quaternions q which has properties R, C, U and which contains I and hence I, i, j, k. By R and (3), N(q) and the double 20 of the scalar part o of q are both integers. By C, the set Con- tains ig, jq, kq, whose Scalar parts are – £, — m, - . As before, their doubles are integers. Hence 4N is the sum of the squares of four integers. That sum is divisible by 4 since N is an integer. But the square of an even or odd integer has the respective remainder o or I when divided by 4, and a sum of four such remainders is a multiple of 4 only when they are all o or all I. Hence the co-ordinates of q are either all integers or all halves of odd integers. In either case the difference of any two co-ordinates is an integer. Thus every quaternion in S is of the form q= a-H (a +3)i-H (a +22);--(a-i-x)k 5 * A much more complicated theory, based on an earlier definition ($ 90), was given by A. Hurwitz, Göttinger Nachrichten (1896), pp. 311– 40; and amplified in his book, Vorlesungen über die Zahlentheorie der Qualernionen (Berlin, 1919). I48 ARITHMETIC OF AN ALGEBRA [CHAP. x where each æ; is an integer. Write &, for the integer 20. Then (4) q=&op-Hari-Haaj-Hask, p=# (I-Hi-Hj+k). Conversely, all Such quaternions q in which 3, . . . . , as are integers form a set S having properties R, C, U. This is true as to R by what precedes, and as to U since (4) becomes I for 3, -2, 3, −2. = x, = -1. To prove C, it suffices to prove that the squares and products by twos of p, i, j, k all belong to S. By (3), p”—p-HI =o, so that p" is in S. Next, ip=# (–1+i-j-HK), pi–3(–1+i-Hj—k) have all co-ordinates equal to halves of odd integers and hence are in S. The same is true of jp, pſ, kp, pk, as shown by permuting i, j, k cyclically, which leaves unaltered the multiplication table of i, j, k given in § II. Hence this set S is the unique maximal of all sets having properties R, C, U, and containing i, j, k. This set S will be shown to give such a remarkably simple arithmetic that we shall call its quaternions integral without inquiring whether there exist further maximal SetS. - THEOREM I. The integral quaternions are given by (4) for integral values of 3%, . . . . , 23. Expressed otherwise, they are the qualernions whose four co-ordinates are either all integers or all halves of odd integers. LEMMA I. Given any real quaternion h and any positive integer m, we can find an integral quaternion q such that N(h–mg) I has in common with p a right (and a left) divisor not a unit. 
For, if there be no such common divisor, a and p would be relatively prime, so that there would exist integral quaternions A and B satisfying Aa-H.Bp = I. Then N(A)N(a)=N(I–Bp)=(1–Bp) (I–B'p) =I – (B+B')p-HBB'pº =1--tp, where t is an integer. But N(a) is divisible by p. LEMMA 4. If p is a prime there exist integral solutions of (8) I+x^+y}=o (mod p). I52 ARITHMETIC OF AN ALGEBRA [CHAP. x For p = 2, we may take & = I, y=o. Let p}~ 2. If —I is a quadratic residue of p, so that –1 =3% (mod p), we may take y=o. Next, let — I be a quadratic non- residue of p, and let a denote the first quadratic residue of p in the series p – I, p – 2, p – 3, . . . . , the final term I being certainly a quadratic residue. Then b = a +1 is a quadratic non-residue. The product of any two quad- ratic non-residues is known to be a quadratic residue. Hence —b, as well as a, is a quadratic residue. In other words, there exist integers 3 and y for which asa', –a–I = -b=y” (mod p). These imply (8). An integral quaternion, not a unit, is called a prime quaternion if it admits only such representations as a product of two integral quaternions in which one of them is a unit. If T is a prime quaternion and if u and v are any units, then uTv is a prime quaternion, since if it were a product ab, then T =u'a - bu'. LEMMA 5. A prime p is not a prime quaternion. For, by Lemma 4, there exists an integral quaternion q=I+&i-Hyj whose norm is divisible by p. Hence by Lemma 3 there exists a common right divisor d, not a unit, of p = Pá and q =Qd. If P were a unit, so that P'P = I, then q = (QP)p. But this product of the integral quaternion OP' by p has all co-ordinates multiples of p, whereas the first co-ordinate of q is I. This contradiction shows that P is not a unit, so that p = Pd is a product of two integral quaternions neither of which is a unit. LEMMA 6. If the norm of an integral quaternion T is a prime, then T is a prime quaternion. For, if T =ab, N(a)N(b) = N (T) is a prime, so that either N(a) = 1 or N(b) = 1, whence either a or b is a unit. § 91] ARITHMETIC OF QUATERNIONS I53 THEOREM 3. Every prime quaternion T arises from the factorization p =TT' of a prime p. Conversely, every prime p is a product of two conjugate prime quaternions. For, if T is a prime quaternion, and p is a prime divid- ing the integer N(T) > 1, there exists by Lemma 3 an integral quaternion d, not a unit, such that T = ud, p = Pd. Here u is a unit by the definition of a prime quaternion T, so that u'u = I. Hence u"T=d, p=Pu'ar, pº-N(P)N(T), N(T);41. Either p = N(T) =TT', as desired, or p” = W(t), N(P) = I. Then P and v= Pu' are units, so that p = vT is a prime quaternion, Contrary to Lemma 5. To prove the second part of Theorem 3, note that, by the proof of Lemma 5, p = Pd, where neither P nor d is a unit. Thus N(P) =NG)=p. By Lemma 6, P is a prime quaternion. LEMMA 7. Given any integral quaternion a, we can find a unit” u such that au has integral co-ordinates. For, if a itself has integral co-ordinates, take u = 1. In the contrary case, a =#(ao-Hari-H . . . . ), where each aſ is an odd integer by Theorem I. Thus at = 4n+r, where r = I or – I. Then a = 2n+r, n =no-Hºnii-H . . . . . , r =#(ro-Hri-H . . . .). Since r is an integral quaternion whose norm is 4G)*= I, r is a unit. We take u =r'. Then au = 2nr'+1, whose co-ordinates are all integers. * The twenty-four units, obtained from N(u)=1, are == I, + i , ==j, =k, #(== I == i ==j==k). 
This enumeration will be used only to distinguish the arithmetic of quaternions from that of an algebra discussed later. I54 ARITHMETIC OF AN ALGEBRA [CHAP. x THEOREM 4. Every positive integer is a sum of four integral squares. This will follow if proved for primes since the product of any two sums of four integral squares is expressible as a sum of four integral squares in view of N(q)N(0)= N(qQ). If p is a prime, Theorem 3 shows that p = PP', where P and P' are conjugate prime quaternions. By Lemma 7, P=()u, where Q has integral co-ordinates and w is a unit. Then P’ =u'Q', ulu' = I, whence p =(00' is a Sum of four integral Squares. LEMMA 8. If q is an integral quaternion whose norm is even, then q = (1+d)h, where h is an integral quaternion. For, the Square of half an odd integer is of the form #(8m-HI) and the sum of four such squares is odd. Hence the four co-ordinates q, of q are all integers such that of E24;=24, (mod 2). Thus q, +q, and qs-Hg, have an even sum and are there- fore both even or both odd. In the respective cases, the co-ordinates of h=#(q.--q)+}(q.-q.):--#(qs-Hg.)}+}(qs—q.)k are all integers or all halves of odd integers, whence h is an integral quaternion. But (1–i)q = 2h, whence q = (1+i)h. THEOREM 5. Any integral quaternion can be given the form (I-H i)'mcv, where m is an integer, v is a unit, and c is a quaternion of odd norm whose co-ordinates are integers without a common factor > I. Let N(c) = pºl . , where p, q, l, . . . . are the prime factors, not necessarily distinct, of N(c) arranged in an arbitrarily § 91] . ARITHMETIC OF QUATERNIONS I55 chosen order. Then c=TKX . . . . , where T, K, N, . . . . are prime quaternions of norms p, q, l, . . . . , respectively. Here T may be chosen as any one of a certain set of right- hand associated quaternions, and then k may be chosen as any one of another such set, etc. There are no further decompositions of c into prime qualernions whose norms are p, q, l, . . . . in that order.” For, by Lemma 8, we may express the given quater- nion in the form (I+ i)'a, where a is an integral quater- nion whose norm is odd. By Lemma 7, we can choose a unit u such that au = b has integral Co-ordinates, whence a =bv, where v = u' is a unit. Let m be the greatest common divisor of the co-ordinates of b, and write b =mc. This proves the first statement in the theorem. By Lemma 3, c and p have a common left divisor not a unit. Hence by Theorem 2 they have a greatest common left divisor T which is not a unit, T being uniquely determined up to a unit right factor. If p were the product of T by a unit, p would divide c and hence divide each of its co-ordinates, contrary to the definition of c. Hence p =Td, where neither T nor d is a unit, whence p = N(T) =NCd), so that T is a prime quaternion by Lemma 6. Write c =Tc,. Then N(c) =NC)/p=ql . . . . . As before, c, and q have a greatest common left divisor K which is determined uniquely up to a unit right factor, while k is a prime quaternion whose norm is q. Write c. = KC, and proceed with C2 and l as before. Hence * But each prime factor of the integer m can usually be expressed in many ways as a product of two conjugate prime quaternions. I56 ARITHMETIC OF AN ALGEBRA [CHAP. x Let c=T, k,\, . . . . be any factorization of c into prime quaternions T, K, . . . . of norms p, q, . . . . , respectively. Since p =T.T. and since c is not divisible by the integer p, T, is a greatest common left divisor of C and p. Hence T. =Tu, where u is a unit. Now ca Tuk, . and c=TC, imply c. =uk,\,. . . . . 
Also, Q = N (K) = N (uk) =ukikºu'. Hence uk, is a greatest common left divisor of c, and q, and hence is equal to Ku, where u, is a unit. Thus K. =u'ku. The two expressions for c, imply c, -u,\,. . . . . This with l- N(u,\,) shows that u,\, is a greatest com- mon left divisor of c, and l, and hence is equal to Xu, where u, is a unit. Thus T-Tu, K =u'ku, , \, = u(\ua, . . . . , where u, u, u, . . . . are units and u', u%, . . . . are their conjugates as well as reciprocals. 92. Outline of the general theory. First, let A be a rational algebra which is not semi-simple and has a modulus. Then A =S+N, where N is the maximal nilpotent invariant sub-algebra of A, and S is a semi- simple sub-algebra of A. It will be proved in §§ 99-104 that the arithmetic of A is associated with that of S. This theorem was illustrated by an example in § 89. Second, let S be a semi-simple rational algebra and hence a direct sum of simple algebras S. By $ 93 the arithmetic of S is known completely when we know the arithmetic of each S. We shall prove in § 95 the impor- tant theorem that for a semi-simple algebra (and no other algebra) of order n each set of integral elements of § 93] ARITHMETIC OF A DIRECT SUM I57 order n has a basis, so that the new definition of integral elements essentially coincides with the definitions by Hurwitz and Du Pasquier for the case of semi-simple algebras and only in that case. - Third, let A be a rational simple algebra and hence a direct product of a simple matric algebra and a division algebra D. Then (§ 97) the integral elements of A are known when those of D are known, and conversely. The arithmetic of A is treated in § 98 for several algebras D by generalizing the classic theory of matrices whose elements are integers. In brief, the problem of arithmetics of all algebras reduces to the case of simple algebras and finally in large measure to the case of division algebras. 93. Arithmetic of a direct sum. Let the rational algebra A having a modulus a be a direct sum of two algebras B and C, called Component algebras of A. As proved in § 21, B and C have moduli 3 and Y whose Sum is a. THEOREM I. The first components of the elements of any (maximal) set of integral elements, with properties R, C, U of $87, of a direct sum BCPC constitute a (maximal) set of integral elements of the first component algebra B, and similarly for the second components. Conversely, given a (maximal) set [b] of integral elements b of a rational algebra B and a (maximal) set [c] of integral elements c of another rational algebra C, such that B and C have moduli 8 and Y and have* a direct sum, then if we add every b to every c we obtain sums forming a (maximal) set of integral elements of the direct sum BeC. * We can always replace B and C by equivalent algebras which have a direct sum (§ 13). I58 ARITHMETIC OF AN ALGEBRA |CHAP. x i) Let [a] be any set of integral elements a =b+c, a' =b'+c', . . . . of A = BEC, having properties R, C, U, where b, b', . . . . are in B, and c, c', . . . . are in C. By the closure property C, a-Ea' = (b+b')+(c-Ec') and aa' =bb'+cc' are in [a]. Hence the first components b, b', . . . . form a set [b] having the closure property C. Since the modulus a = 8+y of A is in [a] by property U, the set [b] contains the modulus 3 of B. By property R, for every element a of [a] the coeffi- cients of the rank function R(w) of A are integers. By § 72, R(0) is the product of the rank functions R.(a) and R,(6) of B and C. 
By $83 the R;(0) have integral coefficients, when R(w) has integral coefficients. Hence for every element of [b], the coefficients of R.(6) are integers. This proves the first half of the theorem when both words maximal are omitted. It is proved in (iii) when those words are retained. ii) Conversely, let [b] and [c] be any sets of integral elements of B and C, respectively. Then all sums a =b+c form a set [a] containing the modulus 8+y of A = Bép C, having the closure property C, as well as property R, since for any b and any c in those sets the rank functions of B and C have integral coefficients, whence their product (the rank function of A) has integral coefficients for any a of [a]. & Next, let [b] and [c] be maximal sets of B and C, respectively. Then, if the above [a] were not a maximal set of A, it would be contained in a larger set [aſ] of A. By (i), the first components b' of the a' =b'+c' form a set [bº] of elements of B having properties R, C, U, and likewise for the second components cº. Either [bº] is § 93] ARITH METIC OF A DIRECT SUM . I 59 larger than [b] and contains it, or else [c] is larger than [c], contrary to hypothesis. This proves the second half of the theorem. iii) Let [a] of case (i) be a maximal set of A. Then if [b] were contained in a larger set (5') of integral elements of B, case (ii) shows that [b'] and [c] would determine a set [a'] of elements a' =b'+c of A which have properties R, C, U, such that [aſ] contains the smaller set [a], whereas [a] is a maximal by hypothesis. This completes the proof of the first half of the theorem. THEOREM 2. If the element a =b+c of a set [a] of integral elements of A = B (BC is a unit, then b and c are tunits of B and C, respectively, and conversely. For, there exists an element a' =b'+c' of [a] such that aa' = a = 8+y, whence bb' = 3, ce' = y. An integral element not a unit is called a prime if it admits only such representations as a product of two integral elements of the same algebra in which one of them is a unit. THEOREM 3. If the integral elements of determinant zºo of the component algebras B and C possess factoriza- tion into primes in a single way apart from unit factors, the same is true of the integral elements of determinant zo of BGPC. For example, consider the direct sum (e) P(e) P(e): ei=ei, eje;=o (jzi). The rank equation of a =X&ei is II(a) –$3)=o. Hence the integral elements & are those having integral co- ordinates §. The latter are all zo in the product of a by a suitably chosen one of the units +er-Ee-Ees. We may therefore restrict attention to integral elements r I6o ARITH METIC OF AN ALGEBRA [CHAP. X 3 of determinant &#,ászºo and having positive co- ordinates. Denote a by (#1, #2, #3). Then a y = (#, mi, &m, #373). Since (a, 6, Yô)=(a, 6, 'Y) (I, I, ö), (a, 6, Y) = (a, (3, I) (I, I, Y), One of the Co-ordinates of a prime element is a prime number and the remaining two are unity, and con- versely every such element is prime. Hence if the ai, 8, Yi are all prime numbers, we have the following unique factorization into prime elements: (IIai, II63, IIYi)=II(a;, I, I) •II(I, £3, I) e II(I, I, Yi). 94. Sets of order n. Let S be a set of elements of a rational algebra A of Order n having a modulus, such that S has properties C and U and is of order n. Then S contains n linearly independent elements V, . . . . ,wn, which may therefore be taken as the basal units of A. By property U the modulus of A belongs to S. With- out loss of generality we may evidently assume that v, is the modulus. 
Let therefore % V,V, =0; , V;UI = V; , V;0;= ^i}}'}} (i, j = 2, • * * * 3 n). k = I The y’s are rational numbers. Bring the fractions Y to a common denominator 3 and write Yiji=vijº/ö, where 6 and the v are all integers. By property C, the set S contains u, =v, u = 6v (i-1). We have will;=Viêuj=6V;= us, usu; = 6V;0, -óV;=lli, % Ž tliu;=6° (* + > w) = 6vijºux-H > Vijk!/h , k = 2 k = 2 § 95] BASIS FOR SEMI-SIMPLE ALGEBRA I6I for i> I, j-> I. The constants of multiplication of tl, . . . . , un are all integers. THEOREM. If a set S of elements of a rational algebra A of order n having a modulus has properties C and U and is itself of order n, we can choose basal units ur, . . . . , wn of A belonging to S such that the constants of multiplica- tion are all integers and u, is the modulus. - 95. Existence of a basis for the integral elements of any rational semi-simple algebra A. Let A be of order n and S be any set of elements having properties R., C, U, and order n. By $ 94, we can choose basal units ur, . . . . , un of A which belong to S Such that u, is the modulus and such that the Y’s in % (9) uu-> * (i, j=1, . . . . , n) k = I are all integers. Let x =X&sus be any element of S. By property C, S contains wuj. By (9), % % &u; = > pºu, put X sºft. i = I S = I The first characteristic matrix of 3 is obtained by subtracting a from each diagonal element of matrix (pi;). Apart from sign, the coefficient of a "T" in the first characteristic equation of 3 is therefore % 7, > on- X &Yikh . k = I i, k = I I62 ARITH METIC OF AN ALGEBRA [CHAP. x Apart from sign, the coefficient c, of a "T" in the first characteristic equation of the element acu; is obtained from the preceding Sum by replacing # by pi; and hence is (Io) > *-a (j= I, • - - - 2 n). By $ 70 the distinct irreducible factors of the char- acteristic determinant 5(a) of any element X coincide with those of R(0), where R(0)=0 is the rank equation of X. When X is in S, property R shows that the coefficients of R(w) are integers, that of the highest power of a being I. Hence, by Gauss's lemma in § 83, the same is true of each factor and hence of the product ô(a) of powers of such factors. This proves that each c; in (IO) is an integer. - Let d denote the determinant of the coefficients of £, . . . . , śa in the n equations (IO). Thus dé, -d, where d, is the determinant obtained from d by replacing the elements of the Sth Column by the constant terms c., . . . . , ca. Inserting the value dºſd of £, in w =X&us, we get (II) &=d 2d, us. The elements of d are the sums (27) in § 66, where it was proved that dao if and only if A is semi-simple. Since the y's and the C; are all integers, d and the d, are all integers. Hence every element 3 of S is of the form (II), where the integer d is independent of the par- ticular ac, being a function of the Y’s alone. The proof in § 86 shows the existence of a basis (or, . . . . , an § 96. BASIS I63 of S such that the elements of S coincide with the linear homogeneous functions of the ap's with integral coefficients. THEOREM. Let A be any rational semi-simple algebra of order n having a modulus. Let S be any set of elements of A having properties R, C, U and of order” n. Then S has a basis w, . . . . , Øn, where Q, is the modulus. But if a rational algebra A is not semi-simple, no maximal set of its elements having properties R, C, U has a basis. 
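The rescaling used in § 94, retaining u₁ = v₁ and replacing vᵢ by uᵢ = δvᵢ for i > 1, where δ is a common denominator of the γ's, can be followed numerically. In the sketch below (Python; the sample constants of multiplication, for an algebra of order 2, are invented purely for illustration and are not from the text) the new constants of multiplication are computed and seen to be integers.

    from fractions import Fraction

    # Illustrative (made-up) algebra of order 2: v1 = modulus, and
    # v2*v2 = (3/2) v1 + (1/2) v2, so the gammas have the common denominator delta = 2.
    delta = 2
    gamma = {(2, 2): {1: Fraction(3, 2), 2: Fraction(1, 2)}}

    # New units: u1 = v1, u_i = delta * v_i (i > 1).  For i, j > 1,
    #   u_i u_j = delta^2 * gamma_{ij1} u1 + sum over k > 1 of delta * gamma_{ijk} u_k.
    new_gamma = {}
    for (i, j), row in gamma.items():
        new_row = {}
        for k, g in row.items():
            new_row[k] = delta * delta * g if k == 1 else delta * g
        new_gamma[(i, j)] = new_row

    print(new_gamma)     # both new constants are integral: u2*u2 = 6 u1 + u2
    assert all(c.denominator == 1 for row in new_gamma.values() for c in row.values())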
For, some of the basal units of A may be taken to be properly nilpotent and we shall find in § 104 that the co-ordinates of those units are arbitrary rational numbers in the general element of a maximal set, so that there is evidently no basis (see the example in § 89). 96. A converse of the theorem above is the case m = n of the THEOREM. If for any rational algebra A of order n a set S of elements has the closure property C and the property B, of possessing a basis composed of m inde- pendent elements, then S has property R. First, let m = n. Then we may take the elements tl, . . . . , u, of the basis of S as new basal units of the algebra. By property C, usu; belongs to S. By property B, usu; is equal to a linear function (9) of us, . . . . , un with integral coefficients Yijk. Also the co-ordinates of any element & =2&ui of S are integers by property B, * The theorem may fail for sets of order o, whose elements belong to a maximal set S of elements of a division I74 ARITHMETIC OF AN ALGEBRA [CHAP. X algebra D for which properties R, C, U, P hold, is equiva- lent to a diagonal matrix (d., . . . . , d, o, . . . . , o], where each di is both a right and a left divisor of di- I, di-2, . . . . . Here di may be replaced by udºv, where u and v are any units of S. The final remark follows from (iv) and (v). We shall call (u, . . . . , u,) a unit if u, , . . . . , u, are any units of S. Employing only matrices whose elements are in S, we shall call a matrix d a prime matrix if it is not a unit and if it admits only such representations as a product of two matrices in which one of them is a unit. By definition any matrix equivalent to d is of the form pāq where the matrices p and q are units of the algebra. In other words any matrix d is associated (§ 88) with a diagonal matrix. First, let S be the set of integers so that the elements of our matrices are integers. Then any matrix d of rank n will be expressible as a product of prime matrices in one and only one way apart from unit factors if the like property is proved for diagonal matrices. The latter is proved essentially” as at the end of § 93. Hence unique factorization into prime matrices holds. Second, let S be the set of integral quaternions. The uniqueness of factorization of diagonal matrices and hence of any matrices whose elements are integral quaternions is subject to the same limitations as in Theorem 5 of § 91. * We now need consider (a, 6, yö) only when a divides 6 and when a and 3 both divide yā. For example, if y = 3, we employ (a, 8, 86) = (a, 3, 3) (I, I, 6). While we there employed (a, 8, 1), we would now use the equivalent matrix (1, a, 8). § Iool NORMALIZED UNITS I75 99. The fundamental theorem on arithmetics of algebras. The proof (§ 104) for any rational algebra depends upon that for the complex algebra with the same basal units. Hence we shall first deduce from the general theory of algebras a set of normalized basal units of any complex algebra and derive its characteristic determinants by a method far simpler than that employed by Cartan.* Moreover, our notations are more explicit and hence more satisfactory. It is only incidental to the goal of rational algebras that we find the integral elements of a normalized Complex algebra. That result alone would not dispose of the question for all rational algebras since not all types of the latter are rational sub-algebras of complex algebras in Canonical forms obtained by applying trans- formations of units with complex coefficients. Ioo. 
Normalized basal units of a nilpotent algebra. LEMMA. Any associative algebra A of index a is a Sum of a linear sets B, . . . . , Ba, no two with an element zºo in common, such that (14) Bº Bºis By-La-HBA-La-Li-H . . . . H-B. (p+q ‘a), (15) B, Bºs Ba (p+q > a.). For, we may select in turn linear sets B1, B2, . . . . such that A = B,-HA*, A*= Ba-HA3, . . . . , A*-*= B.-,+ A*, A" = B., where B, AA*=o in A* = B; +A*. Thus B, s A*. For i p; t > 0 , such that n, n, n, have the respective characters (i, j), (j, i), (i, l). All further products of two units are zero. To find the first characteristic determinant 5(6) of the general element z = x+y of A, where &=&er-H . . . . --āneh, 'y= Win;-H . . . . --vºng, we proceed as in the footnote to § 60. If n, is of char- acter (j, -), zej = {je; +lin, func. of n, . . . . , he ; Ż%g = §n,-Hlin, func. of no-1, no-La, . . . . . Transposing the left members after replacing 3 by Q, we obtain linear equations in the units such that the elements below the main diagonal of the determinant of § Io2] COMPLEX ALGEBRAS g I79° the coefficients are all zero, while each diagonal element is a 3–0. Hence 6(a) is a product of powers of 3–9 (j = 1, . . . . , h) with exponents = I. By $ 70, the same is true of the rank function R(w), in which the coefficient of the highest power of a) is unity. We are now in a position to investigate the sets of elements of A with rational co-ordinates which have properties R, C, U of $87. To secure the closure property C, we assume” that the Y'sin (18) are rational. By prop- erty R, each coefficient of R(0)=o is an integer. Since its roots & are all rational, they are integers. The maxi- mal set is composed of all elements z in which the £; are integers, while the vi are merely rational. All such z’s therefore give the integral elements of A. We shall prove that u = I-H2a,n, is a unit (§ 88) for all rational values of the a,. First, w(I —ain.) = I — a ni-Hla = I+arana+ls=ll. y where li denotes a linear function of ni, ni---, . . . . with rational coefficients. Similarly, waſ I-arºn.)= I-a;n;--l. = I-Harans-Hl, . Proceeding in this manner, we finally reach the product I. Hence tlv= I, v= (I-a;n,)(I-arºn.)(I-aigns) . . . . = I+2bin; , where the bi are rational. Hence u and v are units. If n, is of character (i, j), and 3, . . . . , ś, are all zºo, acu = 3c-H > a.s.n.- &-Hy=z p * This assumption is not necessary for the application we shall make in § IO4. I8o ARITHMETIC OF AN ALGEBRA ſcHAP. X provided a, = v,áT*. Multiply by v. Hence zv=ac. This proves that, if A(2) #o, so that each ##o, 2 is associated with its abridgment & Recalling the definition of associated arithmetics ($88), we have - THEOREM 2. If the Y’s are rational for the algebra A =S+N in Theorem I, the arithmetic of A is associated with the arithmetic of the sub-algebra S having the basal wnits er, . . . . , eh. Io3. General complex algebra. Any complex algebra A with a modulus e is the sum of its maximal nilpotent invariant sub-algebra N and a semi-simple algebra S which is a direct sum of t simple matric algebras St. Then Si has the basal units eis (a, 6–1, . . . . , pi), with (19) e.gé, -e, , e.geº-o(84%), e.gé,-o(#j), - R-A º (20) € F > e., N=eMe= > 6.4/Vés . ł, a - i, j, a, 8 If v is an element of N such that n=e.Végéo, nés= n, ne, -o (unless k=j, ºy= (3), and n is said to have the character i j (22) (. ) Let i and j be fixed integers such that erveſ, is not zero for every v in N and let v, va, . . . . be elements of N such that § 103! 
COMPLEX ALGEBRAS I8I (23) | ſ =éºppé, (p=I, 2, . . . J I I p form a complete set of linearly independent elements of N of character (24) (. !) whence every element of that character is a linear func- tion of the elements (23). By (23), _ !? j| . . . . . l? j — 23 - 2.7 P,- 6.ar | ſº 4-mºmº e.kºffs == |. ! p 2 ke =6.1966:6 5 whence P, is of character (22). Since N is invariant in A., k, belongs to N. We shall prove that the P, with ł, j, a, 6 fixed, form a complete set of linearly independent elements of N of character (22). First, if they were dependent, 26, P, =O for complex numbers c, not all zero, we multiply by ea on the left and by eff, on the right and get >| |-o, p I I p - whence each c, -o, contrary to hypothesis. Hence the number of elements in a complete set of character (22) is not less than the number in a complete set of char- acter (24). To prove the reverse, note that if a set of P, are linearly independent, the corresponding elements (23) will be linearly independent, since we saw how to deduce P, from (23) by multiplying by e., on the left and by eſs on the right. I82 ARITHMETIC OF AN ALGEBRA [CHAP. x In view of (20), the aggregate of the elements in the Complete sets just described for the various values of ł, j, a, 3 gives a set of basal units of N, each having a definite character. By (19), the product of e.v.e., by ef, vºeſ, is zero if jżk, while if j = k it is ei, v,eſ, vſ. . e., which is zero or of character - Hence O (jzík), i j] . [k l ... I ºf (25) | !. [. |- >4. !. (j=k). From this we shall deduce - # j k l O (jº or gº), * Go || || || || - >| || G-# 3-9. For, the left member denotes the product e. | - !". e e. |. ... which is zero if either jæk or 3% N. In the remaining case, the product of the juxtaposed e's is ef, which pro- duces no effect on (23) when used as a right-hand multi- plier. To evaluate our expression, it therefore remains to multiply (25) on the left by e., and on the right by e., the result is the sum in (26). The complete multiplication table of A is given by (19), (26), and § 103] COMPLEX ALGEBRAS I83 O (izºk or 624 A), (27) *(n+. nº.42, . . . .). I84 ARITHMETIC OF AN ALGEBRA [CHAP. X For the next step, let k / | k l F. < +: |. l ºp (b. *(n.º.º. *-i-Ha, . . . ). C. Replacing z by a) and transposing the left members of (29), (30), (31), . . . . , we see that the determinant 6(a) of the coefficients of the e's and n's is a product of powers (with exponents 2: 1) of the determinants #1– Cº) §. tº ſº tº tº Šip, D;(0)= ge tº §: ša gº & º e $º- Cº) Thus 8(a) is independent of the co-ordinates m of y. The same is therefore true of the rank function R(w) which is a divisor of 6(a). We are now in a position to investigate the sets of elements of A with rational co-ordinates which have properties R., C, U of $87. To secure the closure prop- erty C, we assume that the Y's in (25) are rational. The maximal set of integral elements of A is composed of the z = x+y in which co-ordinates of y are arbitrary rational numbers, while the ac's form a maximal set of integral elements of the sub-algebra S. If the a, are rational, I-H2a,n, is a unit (§ 102). If the determinant A(z)=6(o) of z is not zero, we can find a unit § IO4] FUNDAMENTAL THEOREM 185 k i **, *, *|| || k,j, N, 8, p such that wu = 3c-Hy=z. In fact, *-*. !, summed for A, i, j, a, 6, p. This sum will be identical with y if * > *-*. N for all i, j, a, 6, p. The determinant of the coefficients of the a's having i, j, 6, p fixed and X = 1, . . . . 
, pi, is D;(o)= 5.x (a, X = I, • • • • y pi), which is zero for no value of i since 6(o) was shown to be a product of powers (with exponents = 1) of the D.(o). There exists a unit w such that up = I. Hence zv = 3c, so that z is associated with 3. THEOREM. Any complex algebra A =S+N with a modulus has a set of basal units each with a definite char- acter and having the multiplication table (19), (26), (27), and (28). If the Y’s are rational, the arithmetic of A is associated with the arithmetic of its semi-simple sub- algebra S. - IoA. Arithmetic of any rational algebra. Let A be any algebra with a modulus over the field of all rational numbers, such that A is not semi-simple. Let N denote its maximal nilpotent invariant sub-algebra. By $78, A =S+N, where S is a semi-simple sub-algebra. I86 ARITHMETIC OF AN ALGEBRA [CHAP. X Let A', S', N' denote the algebras over the field of all complex numbers which have the same basal units as A, S, N, respectively. Then S' is semi-simple and N' is the maximal nilpotent invariant sub-algebra of A' = S’--N’ (§ 74). Introduce the basal units of A' which were employed in §§ Io2–3. As there proved, the first characteristic determinant and rank function of A' does not involve the co-ordinates of the basal units belonging to N’. Hence the rank equation R(w)=o of A does not involve the co-ordinates of the basal units {..., belonging to N, which are therefore arbitrary rational numbers in any integral element of A. Denote the basal units of S by sº. Then every element of A is of the form z= x+y, 2– 22 s; , 'y=2|Yoğa, where Y; and Y, are rational. Let - A(3)2.Éo, u-I-H2a,š, . . Then 3:14 = 3c-H > Xiaosiče. g ł, p Since N, of order g, is invariant in A, g São = > Tiekšk, k = I where the y’s are rational. Hence wu =w-Hy=2 if > Maxia,-Y. (k=1, . . . . , g). ł, p These g linear equations in g unknowns a, with rational coefficients are consistent and have unique § IoS) GENERALIZED QUATERNIONS 187 solutions a, which are therefore rational. In fact, after introducing the basal units of A' employed in §§ Io2–3, we proved that there exists one and only one set of co-ordinates of u such that wu =z, so that the same is true when we return to the present basal units. We can determine rational numbers 6; such that (§ 102, end) MV = I, v= I-H 26; ;. Hence u and v are units, and wu-z implies 20–3, whence z is associated with its abridgment & if A(z) zºo. FUNDAMENTAL THEOREM. The arithmetic of A =S+N is associated with the arithmetic of its semi-simple sub- algebra S. In other words, we may suppress the properly nilpotent elements of an algebra when studying its arithmetic. Io5. Generalized quaternions. Consider the algebra D whose elements are X=x-HyE, where 2 and y range Over all complex numbers with rational co-ordinates, such that (32) E*= — 8, Ex= x'E, where 2' = a –Ši is the conjugate of" & = a +&i. If – 6 is not a sum of two rational squares, D is a division alge- bra (§ 47, where w, y, Y are now replaced by i, E, -8, and we have taken 6 = –1). We restrict 8 to integral values. - * Writing y = n+ki, we see that X is the general element of the algebra (18) of § Io with a = 1, u, -i, ua-E, us=iE, so that D is a generalization of the algebra of quaternions (the case 8 = 1). As proved there, D is associative. The arithmetic of algebra (18) for any a and 8 is being studied by other methods by Latimer in his Chicago thesis, I88 ARITHMETIC OF AN ALGEBRA [CHAP. 
X The product of X by Z = z--whº is (33) XZ= az– 6 yu'+(ww-Hyg').E, which is an element of D, so that D is an associative algebra. We shall call X = x' –yf the conjugate of X, and (34) N(X)=XX=XX=xx'+Byy' the norm of X. The conjugate of XZ in (33) is seen to be equal to the product ZX of the conjugates of the factors taken in reverse order. Hence (35) W(XZ)=XZZX=XXZZ= W(X). N(Z), since ZZ is a rational number and hence is commutative with X. Note that X and X are the roots of (36) a 4–200+N(X)=o (x= 0–H3i). Consider the set I of all elements X = x+y|E in which a; and y are complex integers (i.e., complex numbers with integral co-ordinates). Then the coefficients of the rank equation (36) are integers. In view also of (33), we see that the set I has the closure property C. We shall now determine every set S of elements X of D which has properties R and C and contains I. For the moment give X, ac, y the foregoing notations and call o the rational part of X. Since i, E, and Ei belong to I and hence to S, the closure property C shows that S contains X, Xi, XE= — 39--&E, and XEi, whose rational parts are evidently or, – £, – 8m, 8%, respectively. The negatives of their doubles are therefore coefficients of the rank equations of X, Xi, etc., and hence are integers by property R. In other words, 23, and 26 y are complex integers, say u and w. Then § Ios] GENERALIZED QUATERNIONS I89 x-(+*E) 5 NGO-(nºw) de By (36) and property R, N(X) must be an integer, so that ww' must be divisible by 3. If c is a complex integer zºo, the introduction of cT*E as a new unitin place of E has the effect of dividing 3 by ce'. Hence we may assume that 8 is not divisible by a sum of two integral Squares. It is known that every prime of the form 4n+1 and every product of such primes is a sum of two integral squares. Also, 2 = I*-HI*. Hence we may assume that + 3 is either unity or a product of distinct primes of the form 4n+3. LEMMA. If such a 3 divides y”--ó”, where Y and 6 are integers, then 6 divides both Y and 6. For, if p = 4n+3 is a prime factor of 6 and hence of y”-Hô”, either p divides y and hence also 6, or we can find (§ 1 Io, end) an integer e such that Ye-I (mod p). Then o-E (Yº-Hö”)é*=1--(6e)” (mod p), whereas — I is known to be not Congruent to a square modulo p = 4n+3. Hence p divides y and 6. Thus 8 = p 3, divides pºs, where s= (Y/p)2+(6/p)*. Since 3 has no Square factor, 3, divides s. As before, any prime factor q of 3, divides both Y/p and 6/p. Proceeding similarly, we conclude that 3 = pg. . . . . divides both Y and 6. We proved above that 6 must divide ww' = y^+6°, if we write w = y +67. Hence 8 divides y and 6 and hence 190 ARITHMETIC OF AN ALGEBRA |CHAP. x also w. Write w = 8v. Thus every element of S is of the form X=#(u-HvK), where u and v are complex integers. Then (37) N(X)=}(uu'+8w') must be an integer. First, let 6= I (mod 4). By (37), ulu'--w', which is a sum of four integral squares, must be divisible by 4. They must all be even or all odd since the square of an even or odd integer has the remainder o or I, respectively, when divided by 4. The maximal set S is therefore composed of all elements #(u-HvA.) in which the four co-ordinates of the complex integers u and v are either all even or all odd integers. If in the latter case we subtract (38) G=}(I-Hi-HE+i E), we obtain a linear combination of I, i, E, i.E with integral coefficients. Since i E = 2G – I –? — E, the set S has the basis I, i, E, G. 
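Relations (33), (34), and (35) lend themselves to a direct numerical check. In the sketch below (Python with exact rational co-ordinates) the integer of (32) is written beta and given the illustrative value 5, which is congruent to 1 (mod 4); the value of beta, the sample elements, and every function name are our assumptions, not the author's. The sketch multiplies two elements of D by rule (33), verifies N(XZ) = N(X)·N(Z), and confirms that the element G of (38) has an integral norm.

    from fractions import Fraction as F

    BETA = 5                                   # our choice; any integer with BETA = 1 (mod 4)

    def conj(x):                               # complex conjugate: (a, b) -> (a, -b)
        return (x[0], -x[1])

    def cadd(x, z):
        return (x[0] + z[0], x[1] + z[1])

    def cmul(x, z):                            # (a+bi)(c+di) = (ac-bd) + (ad+bc)i
        return (x[0]*z[0] - x[1]*z[1], x[0]*z[1] + x[1]*z[0])

    def qmul(X, Z):
        # X = x + yE, Z = z + wE;  XZ = (xz - BETA*y*w') + (xw + y*z')E, rule (33)
        x, y = X
        z, w = Z
        first = cadd(cmul(x, z), cmul((-BETA, 0), cmul(y, conj(w))))
        second = cadd(cmul(x, w), cmul(y, conj(z)))
        return (first, second)

    def norm(X):                               # N(X) = xx' + BETA*yy', relation (34)
        x, y = X
        return x[0]*x[0] + x[1]*x[1] + BETA*(y[0]*y[0] + y[1]*y[1])

    X = ((F(3), F(1)), (F(2), F(-1)))          # x = 3 + i,  y = 2 - i
    Z = ((F(1), F(4)), (F(0), F(2)))           # z = 1 + 4i, w = 2i
    assert norm(qmul(X, Z)) == norm(X) * norm(Z)       # relation (35)

    G = ((F(1, 2), F(1, 2)), (F(1, 2), F(1, 2)))       # G = (1 + i + E + iE)/2, see (38)
    print(norm(G))       # (1 + BETA)/2, an integer precisely because BETA = 1 (mod 4)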
Since E= (1–7)G – I, S is com- posed of the elements 3+yG, where 3, and y are complex integers. This set S is closed under multiplication since Gi- — I-Hi-iG, G*=G-#(I+6). This completes the proof of the first part of the theorem below. Second, let 3=3 (mod 4). By (37), the integers uu' and w' must be congruent modulo 4. Write u = k+\i, v= p +vi. Then Kº-HA*=p^+vº (mod 4). Hence the values of K, N, p, v are Congruent modulo 2 to those in one of the six sets (39) (oooo), (or Io), (olor), (IoIO), (IOOI), (IIII). § Ios GENERALIZED QUATERNIONS I9I For the first of these sets, X=}(u-i-VE) is of the form a;+y|E, where 3, and y are complex integers, and hence belongs to the set I of all such elements. If from the half of any complex integer we subtract a suitably chosen complex integer w, we obtain #u, where u = k–HXi, k=o or I, A =o or I. Hence any element of S is the sum of a suitably chosen element &-HyF of I and an element H=}(u-HwB) for which (k, N, u, v) is identical with one of the sets (39) and not merely congruent to it. Hence S is derived from I by annexing one or more of the ele- ments H2, . . . . , Ha defined by the Second, . . . . , sixth set (39), respectively. Let S, be the set obtained by annexing either of (40) H2=#(i+E), Hs-#(1+iE) to I. It contains both of them since H2E= Hs—#(I-H 6), Hs E= H,-#(I+6), while I+6 is an even integer. Let S, be the set obtained by annexing either of (41) H3-#i(I+E), HA-3(1-HE) to I. It contains both of them since iF - H3-H, -}(I+6), iF : H = H,-4i.(I+6). If we annex all of the elements (40) and (41), we obtain a set containing Ha-HH,-E=#(1+i), whose norm is #, so that the set does not have properties R and C. If to I we annex Ha =G, given by (38), we obtain a set containing G = H2+Hg = H3+H., so that the set is a sub-set of both S, and S2. I92 , ARITHMETIC OF AN ALGEBRA [CHAP. X Hence the only maximal sets containing I are S, and S2. In view of their origin they have properties R and U. It remains to verify that they have the closure property C. Note that S, has the basis I, i, H., Hs since from the first two and the doubles of the last two we deduce E and iB and hence the basis of I. Since Hs = i.H,--I, the elements of S, are all of the form ac-i-yH2, where 3. and y are complex integers. Thus S, is closed under multiplication since Hai = — I – i.H., H}= —#(I-H 6). Similarly, S2 has the basis I, i, H3 = i.H., Ha, and is closed under multiplication since Hi-i-iH, Hi-H, -}(I-I-6). THEOREM. Let D be the algebra composed of the ele- ſments ac-HyF, where 3 and y range over all complex num- bers with rational co-ordinates, while E = – 8, Ex = x'E, and 8 is an integer. Without loss of generality we may take 6 to be + I or a product of distinct primes of the form 4n+3 or the negative of such a product. Then every maxi- 'mal set of elements having properties R and C, and con- taining the basal units I, i, E, i.E, is formed of all the elements 3+y|B, where ac and y range over all complex integers, while B is given by (38) if 3– I (mod 4), but B is either H, or H, in (40) or (41) if 8–3 (mod 4). Hence in the latter case, D has two such maximal sets. Except for 6 = – I, D is a division algebra. It remains only to prove the final remark in the theorem. As noted above, D is a division algebra if – 6 is not a sum of two rational Squares. Suppose that – 3– ()/e)*-H (6/e)*, § IOS GENERALIZED QUATERNIONS I93 where Y, 6, e are integers and e has no factor > I in common with both Y and 6. Then 6 divides – 6e− = y^+3° and hence divides both Y and 6 by the lemma. 
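The computations behind the case of (39)–(41) can likewise be checked numerically. We read the elements of (40) and (41) as H₂ = ½(i+E), H₅ = ½(1+iE), H₃ = ½i(1+E), H₄ = ½(1+E); the subscripts are not fully legible in this printing, so this identification, the value 3 for the integer of (32), and all names below are our assumptions. The sketch (Python) verifies that each of these four elements has an integral norm, while ½(1+i), which arises for instance as H₂ + H₄ − E, has norm ½, so that a set annexing both pairs cannot have property R.

    from fractions import Fraction as F

    BETA = 3                                            # here the integer of (32) is 3 (mod 4)

    def norm(X):
        # N(x + yE) = xx' + BETA*yy', relation (34); x and y are pairs (real, imag)
        (a, b), (c, d) = X
        return a*a + b*b + BETA*(c*c + d*d)

    half = F(1, 2)
    H2 = ((0, half), (half, 0))      # (i + E)/2
    H5 = ((half, 0), (0, half))      # (1 + iE)/2
    H3 = ((0, half), (0, half))      # i(1 + E)/2 = (i + iE)/2
    H4 = ((half, 0), (half, 0))      # (1 + E)/2

    for name, H in [("H2", H2), ("H5", H5), ("H3", H3), ("H4", H4)]:
        print(name, norm(H))         # each norm is (1 + BETA)/4, an integer since BETA = 3 (mod 4)

    bad = ((half, half), (0, 0))     # (1 + i)/2 = H2 + H4 - E
    print(norm(bad))                 # 1/2, not an integer, so property R fails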
Write y = y, 3, 6–3,8. Then – e' = 30%--ó.). Hence 6 divides é+o” and hence also e by the lemma. Since e has the factor 3 in common with both Y and 6, 6 = + 1. For 6 = +I, + 6 is not a sum of two rational squares. Hence D is a division algebra unless 6 = –I. The case 8 = −3.−We saw that S has the basis I, i, E, G, with G defined by (38). Hence every integral element is of the form q=&o-Hari-Ha.2F-H3:36, where the ac; are integers. Let h = ho-H . . . . --h;G be any element of D. Then if m is a positive integer, the coefficients of I, i, E, iF in h-mg are d,=h,--#hs-m(x,--#33), d;=h1+%h;-m(x,--#3:3), da=ha-Hºhs-m(x2++3.3), d;=#(ha-mº). By choice of integers &s, 3:2, ºr, wo, we see that do, . . . d; can be made numerically sim, #m, #m, respectively. But N=N(h—mq)=d;+d;+3(d;+d). • 3 777, # For 6 = –3, 3(d.--dº) lies between —##m” and o. Also, d;--dº lies between o and #m”. Hence N lies between —m” and +m”. Then as in Lemma 2 of § 91 we can always perform the two kinds of division each with a remainder whose norm is numerically less than the norm of the divisor. From N(q) ===I, we see that the number of units is infinite. I94 ARITHMETIC OF AN ALGEBRA [CHAP. x The case 8 = +3.−Employing the set S, with the basis I, i, Ha, Hs, we obtain as in Lemma I of § 91 N(h—mq)< (#m)*-H (#m)*-H3(#m)*-ī-3(#m)*= }{m”.3m”. Then Lemma 2 of § 91 holds. From the integral solu- tions of 4N (q) =4, we obtain at once the I2 units of D: =EI, =Ei, =EH2, ==(H,-i), =EHs, ==(Hs—I). Thus D is not equivalent to the algebra in the preceding case, while neither is equivalent to the algebra of rational quaternions which has 24 units. The reader acquainted with the elements of the theory of numbers will find no difficulty in developing for algebra D with 6 ===3 an arithmetical theory analo- gous to that for quaternions in § 91. Ioé. Application to Diophantine equations. By way of example consider 3:-- . . . . --&f=3%. By factoring a;-aft, we reduce this equation to (42) 3:4-Hyº-H2+wº- uv. Since the norm 3:4-i-y”--?--w” of the product (43) 3+yi–Hzj+wk=AB of two quaternions (44) A = a-Hbi-H ci-H dk, B=a+ 3i-Hyj+ök is equal to the product of their norms, (42) has the solutions - 3 = a&–b{3–cy–d6, y=a&--ba-i-co–dy, (45) A z=a^-b6+ca-Hāff, w=a&#-by-c3+da, % = a2+b^+C2+d” 5 7) = a'-3'----- §2. ^ > § Ioël DIOPHANTINE EQUATIONS I95 We shall first find all rational solutions of (42). If w;4o, we may evidently write &_a y_b w a ' w a ' 5 2 º 3_ 7) . º where a, a, b, c, d are integers without a common factor > I. Then :-( ): .. ... (y-º-º-º: 7) \7) 7) 02 Denote the rational number w/a” by f. Then (46) ſº y=fba, z=foa, w=fda, 4. u-f(a^+b+c++dº), v=fa”. The rational solutions of (42) with v =o have & = y = z =w=o and hence are given by (46) with a =o. The products of an arbitrary rational number f by the six numbers (45), in which a, . . . . , 6 are integers without a common factor > I, give all the rational solutions of (42). In fact, we just proved that they are all given by (46) to which the products of f by the numbers (45) reduce when 6 = y = 6=o. - To prove that we obtain all integral solutions when we restrict the multiplier f to integral values, we have merely to show that, when the products of the numbers (45) by an irreducible fraction n/p are equal to integers, so that the numbers (45) are all divisible by p, then the quotients are expressible in the same form (45) with new integral parameters in place of a, , . . . , 6. 
It is sufficient to prove this for the (equal or distinct) prime factors of p, since after each of them has been divided out in turn p itself has been divided out. I96 ARITHMETIC OF AN ALGEBRA [CHAP. x Hence let p be a prime which divides the six numbers (45). In particular, p divides the norm u of the quater- nion A having integral co-ordinates. By Lemma 3 of § 91, A has in common with p a right divisor not a unit. By Theorem 4, p is a product PP'-P'P of two con- jugate prime quaternions with integral co-ordinates. After choice of the notation between P and P', we have A =QP, where () is an integral quaternion. i) Let p}~ 2. Then 0 has integral co-ordinates. Otherwise Q=#q, in which the four co-ordinates of q are all odd integers, and AP'-QPP'-#qp=#p q does not have integral co-ordinates in contradiction with the fact that A and P' and hence also AP' have integral co-ordinates. Since ac, y, z, w are divisible by p by hypothesis, (43) shows that AB = pC, where the quaternion C has integral co-ordinates. Either B has P' as a left divisor and B = P(q, where as above q has integral co-ordinates, or else the greatest common left divisor of B and P' is unity, so that I = BD+PE, where D and E are integral quaternions. In the latter case, A = A • BD+A : PE=pC. D+Q. PP' . E=p(CD+QE), where CD+OE is an integral quaternion, so that its double is a quaternion R having integral Co-ordinates. Hence 2A = pk, whereas the co-ordinates of A may be assumed to be not all divisible by p. For, if a, b, c, d are all divisible by p, then a, 8, y, 6 are not all divisible by p and we may employ from the outset the Conjugate B'A' of AB in place of AB in (43). Hence the second § Ioé DIOPHANTINE EQUATIONS I97 of the foregoing cases is excluded and we have B =P'q. Thus A = QP, Q= a;+ bri–H cij-i-dik, w-N(A)=p(a;+ . . . . --d;) B=P'q, q=ar-i-6, i-Hyºj-Hörk, v= N(B) = p(a;+ . . . . --ó), where a, . . . . , 6, are integers. Then by (43), AB=OPP'q=p04, tº-on Just as equations (45) were obtained from (43), we now see that the expressions for 3/p, y/p, z/p, w/p, u/p, V/p are derived from the expressions in (45) by replacing a, . . . . , 6 by the eight new integral parameters ar, . . . . , 6,. This completes the proof for any odd prime p. ii) Let p = 2. Since u is divisible by 2, a +b+c+d is even. Hence at least one of a +b, a + c, a +d is even. These three cases differ only in notation since the substi- tution T = (bcd)(3)6)(yzw), which permutes b, c, d cyclically, etc., leaves unaltered” the system of equations (45). Hence we may assume that a + b is even, whence c—Hd is even. Then A = a-b-i-b(I+i)+(c—d);--dk(I--i) is evidently the product of a quaternion Q having integral co-ordinates by P=1 +i, since 2 = (1–7)P. Similarly, if a-H 3 is even, B = P'q and the last part of case (i) * This is due to the fact that T corresponds to the cyclic substitu- tion (ijk) on the units, which leaves unaltered their multiplication table (§ II). I98 ARITHMETIC OF AN ALGEBRA [CHAP. x leads to the same conclusion when p = 2. But if a--8 is odd, Y-H 6 is odd and either a +y or a--ó is even. These two sub-cases are interchanged when we replace a by b, b by -a, Y by – 6, and 6 by y, whence z and w in (45) remain unaltered, while & is replaced by y, and y by –3. Hence let a +y be even. Since a-i-b and c—Hd are even, while a-H 6 and Y-H 6 are odd, of E3 = aa-Ha (a+1)-cy--c(y-HI)=a+c (mod 2). Applying the inverse substitution TT to a +c and a + Y, . we are led to the former case in which a-Hb and a +6 3.ſé €VéI]. THEOREM. 
All integral solutions of x² + y² + z² + w² = uv are given by the products of the numbers (45) by an arbitrary integer and hence are given by the formula which expresses the fact that the norm of the product of two quaternions is equal to the product of their norms.

This simple method due to the author* has led to the complete solution in integers of various Diophantine equations not previously solved completely. It is evidently applicable to x² + y² + 3(z² + w²) = uv since there exists a greatest common left (or right) divisor of any two integral elements of the algebra D of § 105 with β = +3.

* Comptes Rendus du Congrès International des Mathématiciens (Strasbourg, 1920), pp. 46–52. Further developed in Bulletin of the American Mathematical Society, XXVII (1921), 353–65.

In his book (cited in § 91), Hurwitz employed quaternions to prove classic theorems on the number of ways of expressing a positive integer as a sum of four integral squares and to prove that every real linear transformation

yᵢ = aᵢ₁x₁ + . . . . + aᵢ₄x₄  (i = 1, 2, 3, 4)

of positive determinant for which Σyᵢ² = cΣxᵢ² may be obtained from the equation y = axb between real quaternions. In particular, for c = 1, every real orthogonal transformation of determinant +1 on four variables is obtained from y = axb where the norms of the quaternions a and b are unity. To obtain corresponding results for three variables, take y₁ = x₁ = 0, b = a′.

CHAPTER XI

FIELDS

107. Examples. In § 1 we gave several examples of fields of ordinary complex numbers. There exist also fields of functions; one example is the set of all rational functions of a variable x with rational coefficients; a more general example is the set of all rational functions of the independent complex variables x₁, . . . . , xₙ having as coefficients numbers belonging to any chosen field of complex numbers.

Still further types of fields are obtained if we adopt the purely abstract definition next explained. We shall treat only those properties of fields which are required to make the theory of algebras presented in the preceding chapters valid for algebras over an arbitrary field.

108. Postulates* for a field. A field F is a system consisting of a set S of elements a, b, c, . . . . and two operations, called addition and multiplication, which may be performed upon any two (equal or distinct) elements a and b of S, taken in that order, to produce uniquely determined elements a ⊕ b and a ⊙ b of S, such that postulates I–V are satisfied. For simplicity, we shall write a + b for a ⊕ b, and ab for a ⊙ b, and call them the sum and product, respectively, of a and b. Moreover, elements of S will be called elements of F.

* Essentially the second set by Dickson, Transactions of the American Mathematical Society, IV (1903), 13–20. For other definitions by him and by Huntington, see ibid., VI (1905), 181–204.

I. If a and b are any two elements of F, a + b and ab are uniquely determined elements of F, and

b + a = a + b,  ba = ab.

II. If a, b, c are any three elements of F,

(a + b) + c = a + (b + c),  (ab)c = a(bc),  a(b + c) = ab + ac.

III. There exist in F two distinct elements, denoted by 0 and 1, such that if a is any element of F, a + 0 = a, a·1 = a (whence 0 + a = a, 1·a = a by I).

IV. Whatever be the element a of F, there exists in F an element x such that a + x = 0 (whence x + a = 0 by I).

V. Whatever be the element a (distinct from 0) of F, there exists in F an element y such that ay = 1 (whence ya = 1 by I).
109. Simple properties; subtraction and division.

VI. The elements denoted by 0 and 1 in III are unique and will be called the zero and the unity of F.

For, if a + z = a and au = a for every a in F, we have in particular 0 + z = 0, 1·u = 1. But, by III, 0 + z = z, 1·u = u. Hence z = 0, u = 1.

VII. If a, b, c are elements of F such that a + b = a + c, then b = c.

For, by IV, there exists an element x of F such that x + a = 0. Using also II, we get

b = 0 + b = (x + a) + b = x + (a + b) = x + (a + c) = (x + a) + c = 0 + c = c.

In particular, if a + b = 0 and a + c = 0, then b = c. Hence the element x in IV is uniquely determined by a; it will be designated by −a.

VIII. If a and b are any elements of F, there exists one and (by VII) only one element x of F for which a + x = b, viz., x = −a + b.

For, a + (−a + b) = [a + (−a)] + b = 0 + b = b.

The resulting element x will be written b − a and called the result of subtracting a from b.

IX. If a, b, c are elements of F such that ab = ac and a ≠ 0, then b = c.

For, by V, there exists an element y of F such that ya = 1. Using also II, we get

b = 1·b = (ya)b = y(ab) = y(ac) = (ya)c = 1·c = c.

In particular, if ab = 1 and ac = 1, then b = c. Hence the element y in V is uniquely determined by a; it is called the reciprocal (or inverse) of a and designated by 1/a or a⁻¹.

By II, with c = 0, and VII, a·0 = 0. Taking c = 0 in IX, we see that ab = 0, a ≠ 0, imply b = 0.

X. If a and b are elements of F and a ≠ 0, there exists one and (by IX) only one element x of F such that ax = b, viz., x = a⁻¹b.

For, a(a⁻¹b) = (aa⁻¹)b = 1·b = b.

The resulting element x will be designated by b/a and called the quotient of b by a, or the result of dividing b by a.

110. Example of a finite field. Let p be a prime number > 1. All integers a, a ± p, a ± 2p, . . . . which differ from a by a multiple of p are said to form a class of residues [a] modulo p, and this class may also be designated by [a + kp], where k is any integer. Hence there are exactly p distinct classes: [0], [1], . . . . , [p − 1]. We shall take them as the p elements of a finite field F in which addition and multiplication are defined by

[a] + [a′] = [a + a′],  [a][a′] = [aa′].

To justify these definitions, note that if k and l are any integers, the sum and product of a + kp and a′ + lp are, respectively, a + a′ + mp and aa′ + tp, where m = k + l, t = al + a′k + klp. In other words, whichever number of class [a] we add to whichever number of class [a′], we always obtain a number of the same class [a + a′]; and similarly for multiplication.

For these p elements and for addition and multiplication just defined, it is easily seen that the postulates I–IV for a field are all satisfied. Classes [0] and [1] are the zero and unity elements, respectively. Postulate V states that if [a] is any class ≠ [0], there exists a class [y] such that [a][y] = [1], and is another statement of the well-known theorem that, if a is any integer not divisible by the prime p, there exist integers y and z such that ay = 1 + pz. For example, if p = 5, a = 2, 3, or 4, then

2·3 = 1 + 5·1 = 3·2,  4·4 = 1 + 5·3.

To prove the last theorem, assign to y the values 1, 2, . . . . , p − 1, and divide each product ay by p to obtain a remainder ≥ 0 and < p.
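This search for the reciprocal can be carried out mechanically. The following sketch (Python; a modern illustration and not part of the text, with p = 5 as in the example above) builds the p classes, checks postulates I and II on all triples, and finds the reciprocal of each class ≠ [0] by trying y = 1, 2, . . . . , p − 1.

    p = 5
    classes = list(range(p))                       # representatives of [0], ..., [p-1]
    add = lambda a, b: (a + b) % p
    mul = lambda a, b: (a * b) % p

    # Postulates I and II (commutativity, associativity, distributivity) on all triples.
    for a in classes:
        for b in classes:
            assert add(a, b) == add(b, a) and mul(a, b) == mul(b, a)
            for c in classes:
                assert add(add(a, b), c) == add(a, add(b, c))
                assert mul(mul(a, b), c) == mul(a, mul(b, c))
                assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))

    # Postulate V: every class [a] != [0] has a reciprocal, found by direct search.
    for a in classes[1:]:
        y = next(y for y in range(1, p) if mul(a, y) == 1)
        print("[%d][%d] = [1]" % (a, y))           # e.g. [2][3] = [1], [4][4] = [1]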
Denote the set (o, I) by 3. Then *=(o, o, I), æ" = (o, . . . . , o, I), § III] INDETERMINATES 2O5 in which I is preceded by k zeros. Hence (ao, a, . . . . , an)=(a)+(o, a.)+(o, o, a)+ tº º is tº = Go-Ha;(0, 1)+a,(o, o, I)+ . . . . takes the form (1) above, which is called a polynomial in the indeterminate & = (o, I) with coefficients a, . . . . , an in F. Two such polynomials are therefore equal only when corresponding coefficients are equal, while their sum and product are found exactly as in elementary algebra. If an æo, polynomial (1) is said to be of degree n in 3. No degree is assigned if a., +o, . . . . , an=o. The degree of the product of two polynomials in 3 is evi- dently the sum of their degrees. Hence the product is zero only when at least one polynomial factor is zero. To define polynomials in two indeterminates 3 and y, consider sets s =[ao, ar, . . . . , an of n+1 ordered polynomials ao— Coo-FCorº-HC923°-H . . . . 2 • ? an= Cho-HCarº-FCu23°-H . . . . in & with coefficients cº, in F. Define equality, addition, and multiplication of sets exactly as above. Write y for the set ſo, Il. As above, S = Go-Hazy-H . . . . -- any”= > (cº-cº-º-H . . . . ))". The final sum is called a polynomial in the two indeter- minates 3; and y with coefficients cº, in F. The method just employed to define polynomials in two indeterminates by means of those in one may be used to define polynomials in k (commutative) indeter- 2O6 FIELDS |CHAP. XI minates 3, . . . . , as by means of those in 3, . . . ack–1. By induction on k we obtain the THEOREM. Two polynomials in the indeterminates ac, , . . . . , whº with coefficients in F are equal only when corresponding coefficients are equal. Their sum and product are found as in elementary algebra. Their product is zero only when at least one of them is zero. All operations on polynomials in indeterminates are in their last analysis operations on sets of ordered elements of the given field F. If f, g, h are polynomials in w, . . . . , as with coefficients in F such that f=gh, then f is said to be divisible by g and h. Then if neither g nor his an element of F, f is called reducible with respect to F. But if f has no divisor other than a and af, where a is an element źo of F, f is called irreducible with respect to F. For example, 3:-43; is reducible and aft–33, is irreducible with respect to the field of rational numbers. II2. Polynomials which vanish throughout F. We shall consider first a polynomial f(x) of degree n > o in one indeterminate a with coefficients in the field F. If e is an element of F, we have • ? wº- (wk-i-Hak-ºe-Hak-3e3+ . . . . --wek-2+ek-)(x-e)+e”. Multiply by the coefficient as of wº in f(x) and sum as to k. We get f(x)=Q(x)(x-e)+f(e), where 0(x) is a polynomial of degree n – I in a with coefficients in F. When the element f(e) of F is zero, we shall say that f(x) vanishes for e and has the divisor & —e. Let f(x) vanish for two distinct elements e, and e, of F. From * f(x)=(x-e)0(x), o= (e.-e)0(e)=o, § II2] VANISHING OF POLYNOMIALS 2O7 we have O(e.)=o, so that Q(x) has the divisor &–ea. Thus f(x)=(x-e)(x-e)0,(x). A repetition of this argument shows that, if f(x) = a,”-- . . . . vanishes for n distinct elements er, . . . . , en of F, then f(x)=a,(x-e)(x-e.) . . . . (x-en), If f(x) vanishes also for e which is distinct from er, . . . . , e, then ao = O. Repeating the argument on a,3:"T"-H . . . . , etc., we obtain the following con- clusion: I. If a polynomial adº”-- . . . . 
-- an with coefficients in F vanishes for more than n elements of F, each coefficient a; is zero. II. In any infinite field F, a polynomial in a with coefficients in F is zero (identically) if it vanishes for all elements of F. - But II need not hold for a finite field. For example, if F is the field of the classes of residues of integers modulo p, a prime (§ IIo), the polynomial 3:2-3 is not zero, but vanishes for every element of F since, by Fermat's theorem, e”—e is divisible by p when e is any integer. III. A polynomial f(x, . . . . , 3...) in n indeter- minates with coefficients in an infinite field F is zero (identically) if it vanishes for all sets of n elements of F. To give a proof by induction, let III be true for poly- nomials in 3, . . . . , 3, ... Then III is true for fif it lacks &n. Hence let f=go(3, . . . . . *n-1)*H . . . . Fgn(x1, . . . . . *n-1), gožo, m = I. In view of the hypothesis for the induction, we may assign elements & , . . . . , §n—, of F such that go(3, 2O8 FIELDS [CHAP. XI . , §n-1)*o. Then f becomes a polynomial in the single indeterminate 3, which, by II, does not vanish for a certain element e of F. But this contra- dicts the assumption that f vanishes for the set of ele- ments à, . . . . , śn—r, e. II.3. Laws of divisibility of polynomials in x. It is to be understood that all the polynomials employed have their coefficients in any fixed field F. We shall first prove that there exists a greatest com- mon divisor of any two polynomials f(x) and h(x), the latter being of degree n > o. The process employed in elementary algebra to divide f(x) by h(x) is purely rational and hence leads to a quotient q.(x) and a remainder r;(x), each being a polynomial with coefficients in F, such that either r,(x) is zero (and then fis exactly divisible by h) or r,(x) has a degree n, (n, & q,(x)|=6. Hence the second factor is zero. As before, p.(x) would divide one of the qi, i = 2, say q2, whence q2 = a-pa. Proceeding similarly, we obtain V. Any polynomial reducible in F can be expressed as a product of polynomials irreducible in F; apart from the arrangement of the polynomials and the association of multipliers belonging to F, this factorization can be effected in a single way. * The theorems of this section are illustrated in § II6 for the case of Congruences with respect to a prime modulus. - II4. Laws of divisibility of polynomials in several indeterminates. The theorems of this section are stated explicitly for polynomials in two indeterminates ac and y. However, if we interpret a to mean a set of indeterminates 3, . . . . , an, the theorems concern polynomials in wr, . . . . , ºn, y and are established by induction from n to n+I variables by the proofs as written,* if we assume that Theorems V, VII, VIII hold * Provided the citations to I, IV, V of § 113 be replaced by citations to the analogues of V, VII, VIII below for polynomials in 3, . . . . , wh. 2 I2 * FIELDS - ſcHAP. XI. for polynomials in 3, , . . . . , aca. Since the latter theorems were proved in § 113 when n = 1, the induction will be complete. If p(x) and gi(x) are polynomials in a with coefficients in F, - (4) rº, y)=X p(x)y, p.(x)=o; s(x, y)= X. o;(x)y', on(x)}o 7, FO are of degrees n and m, respectively, in y. By I of § 113, poſa), . . . . , pn(x) have a greatest common divisor p(x). In case p(x) reduces to an element of F, we call r(x, y) primitive in y. Let g(x) be a greatest common divisor of go(x), . . . . , on(x). 
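The division process invoked here (I of § 113) is entirely mechanical: repeated division with remainder, as in Euclid's algorithm, yields a greatest common divisor of two polynomials in x. The sketch below (Python, exact rational coefficients; the sample polynomials and all helper names are our illustrative choices, not the author's) carries it out.

    from fractions import Fraction as F

    def poly_divmod(f, h):
        """Divide f by h (h not zero) over the rationals: return (q, r) with f = q*h + r,
        where r = [] or deg r < deg h.  Coefficient lists are indexed by degree."""
        f = [F(c) for c in f]
        q = [F(0)] * max(len(f) - len(h) + 1, 1)
        while f and len(f) >= len(h):
            k = len(f) - len(h)                  # degree of the next quotient term
            c = f[-1] / F(h[-1])
            q[k] = c
            for i, hc in enumerate(h):
                f[i + k] -= c * hc
            while f and f[-1] == 0:              # strip the vanished leading terms
                f.pop()
        return q, f

    def poly_gcd(f, h):
        """Euclid's algorithm of paragraph 113: repeat the division until the remainder is 0."""
        while h:
            _, r = poly_divmod(f, h)
            f, h = h, r
        return f

    # (x^2 - 1)(x + 2) and (x - 1)(x + 3) have the common divisor x - 1.
    f = [-2, -1, 2, 1]          # x^3 + 2x^2 - x - 2
    h = [-3, 2, 1]              # x^2 + 2x - 3
    print(poly_gcd(f, h))       # a rational multiple of x - 1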
Then (5) r(x, y) = p(x).R(x, 'y), s(3, y) = cºs(x, 'y), where R and S are primitive in y. I. If the product of r(x, y) and s(x, y) is divisible by a polynomial P(x) which is irreducible in F, either r or s is divisible by P(x). Since this is evident if every p(x) or every ai(x) in (4) is divisible by P(x), let paſa) and gºſa) be relatively prime to P(x), and p;(x) and of (x) be divisible by P(x) for i> p, j-> q. Then p g %S = > p(x)y ge > o;(x)y’-HP(x)0(x, y). Since rs is divisible by P(x), the coefficient pºſa)0,03) of y” must be divisible by P(x), contrary to IV of § 113. II. If two polynomials r(x, y) and s(x, y) are primitive in y, their product is primitive in y. § II4] DIVISIBILITY OF POLYNOMIALS 2I3 For, if rs is not primitive in y, then rs = r(x)T(x, y), where r(x) is not an element of F and hence, by V of § 113, has a factor P(x) which is irreducible in F. Then, by I, either r or s is divisible by P(x), whereas each is primitive in y. III. If, in (5), R and S are primitive in y and if r is divisible by s, then R is divisible by S, and p(x) is divisible by a (x). For, if r =sk, where k = k(x)K(x, y) and K is primitive in y, then r= a KSK, r= pk. Since SK is primitive in y by II, a greatest common divisor of the coefficients of the powers of y in r is a k by the first equation and is p by the second. Hence, by I of § 113, a k=ap, where a is in F. Then R= GSK. COROLLARY. Any divisor of a polynomial primitive ſin y is itself primitive in y. - This proof establishes also IV. If p(x)R is equal to the product of a (x)S by k(x)K, where R, S, K are primitive in y, then R = a SK and a k = ap, where a is an element of F. V. Two polynomials r(x, y) and s(x, y) with coefficients in F have a greatest common divisor [r, s] which is uniquely determined apart from a factor belonging to F. The prod- wet” rº, of [r, s] by a certain polynomial in 3 is expressible as a linear combination of r and s, while [r, s] itself may not be so expressible. * For example, let r = (x+1) y–I, s-3:(y-HI), and let F be the field of rational numbers. Evidently [r, s]=I, which is not a linear com- bination of r and s since they are both zero when & = – 2, y = — I. This holds also if F is the field of the three classes of residues of integers modulo 3 (§ IIo). Hence we cannot prove VI by the method employed for II of § II.3. 2I4. FIELDS [CHAP. XI For, let s =v(x)y"+ . . . . be of degree n > o in y. If r is of degree n+k-1 in y, the algebraic division of vºr by s yields a quotient q, and remainder r, which is either zero or has a degree n,(n, 1 in the inde- terminate 3:. This and all further polynomials to be employed are understood to have all their coefficients in the (arbitrary) field F. Two polynomials g(x) and ga(x) are called congruent modulo P(x) if g, —g, is divisible by P(x); we then write g=g, (mod P). All polynomials which are con- gruent to a given one g are said to form the class [g]. The zero class ſo is composed of all polynomials, includ- ing o, which are divisible by P. If also h;(x)=h,(x) (mod P), then g-H h;=g2+ha, gh;Egoh. (mod P). Hence the sum of an arbitrary polynomial g;(x) of a class G and an arbitrary polynomial h;(x) of a class H belongs to a class uniquely determined by G and H, and is desig- nated by either G+H or H+G. Also their product belongs to a definite class designated by GH or HG. In other words, addition and multiplication of classes are defined by (6) [g]+[h]=[h]+[g]=|g-Hb), g|[h]=[h][g]=[gh]. We assume henceforth that P(x) is irreducible with respect to F. 
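Under this assumption every class ≠ [0] of polynomials modulo P(x) possesses a reciprocal, a fact that can be checked computationally by the extended form of the division algorithm of § 113, which produces σ(x) and τ(x) with σg + τP = 1. The following sketch (Python, exact rational coefficients; P(x) = x² − 2 and every name in it are our illustrative choices, not the author's) represents each class by its reduced polynomial of degree < deg P and computes products and reciprocals of classes.

    from fractions import Fraction as F

    P = [F(-2), F(0), F(1)]              # P(x) = x^2 - 2, irreducible over the rationals

    def poly_divmod(f, h):
        f = [F(c) for c in f]
        q = [F(0)] * max(len(f) - len(h) + 1, 1)
        while f and len(f) >= len(h):
            k, c = len(f) - len(h), f[-1] / h[-1]
            q[k] = c
            for i, hc in enumerate(h):
                f[i + k] -= c * hc
            while f and f[-1] == 0:
                f.pop()
        return q, f

    def reduce_mod_P(f):
        """The reduced polynomial of degree < deg P representing the class [f]."""
        return poly_divmod(f, P)[1]

    def class_mul(f, g):
        # product of classes as in definition (6), followed by reduction modulo P(x)
        prod = [F(0)] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                prod[i + j] += a * b
        return reduce_mod_P(prod)

    def class_inverse(g):
        """Extended Euclid: find sigma with sigma*g + tau*P = 1, so that [g][sigma] = [1]."""
        r0, r1 = list(P), reduce_mod_P(g)
        s0, s1 = [F(0)], [F(1)]
        while r1:
            q, r = poly_divmod(r0, r1)
            s = [F(0)] * max(len(s0), len(q) + len(s1) - 1)
            for i, a in enumerate(s0):
                s[i] += a
            for i, a in enumerate(q):
                for j, b in enumerate(s1):
                    s[i + j] -= a * b
            while s and s[-1] == 0:
                s.pop()
            r0, r1, s0, s1 = r1, r, s1, s
        lead = r0[-1]                     # r0 is a non-zero constant since P is irreducible
        return reduce_mod_P([c / lead for c in s0])

    g = [F(1), F(1)]                      # the class of 1 + x
    print(class_mul(g, class_inverse(g))) # the unity class [1]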
If G ≠ [0], any polynomial g(x) of G is not divisible by P(x) and hence is relatively prime to the irreducible polynomial P(x). Hence by (3) there exist polynomials σ(x) and τ(x) such that σg + τP = 1. But τP ≡ 0 (mod P), so that [gσ] = [1]. Let S denote the class containing σ. Hence GS = [1].

The postulates (§ 108) for a field are seen to be satisfied by our classes as elements under addition and multiplication as defined by (6), with [0] and [1] as the zero and unity elements. Since each number a of F is a polynomial lacking x, it determines a class [a], and these special classes form a field simply isomorphic with F.

THEOREM 1. If P(x) is a polynomial irreducible in F, the classes modulo P(x) of polynomials with coefficients in F form a field F₁ having a sub-field simply isomorphic with F.

Each class ≠ [0] is determined by the unique reduced polynomial of degree < n in the class, while the class [0] is determined by the polynomial 0. We may therefore employ these reduced polynomials, including 0, as the elements of F₁. Then the sum of two such elements g(x) and h(x) is an element of F₁, but their product is the element obtained as the remainder of degree < n from the division of g(x)·h(x) by P(x). This remainder may also be obtained by the elimination of the powers of x with exponents ≥ n by means of the recursion formula P(x) = 0. In other words, we may regard the element x of F₁ as a root of P(ξ) = 0; this agreement is merely a convenient mode of expressing the fact that x is a root of the congruence

(7) P(ξ) ≡ (ξ − x)Q(ξ, x) ≡ 0 [mod P(x)],

in which the polynomial Q(ξ, x) is the quotient obtained by dividing P(ξ) by ξ − x, the remainder being P(x).

We have therefore solved the problem to extend a given field F to a field F₁ containing a root of a given equation P(x) = 0 which is irreducible in F.

For various applications we need an extension F′ of a given field F such that any given polynomial f(x), having coefficients in F, shall decompose into a product of linear factors with coefficients in F′. In case there is such a decomposition in F, we may take F′ = F. In the contrary case, f(x) has an irreducible factor P(x) of degree > 1. In the field F₁ = F(x) obtained above, we have P(ξ) = (ξ − x)Q(ξ, x), whence* f(ξ) = (ξ − x)f₁(ξ), where f₁(ξ) is a polynomial in ξ with coefficients in F₁. In case f₁(ξ) is a product of linear functions of ξ with coefficients in F₁, we may take F₁ as the desired field F′. In the contrary case, f₁(y) has a factor P₁(y) which is irreducible in F₁ and of degree > 1 in the new indeterminate y. As above, y is a root of P₁(y) = 0 in an extension F₂ = F₁(y) of F₁, so that P₁(ξ) has the factor ξ − y in F₂. Thus† f₁(ξ) = (ξ − y)f₂(ξ), where f₂(ξ) has coefficients in F₂. If f₂(ξ) is a product of linear functions of ξ with coefficients in F₂, we may take F′ = F₂. In the contrary case, we employ a non-linear factor P₂(ξ) irreducible in F₂, and extend F₂ to F₃ = F₂(z), where P₂(z) = 0. Proceeding similarly, we ultimately‡ obtain a field F′ in which f(ξ) is a product of linear functions of ξ.

THEOREM 2. Given any field F and any polynomial f(x) with coefficients in F, we can determine an extension F′ of F such that f(x) is a product of linear functions with coefficients in F′.

116. Applications to congruences; Galois fields. Although not required for our exposition of the theory of

* This and the preceding equation are really congruences modulo P(x).
f This is really a congruence modulis P(x), P.(y), viz., f;(£)—(£–y) f,(£)=AP(x)+BP1(y), where A and B are polynomials in 3 and y with coefficients in F. f Or by adjoining a single root of the Galois resolvent of f(t)=0, as proved by J. König (Algebraische Gröszen [Leipzig, 1903], pp. 150–55). § II6] CONGRUENCES, GALOIS FIELDS 2I9 algebras, an excellent illustration of the preceding theory is furnished by the case in which F is the field of classes of residues of integers modulo p, where p is a prime > I (§ IIo). By a polynomial in an indeterminate & we shall here mean one having integral coefficients. Two such polynomials are called Congruent modulo p if and only if the coefficients of like powers of a are congruent modulo p (i.e., their difference is divisible by p). A polynomial h(x), not congruent to o, is said to be of degree n modulo p if the coefficient of 3." is prime to p and the coefficients of all higher powers of 3 are divisible by p. Given also any second polynomial f(x), we can readily determine three polynomials q, r, s, such that (8) f(x)= h(x) g(x)+r(x)+ps(x), where r(x) is either o or of degree ºn modulo p. In case r(x)=o (mod p), we shall say that f(x) is divisible by h(x) modulo p. Theorem I of § 113 now states that any two poly- nomials have a greatest common divisor modulo p which is congruent to a linear combination of the two. Again, Theorem V now states that a polynomial in a which is reducible modulo p is congruent to a product of poly- nomials each irreducible modulo p, and such a factoriza- tion is unique apart from the arrangement of the factors and apart from multipliers which are integers prime to p. It is unnecessary to restate similarly the remaining theorems of §§ 113–14. Each coefficient of r(x) in (8) can be expressed in the form a +pb, where a and b are integers and os a yºf,(x) ? = O of algebra A, each element below the main diagonal has the factor Y. For its determinant |a| we therefore have (14) | a y–o-f(&) tº e º ſº fo (#) =norm f(£) ſº We are now in a position to determine the conditions which y must satisfy in order that. D shall be a division algebra. For any given polynomials h; in a with coefficients in D, we desire that f - z=y+y-ºh, H- . . . . --yh,_1+h, DIVISION ALGEBRAS OF ORDER 77° 225 shall have an inverse. Write 2, -yº-r-ţ-yº-º-ºk, H- . . . . --k, -, , where the ki are polynomials in 3. Then zz=y+y"-(h,+y-'ky)+y^*(h,-- - yº-'ky-ºh, H-y-'kay)-H . . . . . The sums in parenthesis will be zero if k, = —yhyT", k, = —y’h,yT’—ykyTº y’h, yT', . . . . . These are polynomials in a with coefficients in F since (15) yºf(x)y-s=f(6*-*(x)], by (II) of § 47 with r=n—s. Hence we can determine ki, . . . . , kn—, So that 2,2–22, where 22 is of degree ‘r in y. If z, has an inverse w, So that w82 – I, then wº,2=I and z has the inverse wº. Let yºh(x) be the term of z, of highest degree in y. Then tº r and h has an inverse l in the field F(x) and hence in D. Thus z, will have an inverse if z_1= y!-- . . . . has an inverse. The latter will have an inverse, by the argument just employed for 2, if the next polynomial of degree ‘t has an inverse. It follows in this manner that z has an inverse unless we reach a pair of consecutive polynomials whose product does not involve y. Give them the foregoing notations, 2, 2. Then ziz=y+6, where 6 is independent of y, since by (15) the coefficients of kr, k2, . . . . of Z, are independent of Y and since in forming the product 2,2 we obtain the term y”=y only once. For the moment, regard Y as a variable in F. 
If 6 involves 3, Y-H 6 is not zero and hence has an inverse in F(x), so that z has an inverse in D. Hence let 6 be a number of F. Employing the matric forms of the z's, we have (16) zz=(Y-H 6)I, 226 APPENDIX II where I is the n-rowed unit matrix. By the remark below (13), the determinant |z| of z is a polynomial in y: |z|= (–1)*-*Y+ . . . . , |z|= (–1)*-*Y*-*-i- . . . . . Each is a factor of (Y-H 6)" by (16). Hence |z|=(-1)*-(y--5). When Y=o, z becomes norm h, by (14). Hence (–1)***6 =norm h, , If y–Hözo, 2 has an inverse. If Y-Hö=o, the last result shows that Y is the norm of (-1)^h,. This proves our theorem. APPENDIX II DETERMINATION + OF ALL DIVISION ALGEBRAS OF ORDER 9; MISCELLANEOUS GENERAL THEOREMS ON DIVISION ALGEBRAS THEOREM I. If an algebra A of order a has a modulus e and contains a division sub-algebra B of order 8 whose modulus is also e, there exists a linear set C of order Y (of elements of A) such that A = BC, a = 8). For, if a., is an element of A which is not in B, the linear set B+Ba, is of order 28, since otherwise there would exist elements b, and b,(bazºo) of B for which b,--baa, =0, whence b. *b, H-ea, =0, or as--by ºbs, whereas a, is not in B. Then if a = 28, we have A = B(1, a2) and the theorem is proved. But if a P 28, A contains an element as which is not in B+Baa. The linear set B+Baa-H Bas is of order 36, since otherwise B would contain elements b, b2, bazºo for which b,--baaz-i-baas -o, whence a;= -b, *(b.-H.b.a.)=ba-Hbsa, * Amplification of the article by Wedderburn, Transactions of the American Mathematical Society, XXII (1921), I29–35. DIVISION ALGEBRAS OF ORDER 9 227 whereas as is not in B+Baz. Then if a = 38, we have A = B(1, a2, a3) and the theorem is proved. If a P-38, we repeat the argument. - COROLLARY I. The order of a division algebra A is a multiple of the order of any sub-algbera. For, the sub-algebra is a division algebra with a modulus w. If e be that of A, then u°=u, ue =u, u(u-e)=o, u-e=o. THEOREM 2. Given a division algebra A over a non-modular field F, let the algebra B be composed of all those elements of A which are commutative with every element of A. We can find an extension F" of F such that the algebra A' over F', which has the same units as A, is the direct product of a simple matric algebra and the commutative algebra B' over F', which has the same tunits as B. For, by § 76, there exists a field F obtained from F where I, #1, #2, . . . . are linearly independent with respect to F, such that algebra A' over F" is a direct sum of simple matric algebras A1, . . . . , A5. Let ej be the modulus of A;. If f = 2f;, g=2gi, where fi and g; are in Ai, and f is commutative with g, then * 2fig; =fg=gf=2gift, fig=gift. By $ 52 the products of ei by numbers of F" are the only elements of A; which are commutative with every element of Ai. Hence all those elements of A' which are commuta- tive with every element of A' form an algebra B' with the basal units ei, . . . . , 65. Since each ei, and therefore also any element yżo of B', is a linear function of the basal units of A with coefficients in F', we may write y =X&ºi, where the x; are elements, not all zero, of A, while £o-I, and #1, #2, . . . . are the fore- going irrationalities. 228 APPENDIX II If x is any element of A (and hence in A'), zy=y& by the definition of B'. Hence o-wy—yº–25(xxi-xx). Since each xxi-xx is a linear function of the units of A with coefficients in F, and since the # are linearly independent with respect to F, we have &;=&#x for every i, and for every x in A. 
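The algebra B of Theorem 2, the totality of elements commutative with every element of A, can be computed directly from the constants of multiplication by solving the linear conditions xu = ux for every basal unit u. The sketch below (Python, exact rational arithmetic; the choice of the two-rowed total matric algebra and every name in it are ours) does this and confirms that for a simple matric algebra B reduces to the multiples of the modulus, in accordance with § 52.

    from fractions import Fraction as F
    from itertools import product

    # Basis e11, e12, e21, e22 of the 2x2 total matric algebra; e_ij e_kl = delta_jk e_il.
    idx = {(i, j): 2 * i + j for i, j in product(range(2), repeat=2)}

    def mult(a, b):
        """Product of two elements given by coefficient vectors of length 4."""
        c = [F(0)] * 4
        for (i, j), p in idx.items():
            for (k, l), q in idx.items():
                if j == k:
                    c[idx[(i, l)]] += a[p] * b[q]
        return c

    # An element x = sum of x_p e_p lies in B iff x*u - u*x = 0 for every basal unit u.
    rows = []
    for u in range(4):
        unit = [F(1) if p == u else F(0) for p in range(4)]
        for coord in range(4):
            row = []
            for p in range(4):
                e_p = [F(1) if q == p else F(0) for q in range(4)]
                row.append(mult(e_p, unit)[coord] - mult(unit, e_p)[coord])
            rows.append(row)

    # Null space of the system by Gaussian elimination over the rationals.
    pivots, r = [], 0
    for col in range(4):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [c / rows[r][col] for c in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                rows[i] = [a - rows[i][col] * b for a, b in zip(rows[i], rows[r])]
        pivots.append(col)
        r += 1

    free = [c for c in range(4) if c not in pivots]
    print("dimension of B:", len(free))          # 1: B consists of the multiples of the modulus
    f0 = free[0]
    solution = [F(0)] * 4
    solution[f0] = F(1)
    for i, col in enumerate(pivots):
        solution[col] = -rows[i][f0]
    print("basis of B:", solution)               # proportional to e11 + e22, the modulus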
Hence the elements of the sub-algebra B (of A) generated by the xᵢ are commutative with every element of A. If x₁ is any element of A commutative with every element of A, then x₁ is in B, since x₁ is evidently commutative with every element of A′ and hence is an element of B′ of the special form y = x₁ξ₀ + 0·ξ₁ + 0·ξ₂ + · · · . Thus B is the algebra defined in the theorem. Since every element of B′ is of the form y = Σξᵢxᵢ, B′ has the same basal units as B, although the two algebras are over the different fields F and F′.

The commutative division algebra B is a field. We may regard A as an algebra A₁ of order a/b over this field B. As above, we extend the latter field to a field F₁ such that the algebra A₂ over F₁, with the same units as A₁, is a simple matric algebra or a direct sum of simple matric algebras. The latter alternative is excluded, since otherwise B would not contain all elements commutative with every element of A. Since A₂ is a simple matric algebra, A′ over F′ is the direct product of B′ and a simple matric algebra.

A division algebra A over F is called normal if the products of its modulus by numbers of F are the only elements of A which are commutative with every element of A, i.e., if the B of Theorem 2 is of order 1.

COROLLARY 2. The order of any normal division algebra is a square.

COROLLARY 3. Any division algebra A whose order is the square of a prime p is either normal or is equivalent to a field.

For, p² = bg², where b is the order of its B. Hence b = 1 or p². In the first case, A is normal. In the second case, A = B is a commutative division algebra and hence is a field.

Polynomials over an algebra. Let A be any algebra having a modulus, which will be designated by 1. Polynomials a₀ + a₁ω + · · · + a_mωᵐ in an indeterminate ω, having coefficients a₀, . . . , a_m in A, may be defined as in § 111, with the modification that, when ρ is an element of A and a denotes the set (a₀, a₁, . . . , a_m), ρa = (ρa₀, . . . , ρa_m) and aρ = (a₀ρ, . . . , a_mρ) may now be distinct, since A need not be a commutative algebra. However, ω = (0, 1) is commutative with every element of A and hence with the foregoing polynomial in ω over A. Two such polynomials are equal only when corresponding coefficients are equal. The sum and product of two such polynomials are found as in elementary algebra, provided care is taken in multiplication to preserve the order of factors belonging to A.
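Since the coefficients need not commute, even the product of two linear polynomials over an algebra depends on the order of its factors. The sketch below is not part of the original text; the representation of quaternions as 4-tuples, the helper names, and the sample factors ω − i and ω − j are illustrative only. It multiplies polynomials over the algebra of real quaternions while preserving the order of the coefficient factors.

    # Polynomials over the quaternion algebra: coefficient factors taken in order.
    # A quaternion a + bi + cj + dk is stored as the tuple (a, b, c, d).

    def qmul(p, q):
        """Quaternion product, using i^2 = j^2 = k^2 = ijk = -1."""
        a1, b1, c1, d1 = p
        a2, b2, c2, d2 = q
        return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
                a1*b2 + b1*a2 + c1*d2 - d1*c2,
                a1*c2 - b1*d2 + c1*a2 + d1*b2,
                a1*d2 + b1*c2 - c1*b2 + d1*a2)

    def qadd(p, q):
        return tuple(x + y for x, y in zip(p, q))

    ZERO = (0, 0, 0, 0)
    ONE  = (1, 0, 0, 0)

    def poly_mul(A, B):
        """Product of two polynomials over the quaternions (constant term first),
        preserving the order of the coefficient factors a_r * b_s."""
        prod = [ZERO] * (len(A) + len(B) - 1)
        for r, a in enumerate(A):
            for s, b in enumerate(B):
                prod[r + s] = qadd(prod[r + s], qmul(a, b))
        return prod

    # (w - i)(w - j) versus (w - j)(w - i): the constant terms ij and ji differ.
    A = [(0, -1, 0, 0), ONE]        # w - i
    B = [(0, 0, -1, 0), ONE]        # w - j
    print(poly_mul(A, B))           # constant term ij = k
    print(poly_mul(B, A))           # constant term ji = -k

The two products differ only in their constant terms ij = k and ji = −k, exactly the distinction the definition above is designed to respect.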
Let A = a₀ωᵐ + · · · + a_m and B = b₀ωⁿ + · · · + b_n (b₀ ≠ 0) be two polynomials in the indeterminate ω over a division algebra D. If n . . . . p. Thus m = p and φ(ω) ≡ M. This proves (1).

Finally, we can permute the linear factors of (1) cyclically if p > 1. Write φ ≡ P(ω − x₁). The unique division process yields φ ≡ Q(ω − x₁), where Q is a polynomial in ω and x₁ with coefficients in F. Hence Q ≡ P is commutative with ω − x₁, whence φ ≡ (ω − x₁)P, as desired.

THEOREM 4. If A is an algebra over a non-modular field F and if y is an element for which the rank equation of A has no multiple roots, then any element of A which is commutative with y is a polynomial in y with coefficients in F.

For, let x be the general element x = Σξᵢeᵢ of an algebra A over a non-modular field F. Let the rank equation of A be

f(x, ξ) ≡ a₀(ξ)xʳ + a₁(ξ)xʳ⁻¹ + · · · + a_r(ξ) = 0,

where ξ denotes the set of co-ordinates ξ₁, . . . , ξ_n of x. Let y = Σηᵢeᵢ be a particular element of A such that f(y, η) = 0 has no multiple roots. We seek the elements x which are commutative with y. Let λ be a variable in F. Then f(y + λx, η + λξ) = 0. The coefficient of each power of λ in its expansion must be zero. If we write

aᵢ(η + λξ) = aᵢ(η) + λa_{i1}(η, ξ) + λ²a_{i2}(η, ξ) + · · · ,

and equate to zero the coefficient of λ in f, we get

f′(y, η)·x + Σ a_{i1}(η, ξ)yʳ⁻ⁱ = 0,

where f′ denotes the derivative with respect to y and is not zero. Hence f′ has an inverse which is a polynomial in y (§ 84), and x is therefore a polynomial in y with coefficients in F.

THEOREM 5. Every normal division algebra D of order 9 over a non-modular field F is generated by elements x and y such that

xy = yθ(x),   y³ = γ,

where γ and the coefficients of the polynomial θ(x) belong to F, while* x, θ(x), and θ²(x) ≡ θ[θ(x)] are the roots of a cubic equation irreducible in F.

For, by Theorem 2, in which B is now of order 1, D is a simple matric algebra over an extended field. Hence the rank equation of D is of degree 3 (§ 71). Thus the equation of lowest degree with coefficients in F satisfied by an element x₁ not in F is of the form†

(5)  φ(ω) ≡ ω³ + a₁ω² + a₂ω + a₃ = 0.

* By (10), § 47. The algebra is of the type treated in §§ 47, 48.
† By Corollary 1, it is not of degree 2.

i) If D contains an element x₁ not in F which is commutative with a transform t = y⁻¹x₁y of x₁ (t ≠ x₁), Theorem 4 shows that t is a polynomial θ(x₁) in x₁ with coefficients in F. By the foregoing, ω − t is a right divisor of φ(ω) and hence t is a root of (5). Since the latter is irreducible and has a root x₁ in common with q(ω) ≡ φ[θ(ω)] = 0, all of its roots satisfy the latter, by Theorem 7 of § 84, whence θ²(x₁) is a root of (5) and θ³(x₁) is equal to the root x₁. By the two expressions for t,

x₁y = yθ(x₁),   x₁y² = y²θ²(x₁),   x₁y³ = y³θ³(x₁) = y³x₁,

whence y³ is commutative with x₁ and by Theorem 4 is expressible as a polynomial in x₁:

y³ = λx₁² + μx₁ + ν,

with λ, μ, ν in F. If λ and μ are not both zero, y³ is not in F and its adjunction extends F to the algebra (1, x₁, x₁²) of order 3 over F. But y³ extends F to a sub-algebra of (1, y, y²) and hence to the latter itself. Thus y is a polynomial in x₁ and hence is commutative with x₁, whereas y transforms x₁ into t ≠ x₁. This contradiction proves that y³ = ν. Hence Theorem 5 is true for case (i).

ii) Let D contain an element x₁ which is not commutative with any of its transforms other than x₁ itself. By Theorem 3 there exist transforms x₂ and x₃ of x₁ such that

(6)  φ(ω) ≡ (ω − x₃)(ω − x₂)(ω − x₁),

in which the three factors may be permuted cyclically. We proceed as in the proof of Theorem 3 with now

x′ ≡ (x₁ − x₃)x₁(x₁ − x₃)⁻¹,

which is distinct from x₁, since otherwise x₁x₃ = x₃x₁, contrary to the hypothesis on x₁. Hence

x₂ = Rx′R⁻¹ = Sx₁S⁻¹,   S ≡ (x′ − x₂)(x₁ − x₃) = (x₁ − x₃)x₁ − x₂(x₁ − x₃),

(7)  x₂ = (x₁x₃ − x₃x₁)x₁(x₁x₃ − x₃x₁)⁻¹.

Comparing (5) with (6), we have x₃x₂ + x₃x₁ + x₂x₁ = a₂. Permuting x₃, x₂, x₁ cyclically, we get

x₂x₁ + x₂x₃ + x₁x₃ = a₂,   x₁x₃ + x₁x₂ + x₃x₂ = a₂.

By subtraction,

y ≡ x₂x₁ − x₁x₂ = x₃x₂ − x₂x₃ = x₁x₃ − x₃x₁.

Then (7) becomes

(8)  x₂ = yx₁y⁻¹.

Permuting x₃, x₂, x₁ cyclically, we see that the three preceding values of y are permuted cyclically. Hence (8) gives

(9)  x₁ = yx₃y⁻¹,   x₃ = yx₂y⁻¹ = y²x₁y⁻²,   x₁ = y³x₁y⁻³,

whence y³ is commutative with x₁ and by Theorem 4 is expressible as a polynomial in x₁. As shown above, either y itself is commutative with x₁, in contradiction with (8), or y³ is an element of F. If any transform (other than y) of y is commutative with y, we have case (i). If no such transform is commutative with y, we take y as the x₁ employed at the beginning of case (ii). Thus our discussion of case (ii) holds with the simplification x₁³ = γ, where γ is in F. Write

(10)  z₁ = . . . ,   z₂ = . . . .

Then z₁z₂ − z₂z₁ = . . . . Since (6) is now identical with ω³ − γ, we have γ = x₃x₂x₁, whence, by (8), (9), and y³ = ν, z₁z₂ − z₂z₁ = 0, so that z₁ is a polynomial θ(z₂) with coefficients in F. By (10),

z₂x₁ = x₁z₁ = x₁θ(z₂),   x₁³ = γ.

Replacing z₂ by x and x₁ by y, we obtain Theorem 5. Hence by Corollary 3 every division algebra of order 9 is either a field or is of the type in Theorem 5.
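The cubic field underlying Theorem 5 can be exhibited numerically. The sketch below is not part of the original text; the defining equation x³ = 3x − 1, the automorphism θ(x) = x² − 2, and the sample element 2 + x are illustrative choices (this cubic is a standard example of a cyclic cubic over the rational field). The code verifies that θ has period three and computes norm f(x) = f(x)·f[θ(x)]·f[θ²(x)], which always falls in F, as in (14) of the preceding appendix.

    # A cyclic cubic field over the rational numbers, with the illustrative
    # defining equation x^3 = 3x - 1 and the automorphism theta(x) = x^2 - 2.
    # Elements of F(x) are a + b*x + c*x^2, stored as [a, b, c] with rational entries.
    from fractions import Fraction as Fr

    MOD = [Fr(-1), Fr(3)]                       # x^3 = MOD[0] + MOD[1]*x

    def mul(f, g):
        """Product of two elements of F(x), reduced by x^3 = 3x - 1."""
        prod = [Fr(0)] * 5
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                prod[i + j] += a * b
        for k in (4, 3):                        # reduce the x^4 and x^3 terms
            c, prod[k] = prod[k], Fr(0)
            prod[k - 3] += c * MOD[0]
            prod[k - 2] += c * MOD[1]
        return prod[:3]

    def theta(f):
        """Apply the automorphism x -> x^2 - 2 to the element f."""
        t = [Fr(-2), Fr(0), Fr(1)]              # theta(x) = x^2 - 2
        result = [f[0], Fr(0), Fr(0)]
        power = [Fr(1), Fr(0), Fr(0)]
        for coeff in f[1:]:
            power = mul(power, t)
            result = [r + coeff * p for r, p in zip(result, power)]
        return result

    x = [Fr(0), Fr(1), Fr(0)]
    assert theta(theta(theta(x))) == x          # theta has period 3

    def norm(f):
        """norm f(x) = f(x) * f(theta x) * f(theta^2 x), an element of F."""
        n = mul(mul(f, theta(f)), theta(theta(f)))
        assert n[1] == n[2] == 0                # the product lies in the base field
        return n[0]

    print(norm([Fr(2), Fr(1), Fr(0)]))          # norm of 2 + x; prints Fraction(1, 1)

For the algebra of Theorem 5 one would then choose γ in F subject to the norm condition of the preceding appendix; the run shows, for instance, that 1 is itself a norm, being the norm of 2 + x.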
APPENDIX III

STATEMENT OF FURTHER RESULTS AND UNSOLVED PROBLEMS

1. If A₁, . . . , A_s is a series of algebras such that each Aᵢ is a maximal invariant proper sub-algebra of its predecessor A_{i−1}, while A_s is simple, the series is called a series of composition of A₁. The series of simple algebras A₁ − A₂, A₂ − A₃, . . . , A_{s−1} − A_s, A_s is called a series of differences of A₁. Algebra (1) in § 20 has the series of composition A = (u₁, u₂, u₃), (u₂, u₃), (u₃), as well as that derived by any permutation of 1, 2, 3. For each of these six series of composition of A, the series of difference algebras is composed of three algebras of order 1, each generated by an idempotent element. Since all such algebras of order 1 are equivalent, this illustrates the theorem* that two series of differences of the same algebra contain the same number of algebras, and the algebras of one series are equivalent to those of the other series when properly rearranged.

If A is an algebra of index a and if the order n of A exceeds that of A² by r, each series of differences of A can be so arranged that the first r terms are zero algebras of order 1. Hence, if a > 1, A has an invariant sub-algebra of order n − 1.

* Wedderburn, Proceedings of the London Mathematical Society, Series 2, Vol. VI (1907), pp. 83–84, 89.

2. An associative algebra A with a modulus e over a field F is reducible† with respect to F if and only if it contains an idempotent element ≠ e which is commutative with every element of A.

† Scheffers, Mathematische Annalen, XXXIX (1891), 319; Linear Algebras, pp. 26–27.

3. If‡ an associative algebra A has no modulus, but contains an invariant sub-algebra having a modulus, then A can be expressed in one and only one way as a direct sum of an algebra B with a modulus and an algebra C which has no modulus and no invariant sub-algebra which has a modulus.

‡ Communicated by Wedderburn. B is an invariant sub-algebra which has a modulus and is contained in no other invariant sub-algebra having a modulus. Then A = B ⊕ C by § 22.

4. The author§ has recently found all associative algebras with a modulus of order n and rank n or 2 over any non-modular field, and deduced all algebras of orders 2, 3, 4. If A is of order and rank n, it contains an element x such that 1, x, x², . . . , xⁿ⁻¹ are linearly independent, while x is a root of an equation f(ω) = 0 of degree n with coefficients in F. Then A is irreducible with respect to F if and only if f(ω) is irreducible or is a power of a polynomial irreducible in F.

§ Proceedings of the London Mathematical Society, 1923.

5. Consider* the algebra C in which multiplication is defined by

(q + Qe)(r + Re) = t + Te,   t = qr − R′Q,   T = Rq + Qr′,

where q, Q, r, R are any real quaternions, and r′, R′ are the conjugates of r, R. Taking r = q′, R = −Q, we get

N(q + Qe) ≡ (q + Qe)(q′ − Qe) = qq′ + QQ′.

The norm of a product is the product of the norms of the factors. Each of the two kinds of division, except by zero, is always possible and unique, so that C is a division algebra; it is not associative. The author† has discussed the arithmetic of C at length.

* Dickson, Transactions of the American Mathematical Society, XIII (1912), 72; Annals of Mathematics, XX (1919), 155–71, 297; Linear Algebras, p. 15. An equivalent real algebra of order 8 had been given by Cayley.

† Journal de Mathématiques, Sér. 9, Tome II (1923).
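The doubling rule defining C in 5 can be verified numerically. The sketch below is not part of the original text; the 4-tuple representation of quaternions, the helper names, and the random integer samples are merely illustrative. It implements (q + Qe)(r + Re) = (qr − R′Q) + (Rq + Qr′)e and checks that the norm of a product equals the product of the norms.

    # The order-8 algebra C of 5: pairs (q, Q) of real quaternions with
    # (q, Q)(r, R) = (qr - R'Q, Rq + Qr') and N(q, Q) = qq' + QQ'.
    import random

    def qmul(p, q):
        a1, b1, c1, d1 = p
        a2, b2, c2, d2 = q
        return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
                a1*b2 + b1*a2 + c1*d2 - d1*c2,
                a1*c2 - b1*d2 + c1*a2 + d1*b2,
                a1*d2 + b1*c2 - c1*b2 + d1*a2)

    def conj(q):
        a, b, c, d = q
        return (a, -b, -c, -d)

    def qadd(p, q):
        return tuple(x + y for x, y in zip(p, q))

    def qsub(p, q):
        return tuple(x - y for x, y in zip(p, q))

    def cmul(z, w):
        """(q + Qe)(r + Re) = (qr - R'Q) + (Rq + Qr')e."""
        q, Q = z
        r, R = w
        return (qsub(qmul(q, r), qmul(conj(R), Q)),
                qadd(qmul(R, q), qmul(Q, conj(r))))

    def norm(z):
        """N(q + Qe) = qq' + QQ', a non-negative real number."""
        q, Q = z
        return qmul(q, conj(q))[0] + qmul(Q, conj(Q))[0]

    # The norm of a product equals the product of the norms.
    random.seed(0)
    for _ in range(100):
        z = (tuple(random.randint(-5, 5) for _ in range(4)),
             tuple(random.randint(-5, 5) for _ in range(4)))
        w = (tuple(random.randint(-5, 5) for _ in range(4)),
             tuple(random.randint(-5, 5) for _ in range(4)))
        assert norm(cmul(z, w)) == norm(z) * norm(w)
    print("norm is multiplicative on the samples tested")

The same routine also confirms the failure of associativity noted in the text: cmul(cmul(z, w), v) and cmul(z, cmul(w, v)) generally disagree.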
6. If‡ a division algebra A over F contains a normal sub-algebra B, A can be expressed as the direct product of B and another algebra C over F.

Further results on division algebras have been obtained by the author§ and O. C. Hazlett.‖ Every associative division algebra over a finite field is a field.¶

‡ Wedderburn, Transactions of the American Mathematical Society, XXII (1921), 132. The proof is by the corollary to Theorem 2 in Linear Algebras, pp. 28, 29.

§ Transactions of the American Mathematical Society, VII (1906), 370, 514; XIII (1912), 59; XV (1914), 39; Bulletin of the American Mathematical Society, XIV (1907–8), 160; Göttinger Nachrichten (1905), pp. 358–93; Linear Algebras, pp. 69, 71.

‖ Transactions of the American Mathematical Society, XVIII (1917), 167–76.

¶ Wedderburn, op. cit., VI (1905), 349; Dickson, Göttinger Nachrichten (1905), p. 381.

7. Invariantive characterizations of algebras and certain vector covariants of them have been given by Hazlett* and MacDuffee.† The author‡ deduced the algebra of quaternions from relations between algebras and continuous groups.

* Annals of Mathematics, XVI (1914), 1–6; XVIII (1916), 81–98; Transactions of the American Mathematical Society, XIX (1918), 408–20.

† Transactions of the American Mathematical Society, XXIII (1922), 135–50.

‡ Bulletin of the American Mathematical Society, XXII (1915), 53–61; Proceedings of the National Academy of Sciences, VII (1921), 109–14.

8. There are papers§ dealing with the relations between linear algebras and finite groups, and others dealing with analytic functions of hypercomplex numbers.

§ Linear Algebras, pp. 63, 73; or Encyclopédie des Sciences Mathématiques, Tome I, Vol. I (1908), pp. 436, 441.

9. Among the unsolved problems are the determination of all division algebras, the classification of nilpotent algebras, the discovery of relations between an algebra and its maximal nilpotent invariant sub-algebra (cf. §§ 101–3 for the case of complex algebras), the theory of non-associative algebras, the theory of ideals in the arithmetic of a division algebra, and the extension to algebras of the whole theory of algebraic numbers.

INDEX

[Numbers refer to pages]

Adjunction to field, 2
Algebra defined, 9, 22; complementary to, 40.
  See Complex, Difference, Division, Equivalent, Invariant, Irreducible, Matrices, Maximal, Principal, Quaternions, Reciprocal, Reducible, Semi-simple, Simple
Algebraic numbers, 1, 128–40, 142–43
Annihilated, 49
Arithmetic of algebra, 141–99, 237–38
Associated arithmetics, 144, 180–87
Associated elements, 144
Associative algebra, 10, 92, 98
Basal units, 14, 17; normalized, 175–85
Basis, 10, 25, 130, 138, 161–64
Cayley's algebra, 237
Central, 31
Character of units, 177, 180
Characteristic determinant of element, 101–3, 178–84; of matrix, 99, 103
Characteristic equation of element, 101, 104–5, 111, 115; of matrix, 99, 103, 110
Characteristic matrices, 101
Class, 38, 90; of polynomials, 216; of residues, 202, 220
Complex algebra, 16, 176–85
Components, 33
Congruences, 218
Congruent, 38, 216
Conjugate, 20, 188
Constants of multiplication, 17, 126–27
Co-ordinates, 17
Covariants, 238
Cyclic equation, 66
Decomposition relative to an idempotent, 48
Degree of an algebraic field, 134
Determinant, first and second, 95; irreducible, 115. See Characteristic, Symbols
Dickson algebras, 66
Difference algebra, 36–41, 52, 80, 85–91
Diophantine equations, 203
Direct product, 72, 78, 79, 84–91, 118
Direct sum, 33, 35, 40, 53, 116, 120, 157, 236
Division algebras, 59–71, 78–80, 120–23, 126, 165–74, 192, 194–99, 221–38; normal, 228
Divisor of zero, 60
Element, 9
Elementary transformations, 171
Equation. See Characteristic, Cyclic, Diophantine, Minimum, Rank
Equivalent algebras, 20, 96, 98
Extension of field, 2, 118, 215–18
Factorization unique, 155, 159, 174, 211, 215, 219
Fields, 1, 200–220; as algebras, 16. See Extension, Finite, Modulus
Finite fields, 202, 220, 237
Galois field, 218–20
Gauss's lemma, 133
Greatest common divisor of polynomials, 208–9, 213–14, 229; of generalized quaternions, 198; of quaternions, 149
Group, 94, 98, 238
Idempotent, 44, 48–51, 54–61, 80, 81, 85, 121. See Primitive, Principal
Identity transformation, 4
Incongruent, 38
Indeterminates, 203–5
Index, 43
Integral algebraic number, 130–40
Integral element, 141
Integral quaternion, 148, 150
Intersection, 26
Invariant sub-algebra, 31, 41–42
Invariant under transformation, 102, 117, 238
Inverse in field, 202; of quaternion, 20
Irreducible algebra, 35, 237
Irreducible polynomial, 132, 135, 206
Linear sets: basis of, 25; intersection of, 26; order of, 25; product of, 29; sum of, 26; supplementary, 28
Linear transformations: corresponding to elements of an algebra, 93; defined, 2; degenerate, 6; determinant of, 2; inverse of, 4; not commutative, 4; orthogonal, 199; product of, 3; product associative, 4
Linearly dependent, 13, 15; independent, 13
Matrices: adjoint, 7, 9; algebra of, 16, 18, 22, 92; determinant of, 6; diagonal, 173; division by, 7; equal, 6; equivalent, 169–74; first and second, 95, 98, 99; first element of, 171; identity, 7; inverse of, 7; prime, 174; product of, 5; product associative, 6; rank, 105, 173; scalar, 8; sum of, 7; unit, 7; with elements in a division algebra, 165–74; with integral elements, 168, 174. See Characteristic, Minimum, Simple
Maximal invariant sub-algebra, 32, 42, 51
Maximal nilpotent invariant sub-algebra, 44, 52, 105, 118, 121–27, 238
Minimum equation of element, 111; of matrix, 109–10
Modulo, 38, 202, 216
Modulus, 15, 33, 38, 97; of field, 10
Nilpotent, 43, 105, 175–76, 238. See Maximal, Properly
Norm, 20, 67, 68, 70, 169, 188, 224
Normal, 228
n-tuple, 22
Order of algebra, 14
Polynomials in an element, 61, 229; in indeterminates, 203–15.
  See Class, Greatest, Irreducible, Primitive, Reducible, Relatively prime, Vanishing
Postulates for algebras, 9, 23; for arithmetics, 141; for fields, 200
Prime element, 159; matrix, 174; quaternion, 152
Primitive idempotent, 55–58, 81
Primitive polynomial, 212
Principal idempotent, 49–51, 57–58, 81
Principal theorem on algebras, 118–27
Principal unit, 15
Product of linear sets, 29. See Direct, Scalar
Proper sub-algebra, 31
Properly nilpotent, 46, 59, 60, 89, 90, 105–8, 187
Quadratic integer, 129
Quadratic number, 128
Quaternions, 19, 64, 67, 194–99, 237–38; arithmetic of, 147–56; generalized, 187–94, 198
Rank of algebra, 114, 236; of matrix, 105, 173
Rank equation, 111–17
Reciprocal algebras, 21, 96, 98, 99
Reciprocal groups, 98
Reducible algebras, 33–35, 53, 236
Reducible polynomials, 132, 135, 206
Relatively prime polynomials, 210; quaternions, 151
Scalar multiplication, 9
Scalar product, 8, 9
Semi-simple, 51–54, 60, 105, 118, 161–64, 187
Series of composition, 235
Series of differences, 235
Simple algebras, 42, 53, 54, 73–80, 127, 165–74
Simple matric algebras, 76, 78–80, 82–91, 115, 118–20, 127, 223, 227
Sub-algebra, 31
Sub-field, 2
Subtraction, 12, 202
Sum of four squares, 154, 198
Sum of sets or algebras, 26. See Direct
Supplementary, 28
Symbols: |aᵢⱼ| for the determinant whose general element is aᵢⱼ; +, ∩, 26; ⊃, ⊂, 26; ⊕, 33; ×, 72; (x₁, . . . , x_n), 25; A − B, 37; [x], 38; x ≡ y (mod B), 38, 216; N(q), 20, 169, 188; A(x), 93; A′(x), R_x, S_x, 95; δ(x), δ′(x), 101
Table of multiplication, 17
Trace, 105–8
Transformation of units, 15, 101, 117. See Linear
Units, 143–44, 153, 159, 170, 174, 179, 185, 194. See Basal, Transformation
Unity of field, 201
Unsolved problems, 238
Vanishing of polynomial, 206–8
Zero algebra, 43
Zero element, 11, 201
Zero set, 25