
6.4. The Primary Decomposition Theorem
We are trying to study a linear operator T on the finite-dimensional space V by decomposing T into a direct sum of operators which are in some sense elementary. We can do this through the characteristic values and vectors of T in certain special cases, i.e., when the minimal polynomial for T factors over the scalar field F into a product of distinct monic polynomials of degree 1. What can we do with the general T? If we try to study T using characteristic values, we are confronted with two problems. First, T may not have a single characteristic value; this is really a deficiency in the scalar field, namely, that it is not algebraically closed. Second, even if the characteristic polynomial factors completely over F into a product of polynomials of degree 1, there may not be enough characteristic vectors for T to span the space V; this is clearly a deficiency in T.
The second situation is illustrated by the operator T on $F^3$ (F any field) represented in the standard basis by the matrix
$$A = \begin{bmatrix} 2 & 0 & 0 \\ 1 & 2 & 0 \\ 0 & 0 & -1 \end{bmatrix}.$$
The characteristic polynomial for A is $(x-2)^2(x+1)$, and this is plainly also the minimal polynomial for A (or for T). Thus T is not diagonalizable. One sees that this happens because the null space of $(T - 2I)$ has dimension 1 only. On the other hand, the null space of $(T + I)$ and the null space of $(T - 2I)^2$ together span V, the former being the subspace spanned by $\epsilon_3$ and the latter the subspace spanned by $\epsilon_1$ and $\epsilon_2$.
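This example can be checked mechanically. The following sketch uses SymPy (a library choice of ours, not the text's) to compute the null spaces just described:

```python
from sympy import Matrix, eye

# The matrix A from the example above.
A = Matrix([[2, 0, 0],
            [1, 2, 0],
            [0, 0, -1]])
Id = eye(3)

# The characteristic vectors for c = 2 span only a one-dimensional space:
assert len((A - 2*Id).nullspace()) == 1

# but null((A - 2I)^2) and null(A + I) together span the whole space:
W1 = ((A - 2*Id)**2).nullspace()   # two basis vectors
W2 = (A + Id).nullspace()          # one basis vector
assert len(W1) == 2 and len(W2) == 1
assert Matrix.hstack(*(W1 + W2)).rank() == 3
```

The rank-3 check at the end is exactly the statement that the two null spaces together span V.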

This will be more or less our general method for the second problem. If (remember this is an assumption) the minimal polynomial for T decomposes
$$p = (x - c_1)^{r_1} \cdots (x - c_k)^{r_k},$$
where $c_1, \ldots, c_k$ are distinct elements of F, then we shall show that the space V is the direct sum of the null spaces of $(T - c_i I)^{r_i}$, $i = 1, \ldots, k$. The diagonalizable operator is the special case of this in which $r_i = 1$ for each i. The theorem which we prove is more general than what we have described, since it works with the primary decomposition of the minimal polynomial, whether or not the primes which enter are all of first degree. The reader will find it helpful to think of the special case when the primes are of degree 1, and even more particularly, to think of the proof of Theorem 10, a special case of this theorem.

Theorem 12 (Primary Decomposition Theorem). Let T be a linear operator on the finite-dimensional vector space V over the field F. Let p be the minimal polynomial for T,
$$p = p_1^{r_1} \cdots p_k^{r_k},$$
where the $p_i$ are distinct irreducible monic polynomials over F and the $r_i$ are positive integers. Let $W_i$ be the null space of $p_i(T)^{r_i}$, $i = 1, \ldots, k$. Then

(A) $V = W_1 \oplus \cdots \oplus W_k$;
(B) each $W_i$ is invariant under T;
(C) if $T_i$ is the operator induced on $W_i$ by T, then the minimal polynomial for $T_i$ is $p_i^{r_i}$.
Proof. The idea of the proof is this. If the direct-sum decomposition (A) is valid, how can we get hold of the projections $E_1, \ldots, E_k$ associated with the decomposition? The projection $E_i$ will be the identity on $W_i$ and zero on the other $W_j$. We shall find a polynomial $h_i$ such that $h_i(T)$ is the identity on $W_i$ and is zero on the other $W_j$, so that $h_1(T) + \cdots + h_k(T) = I$, etc.

For each i, let
$$f_i = \frac{p}{p_i^{r_i}} = \prod_{j \neq i} p_j^{r_j}.$$
Since $p_1, \ldots, p_k$ are distinct prime polynomials, the polynomials $f_1, \ldots, f_k$ are relatively prime (Theorem 8, Chapter 4). Thus there are polynomials $g_1, \ldots, g_k$ such that
$$\sum_{i=1}^{k} f_i g_i = 1.$$
Note also that if $i \neq j$, then $f_i f_j$ is divisible by the polynomial p, because $f_i f_j$ contains each $p_m^{r_m}$ as a factor. We shall show that the polynomials $h_i = f_i g_i$ behave in the manner described in the first paragraph of the proof.

Let $E_i = h_i(T) = f_i(T) g_i(T)$. Since $h_1 + \cdots + h_k = 1$ and p divides $f_i f_j$ for $i \neq j$, we have
$$E_1 + \cdots + E_k = I, \qquad E_i E_j = 0 \quad \text{if } i \neq j.$$
Thus the $E_i$ are projections which correspond to some direct-sum decomposition of the space V. We wish to show that the range of $E_i$ is exactly the subspace $W_i$. It is clear that each vector in the range of $E_i$ is in $W_i$, for if $\alpha$ is in the range of $E_i$, then $\alpha = E_i \alpha$ and so
$$p_i(T)^{r_i} \alpha = p_i(T)^{r_i} f_i(T) g_i(T) \alpha = 0,$$
because $p_i^{r_i} f_i g_i$ is divisible by the minimal polynomial p. Conversely, suppose that $\alpha$ is in the null space of $p_i(T)^{r_i}$. If $j \neq i$, then $f_j g_j$ is divisible by $p_i^{r_i}$ and so $f_j(T) g_j(T) \alpha = 0$, i.e., $E_j \alpha = 0$ for $j \neq i$. But then it is immediate that $E_i \alpha = \alpha$, i.e., that $\alpha$ is in the range of $E_i$. This completes the proof of statement (A).
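The construction of the projections can be carried out explicitly. The following SymPy sketch (our own illustration, using the 3×3 matrix from the beginning of the section, whose minimal polynomial is $(x-2)^2(x+1)$) builds the $E_i$ from the extended Euclidean identity and checks the claimed relations:

```python
from sympy import Matrix, Poly, eye, gcdex, symbols

x = symbols('x')
A = Matrix([[2, 0, 0], [1, 2, 0], [0, 0, -1]])

# Minimal polynomial p = (x-2)^2 (x+1); the complementary factors are
# f1 = p / (x-2)^2 and f2 = p / (x+1).
f1 = Poly(x + 1, x)
f2 = Poly((x - 2)**2, x)

# Extended Euclid: g1*f1 + g2*f2 = 1 (the f_i are relatively prime).
g1, g2, h = gcdex(f1, f2)
assert h.as_expr() == 1

def apply_poly(q, M):
    """Evaluate the polynomial q at the matrix M."""
    R = Matrix.zeros(*M.shape)
    for power, coeff in enumerate(reversed(q.all_coeffs())):
        R += coeff * M**power
    return R

E1 = apply_poly(g1 * f1, A)   # projection onto W1 = null((A - 2I)^2)
E2 = apply_poly(g2 * f2, A)   # projection onto W2 = null(A + I)

assert E1 + E2 == eye(3)             # E1 + E2 = I
assert E1 * E2 == Matrix.zeros(3, 3) # E1 E2 = 0
assert E1 * E1 == E1                 # each E_i is a projection
```

For this matrix the projections come out as $E_1 = \mathrm{diag}(1,1,0)$ and $E_2 = \mathrm{diag}(0,0,1)$, matching the null spaces found earlier.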

It is certainly clear that the subspaces $W_i$ are invariant under T. If $T_i$ is the operator induced on $W_i$ by T, then evidently $p_i(T_i)^{r_i} = 0$, because by definition $p_i(T)^{r_i}$ is 0 on the subspace $W_i$. This shows that the minimal polynomial for $T_i$ divides $p_i^{r_i}$. Conversely, let g be any polynomial such that $g(T_i) = 0$. Then $g(T) f_i(T) = 0$. Thus $g f_i$ is divisible by the minimal polynomial p of T, i.e., $p_i^{r_i} f_i$ divides $g f_i$. It is easily seen that $p_i^{r_i}$ divides g. Hence the minimal polynomial for $T_i$ is $p_i^{r_i}$.

Corollary. If $E_1, \ldots, E_k$ are the projections associated with the primary decomposition of T, then each $E_i$ is a polynomial in T, and accordingly if a linear operator U commutes with T then U commutes with each of the $E_i$, i.e., each subspace $W_i$ is invariant under U.

In the notation of the proof of Theorem 12, let us take a look at the special case in which the minimal polynomial for T is a product of first-degree polynomials, i.e., the case in which each $p_i$ is of the form $p_i = x - c_i$. Now the range of $E_i$ is the null space $W_i$ of $(T - c_i I)^{r_i}$. Let us put
$$D = c_1 E_1 + \cdots + c_k E_k.$$
By Theorem 10, D is a diagonalizable operator which we shall call the diagonalizable part of T. Let us look at the operator $N = T - D$. Now
$$T = T E_1 + \cdots + T E_k$$
$$D = c_1 E_1 + \cdots + c_k E_k$$
so
$$N = (T - c_1 I) E_1 + \cdots + (T - c_k I) E_k.$$
The reader should be familiar enough with projections by now so that he sees that
$$N^2 = (T - c_1 I)^2 E_1 + \cdots + (T - c_k I)^2 E_k$$
and in general that
$$N^r = (T - c_1 I)^r E_1 + \cdots + (T - c_k I)^r E_k.$$
When $r \geq r_i$ for each i, we shall have $N^r = 0$, because the operator $(T - c_i I)^r$ will then be 0 on the range of $E_i$.
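For the 3×3 matrix used earlier in this section, the decomposition $T = D + N$ can be computed and checked directly. A SymPy sketch (the projections below are the ones obtained from the proof of Theorem 12 for this particular matrix, where they happen to be diagonal):

```python
from sympy import Matrix, diag

# The example matrix again.
A = Matrix([[2, 0, 0], [1, 2, 0], [0, 0, -1]])
E1 = diag(1, 1, 0)   # projection onto null((A - 2I)^2)
E2 = diag(0, 0, 1)   # projection onto null(A + I)

# Diagonalizable part D = c1 E1 + c2 E2, nilpotent part N = A - D.
D = 2*E1 + (-1)*E2
N = A - D

assert D + N == A
assert D*N == N*D                    # D and N commute
assert N**2 == Matrix.zeros(3, 3)    # N is nilpotent; here r = 2 suffices
```

Here $N$ has the single entry 1 below the diagonal, and $N^2 = 0$ because the exponent in $(x-2)^2$ is 2.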

Definition. Let N be a linear operator on the vector space V. We say that N is nilpotent if there is some positive integer r such that $N^r = 0$.

Theorem 13. Let T be a linear operator on the finite-dimensional vector space V over the field F. Suppose that the minimal polynomial for T decomposes over F into a product of linear polynomials. Then there is a diagonalizable operator D on V and a nilpotent operator N on V such that

(A) $T = D + N$;
(B) $DN = ND$.

The diagonalizable operator D and the nilpotent operator N are uniquely determined by (A) and (B), and each of them is a polynomial in T.
Proof. We have just observed that we can write $T = D + N$ where D is diagonalizable and N is nilpotent, and where D and N not only commute but are polynomials in T. Now suppose that we also have $T = D' + N'$ where $D'$ is diagonalizable, $N'$ is nilpotent, and $D'N' = N'D'$. We shall prove that $D = D'$ and $N = N'$.

Since $D'$ and $N'$ commute with one another and $T = D' + N'$, we see that $D'$ and $N'$ commute with T. Thus $D'$ and $N'$ commute with any polynomial in T; hence they commute with D and with N. Now we have
$$D + N = D' + N' \quad \text{or} \quad D - D' = N' - N,$$
and all four of these operators commute with one another. Since D and $D'$ are both diagonalizable and they commute, they are simultaneously diagonalizable, and $D - D'$ is diagonalizable. Since N and $N'$ are both nilpotent and they commute, the operator $N' - N$ is nilpotent; for, using the fact that N and $N'$ commute,
$$(N' - N)^r = \sum_{j=0}^{r} \binom{r}{j} (N')^{r-j} (-N)^j,$$
and so when r is sufficiently large every term in this expression for $(N' - N)^r$ will be 0. (Actually, a nilpotent operator on an n-dimensional space must have its nth power 0; if we take $r = 2n$ above, that will be large enough. It then follows that $r = n$ is large enough, but this is not obvious from the above expression.) Now $D - D'$ is a diagonalizable operator which is also nilpotent. Such an operator is obviously the zero operator; for since it is nilpotent, the minimal polynomial for this operator is of the form $x^r$ for some $r \geq 1$; but then since the operator is diagonalizable, the minimal polynomial cannot have a repeated root; hence $r = 1$ and the minimal polynomial is simply x, which says the operator is 0. Thus we see that $D = D'$ and $N = N'$.
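The commutativity hypothesis (B) is essential for uniqueness. A small example of our own: the matrix T below is diagonalizable (distinct eigenvalues 2 and 3), so its unique commuting decomposition is $D = T$, $N = 0$; yet it also splits as a diagonal matrix plus a nonzero nilpotent matrix, and that pair simply fails to commute.

```python
from sympy import Matrix

# T is diagonalizable, so Theorem 13 gives D = T, N = 0.
T = Matrix([[2, 1], [0, 3]])

# A tempting alternative splits off the strictly triangular part:
D_alt = Matrix([[2, 0], [0, 3]])   # diagonalizable
N_alt = Matrix([[0, 1], [0, 0]])   # nilpotent

assert D_alt + N_alt == T
assert N_alt**2 == Matrix.zeros(2, 2)
# but this pair violates (B), so it does not contradict uniqueness:
assert D_alt * N_alt != N_alt * D_alt
```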

Corollary. Let V be a finite-dimensional vector space over an algebraically closed field F, e.g., the field of complex numbers. Then every linear operator T on V can be written as the sum of a diagonalizable operator D and a nilpotent operator N which commute. These operators D and N are unique, and each is a polynomial in T.

From these results, one sees that the study of linear operators on vector spaces over an algebraically closed field is essentially reduced to the study of nilpotent operators. For vector spaces over non-algebraically closed fields, we still need to find some substitute for characteristic values and vectors. It is a very interesting fact that these two problems can be handled simultaneously, and this is what we shall do in the next chapter.

In concluding this section, we should like to give an example which illustrates some of the ideas of the primary decomposition theorem. We have chosen to give it at the end of the section since it deals with differential equations and thus is not purely linear algebra.

Example 11.

In the primary decomposition theorem, it is not necessary that the vector space V be finite-dimensional, nor is it necessary for parts (A) and (B) that p be the minimal polynomial for T. If T is a linear operator on an arbitrary vector space and if there is a monic polynomial p such that $p(T) = 0$, then parts (A) and (B) of Theorem 12 are valid for T with the proof which we gave.

Let n be a positive integer and let V be the space of all n times continuously differentiable functions f on the real line which satisfy the differential equation
$$\frac{d^n f}{dt^n} + a_{n-1} \frac{d^{n-1} f}{dt^{n-1}} + \cdots + a_1 \frac{df}{dt} + a_0 f = 0,$$
where $a_0, \ldots, a_{n-1}$ are some fixed constants. If $C^n$ denotes the space of n times continuously differentiable functions, then the space V of solutions of this differential equation is a subspace of $C^n$. If D denotes the differentiation operator and p is the polynomial
$$p = x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0,$$
then V is the null space of the operator p(D), because the differential equation simply says $p(D)f = 0$. Let us now regard D as a linear operator on the subspace V. Then $p(D) = 0$.

If we are discussing differentiable complex-valued functions, then $C^n$ and V are complex vector spaces, and $a_0, \ldots, a_{n-1}$ may be any complex numbers. We now write
$$p = (x - c_1)^{r_1} \cdots (x - c_k)^{r_k},$$
where $c_1, \ldots, c_k$ are distinct complex numbers. If $W_j$ is the null space of $(D - c_j I)^{r_j}$, then Theorem 12 says that
$$V = W_1 \oplus \cdots \oplus W_k.$$
In other words, if f satisfies the differential equation $p(D)f = 0$, then f is uniquely expressible in the form
$$f = f_1 + \cdots + f_k,$$
where $f_j$ satisfies the differential equation $(D - c_j I)^{r_j} f_j = 0$. Thus, the study of the solutions to the equation $p(D)f = 0$ is reduced to the study of the space of solutions of a differential equation of the form
$$(D - cI)^r f = 0.$$
This reduction has been accomplished by the general methods of linear algebra, i.e., by the primary decomposition theorem.
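The splitting $f = f_1 + \cdots + f_k$ can be computed with the same projections as in the proof of Theorem 12. A sketch in SymPy, with a polynomial and a solution chosen by us for illustration: take $p = (x-1)(x-2)$, so the Euclidean identity is $-(x-2) + (x-1) = 1$ and the projections act on solutions as $E_1 = -(D - 2I)$ and $E_2 = D - I$.

```python
from sympy import symbols, exp, diff, expand, simplify

t = symbols('t')

# A solution of p(D)f = 0 with p = (x - 1)(x - 2):
f = 3*exp(t) - 2*exp(2*t)

# E1 = -(D - 2I) and E2 = (D - I), from -(x - 2) + (x - 1) = 1.
f1 = expand(2*f - diff(f, t))   # E1 f: the component annihilated by D - I
f2 = expand(diff(f, t) - f)     # E2 f: the component annihilated by D - 2I

assert f1 == 3*exp(t)
assert f2 == -2*exp(2*t)
assert simplify(f1 + f2 - f) == 0
assert simplify(diff(f1, t) - f1) == 0      # (D - I) f1 = 0
assert simplify(diff(f2, t) - 2*f2) == 0    # (D - 2I) f2 = 0
```

The projections recover exactly the $e^t$ and $e^{2t}$ components of the solution, as the direct-sum decomposition predicts.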

To describe the space of solutions to $(D - cI)^r f = 0$, one must know something about differential equations; that is, one must know something about D other than the fact that it is a linear operator. However, one does not need to know very much. It is very easy to establish by induction on r that if f is in $C^r$ then
$$(D - cI)^r f = e^{ct} D^r (e^{-ct} f);$$
that is,
$$Df - cf = e^{ct} D(e^{-ct} f), \text{ etc.}$$
Thus $(D - cI)^r f = 0$ if and only if $D^r(e^{-ct} f) = 0$. A function g such that $D^r g = 0$, i.e., $d^r g / dt^r = 0$, must be a polynomial function of degree $(r - 1)$ or less:
$$g(t) = b_0 + b_1 t + \cdots + b_{r-1} t^{r-1}.$$
Thus f satisfies $(D - cI)^r f = 0$ if and only if f has the form
$$f(t) = e^{ct}(b_0 + b_1 t + \cdots + b_{r-1} t^{r-1}).$$
Accordingly, the 'functions' $e^{ct}, t e^{ct}, \ldots, t^{r-1} e^{ct}$ span the space of solutions of $(D - cI)^r f = 0$. Since $1, t, \ldots, t^{r-1}$ are linearly independent functions and the exponential function has no zeros, these r functions $t^j e^{ct}$, $0 \leq j \leq r - 1$, form a basis for the space of solutions.
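Both the basis claim and the identity behind it can be verified symbolically. A SymPy sketch (the choice $r = 3$ is ours, purely for illustration):

```python
from sympy import symbols, Function, exp, diff, simplify

t, c = symbols('t c')
r = 3   # illustrative choice of r

# Each of the r functions t^j e^{ct}, j = 0, ..., r-1, is annihilated
# by (D - cI)^r:
for j in range(r):
    f = t**j * exp(c*t)
    for _ in range(r):
        f = diff(f, t) - c*f          # apply (D - cI) once
    assert simplify(f) == 0

# The identity (D - cI)^r f = e^{ct} D^r (e^{-ct} f) for a generic f:
g = Function('g')(t)
lhs = g
for _ in range(r):
    lhs = diff(lhs, t) - c*lhs
rhs = exp(c*t) * diff(exp(-c*t)*g, t, r)
assert simplify(lhs - rhs) == 0
```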