The ring of entire functions is a non-factorial domain

Let A denote the ring of entire functions (the following result holds, with exactly the same proof, for the ring of holomorphic functions on any domain of the complex plane). This ring is a domain: if fg=0 with f and g entire functions, then the complex plane is the union of the zero locus of f and the zero locus of g. These are closed sets, so one of them must have non-empty interior (for instance by the Baire category theorem) and in particular contains an accumulation point, so either f or g must be identically zero by the identity principle. This establishes that A is an integral domain.

However, it is not a factorial domain. It is easy to see that the units of A are precisely the entire functions that vanish nowhere. Moreover, any entire function f with at least two zeroes (counted with multiplicity) is reducible: indeed, if f(z_0)=0, then f=(z-z_0)g with g entire and vanishing at at least one point, so we have written f as a product of two non-units. It follows that an irreducible element has exactly one zero, counted with multiplicity; such elements exist, for instance z-z_0. Since a finite product of irreducibles therefore has only finitely many zeroes, any non-zero entire function with infinitely many zeroes (such as \sin(\pi z)) cannot be written as a finite product of irreducibles, which proves that A is not a factorial domain.
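
For concreteness, the classical Weierstrass product expansion (not needed for the argument above, but it makes the example explicit) displays the zeroes of \sin(\pi z) at all the integers:

\sin(\pi z) = \pi z\prod_{n=1}^{\infty}\left(1-\frac{z^2}{n^2}\right),

so a hypothetical factorization into irreducibles would need one factor for each integer zero, and in particular could not be finite.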


A non-finitely generated subgroup of a finitely generated group

Let k be a field of characteristic 0 and consider the subgroup G of \mathrm{GL}_2(k) generated by the matrices D=\begin{bmatrix}2 & 0\\ 0 & 1\end{bmatrix} and U=\begin{bmatrix}1 & 1\\ 0 & 1\end{bmatrix}. We now consider the subgroup H of G consisting of matrices with both diagonal entries equal to 1. As a set, this subgroup consists of the matrices of the form \begin{bmatrix}1 & 2^jn\\ 0 & 1\end{bmatrix} where j,n are integers; in other words, the upper-right entry ranges over the dyadic rationals. This subgroup is therefore isomorphic to the underlying additive group of \Bbb{Z}[1/2], the localization of \Bbb{Z} at the multiplicative set of the powers of 2, which is not finitely generated: any finite set of dyadic rationals generates a subgroup with bounded denominators.
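
The description of H comes from the following computation, spelled out here for completeness: conjugating powers of U by powers of D rescales the off-diagonal entry by powers of 2,

D^jU^nD^{-j} = \begin{bmatrix}2^j & 0\\ 0 & 1\end{bmatrix}\begin{bmatrix}1 & n\\ 0 & 1\end{bmatrix}\begin{bmatrix}2^{-j} & 0\\ 0 & 1\end{bmatrix} = \begin{bmatrix}1 & 2^jn\\ 0 & 1\end{bmatrix},

so every dyadic rational occurs as an upper-right entry, while conversely every element of G is of the form \begin{bmatrix}2^a & b\\ 0 & 1\end{bmatrix} with b a dyadic rational.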


On some parametric algebraic curves

Let k be an algebraically closed field and consider an affine polynomial parametric curve X of the form (t,f_2(t),\dots,f_n(t)). We will show that this curve is an algebraic variety and we will characterize its associated ideal. This family of examples encompasses classic curves such as the moment curve (t,t^2,\dots,t^n).

It is easy to see that such a curve is the zero locus of the ideal J=(f_2(x_1)-x_2, \dots, f_n(x_1)-x_n). Since I(X) = I(Z(J)) = \sqrt{J} by the Nullstellensatz, if we show that J is a prime ideal we will have proved both that I(X)=J and that X is a variety.

Consider the morphism \varphi:k[x_1,\dots,x_n]\to k[t] given by x_1\mapsto t,\, x_i\mapsto f_i(t). One easily checks that J\subseteq \ker \varphi. In order to prove the other inclusion, we first make the following observation:

Notice that any polynomial f(x_1,\dots,x_n) is equivalent modulo J to a polynomial \hat{f}(x_1): to obtain \hat{f} from f, simply replace every occurrence of x_i (for i\geq 2) by f_i(x_1), which is allowed since x_i\equiv f_i(x_1) modulo J. Thus, we have that f = \hat{f} + g for some g\in J.

Suppose now that f\in\ker\varphi and write f = \hat{f} + g as before. Applying \varphi to this sum, we get 0=\hat{f}(t) + g(t,f_2(t),\dots,f_n(t)) = \hat{f}(t), since g\in J\subseteq \ker \varphi; hence \hat{f}=0 and f = g\in J, which in turn proves that J=\ker\varphi. Notice that the crucial observation that makes this whole line of reasoning work is the fact that \varphi restricts to a monomorphism on the subalgebra k[x_1]. As a side comment, the same proof may be modified to hold for a slightly more general family of parametric curves, where the first coordinate is a non-constant linear function of t, so that the restriction of \varphi is still a monomorphism.

Finally, this shows that J is prime, since k[x_1,\dots,x_n]/J \simeq \mathrm{im}\,\varphi \subseteq k[t] is an integral domain.
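
As a concrete sanity check for the simplest interesting case, the twisted cubic (t,t^2,t^3), here is a small computation (a sketch assuming sympy is available; the test polynomial x_2x_3-x_1^5 is just a sample element of \ker\varphi, not taken from the post):

```python
# Sketch (assuming sympy): for the twisted cubic (t, t^2, t^3) we have
# J = (x2 - x1^2, x3 - x1^3); the sample polynomial x2*x3 - x1^5 vanishes
# on the curve and, as the argument predicts, already lies in J.
from sympy import symbols, expand, groebner

t, x1, x2, x3 = symbols('t x1 x2 x3')
gens = [x2 - x1**2, x3 - x1**3]
f = x2*x3 - x1**5

# J is contained in ker(phi): every generator vanishes under x_i -> t^i
print([expand(g.subs({x1: t, x2: t**2, x3: t**3})) for g in gens])  # [0, 0]

# f is in ker(phi) ...
print(expand(f.subs({x1: t, x2: t**2, x3: t**3})))                  # 0
# ... and ideal membership confirms f is in J
G = groebner(gens, x1, x2, x3, order='lex')
print(G.contains(f))                                                 # True
```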


Representations of the Weyl algebra

(This post is a solution to problem 1.26 from Etingof et al.'s Introduction to Representation Theory.)

In this post we will study the representation theory of the Weyl algebra A=k\langle x,y\rangle/(xy-yx-1), where k is algebraically closed.

If the characteristic of the field k is zero, then there are no non-trivial finite dimensional representations of A: in such a representation x and y would act as matrices satisfying [x,y]=1, which is impossible since the commutator of two matrices has zero trace while the identity of a non-zero space does not.

It is easy to prove by induction that xy^j - y^jx = jy^{j-1}, and similarly yx^i - x^iy = -ix^{i-1}. Suppose then that I is a non-zero two-sided ideal of A and pick a non-zero element p\in I, written in the basis of monomials x^iy^j. The first identity shows that xp-px = \partial p/\partial y, the formal partial derivative of p with respect to y, so by bracketing with x repeatedly (here we use that the characteristic is zero) we may always find in I a non-zero polynomial q depending only on x. Similarly, yq-qy = -\partial q/\partial x, so bracketing with y repeatedly produces a non-zero scalar in I, which is a unit. This shows that the only non-zero two-sided ideal is the whole algebra, i.e. A is simple. This in turn provides another proof of the fact that there are no non-trivial finite dimensional representations, since such a representation is given by a map of algebras A\to \mathrm{End}_k(V), and since A is infinite-dimensional while the latter is finite-dimensional we necessarily have non-trivial kernel. Since the kernel is an ideal in A, it follows that A, and in particular its unit, must act as 0, which proves that V=0.

Suppose now that our base field has positive characteristic p. The commutation relation easily shows that x^p, y^p are central, since for instance y^px = xy^p - py^{p-1}=xy^p. We claim that the center is exactly the subalgebra k[x^p,y^p]. Indeed, suppose f is an element not contained in this subalgebra; then some monomial x^iy^j of f has an exponent not divisible by p, say p does not divide j (if p divides every j but not some i, exchange the roles of x and y in what follows). Then xf-fx = \partial f/\partial y is a non-zero polynomial, and so f is not central.

Let us now find all the irreducible finite dimensional representations of A.
Taking traces in the identity xy-yx=1, we obtain that the dimension of any finite dimensional representation V must be a multiple of p (the left hand side has trace zero and the right hand side has trace \dim V). Suppose v\in V is an eigenvector of y, which exists since k is algebraically closed; say yv = \lambda v. We claim that the x-cyclic space generated by v is an A-submodule, since it is obviously x-stable and

y\sum_i \mu_i x^iv = \sum_i( \lambda\mu_i x^iv - i\mu_i x^{i-1}v).

By Schur’s lemma we know that, since x^p, y^p are central and V is irreducible, their action on V is scalar; write x^pv=\mu v. Then the x-cyclic subspace above equals \langle v,xv,\dots, x^{p-1}v\rangle, and being a non-zero submodule it is all of V. Since \dim V is a positive multiple of p and V is spanned by these p vectors, we conclude that \dim V = p and that the vectors form a basis.

In conclusion, any irreducible finite dimensional representation V must be of dimension p and there is a basis \{v,xv,\dots, x^{p-1}v\} in which the action of x and y is given by the matrices

x=\begin{bmatrix} 0&0&\dots&0&\mu\\1&0&\dots&0&0\\ 0&1&\dots&0&0\\ \vdots&&\ddots&&\vdots\\ 0&0&\dots&1&0\end{bmatrix}, \qquad y=\begin{bmatrix} \lambda&-1&0&\dots&0\\0&\lambda&-2&\dots&0\\ \vdots&&\ddots&\ddots&\vdots\\ 0&0&\dots&\lambda&-(p-1)\\ 0&0&\dots&0&\lambda\end{bmatrix}

and conversely, different choices of \lambda,\mu give rise to non-isomorphic irreducible finite dimensional representations, since x^p and y^p act by the scalars \mu and \lambda^p respectively, and \lambda\mapsto\lambda^p is injective in characteristic p.
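
As a quick sanity check (my addition, assuming sympy is available; the values p=5, \lambda=2, \mu=3 are arbitrary), one can verify that the matrices above do satisfy the defining relation xy-yx=1 over \Bbb{F}_p:

```python
# Sketch (assuming sympy): check that the matrices above satisfy xy - yx = 1
# modulo p, for the sample values p = 5, lambda = 2, mu = 3.
from sympy import Matrix, eye

p, lam, mu = 5, 2, 3
X = Matrix(p, p, lambda i, j: mu if (i, j) == (0, p - 1) else (1 if i == j + 1 else 0))
Y = Matrix(p, p, lambda i, j: lam if i == j else (-j if j == i + 1 else 0))

residue = (X * Y - Y * X - eye(p)).applyfunc(lambda e: e % p)
print(residue)   # the zero matrix, so the relation holds over F_5
```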


The characteristic polynomial of a product

Let k be an infinite field. Then, the affine space k^n is irreducible in the Zariski topology (this follows since the ideal I(k^n)=0 is prime): in other words, it is not the union of two proper algebraic subvarieties. This has the following consequence: if P is a polynomial vanishing on a non-empty Zariski open set G, it must vanish everywhere (since k^n = G^c\cup V(P), one of the two closed sets on the right hand side must be k^n, and it cannot be G^c). Therefore, one may prove a polynomial identity by showing it holds on a non-empty Zariski open set.

For instance, let A,B be n\times n matrices over k. If A,B\in\mathrm{GL}(n,k), then obviously \chi_{AB} = \chi_{BA}, since AB = B^{-1}(BA)B (in fact, we only need one of the two matrices to be invertible). Now \chi_{AB}=\chi_{BA} is a polynomial identity in the entries of A and B (one identity for each coefficient of the characteristic polynomial), and it holds on a non-empty Zariski open set: the set of pairs in k^{2n^2} where at least one of the two matrices is invertible. By the previous observation, it therefore holds for all square matrices.
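
A quick illustration (my addition, assuming sympy; the singular matrices below are just a sample pair): the identity holds even when neither matrix is invertible.

```python
# Sketch (assuming sympy): char polys of AB and BA agree for a pair of
# singular matrices, as the Zariski-density argument predicts.
from sympy import Matrix, symbols

lam = symbols('lambda')
A = Matrix([[0, 1], [0, 0]])    # nilpotent, hence singular
B = Matrix([[0, 0], [1, 0]])    # also singular
print((A * B).charpoly(lam).as_expr())   # lambda**2 - lambda
print((B * A).charpoly(lam).as_expr())   # lambda**2 - lambda
```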


The fundamental group of SO(n)

In this post we will compute the fundamental group of the rotation groups \mathrm{SO}(n).

We will start off by characterizing \mathrm{SO}(3). Let D^3 denote the closed 3-disk of radius \pi and define a map f:D^3\to\mathrm{SO}(3) as follows: send a point x\neq 0 to the rotation by \vert x\vert radians around the axis spanned by x, and send 0 to the identity. This map is continuous and surjective. Moreover, antipodal points on the boundary of D^3 get mapped to the same rotation, so f factors through \mathbb{RP}^3, which is exactly D^3 with antipodal boundary points identified. We now have a map \mathbb{RP}^3\to\mathrm{SO}(3) which is bijective and continuous, and since projective space is compact and \mathrm{SO}(3) is Hausdorff, it is a homeomorphism. Therefore, we conclude \pi_1(\mathrm{SO}(3))\simeq\pi_1(\mathbb{RP}^3)\simeq \Bbb{Z}/2\Bbb{Z}.
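
Explicitly (this formula is not part of the original argument, but it is the classical Rodrigues formula), if u = x/\vert x\vert and K denotes the cross-product matrix of u, then

f(x) = I + \sin(\vert x\vert)\,K + (1-\cos(\vert x\vert))\,K^2, \qquad K=\begin{bmatrix}0 & -u_3 & u_2\\ u_3 & 0 & -u_1\\ -u_2 & u_1 & 0\end{bmatrix},

from which continuity is clear; moreover, at \vert x\vert = \pi the sine term vanishes and K^2 is unchanged under x\mapsto -x, so antipodal boundary points indeed have the same image.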

Now, \mathrm{SO}(n) acts transitively on S^{n-1} by rotations. The stabilizer of a point x is isomorphic to \mathrm{SO}(n-1). Therefore, we have a fiber bundle \mathrm{SO}(n)\to S^{n-1} with fiber \mathrm{SO}(n-1), constructed by picking a point x\in S^{n-1} and sending r\mapsto r(x). The long exact sequence of a fibration tells us that

\dots\longrightarrow\pi_2(S^{n-1})\longrightarrow\pi_1(\mathrm{SO}(n-1))\longrightarrow\pi_1(\mathrm{SO}(n))\longrightarrow\pi_1(S^{n-1})\longrightarrow\{1\}

Suppose n>3. Then, since the first two homotopy groups of the (n-1)-sphere vanish, we have isomorphisms \pi_1(\mathrm{SO}(n-1))\simeq\pi_1(\mathrm{SO}(n)) and then by induction \pi_1(\mathrm{SO}(n))\simeq \Bbb{Z}/2\Bbb{Z} for all n\geq 3.

The fundamental groups of \mathrm{SO}(2) and \mathrm{SO}(1) are easily computed, since these groups are homeomorphic to S^1 and a point: they are \Bbb{Z} and the trivial group, respectively.


Epimorphisms between Lie groups

In this post we will prove that if G_1 and G_2 are Lie groups, with G_2 connected, a morphism f: G_1 \to G_2 inducing an epimorphism between their corresponding Lie algebras is itself an epimorphism.

Indeed, if the differential at the identity is epimorphic, then by the rank theorem the image of f contains a neighborhood of the identity of G_2. We claim that any subgroup H containing such a neighborhood U must be G_2 itself. This holds in general for connected topological groups: H is open, since if h\in H then hU is an open neighborhood of h contained in H. On the other hand, the complement of H is the union of all the cosets xH with x\not\in H, which are all open. Therefore, H is both open and closed, and then H=G_2 by connectedness.


The tangent bundle is orientable

Let M be a smooth n-manifold; we will prove that its tangent bundle TM is orientable. While this result may be proved elementarily by picking the standard charts for TM and showing that the transition maps have positive determinant, we will use some technology.

The cotangent bundle of M carries a canonical symplectic structure. In particular, it is orientable, since wedging the symplectic form with itself n times produces a non-vanishing 2n-form on the 2n-dimensional manifold T^*M. Now, any manifold admits a Riemannian metric: one way to show this is to pick a covering of M by open sets that trivialize the tangent bundle, pick any inner product on each trivialization and paste these local constructions together using a partition of unity; since the weights are non-negative and sum to 1, the resulting global bilinear form is positive definite on each fiber. Finally, the metric induces a bundle isomorphism TM\to T^*M via the map v \mapsto \langle v, - \rangle, so the tangent bundle is diffeomorphic to the cotangent bundle and is therefore orientable as well.
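
In the local coordinates (q_1,\dots,q_n,p_1,\dots,p_n) on T^*M induced by a chart of M (a standard computation, not spelled out in the post), the canonical symplectic form and the resulting volume form are, up to a sign convention,

\omega = \sum_{i=1}^n dp_i\wedge dq_i, \qquad \omega^{\wedge n} = n!\, dp_1\wedge dq_1\wedge\dots\wedge dp_n\wedge dq_n,

which is nowhere zero; this is the non-vanishing 2n-form used above.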


Density of diagonalizable matrices

Consider M_n(\mathbb{C}) regarded as a metric space (for instance, identifying it with \mathbb{C}^{n^2}). We will prove that the set of diagonalizable matrices is dense in M_n(\mathbb{C}).

Let A\in M_n(\mathbb{C}). We want to find a sequence of diagonalizable matrices D_k such that D_k\rightarrow A. Suppose that A = CBC^{-1}; if E_k\rightarrow B with every E_k diagonalizable, then CE_kC^{-1}\rightarrow A by continuity, and each CE_kC^{-1} is again diagonalizable. Therefore it suffices to approximate some matrix in the conjugacy class of A by diagonalizable matrices.

Now, since \mathbb{C} is algebraically closed, any matrix A is conjugate to a triangular matrix B. We may perturb the diagonal entries of B to get a new triangular matrix B_\varepsilon in such a way that d(B,B_{\varepsilon})<\varepsilon and all of the diagonal entries of B_\varepsilon are distinct. This last condition ensures that B_\varepsilon is diagonalizable, since it has n distinct eigenvalues.
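
A tiny numerical illustration of this perturbation (my addition, assuming numpy is available; the matrix and the size of the perturbation are arbitrary):

```python
# Sketch (assuming numpy): perturbing the diagonal of a triangular matrix by
# distinct small amounts yields a diagonalizable matrix close to it.
import numpy as np

B = np.array([[1.0, 5.0],
              [0.0, 1.0]])                  # triangular, not diagonalizable
eps = 1e-3
B_eps = B + np.diag([0.0, eps])             # now the diagonal entries are distinct
print(np.linalg.eigvals(B_eps))             # two distinct eigenvalues
print(np.linalg.norm(B - B_eps))            # distance of order 1e-3
```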

Another proof goes as follows. A matrix has n distinct eigenvalues exactly when the discriminant of its characteristic polynomial does not vanish, and this discriminant is itself a polynomial in the entries of the matrix. Therefore, the set of matrices with n distinct eigenvalues is a non-empty Zariski open set; its complement is the zero locus of a non-zero polynomial, which has empty interior, and so this set is dense for the metric topology as well.

The fact that \mathbb{C} is algebraically closed is the key behind this proof. In fact, the result is false over \mathbb{R}. To see this, let A\in M_2(\mathbb{R}) be a real matrix with no real eigenvalues. Its characteristic polynomial is quadratic with no real roots, so its discriminant is strictly negative. On the other hand, the characteristic polynomial of any 2\times 2 diagonalizable matrix over the real numbers has two real roots (counted with multiplicity), and so its discriminant is non-negative. Since the discriminant of the characteristic polynomial is a polynomial function of the matrix entries, it is continuous; every matrix close enough to A still has negative discriminant, and so A cannot be approximated by diagonalizable matrices.
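
A concrete instance (my addition, assuming sympy; the rotation matrix is just a sample matrix with no real eigenvalues):

```python
# Sketch (assuming sympy): the rotation by 90 degrees has characteristic
# polynomial lambda^2 + 1, whose discriminant is negative, so it cannot be
# approximated by real diagonalizable matrices (their discriminant is >= 0).
from sympy import Matrix, symbols, discriminant

lam = symbols('lambda')
A = Matrix([[0, -1],
            [1,  0]])                        # no real eigenvalues
chi = A.charpoly(lam).as_expr()
print(chi, discriminant(chi, lam))           # lambda**2 + 1, -4
```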


Eigenvalues of the adjoint representation

This is exercise 1.7 from Humphreys’ Introduction to Lie algebras and representation theory.

Let f\in \mathfrak{gl}(n) be a matrix with n distinct eigenvalues \lambda_1,\dots,\lambda_n. Then the eigenvalues of the adjoint representation \mathrm{ad}\, f are precisely the differences \lambda_i - \lambda_j.

In fact, suppose g\neq 0 is an eigenvector for \mathrm{ad}\, f; that is, fg - gf = \mu g for some scalar \mu. Let e_i denote an f-eigenvector associated to the eigenvalue \lambda_i. Then, we have that

\mu g(e_i) = fg(e_i) - gf(e_i) = fg(e_i) - \lambda_i g(e_i),

and therefore (\mu+\lambda_i) g(e_i) = fg(e_i). Since g\neq 0 and the e_i form a basis, g(e_i)\neq 0 for some i; for such an i, g(e_i) is an f-eigenvector with eigenvalue \mu+\lambda_i, and so \mu+\lambda_i = \lambda_j for some j, or in other words \mu = \lambda_j - \lambda_i. This proves that every eigenvalue of the adjoint representation has the desired form.

As for existence, fix indices i and j and let g be the endomorphism that sends e_i\mapsto e_j and e_k\mapsto 0 for k\neq i. Then

(fg- gf)(e_i) = f(e_j) - \lambda_i g(e_i) = (\lambda_j -\lambda_i) e_j = (\lambda_j -\lambda_i) g(e_i)

and

(fg- gf)(e_k) = - \lambda_k g(e_k) = 0 = (\lambda_j -\lambda_i) g(e_k),

and so fg-gf = (\lambda_j - \lambda_i)g, since both sides agree on the basis e_1,\dots,e_n.
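
As a quick numerical check (my addition, assuming numpy; the matrix f below is an arbitrary example with distinct eigenvalues), the spectrum of \mathrm{ad}\, f consists exactly of the pairwise differences:

```python
# Sketch (assuming numpy): for f with distinct eigenvalues, the eigenvalues of
# ad f (realized as the n^2 x n^2 matrix I (x) f - f^T (x) I acting on gl(n))
# are the differences lambda_i - lambda_j.
import numpy as np

f = np.diag([1.0, 2.0, 4.0]) + np.triu(np.ones((3, 3)), k=1)  # eigenvalues 1, 2, 4
n = f.shape[0]
ad_f = np.kron(np.eye(n), f) - np.kron(f.T, np.eye(n))

eig_f = np.linalg.eigvals(f)
diffs = (eig_f[:, None] - eig_f[None, :]).ravel()
print(np.allclose(np.sort(np.linalg.eigvals(ad_f).real),
                  np.sort(diffs.real)))                        # True
```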
