tag:blogger.com,1999:blog-61289555987182048082018-08-30T16:31:35.108-07:00The Matthew Maths Showmaths blog - <a href="http://mebdev.blogspot.com">programming blog</a> - <a href="http://mebassett.gegn.net">about me</a> - <a href="http://www.twitter.com/mebassett">twitter</a>Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.comBlogger56125tag:blogger.com,1999:blog-6128955598718204808.post-61926764387651401382013-01-21T17:51:00.000-08:002013-01-21T18:41:43.952-08:00Bosonization Part III, Co-Bosonization<p>A few days ago I sketched out some of the basic category theoretic ideas behind the theory of <a href="http://mebassett.blogspot.co.uk/2013/01/bosonisation-of-quantum-groups.html">bosonization of quantum groups</a>. Today I want to talk about the dual version, co-bosonization.</p> <p>First, note that this whole theory is trivial, as dualizing such a theory is a straightforward and obvious process, but I'm working through the proofs of bosonization in a dualized setting by hand, just so I can learn Quantum Groups.</p> <p>So let $(H,\mathcal{R})$ be a <i>dually</i> quasitriangular Hopf algebra (I called this a <b>co-braided Hopf algebra</b> in <a href="http://mebassett.blogspot.co.uk/2013/01/bosonisation-of-quantum-groups-part-ii.html">an earlier post on braided categories</a>) and let $\mathcal{C} = ^H\mathcal{M}$ be the braided category of $H$-comodules. If $(V, \beta_V: v \mapsto \sum h^{(1,v)}\otimes v^{(1)})$ and $(W, \beta_W: w \mapsto \sum h^{(1,w)}\otimes w^{(1)})$ are objects in the category, then the braiding is given by </p> <p align=center>$ \displaystyle \Psi_{V,W}(v\otimes w) = \sum \mathcal{R}\left(h^{(1,v)} \otimes h^{(1,w)}\right) w^{(1)} \otimes v^{(1)}$</p> <p>I won't give the proof here (I may not give it at all!). 
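</p>

<p>As a quick sanity check of this formula (my own, not from the post): when the dual-quasitriangular structure is trivial, the braiding should collapse to the ordinary flip.</p>

```latex
% Sanity check (mine, not in the post): take the trivial cobraiding
% \mathcal{R}(h \otimes g) = \epsilon(h)\epsilon(g), which satisfies the
% axioms whenever H is commutative. The counit axiom for a comodule gives
% (\epsilon \otimes \mathrm{id}) \circ \beta_V = \mathrm{id}_V, so
\Psi_{V,W}(v \otimes w)
  = \sum \epsilon\!\left(h^{(1,v)}\right) \epsilon\!\left(h^{(1,w)}\right)
    \, w^{(1)} \otimes v^{(1)}
  = w \otimes v ,
% and the braiding degenerates to the usual flip \tau, as it should.
```

<p>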
As before, a Hopf algebra $B$ in the category $^H\mathcal{M}$ means that $B$, $B\otimes B$ and all the structure maps are morphisms in the category; thus any twist involved is actually an invocation of the braiding $\Psi$.</p> <p>Also as before, $^B \mathcal{C}$ is the category of $B$-comodules within the category $\mathcal{C}$, and the idea behind co-bosonization is to find a Hopf algebra $\text{cobos}(B)$ such that $^B\mathcal{C} \cong \, ^{\text{cobos}(B)}\mathcal{M}$.</p> <p>We're going to propose what this new Hopf algebra $\text{cobos}(B)$ should be. I won't give any proofs in this post (because I haven't done them yet), but hopefully I've dualized things correctly and I'm not stating anything that's not true. We'll start with the tensor product $B \otimes H$ and we'll put a coalgebra structure on it by first noting that in the bosonization case, the coproduct $\Delta: B \otimes H \rightarrow (B\otimes H) \otimes (B \otimes H)$ is</p> <p align=center>$ \displaystyle \left(\text{Id}_B \otimes \Psi_{B,H} \otimes \text{Id}_H\right) \circ \left( \Delta_B \otimes \Delta_H \right) $</p> <p>This coproduct <i>almost</i> makes sense here; we just need to treat $H$ as a co-module of itself (so the braiding makes sense). 
But this is no difficult matter, because <b>every Hopf algebra coacts on itself as a coalgebra</b> by</p> <p align=center>$ \displaystyle \beta_H (h) = \sum h_{(1)} S(h_{(3)}) \otimes h_{(2)}$</p> <p>So we just use the dual-braiding $\Psi$ instead of the original one, and write</p> <p align=center>$ \displaystyle \Delta (b \otimes h) = \sum \mathcal{R}\left( h^{(1,b_{(2)})} \otimes h^{(1,h_{(2)})} \right) b_{(1)} \otimes h_{(1)}^{(1)} \otimes b_{(2)}^{(1)} \otimes h_{(2)}$</p> <p>Now on to the product for $\text{cobos}(B) = B \otimes H$: in the bosonization case (where $H$ acts, rather than coacts, on $B$), the product is given by</p> <p align=center>$ \displaystyle (b \otimes h) (c\otimes g) = \sum b(h_{(1)} \triangleright c) \otimes h_{(2)}g$</p> <p>This doesn't immediately make sense in our case, but we have the following proposition:</p> <p ><strong>Proposition.</strong> <em>There exists a monoidal functor (a functor that respects the tensor product of objects) $^H \mathcal{M} \rightarrow ^H_H\mathcal{M}$, the crossed modules we mentioned <a href="http://mebassett.blogspot.co.uk/2013/01/drinfelds-quantum-double-and-category.html">for the quantum double</a>, by $(V, \beta_V) \mapsto (V,\beta_V,\triangleright)$, where $h \triangleright v = \sum \mathcal{R}\left( h \otimes h^{(1,v)} \right)v^{(1)}$</em> </p> <p>Hence we do have an action of $H$ on $B$, and we can write the product</p> <p align=center>$ \displaystyle (b \otimes h) (c \otimes g) = \sum \mathcal{R}\left( h_{(1)} \otimes h^{(1,c)}\right) bc^{(1)} \otimes h_{(2)}g$</p> <p>The unit and counit are obvious in this case ($1 \otimes 1$ and $\epsilon(b \otimes h) = \epsilon(b)\epsilon(h)$), but we still need an antipode. Again, we refer to bosonization and try to find the correct dual notion. 
In bosonization, the antipode $S: B \otimes H \rightarrow B \otimes H$ is given by:</p> <p align=center>$ \displaystyle B\otimes H \xrightarrow{S_B \otimes S_H} B \otimes H \xrightarrow{\Psi_{B,H}} H \otimes B \xrightarrow{\Delta_H \otimes \text{Id}_B} H \otimes H \otimes B \xrightarrow{\text{Id}_H \otimes \Psi_{H,B}} H \otimes B \otimes H \xrightarrow{\triangleright \otimes \text{Id}_H} B \otimes H$</p> <p>This makes perfect sense for co-bosonization without modification. To summarize, we need proofs for the claims</p> <p> <ol> <li> That $\Psi$ is the braiding for the category $^H \mathcal{M}$. (I have this.) <li> That the product and coproduct as defined above give us a bialgebra $\text{cobos}(B)$ for $B$ a Hopf algebra in the category $^H \mathcal{M}$. (Not yet proved.) <li> That the above antipode gives us a Hopf algebra $\text{cobos}(B)$. (Not yet proved.) <li> That $^B\mathcal{C} = ^{\text{cobos}(B)}\mathcal{M}$. (Not yet proved.) </ol> </p> <p>Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-17787840129918251172013-01-19T19:25:00.002-08:002013-01-19T19:25:38.577-08:00 Category Theoretic Methods and Drinfeld's Quantum Double<p>Earlier this month I discussed <a href="http://mebassett.blogspot.co.uk/2013/01/what-i-learned-today-braided-or-quasi.html">braided Hopf algebras</a>, but I neglected to give an example of one. Today I'll describe how we can find a large collection of braided Hopf algebras via a construction by Drinfeld called the Quantum Double.</p> <p>Let $H$ be a <b>finite dimensional</b> Hopf algebra with invertible antipode. Then $H^* = \text{Hom}(H, k)$ is also a Hopf algebra: it's a basic result to show that the dual vector space of a coalgebra is an algebra with the product map $\Delta^*$, the transpose of the original coproduct; moreover, in the finite dimensional case, $m^*$ becomes a coproduct, making the dual of a finite dimensional algebra a coalgebra. 
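</p>

<p>Before moving on to the double itself, the transpose product on $H^*$ can be made concrete in a few lines of code. The following is a small sketch of my own (this example is not worked in the post): for $H = k[\mathbb{Z}/2]$ every basis element is group-like, and the dual algebra comes out as functions on the group under pointwise multiplication.</p>

```python
# Sketch (my own toy example, not from the post): the dual algebra of the
# group Hopf algebra H = k[Z/2] with basis {1, g}.  The product on H* is the
# transpose of the coproduct: (phi psi)(h) = sum phi(h_1) psi(h_2).
# Both basis elements are group-like, so Delta(h) = h (x) h.

DELTA = {'1': [('1', '1')], 'g': [('g', 'g')]}  # Delta on the basis of H

def dual_product(phi, psi):
    """(phi psi)(h) = sum over Delta(h) = sum h1 (x) h2 of phi(h1)*psi(h2)."""
    return {h: sum(phi[h1] * psi[h2] for h1, h2 in DELTA[h]) for h in DELTA}

# The dual basis of delta-functionals on H:
delta_1 = {'1': 1, 'g': 0}
delta_g = {'1': 0, 'g': 1}

print(dual_product(delta_1, delta_1))  # delta_1 again: an idempotent
print(dual_product(delta_1, delta_g))  # zero: orthogonal idempotents
```

<p>Since every $h$ here is group-like, the formula collapses to $(\phi\psi)(h) = \phi(h)\psi(h)$; in other words $H^* \cong k(\mathbb{Z}/2)$, the functions on the group, as the general theory predicts.</p>

<p>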
Since $H$ is finite dimensional by assumption, and the product and coproduct maps are [co]algebra morphisms, we see we have a Hopf algebra structure on $H^*$, too. That the transpose of the antipode is an antipode and that we have the necessary co/units is a more straightforward result. The finite dimensional requirement is needed because $H^* \otimes H^*$ is not necessarily isomorphic to $(H\otimes H)^*$ without it. </p> <p>To be explicit, we have the product</p> <p align="center">$ \displaystyle \phi \psi (h) = \sum \phi(h_1) \psi(h_2)$</p> <p>and the coproduct</p> <p align="center">$ \displaystyle \Delta \phi (h \otimes g) = \phi(hg)$</p> <p>along with the unit $1(h) = \epsilon(h)$, counit $\epsilon(\phi) = \phi(1)$ and antipode $S(\phi)(h) = \phi(Sh)$. Of course, $H^*$ and $H$ are <a href="http://mebassett.blogspot.co.uk/2012/12/what-i-learned-today-dually-paired-hopf.html">dually paired</a> by the evaluation map</p> <p align="center">$ \displaystyle \langle \phi, h \rangle = \text{ev}_\phi(h) = \phi(h)$</p> <p>We also have the reverse dual pairing $\langle , \rangle : H\otimes H^{\text{op}*} \rightarrow k$ for $H$ and $H^*$ with the opposite multiplication. From this, we can define the Quantum Double $D(H)$ as the vector space $H^* \otimes H$ with the product:</p> <p align="center">$ \displaystyle (\phi \otimes h) (\psi \otimes g) = \sum \left( \langle Sh_1, \psi_1 \rangle \langle h_3, \psi_3\rangle \right) \psi_2 \phi \otimes h_2 g$</p> <p>the coproduct:</p> <p align="center">$ \displaystyle \Delta(\phi \otimes h) = \sum (\phi_1 \otimes h_1) \otimes (\phi_2 \otimes h_2)$</p> <p>the antipode:</p> <p align="center">$ \displaystyle S(\phi\otimes h) = (1\otimes Sh) (S^{-1}\phi \otimes 1)$</p> <p>and finally with the unit $1\otimes1$ and counit $\epsilon(\phi \otimes h) = \epsilon(\phi)\epsilon(h)$. </p> <p>We can also describe the quantum double as the bicrossed product of $H$ with $H^{\text{op}*}$, but we aren't focusing on that here. 
Rather, let me point out that $D(H)$ is a braided Hopf algebra with </p> <p align="center">$ \displaystyle \mathcal{R} = \sum_{k=1}^n (f^k \otimes 1) \otimes (1 \otimes e_k)$</p> <p>where $\langle e_1, \ldots e_n \rangle$ is a basis for $H$ and $\langle f^1,\ldots f^n \rangle$ is a dual basis.</p> <p>We can also define the Quantum Double for an infinite dimensional Hopf algebra so long as we can find another dually paired Hopf algebra $H'$, which means finding the map $\langle, \rangle: H \otimes H' \rightarrow k$. But I want to describe it in a better (in my opinion) way. First, I need to quote some results:</p> <p>Let $_H^H \mathcal{M}$ denote the category of <b>crossed $H$-modules</b>, that is, vector spaces $V$ that are compatibly both $H$-modules and $H$-comodules in the following sense: if $\triangleright: H \otimes V \rightarrow V$ is the module action and $v \mapsto \sum h^{(1)} \otimes v^{(1)}$ is the coaction, then </p> <p align="center">$ \displaystyle \sum h_1 h^{(1)} \otimes h_2 \triangleright v^{(1)} = \sum (h_1 \triangleright v)^{(1,H)} h_2 \otimes (h_1 \triangleright v)^{(1,V)}$</p> <p>for all $h \in H$ and $v \in V$ (the pair $(1,V)$, et al, is to denote which is the $H$ component and which is the $V$ component of the coaction). The morphisms in this category are linear maps that commute with both the action and the coaction. This category is braided with</p> <p align="center">$ \displaystyle \Psi_{V,W}(v \otimes w) = \sum h^{(1,v)} \triangleright w \otimes v^{(1)}.$</p> <p>Here the pair $(1,v)$ denotes that this is the coaction on $V$ and not on $W$. Now, we have the following result:</p> <p><blockquote><b>Theorem</b> <em> Let $H$ be a finite dimensional Hopf algebra with invertible antipode. Then $_H^H \mathcal{M} = _{D(H)}\mathcal{M}$, that is, the category of crossed $H$-modules is the category of modules of the quantum double of $H$. 
</em></blockquote></p> <p>I won't (and can't yet) prove this, but you can see an argument for it by noting that $D(H)$ contains both $H$ and $H^{\text{op}*}$ as Hopf sub-algebras; a $D(H)$ module is then an $H$ module and an $H^{\text{op}*}$ left module, but that's the same as an $H^*$ right module, which is the same as a left $H$-comodule.</p> <p>Now, if $H$ is not finite dimensional, then can we find a Hopf algebra $\mathcal{H}$ such that $_H^H\mathcal{M} = _\mathcal{H}\mathcal{M}$? We again invoke those <a href="http://ncatlab.org/nlab/show/reconstruction+theorem">Tannakian reconstruction theorems</a> and thus have a suitable definition for $D(H) = \mathcal{H}$ for $H$ of any dimension, and without having to specify a dually paired Hopf algebra $H'$. </p> <p>The obvious question is, what does $D(H)$ look like for an infinite dimensional $H$? Perhaps I'll try answering this later.</p>Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-77875849965619195552013-01-14T15:50:00.000-08:002013-01-14T15:50:59.267-08:00Bosonisation of Quantum Groups, part II (Braided Categories)<p>Let's talk about the category of modules ${_H\mathcal{M}}$ for a braided Hopf algebra ${H}$ for a minute. If ${V}$ and ${W}$ are ${H}$-modules, then we have an easy action of ${H}$ on ${V \otimes W}$ as well, by:</p> <p align=center>$\displaystyle h \triangleright(v\otimes w) = \sum h_1\triangleright v \otimes h_2\triangleright w $</p> <p>Hence we can think of ${\otimes}$ as a functor ${_H\mathcal{M} \times _H\mathcal{M} \rightarrow _H\mathcal{M}}$. This sort of functor is what turns ${_H\mathcal{M}}$ into a <a href="http://en.wikipedia.org/wiki/Monoidal_category">monoidal category</a>. We also have the opposite functor ${\otimes^\text{op}}$, which does what one would expect: ${V \otimes^\text{op} W = W \otimes V}$. 
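</p>

<p>Two standard instances of this tensor-product action (familiar examples, not spelled out in the post) may help fix ideas:</p>

```latex
% For a group algebra H = kG we have \Delta g = g \otimes g, so the action
% on a tensor product is the familiar diagonal one:
g \triangleright (v \otimes w) = gv \otimes gw .
% For an enveloping algebra H = U(\mathfrak{g}) we have
% \Delta \xi = \xi \otimes 1 + 1 \otimes \xi, recovering the Leibniz rule:
\xi \triangleright (v \otimes w) = \xi v \otimes w + v \otimes \xi w .
```

<p>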
</p> <p>We have another operation on this category, the flip map ${\tau}$, which carries the action of one functor ${\otimes}$ to another ${\otimes^\text{op}}$. We'd like it to be a <a href="http://en.wikipedia.org/wiki/Natural_transformation">natural transformation</a>, actually a natural isomorphism (one can see already that it's invertible) from ${\otimes}$ to ${\otimes^\text{op}}$. For this to work we must have, for each ${f: V \rightarrow V'}$ and ${g: W \rightarrow W'}$ in the category, a morphism ${\tau_{V \otimes W}: V \otimes W \rightarrow W \otimes V}$ such that the square</p> <div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-ywKISfDurAo/UPSZqc2hJOI/AAAAAAAAAGw/3VV4gNq9vT8/s1600/Screenshot%2Bfrom%2B2013-01-14%2B23%253A45%253A35.png" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="115" width="224" src="http://1.bp.blogspot.com/-ywKISfDurAo/UPSZqc2hJOI/AAAAAAAAAGw/3VV4gNq9vT8/s400/Screenshot%2Bfrom%2B2013-01-14%2B23%253A45%253A35.png" /></a></div> <p>commutes. But before we check that, let's first verify that ${\tau_{V \otimes W}}$ is even a morphism in the category (to take a step back from category-theoretic language, this means that we need to check that ${\tau_{V \otimes W}(v\otimes w) = w \otimes v}$ is an ${H}$-linear map). But it's not necessarily true that ${h \triangleright \tau_{V \otimes W}(v\otimes w) = \tau_{V \otimes W}(\sum h_{(1)}\triangleright v \otimes h_{(2)}\triangleright w)}$. In fact, this holds in general only when ${H}$ is co-commutative. </p> <p>But ${H}$ is a <a href="http://mebassett.blogspot.co.uk/2013/01/what-i-learned-today-braided-or-quasi.html">braided Hopf algebra</a>, meaning that its failure to be co-commutative is controlled by an invertible element ${\mathcal{R} \in H \otimes H}$. Can we use this to construct a natural transformation ${\otimes \rightarrow \otimes^{op}}$? 
Yes, we can, and this is exactly the structure that gives us a <a href="http://en.wikipedia.org/wiki/Braided_monoidal_category">braided category</a>.</p> <p>To be brief, we can define a <b>braided category</b> to be a <a href="http://en.wikipedia.org/wiki/Monoidal_category">monoidal category</a> together with a natural isomorphism ${\Psi : \otimes \rightarrow \otimes^\text{op}}$ that obeys the following conditions: ${ \Psi_{V \otimes W, Z} = (\Psi_{V,Z} \otimes \text{Id}_W) \circ (\text{Id}_V \otimes \Psi_{W,Z})}$ and ${\Psi_{V,W\otimes Z} = (\text{Id}_W \otimes \Psi_{V,Z}) \circ (\Psi_{V,W} \otimes \text{Id}_Z)}$. (These are the ``hexagon conditions''. I'm not writing the structure maps of the monoidal category, but when you include them, you express these as two hexagonal commutative diagrams.)</p> <p>In the case of a braided, or quasitriangular Hopf algebra, our ${\Psi_{V,W}(v\otimes w) = \tau(\mathcal{R} \triangleright (v\otimes w)) = \sum R_2 \triangleright w \otimes R_1 \triangleright v}$. The reader can verify that this meets the axioms of a braided category.</p> <p>There's more we can say about the braiding ${\Psi}$ and its connection to braid groups, et cetera, but I don't want to go down that route this evening. Rather, I want to discuss the dual version of this theory. First, I need to find the dual notion of a braided Hopf algebra. We'll call this a <b>co-braided Hopf algebra</b>. By this I mean that we want to control how far ${H}$ is from being commutative, instead of co-commutative. </p> <p>A <b>co-braided Hopf algebra</b> is a Hopf algebra ${H}$ together with a linear functional ${R: H\otimes H \rightarrow k}$. We require ${R}$ to be convolution invertible, id est, that there exists another linear functional ${R^{-1}}$ such that ${\sum R^{-1}(h_1 \otimes g_1) R(h_2 \otimes g_2) = \epsilon(h)\epsilon(g) = \sum R(h_1 \otimes g_1) R^{-1}(h_2\otimes g_2)}$. 
We also require it to obey three other relations: </p> <p align=center>$\displaystyle \sum g_1h_1 R(h_2\otimes g_2) = \sum R(h_1 \otimes g_1) h_2g_2$</p> <p align=center>$\displaystyle R(hg \otimes f) = \sum R(h\otimes f_1)R(g\otimes f_2)$</p> <p align=center>$\displaystyle R(h\otimes gf) = \sum R(h_1 \otimes f)R(h_2 \otimes g)$</p> <p> for ${g,h,f \in H}$. </p> <p>Let ${V}$ and ${W}$ be ${H}$-co-modules with structure maps ${\beta_V: V \rightarrow H \otimes V}$ by ${v \mapsto \sum g_1 \otimes v_1}$ and ${\beta_W: W \rightarrow H \otimes W}$ by ${w \mapsto \sum h_1 \otimes w_1}$. The tensor functor works as follows:</p> <p align=center>$\displaystyle \beta_{V\otimes W}(v\otimes w) = \sum g_1 h_1 \otimes v_1 \otimes w_1 $</p> <p>What does the braiding on the category of co-modules ${^H\mathcal{M}}$ look like? I propose that the braiding ${\Psi}$ is then:</p> <p align=center>$\displaystyle \Psi_{V,W}(v\otimes w) = \sum R(g_1 \otimes h_1) \, w_1 \otimes v_1$</p> <p>I've not proved it yet.</p>Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-82042702404314900542013-01-13T16:47:00.003-08:002013-01-13T16:47:46.964-08:00Bosonisation of Quantum Groups, an Introduction<p>Let $H$ be a quasitriangular (or braided) Hopf algebra and let $\mathcal{C} = _H\mathcal{M}$ be its category of modules (that is, a category where the objects are $H$-modules and the morphisms are $H$-linear maps between them.) Let $B \in \mathcal{C}^0$ be a Hopf algebra such that its structure maps $m_B : B\otimes B \rightarrow B$, $\Delta_B : B \rightarrow B \otimes B$, et cetera, and all their respective objects, are all morphisms and objects in the category. </p> <p>Naturally, $B$ will have its own modules. But what if we restrict the $B$-modules we want to look at? 
In fact, let's only look at $B$-modules in $\mathcal{C}$; we'll call these the <i>braided</i> $B$-modules (the reason for this language will become clear later on). More precisely, let $_B\mathcal{C}$ be the sub-category of $B$-modules in $\mathcal{C}$ such that the action of $B$ on the module $V$, $B \otimes V \rightarrow V$, is a morphism in the category. </p> <p>Does there exist a Hopf algebra $\mathcal{H}$ such that its category of modules is exactly $_B\mathcal{C}$ ? The answer is yes. From a general abstract nonsense point of view, the affirmation comes from the <a href="http://ncatlab.org/nlab/show/reconstruction+theorem">Tannakian reconstruction theorems</a>, which allow us to construct certain algebraic objects from their category of modules. This is a sort of duality between the object and its category of modules. I don't know enough about this sort of thing to comment any more.</p> <p>But for us, we'll answer this question by means of Majid's theory of <b>bosonisation of quantum groups</b>. We won't use higher category theory; rather, we will construct the Hopf algebra $\mathcal{H}$ explicitly from $H$ and $B$, and call this construction the <i>bosonisation of $B$</i>. To do this, I need to study a few things:</p> <ol> <li> <b>Braided Categories.</b> When $H$ is quasitriangular, its module category has an additional structure, a <i>braiding</i>, that will color all calculations in $_H\mathcal{M}$ and $_B\mathcal{C}$. I barely know what a braided category is at the moment, so I'll have to cover this in a blog post or two. <li> <b>Hopf algebras in braided categories.</b> (aka ``braided groups'' in Majid's lexicon) A more concise way of describing my choice of $B$ is to say that $B$ is a braided group in $_H\mathcal{M}$. All the structure maps of $B$ will have to contend with the braiding from $H$, hence why we called $_B\mathcal{C}$ the category of <i>braided</i> modules. I'll study these objects in a subsequent blog post, too. 
<li> Constructing the bosonisation $\mathcal{H} = B \rtimes H$ as a semidirect product of $B$ and $H$, with its structure maps built from the various morphisms in $\mathcal{C}$. </ol> <p> The next step is to repeat this construction for a <b>dual quasitriangular Hopf algebra</b> to arrive at a theory of co-bosonisation, and then to repeat the process for double bosonisation.</p>Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-40518932547584613332013-01-05T15:30:00.000-08:002013-01-05T15:30:36.354-08:00What I learned today: Braided (or quasi-triangular) Hopf Algebras<p><i>This is a post in my ``pseudo-daily'' series ``What I learned Today''</i> </p> <p>A Hopf algebra $H$ is endowed with both a multiplication map $m: H \otimes H \rightarrow H$ and a comultiplication map $\Delta: H \rightarrow H \otimes H$. Many of our ``quantum'' objects lack commutativity or co-commutativity, but not completely. In the co-commutative setting, the failure of co-commutativity is often controlled by an invertible element $\mathcal{R} \in H \otimes H$ in the following manner:</p> <p align=center>$\displaystyle \tau \circ \Delta h = \mathcal{R}(\Delta h)\mathcal{R}^{-1}$</p> <p>where $\tau(x\otimes y) = y \otimes x$ is the flip operator. If we also put some restrictions on what element $\mathcal{R}$ we can choose, we'll arrive at Drinfeld's theory of Quasi-triangular, or Braided, Hopf Algebras. But first, let's look at an example: Let $G$ be the <a href="http://en.wikipedia.org/wiki/Klein_four-group">Klein four-group</a>, id est, $G = \langle x, y : x^2 = y^2 = (xy)^2 = 1 \rangle$ and let $kG$ be the group Hopf algebra. 
For $q \in k$ define </p> <p align=center>$\displaystyle \mathcal{R}_q = \frac{1}{2} (1\otimes 1 + 1\otimes x + x\otimes 1 - x\otimes x) + \frac{q}{2}(y \otimes y + y \otimes xy +xy \otimes xy - xy \otimes y)$</p> <p>Let's change the coproduct on $kG$ to be $\Delta x = x\otimes x$ and $\Delta y = 1 \otimes y + y\otimes x$. Also change the antipode to $Sx = x$ and $Sy = xy$ and the counit $\epsilon (x) = 1$ and $\epsilon (y) = 0$. This new Hopf algebra indeed obeys $\tau \circ \Delta h = \mathcal{R}_q (\Delta h) \mathcal{R}_q^{-1}$. It also obeys the additional relations</p> <p align=center>$\displaystyle (\Delta \otimes I) \mathcal{R} = \mathcal{R}_{13}\mathcal{R}_{23}$</p><p align=center>$\displaystyle (I \otimes \Delta) \mathcal{R} = \mathcal{R}_{13}\mathcal{R}_{12}$</p> <p>where $\mathcal{R}_{23} = 1 \otimes \mathcal{R}$, for instance. These three conditions are what define a <i>Braided Hopf algebra</i> (we call them braided because their category of modules forms a braided category. More on that in a later post.) These Hopf algebras generate solutions to the <a href="http://en.wikipedia.org/wiki/Yang%E2%80%93Baxter_equation">Yang-Baxter equation</a>:</p> <p align=center>$\displaystyle (c \otimes I)(I \otimes c)(c \otimes I) = (I \otimes c)(c \otimes I)(I \otimes c)$</p> <p> for an automorphism $c$ of $V \otimes V$, where $V$ is an $H$-module. 
To see this, let $V$ and $W$ be $H$-modules and define an isomorphism of $V \otimes W$ and $W \otimes V$ by </p> <p align=center>$\displaystyle c_{V,W} (v\otimes w) = \tau_{V,W} (\mathcal{R} (v\otimes w))$</p> <p> From here, one can show that for $H$-modules $U$, $V$, and $W$ we have <p align=center>$\displaystyle c_{U\otimes V, W} = (c_{U,W} \otimes I_V)(I_U \otimes c_{V,W}) $</p> <p align=center>$\displaystyle c_{U, V\otimes W} = (I_V \otimes c_{U,W})(c_{U,V} \otimes I_W)$</p> and finally <p align=center>$\displaystyle (c_{V,W} \otimes I_U) (I_V \otimes c_{U,W})(c_{U,V} \otimes I_W) = (I_W \otimes c_{U,V})(c_{U,W} \otimes I_V)(I_U \otimes c_{V,W})$</p> <p> Letting $V = W = U$, we see we have a solution of the Yang-Baxter equation. It's also worth noting that the element $\mathcal{R}$ itself satisfies a version of the Yang-Baxter equation: $\mathcal{R}_{12}\mathcal{R}_{13}\mathcal{R}_{23} = \mathcal{R}_{23}\mathcal{R}_{13}\mathcal{R}_{12}$.</p> Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-35767270405950043272013-01-04T15:50:00.003-08:002013-01-04T15:50:59.428-08:00Galois Quantum Groups of Finite Fields<p>In Chapter 4 of Majid's <i>A Quantum Groups Primer</i>, he introduces an object he calls the ``automorphism quantum group'' $M(A)$ associated to an algebra $A$. At the end of the chapter he makes a comment ``the role of such objects in number theory, however, is totally unexplored at the moment'', which was my cue to start searching for references. After I couldn't find any, I thought I'd take a stab at computing this object for some simple number-theory flavored objects. I haven't got anywhere yet, but let's take a look at what these objects are: </p> <p>Let $A$ be an algebra over a field $k$. 
We call $B$ a <b>comeasuring</b> of $A$ if it comes with an algebra map $\beta: A \rightarrow A \otimes B$; this is very similar to a <a href="http://mebassett.blogspot.co.uk/2012/12/the-quantum-group-slq2-and-its-coaction.html">coaction</a>, except that we forget about a coalgebra structure. A morphism of comeasurings $B_1$ and $B_2$ is an algebra morphism $\phi: B_1 \rightarrow B_2$ such that $\beta_2 = (I \otimes \phi) \circ \beta_1$. </p> <p>Hence we have a category where the objects are comeasurings of $A$ and the arrows are comeasuring morphisms. When $A$ is finite dimensional, we can prove that this category has an <a href="http://en.wikipedia.org/wiki/Initial_and_terminal_objects">initial object</a>, that is, an object $M(A)$ such that for every other object $B$ in the category, there exists a unique morphism $M(A) \rightarrow B$. </p> <p>The proof of the existence of $M(A)$ for finite dimensional $A$ is constructive; we won't go through the details here (see Majid's book), but we will state and use the result:</p> <p>Let $e_1,\ldots,e_n$ be a basis for $A$ and let $c_{ij}^k$ be the structure constants of $A$, id est, defined by </p> <p align=center>$ \displaystyle e_i e_j = \sum_{k=1}^n c_{ij}^k e_k$</p> <p> and define $M'(A)$ as the free associative algebra $k\langle t^1_1, \ldots t^1_n, \ldots t^n_1, \ldots t^n_n \rangle$ modulo the relations </p> <p align=center>$ \displaystyle \sum_{r=1}^n c_{ij}^r t^k_r = \sum_{r,s=1}^n c_{rs}^k t^r_i t^s_j $</p> <p> for $i,j,k=1$ to $n$. The initial object $M(A)$ is $M'(A)$ modulo the additional relations $t^1_1 =1$, $t^i_1 =0$ for $1 < i \leq n$ (here we take $e_1 = 1$). The initial map $\beta_{M(A)}: A \rightarrow A \otimes M(A)$ is: </p> <p align=center>$ \displaystyle \beta_{M(A)}(e_i) = \sum_{r=1}^n e_r \otimes t^r_i$</p> <p> The reader can verify that this $\beta_{M(A)}$ is an algebra map and thus verify that this is indeed a comeasuring. 
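</p>

<p>The construction above is mechanical enough to run in code. Below is a sketch of my own (plain Python, no library) that grinds out the relations of $M(A)$ for $A = \mathbb{F}_4 = \mathbb{F}_2[x]/(x^2+x+1)$, the example computed by hand at the end of this post; the labelling of the surviving generators as words in <code>'a'</code> $= t^1_2$ and <code>'b'</code> $= t^2_2$ is my own choice.</p>

```python
# Sketch (my own, hand-rolled): derive the defining relations of M(A) for
# A = F_2[x]/(x^2 + x + 1) = F_4 from  sum_r c^r_ij t^k_r = sum_{r,s} c^k_rs t^r_i t^s_j.
# Noncommutative polynomials over F_2 are dicts: word (tuple of generators) -> coeff mod 2.

def add(p, q):
    """Sum of two noncommutative polynomials over F_2."""
    out = dict(p)
    for w, c0 in q.items():
        out[w] = (out.get(w, 0) + c0) % 2
    return {w: c0 for w, c0 in out.items() if c0}

def mul(p, q):
    """Product: concatenate words, coefficients mod 2."""
    out = {}
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            out[w1 + w2] = (out.get(w1 + w2, 0) + c1 * c2) % 2
    return {w: c0 for w, c0 in out.items() if c0}

# Structure constants of F_4 in the basis e_1 = 1, e_2 = x (so x^2 = 1 + x):
c = {1: {1: {1: 1, 2: 0}, 2: {1: 0, 2: 1}},
     2: {1: {1: 0, 2: 1}, 2: {1: 1, 2: 1}}}

# Generators t^i_j with the extra relations t^1_1 = 1 and t^2_1 = 0 imposed;
# the survivors are a = t^1_2 and b = t^2_2.
t = {(1, 1): {(): 1}, (2, 1): {}, (1, 2): {('a',): 1}, (2, 2): {('b',): 1}}

relations = []
for i in (1, 2):
    for j in (1, 2):
        for k in (1, 2):
            lhs = {}
            for r in (1, 2):
                if c[i][j][r]:
                    lhs = add(lhs, t[(k, r)])
            rhs = {}
            for r in (1, 2):
                for s in (1, 2):
                    if c[r][s][k]:
                        rhs = add(rhs, mul(t[(r, i)], t[(s, j)]))
            rel = add(lhs, rhs)   # lhs - rhs, but -1 = 1 over F_2
            if rel:
                relations.append(rel)

for rel in relations:
    print(rel)
```

<p>Running this yields exactly two relations, $1 + \alpha + \alpha^2 + \beta^2 = 0$ and $\beta + \alpha\beta + \beta\alpha + \beta^2 = 0$ (with $\alpha = t^1_2$, $\beta = t^2_2$), matching the relations for $M(\mathbb{F}_4)$ stated later in the post.</p>

<p>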
To show that it's initial, we need to show that for any other comeasuring $B$ with map $\beta: A \rightarrow A \otimes B$, there exists a unique algebra morphism $\pi: M(A) \rightarrow B$. Write $\beta(e_i) = \sum_r e_r \otimes b^r_i$ for elements $b^r_i \in B$ and observe that $\sum_r c_{ij}^r b^k_r = \sum_{r,s} c_{rs}^k b^r_i b^s_j$, so we can let $\pi(t^i_j) = b^i_j$ and extend this as a multiplicative map. Uniqueness follows from having finite dimension and by noting that another morphism $\pi'$ must obey the same relations above on the same image of $\beta$.</p> <p>The algebra $M(A)$ is what Majid calls the <i>automorphism quantum group</i>. It is a bialgebra. I won't prove it, but you can see that it has a coproduct, et cetera, by noting that $M(A) \otimes M(A)$ is a comeasuring with the map $(\beta_{M(A)} \otimes I) \circ \beta_{M(A)}$; hence we have a unique morphism $\Delta: M(A) \rightarrow M(A) \otimes M(A)$ by the initial object condition. The counit pops out in a similar way. The coproduct on the generators is <p align=center>$ \displaystyle \Delta t^i_j = \sum_{r=1}^n t^i_r \otimes t^r_j$</p></p> <p>As an example of this nonsense, let $m(x) \in \mathbb{F}_p[x]$ be an irreducible polynomial of degree $n$ over the field with $p$ elements, and let $F_m$ be the corresponding field extension (which is an algebra over $\mathbb{F}_p$). For each other irreducible polynomial $g$ of degree $n$, we have an isomorphic field $F_g$. Each $F_g$ is also an object in the category of comeasurings of $F_m$, as the isomorphism $F_m \rightarrow F_g$ will give rise to an algebra map $F_m \rightarrow F_m \otimes F_g$. $M(F_m)$ is an algebra on $n^2 -n$ generators in this category that has a unique morphism to each $F_g$, hence $M(F_m)$ and its maps to the $F_g$'s would capture the arithmetic of each other field extension of the same degree. I hope that I can use this object in my program for the quantum de Rham cohomology over finite fields. One obvious task is to describe how the various $M(F_g)$ relate. 
The relations will be governed by a rearrangement of the structure constants. Of course they should all be isomorphic, and each $M(F_g)$ exists in the other's $M(F_m)$ category of comeasurings, too...(I suspect my ideas are a bit vacuous.)</p> <p>In the case of $\mathbb{F}_2$, consider the extension $\mathbb{F}_4 \cong \mathbb{F}_2[x]/(x^2 +x + 1)$ as a finite dimensional algebra. The structure constants are easy to compute, and we can easily see that $M(\mathbb{F}_4)$ has two generators. Let's call them $\alpha = t^1_2$ and $\beta = t^2_2$; then, using the above formulas, we have that $\beta_{M(\mathbb{F}_4)}(x) = 1 \otimes \alpha + x \otimes \beta$ and the relations are $1 + \alpha = \alpha^2 + \beta^2$ and $[\alpha, \beta] = \beta^2 + \beta$. This algebra is clearly a lot more complicated than the simple field extension. One should note that there are no other field extensions of degree 2 of $\mathbb{F}_2$. </p>Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-83580645736718808242012-12-30T15:44:00.000-08:002013-01-03T15:06:39.982-08:00The Quantum Group $SL_q(2)$ and its coaction on the Quantum Plane<h1>The algebra $SL(2)$ and its coproduct. </h1> <p>Recall our earlier discussion about a <a href="http://mebassett.blogspot.co.uk/2012/11/what-i-learned-today-universal-group.html">universal group structure on algebras</a>. In particular, consider</p><p align=center>$\displaystyle \text{Hom}_\text{Alg}(k[a,b,c,d], A) \cong A^4 \cong M_2(A)$</p><p> as a vector space. Let $M(2)$ denote the polynomial algebra $k[a,b,c,d]$. Last time we pulled back addition on $A$ to a map $\Delta: k[x] \rightarrow k[x] \otimes k[x]$. This time, we're going to follow the same pattern to take matrix multiplication (a map $ m : M_2(A) \otimes M_2(A) \rightarrow M_2(A)$) back to $\Delta: M(2) \rightarrow M(2) \otimes M(2)$. 
</p><p>In particular, we're looking for a $\Delta$ so that for each $\alpha \in \text{Hom}_\text{Alg}(M(2) \otimes M(2), A)$, precomposition $\alpha \circ \Delta$ implements the matrix multiplication $m$.</p><p>The above notation is slightly confusing; let me try to explain more clearly: We have that the matrix algebra $M_2(A)$ is isomorphic as a vector space to $\text{Hom}_\text{Alg}(M(2),A)$ by the map taking </p><p align=center>$\displaystyle f \mapsto \begin{pmatrix} f(a) & f(b) \\ f(c) & f(d) \end{pmatrix} = \hat{f}$</p> <p>We also have that matrix multiplication is a map $m: M_2(A) \otimes M_2(A) \rightarrow M_2(A)$, and that $M_2(A) \otimes M_2(A)$ is isomorphic as a vector space to $\text{Hom}_\text{Alg}(M(2) \otimes M(2),A)$ by the map</p><p align=center>$\displaystyle \alpha \mapsto \begin{pmatrix} \alpha(a) & \alpha(b) \\ \alpha(c) & \alpha(d) \end{pmatrix} \otimes \begin{pmatrix} \alpha(a') & \alpha(b') \\ \alpha(c') & \alpha(d') \end{pmatrix} = \tilde{\alpha}$</p> <p>I write $a'$, et al, just so it's clear that $\alpha$ varies on different elements of the tensor product basis $a \otimes a$, et al. So for a map to implement multiplication on the $M(2)$ side, it must be $\Delta: M(2) \rightarrow M(2) \otimes M(2)$ and we want $m(\tilde{\alpha}) = \alpha \circ \Delta$.</p><p>From this requirement it's obvious what $\Delta$ needs to be; $\Delta(M)$ for $M\in M(2)$ needs to ensure that the generators $a,b,c,d$, et cetera, get mapped to the items that will correspond to matrix multiplication after they are acted upon by $\alpha$. So $\Delta(a) = a \otimes a + b \otimes c$, et cetera. In matrix notation, we can write this</p><p align=center>$\displaystyle \Delta \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \otimes \begin{pmatrix} a & b \\ c & d \end{pmatrix}$</p> <p>This looks group-like! But it's not! The matrix isn't actually an element of $M(2)$; it's just a convenient way for us to write the action of $\Delta$. 
The actual elements involved are $k$-linear combinations of products of $a,b,c$, and $d$. </p> <h1>Quantizing things </h1> <p>I bet the reader is guessing that $\Delta$ is a coproduct, making $M(2)$ into a bialgebra. Such a reader would be correct. But before we discuss this further, let's take a quotient of $M(2)$ by the relation $ad -bc =1$. Call this new bialgebra $SL(2)$. We can make it into a Hopf algebra by introducing the antipode:</p><p align=center>$\displaystyle S\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} d & -b \\ -c & a\end{pmatrix}$</p> <p>We're going to take a ``quantum deformation'' of this new Hopf algebra. It's actually simple to do: we're going to modify the commutativity of $a$, et al, by an element $q \in k^\ast$. In particular, let $ca = qac$, $ba =qab$, $db = qbd$, $dc = qcd$, $bc = cb$, $da -ad = (q-q^{-1})bc$, and the ``q-determinant'' relation $ad -q^{-1}bc = 1$. The coproduct remains the same. </p> <h1>The Quantum Plane and the coaction </h1> <p>The classical group we're mimicking, $SL_2(\mathbb{R})$, acts on the affine plane $\mathbb{R}^2$ by transforming it in a way that preserves orientation and area of all geometric shapes on the plane. Since $SL(2)$'s role is as the base of a set of homomorphisms to the algebra $A$, we expect any equivalent ``action'' to have the arrows reversed; we'll discuss this later. But first, we're going to need a notion of an affine plane in our polynomial algebra-geometry language:</p><p>This isn't so bad, as $\text{Hom}_\text{Alg}(k[x,y],A) \cong A^2$, hence we call $k[x,y]$ the <i>affine plane</i>. Quantizing it is easy, too: we define the quantum plane $\mathbb{A}^2_q$ to be the free algebra $k\langle x,y\rangle$ quotiented by the relation $yx = qxy$, id est, it's $k[x,y]$ but with a multiplication deformed by the element $q \in k$. </p><p>Now back to our ``arrow reversed'' version of an action, or a coaction. 
One can arrive at this definition by reversing the arrows in the commutative diagram that captures the axioms of an algebra acting on a vector space. In particular, we say that a Hopf algebra $H$ <i>coacts</i> on an algebra $A$ by an algebra morphism $\beta: A \rightarrow H \otimes A$ such that $(I \otimes \beta) \circ \beta = (\Delta \otimes I)\circ \beta$, and $I = (\epsilon \otimes I) \circ \beta$, where $I$ is the identity map and $\Delta$ and $\epsilon$ are the coproduct structure maps for $H$.</p><p>We can define the coaction $\beta$ on the generators $x$ and $y$ and extend it as an algebra morphism; the reader can check that $\beta(x) = x\otimes a + y\otimes c$ and $\beta(y) = x\otimes b + y\otimes d$ defines a coaction. In matrix notation, we have:</p><p align=center>$\displaystyle \beta(x,y) = (x,y) \otimes \begin{pmatrix} a & b \\ c & d \end{pmatrix}$</p><p>I do not have a geometric interpretation for this.</p>Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-74399425922622536402012-12-18T16:25:00.003-08:002012-12-18T16:25:54.569-08:00What I learned today: Coadjoint action of group Hopf algebras<p><i>This is another post in my ``pseudo-daily'' series ``What I learned Today''</i> </p> <p>Yesterday we talked about <a href="http://mebassett.blogspot.co.uk/2012/12/weird-differences-between-action-of.html">actions of a Hopf algebra $H$</a> on an algebra $A$. Today, let's talk about some examples of this (I didn't cover as much ground as I hoped to today, but we're trying to make these blogs daily!). Every Hopf algebra acts on itself by</p> <p align=center>$\displaystyle h \triangleright g = \sum_h h_{(1)} g Sh_{(2)} $</p> <p>This is the <i>adjoint action</i>. It's not hard to prove that it's 1) an action and 2) a Hopf action (that it plays well with the Hopf algebra structure of $H$ and the algebra structure of $A$). 
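In the simplest case, a group algebra acting on itself, the adjoint action can be checked by machine: for a group-like $h$ we have $\Delta h = h \otimes h$ and $Sh = h^{-1}$, so the sum collapses to conjugation. A small Python sketch with $S_3$ as permutation tuples (helper names are my own):

```python
# Sketch: the adjoint action on kS_3. For group-like h the formula
# sum h_(1) g S(h_(2)) reduces to h g h^{-1}.
from itertools import permutations

def compose(p, q):
    # (p * q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    out = [0, 0, 0]
    for i, v in enumerate(p):
        out[v] = i
    return tuple(out)

G = list(permutations(range(3)))

def adjoint(h, g):
    # Delta(h) = h (x) h and S(h) = h^{-1}, so h |> g = h g h^{-1}
    return compose(compose(h, g), inverse(h))

h, g = (1, 0, 2), (0, 2, 1)        # the transpositions (0 1) and (1 2)
assert adjoint(h, g) == (2, 1, 0)  # conjugate is the transposition (0 2)

# the action permutes a conjugacy class: the orbit of a transposition
# is the set of all three transpositions
orbit = {adjoint(x, g) for x in G}
assert orbit == {(1, 0, 2), (0, 2, 1), (2, 1, 0)}
```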
As a more concrete example, let's look at the group Hopf algebra $H=kG$ for a finite group $G$. The coproduct is given by $\Delta g = g \otimes g$ and the antipode is group inversion: $Sg = g^{-1}$. In this case, the adjoint action becomes:</p> <p align=center>$\displaystyle h \triangleright g = hgh^{-1} $</p> <p>Which is just group conjugation. </p> <p>Hopf algebras can act just as well on coalgebras - in this case we require that the action commutes with the coproduct of $A$. Now if $H'$ and $H$ are <a href="http://mebassett.blogspot.co.uk/2012/12/what-i-learned-today-dually-paired-hopf.html">dually paired</a>, then we also have an adjoint action, or rather, a coadjoint action of $H'$ on the <i>coalgebra</i> $H$, given by:</p> <p align=center>$\displaystyle \phi \triangleright h = \sum_h h_{(2)} \langle \phi, (Sh_{(1)})h_{(3)} \rangle $</p> <p>We know that the algebra of functions on $G$, $k(G)$, is the Hopf algebra dual of $kG$, where $\langle f, g \rangle = f(g)$, so let's see what this action becomes for $H' = k(G)$ and $H = kG$:</p> <p align=center>$\displaystyle f \triangleright g = g \langle f, (Sg)g \rangle = f(1) \cdot g$</p> <p>Additionally, we also have the notion of a <i>coaction</i>. Generally, if $H$ is a Hopf algebra and $A$ an algebra, we say $\beta : A \rightarrow H \otimes A$ is a coaction of $H$ on $A$ (or that $A$ is an <i>$H$-comodule algebra</i>) if $\beta$ is an algebra morphism, $(I \otimes \beta) \circ \beta = (\Delta \otimes I) \circ \beta$ and $I = (\epsilon \otimes I) \circ \beta$, where $I$ is the identity and $\Delta$ and $\epsilon$ are the coalgebra structure maps.</p> <p>The most interesting case of this is the quantum group $SL_q(2)$ coacting on the quantum plane $\mathbb{A}^2_q$. 
This is a topic for another post, though.</p>Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-30503513924909513312012-12-17T15:31:00.000-08:002012-12-17T15:31:43.397-08:00What I learned today: Weird differences between the action of a Hopf algebra and its dual<i>This is another post in my ``pseudo-daily'' series ``What I learned Today''</i> <p>Let $H$ be a Hopf algebra. Today we're going to talk about its actions (and its coactions, time permitting). A Hopf algebra is just an algebra, which is just a ring, and rings have actions (equiv. modules), and we're comfortable with that already. For this to make sense on a Hopf algebra, we merely require that the action agrees with all the necessary units, products, coproducts, et cetera.</p> <p>More precisely, we say that $H$ acts on an algebra $A$ (or that $A$ is an $H$-module algebra) if we have a linear map $\triangleright : H \otimes A \rightarrow A$, written $h \triangleright a$, such that </p> <p align=center>$\displaystyle h \triangleright (ab) = m(\Delta(h) \triangleright (a \otimes b))$</p> <p>with $\triangleright$ extended to tensor products and $m$ the product of $A$, and </p> <p align=center>$\displaystyle h\triangleright 1_A = \epsilon(h) 1_A$</p> <p>Some weirdness emerges here. For instance, if $H=kG$, the group Hopf algebra of a finite group $G$ (the vector space having elements of $G$ as a basis, the product being the group operation, and the coproduct defined by $\Delta(g) = g \otimes g$; antipode, units, et cetera are the obvious choices), then a Hopf action collapses to a typical group action:</p> <p align=center>$\displaystyle g\triangleright(ab) = (g\triangleright a) (g\triangleright b)$</p> <p>But if we forget the Hopf algebra structure for a second, we know that $kG$ is exactly the same as $k(G)$, that is, as functions defined on $G$ with values in $k$, as any ``vector'' in $kG$ defines a function and vice 
versa. Hence, as vector spaces $kG$ and $k(G)$ are naturally isomorphic. This is not the case with the Hopf algebra structure, however (though we have seen how the two are <a href="http://mebassett.blogspot.co.uk/2012/12/what-i-learned-today-dually-paired-hopf.html">dually paired</a> already), as the coproduct on $k(G)$ is $\Delta(f)(x,y) = f(xy)$, where we identify $k(G \times G)$ with $k(G) \otimes k(G)$ (this is exactly what a tensor product is for, actually.) In this case, the Hopf algebra action on an algebra $A$ is the same as a $G$-grading on $A$. Let me explain this (with help from <a href="http://math.stackexchange.com/a/260798/27165">a kind person on math.stackexchange</a>):</p> <p>Say $A$ is $G$-graded and for each $a \in A$ let $|a|$ denote the element in $G$ such that $a \in A_{|a|}$. We can get a $k(G)$ action from this by $f \triangleright a = f(|a|) a$. To go the other way, start from a Hopf algebra action $\triangleright: k(G) \otimes A \rightarrow A$. Then for each $g \in G$, let $A_g$ be the set of all elements of $A$ on which every $f\in k(G)$ acts via scalar multiplication by $f(g)$. One then has to prove that these sets are nonempty, that they are subspaces, that $A$ decomposes into direct sums of them, and that they obey the axioms of a grading. I've been assured by various sources that it can be done!</p>Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-14038725137725665832012-12-06T16:39:00.002-08:002012-12-06T16:44:02.720-08:00What I learned today: Dually Paired Hopf Algebras <i>To help keep me motivated and mathematically active, I will be blogging about "what I learned today" in my various projects. This is the second post in this "pseudo-daily" series.</i> <br /><br /> I'm still trying to catch up on Quantum Groups. Any reader who is familiar with the field will be able to tell from these posts that I'm a <i>long</i> way away from doing research. 
But we're slowly getting somewhere!<br />Let $H = kG$ be the group Hopf algebra for a finite group $G$. In particular, the product map $m: H \otimes H \rightarrow H$ is just the group operation $g \otimes h \rightarrow gh$ for $g, h\in G$, the coproduct is $g \mapsto g\otimes g$, the antipode is group inversion $x \mapsto x^{-1}$, the group identity is the unit and $g \mapsto 1$ is the counit. We want to talk about its dual $H^\ast = \text{Hom}(H,k)$. <br /><br /> Kassel's Quantum Groups text gives us several propositions to prove that the dual of a finite dimensional Hopf algebra is also a Hopf Algebra. Since $G$ is a finite group, $H$ is finite dimensional, and we'll follow Kassel in proving that $H^\ast$ in particular is a Hopf Algebra.<br /><br /> Now, $H^\ast$ is the space of linear functions from $kG$ to $k$. Any such linear functional is determined by its values on the basis elements, id est, on the elements of $G$; hence $H^\ast$ is the space $k(G)$ of functions on $G$ with values in $k$. Clearly this space has pointwise multiplication to make it into an algebra - but we want to derive this from the dual of the coproduct map $\Delta: H \rightarrow H \otimes H$.<br /><br /> So let's talk about $\Delta^\ast: (H\otimes H)^\ast \rightarrow H^\ast$. This is, by definition, the map that takes each $\alpha\otimes\beta \in (H\otimes H)^\ast$ to another linear functional $\gamma = \Delta^\ast(\alpha\otimes\beta)$ on $H$ such that $\gamma(h) = \alpha\otimes\beta(\Delta(h))$. But before we can work out how this becomes a product map, we need to work out some technicalities: <br /><br /> Can we say that $(H\otimes H)^\ast \cong H^\ast \otimes H^\ast$ so that $\Delta^\ast$ is actually a product? Yes! 
This is provided by a theorem in Kassel's book which says that the map $\lambda: \text{Hom}(U, U') \otimes \text{Hom}(V,V') \rightarrow \text{Hom}(U \otimes V, U' \otimes V')$ given by $\lambda(f\otimes g)(u \otimes v) = f(u)\otimes g(v)$ is an isomorphism when one of the pairs $(U, U')$, $(V, V')$, or $(U,V)$ consists of only finite dimensional spaces. We won't discuss the proof here, but I do want to point out that the theorem does require finite dimensionality. <br /><br /> Back to the dual of comultiplication on $H$: We have that $\Delta(h) = h \otimes h$. So $\alpha \otimes \beta (h\otimes h) = \alpha(h) \beta(h) = \gamma(h)$. Hence $\Delta^\ast(\alpha \otimes \beta)(h) = \alpha(h)\beta(h)$ for $\alpha, \beta \in H^\ast$ and $h\in H$. Hence the dual of comultiplication of a group Hopf algebra is just pointwise multiplication, as we expected. <br /><br /> Now what about the dual of multiplication? Same definition as above: $m^\ast: H^\ast \rightarrow H^\ast \otimes H^\ast$ must be the map taking $\alpha \in H^\ast$ to an $m^\ast(\alpha)=\beta \otimes \gamma$ such that $\beta\otimes\gamma(h_1\otimes h_2) = \alpha(m(h_1,h_2)) = \alpha(h_1 h_2)$. Hence we let $m^\ast$ be the function $m^\ast(\alpha)(x\otimes y) = \alpha(xy)$.<br /><br /> Continuing in this fashion, you'll see that the antipode is the map $S(\alpha)(x) = \alpha(x^{-1})$, the unit is the constant function $1$, and the counit the map $\alpha \mapsto \alpha(1)$. <br /><br /> Hence we have a Hopf algebra $H^\ast = k(G)$ that is dual to $H = kG$. If we let $\langle, \rangle: H^\ast \otimes H \rightarrow k$ be the evaluation map $\langle \alpha, x \rangle = \alpha(x)$, we can write down the behavior of this map and see if we can come up with a more general definition that applies to more varied sets of Hopf algebras. 
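For a small group one can expand $m^\ast$ concretely in the basis of delta functions: $m^\ast(\delta_a) = \sum_{xy = a} \delta_x \otimes \delta_y$. A quick Python sketch for $G = \mathbb{Z}/3$ (the encoding is my own) checks this against the defining property $m^\ast(\alpha)(h \otimes g) = \alpha(hg)$:

```python
# Sketch: the coproduct on k(G), the dual of multiplication in kG,
# expanded in the delta-function basis for G = Z/3.
G = [0, 1, 2]
mul = lambda x, y: (x + y) % 3

def coproduct(a):
    # the pairs (x, y) with xy = a, i.e. the tensor factors of
    # Delta(delta_a) = sum over xy = a of delta_x (x) delta_y
    return [(x, y) for x in G for y in G if mul(x, y) == a]

# verify <Delta(delta_a), h (x) g> = delta_a(hg) for all h, g
for a in G:
    terms = set(coproduct(a))
    for h in G:
        for g in G:
            lhs = 1 if (h, g) in terms else 0
            rhs = 1 if mul(h, g) == a else 0
            assert lhs == rhs

print(coproduct(0))   # [(0, 0), (1, 2), (2, 1)]
```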
For instance, extending the map to tensor products pairwise, note that<br /><br /><div align="center">$\displaystyle \langle \alpha\beta, h \rangle = \langle \alpha\otimes\beta, \Delta(h) \rangle$</div><div align="center">$\displaystyle \langle \Delta (\alpha), h\otimes g \rangle = \langle \alpha, hg\rangle$</div><div align="center">$\displaystyle \langle S(\alpha), h \rangle = \langle \alpha, S(h) \rangle$</div><div align="center">$\displaystyle \langle 1, h \rangle = 1 $</div><div align="center">$\displaystyle \langle \alpha, 1 \rangle = \alpha(1)$</div>Replacing those last two conditions with the same thing expressed in units and counits, we have<br /><div align="center">$\displaystyle \langle 1, h \rangle = \epsilon(h)$</div><div align="center">$\displaystyle \langle \alpha, 1 \rangle = \epsilon(\alpha)$</div><br /> Thus we now have a definition: Let $H_1, H_2$ be Hopf algebras. We say that they are <b>dually paired</b> if there is a linear map $\langle, \rangle : H_2 \otimes H_1 \rightarrow k$ that satisfies the above 5 conditions. <br /><br /> To make things explicit, $k(G)$ and $kG$ are dually paired by the evaluation map. Majid's book gives a more exotic example of a ``quantum group'' that is paired with itself:<br /><br /> Let $q \in k$ be nonzero and let $U_q(b+)$ be the $k$-algebra generated by elements $g, g^{-1}$, and $X$ with the relation $gX = qXg$. 
Majid's book assures me we'll see how to find this ``quantum group'' in the wild later on, but for now he gives it a Hopf algebra structure with the coproduct $\Delta X = X\otimes 1 + g\otimes X$, $\Delta g = g \otimes g$, $\epsilon X = 0$, $\epsilon g = 1$, $SX = -g^{-1} X$, and $Sg = g^{-1}$<br /><br /> This Hopf algebra is dually paired with itself by the map $\langle g, g \rangle = q$, $\langle X, X \rangle = 1$, and $\langle X, g \rangle = \langle g, X \rangle = 0$Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-24554049806740818032012-11-21T23:20:00.000-08:002012-11-21T23:20:31.770-08:00What I learned today: universal group structure of algebras<p><i>To help keep me motivated and mathematically active, I will be blogging about "what I learned today" in my various projects. I will endeavor to post to this series daily, or as close to daily as I can. This is the first post in that series, and today we're talking about what I learned in universal algebra/hopf algebras.</i></p> <p>So I'm taking a break from my finite fields project and am focusing on a new one, purely in quantum groups. I "started" about three months ago, but between procrastination, my job, and other projects (exempli gratia, learning class field theory) I haven't actually learned any quantum group theory.</p> <p>I had made some progress in learning quantum groups in July, but due to some personal issues that came up in August, I took a long break from mathematics and largely forgot everything I had learned. So I'm trying to learn it all again. Now, Majid's Quantum Groups Primer covers the material exactly needed for my project, but I'm having trouble with many of the algebraic subtleties. In fact, I'm pretty weak in algebra (or maths in general). I'm using Kassel's Quantum Groups as a companion text, and I spent an hour or two today going over some basic universal algebra. 
Here's what I learned: Let $A$ be an associative algebra over a field $k$. Then:</p> $$\text{Hom}_\text{Alg}(k[x_1,\ldots,x_n], A) \cong A^n$$ <p>You can see this because an algebra map from the polynomial algebra to $A$ is exactly determined by its action on the generators $x_1,\ldots,x_n$, hence a choice of an algebra map is a choice of a set map from $\{1,\ldots, n\}$ to $A$, id est, $A^n$.</p> <p>This allows us to write the additive group structure of $A$ universally in terms of maps on $k[x]$ and $k[x', x'']$. That is, we will express addition on $A$ without reference to $A$ itself. The group addition on $A$ is a map $+: A^2 \to A$. Now, from above, $A^2 \cong \text{Hom}_\text{Alg}(k[x',x''],A)$. So if we let $f \in \text{Hom}_\text{Alg}(k[x',x''],A)$, we will reach our goal by expressing $+(f)$ in terms of the polynomial algebras.</p> <p>$f$ is entirely determined by its action on $x'$ and $x''$, hence $f(x')$ and $f(x'')$ define $f$. Now let $\Delta: k[x] \to k[x', x'']$ by $x \mapsto x' + x''$. Then $f \circ \Delta (x) = f(x' + x'') = f(x') + f(x'') = +(f)$. Hence $\Delta$ corresponds to our group addition. </p> <p>Similarly, the map $S:k[x] \to k[x]$ by $x \mapsto -x$ and $\epsilon: k[x] \to k$ by $x \mapsto 0$ correspond to the group inversion and the group identity, respectively, of $A$.</p>Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-28680103996382082292012-08-19T18:00:00.001-07:002012-08-19T18:00:34.114-07:00Progress (or lack thereof) on NCG de Rham Cohomology of Finite Field Extensions<p>I've been working on calculations of the aforementioned entity, in particular, I want $H^0$, id est, the kernel of the exterior derivative, to be $\mathbb{F}_p$ when I pass to the limit leading to the algebraic closure. 
</p><p>I'm going to make that more precise, but first let me clear up some notation: since $H^0$ does not capture the isomorphism class of field extensions, and since the differential calculi of a field $K$ are in one-to-one correspondence with the monic irreducible polynomials in $K[x]$, we shall write $H^0(m(\mu), K)$ to mean the 0th cohomology of the field extension $K[\mu]/(m(\mu))$, or simply $H^0(m(\mu))$ when the field is understood. Also, I shall write $\langle m(x) \rangle_K$ to mean the $K$-span of the set $\{ m(x)^n : n\in \mathbb{N}\} = \{ f(m(x)) : f\in K[x]\}$.</p><p>More precisely, we know that the algebraic closure of $\mathbb{F}_2$ is the colimit of fields $\mathbb{F}_{2^k}$, $k\in\mathbb{N}$, with field morphisms $\varphi_{kj} : \mathbb{F}_{2^k} \rightarrow \mathbb{F}_{2^j}$ whenever $k$ divides $j$. It is hoped that $H^0$ is a contravariant functor, yielding a morphism $H^0 (\varphi_{kj}) : H^0(m_j(\mu)) \rightarrow H^0(m_k(\mu))$ for monic irreducible polynomials $m_j(\mu)$ and $m_k(\mu)$ of degrees $j$ and $k$, respectively. This will give us a projective system and thus a projective limit:</p><p align=center>$ \displaystyle H^0(\varinjlim_{k\in\mathbb{N}} m_k(\mu) ) = \varprojlim_{k\in\mathbb{N}} H^0(m_k(\mu))$</p><p>So our goal is to answer two questions: <ol> <li> What conditions are needed on the polynomials $m_k(x)$, $k\in\mathbb{N}$, so that $H^0(\varphi_{kj})$ is well defined? <li> When it is defined, when does the above limit equal $\mathbb{F}_2$? </ol>(I'm starting to think the first question is vacuous, but I didn't until recently.) </p><h3>1. Calculating $H^0(m_k(\mu), \mathbb{F}_2)$ </h3><p>So first I tried to calculate the 0th cohomology for the simplest case: $H^0(\mu^2 + \mu + 1)$. Finding the cohomology amounts to finding which polynomials satisfy $f(x +\mu) = f(x)$. After staring at the equation for a while, it's pretty easy to see that if $f(x) \in H^0$ is nonconstant, then $\text{deg} f$ must be even. 
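The invariance condition is easy to check by machine. The sketch below encodes $\mathbb{F}_4$ elements as integers $0\ldots3$ representing $a + b\mu$ (an encoding of my own choosing) and verifies that $x^4 + x$ (that is, $x^4 - x$ in characteristic 2) satisfies $f(x + \mu) = f(x)$, while $x^2 + x$ does not:

```python
# Sketch: check membership in H^0 for F_4/F_2. F_4 elements are ints
# 0..3 encoding a + b*mu with mu^2 = mu + 1; polynomials are
# coefficient lists over F_4 (index = degree).

def f4_mul(u, v):
    a, b = u & 1, u >> 1
    c, d = v & 1, v >> 1
    # (a + b mu)(c + d mu) = ac + bd + (ad + bc + bd) mu
    lo = (a * c + b * d) % 2
    hi = (a * d + b * c + b * d) % 2
    return lo | (hi << 1)

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] ^= f4_mul(pi, qj)   # addition in F_4 is XOR
    return out

def shift(p, mu=2):
    # compute p(x + mu) by Horner's rule
    res = [p[-1]]
    for c in reversed(p[:-1]):
        res = poly_mul(res, [mu, 1])       # multiply by (x + mu)
        res[0] ^= c
    return res

f = [0, 1, 0, 0, 1]        # x^4 + x
assert shift(f) == f       # f(x + mu) = f(x): f lies in H^0

g = [0, 1, 1]              # x^2 + x
assert shift(g) != g       # not invariant, so not in H^0
```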
</p><p>After some effort (and help), I was able to prove that $H^0(\mu^2 + \mu + 1) = \langle x^4 - x \rangle_{\mathbb{F}_2}$. The proof went like this: <ol> <li> Find the smallest polynomial $f(x)$ in $H^0$, in this case, $f(x)=x^4 - x$. <li> Let $g(x) \in H^0$; prove that $\text{deg} f$ divides $ \text{deg} g$. (This requires some effort - I've only done it in specific cases by exhaustion). <li> Hence $\text{deg} \, g = r \, \text{deg}\, f$. Then $g(x) - f(x)^r$ has degree divisible by $\text{deg}f$, so it's also in $H^0$, and it's also of smaller degree, so continuing in this fashion we have to end back at $f(x)$, and we're done. </ol></p><p><b>NOTE:</b> I don't know that there is always only one polynomial of smallest degree in $H^0$, but I've not yet found a case where this isn't true, either. </p><p>It's not hard to prove that $x^{p^k} - x$ is always in $H^0(m_k(\mu), \mathbb{F}_p)$. However, $H^0$ can contain a lot more than just $x^{p^k} - x$. But since this is a particularly nice polynomial (its splitting field is $\mathbb{F}_{p^k}$) and $\varprojlim \langle x^{p^k} - x \rangle_{\mathbb{F}_p}$ looks like it's just $\mathbb{F}_p$ (I think, I really have no idea how to calculate that limit, I've not thought about it much yet), one might desire to choose polynomials for the algebraic closure such that $H^0(m_k(\mu)) = \langle x^{p^k} - x\rangle$ . </p><p>Note that in this case ($k$ divides $j$, so $j = kq$), we have: </p><p align=center>$ \displaystyle x^{p^j} - x = - \sum_{i=0}^{q-1} \left( x^{p^{j - k(i+1)}} -x^{p^{j - ik}} \right) = \sum_{i=0}^{q-1} \left(x^{p^k} - x\right)^{p^{k(q-i-1)}} $</p><p>So that $H^0(\varphi_{kj})$ is simply the identity on $H^0(m_j(\mu))$ embedding it into $H^0(m_k(\mu))$.</p><p>So, as a guess, we're going to try to find a specific set of polynomials such that $H^0(m_k(\mu)) = \langle x^{p^k} - x\rangle$. Also as a guess, our first candidates are... </p><h3>2. 
Conway's polynomials and fields </h3><p>The Conway polynomial of degree $k$ for $\mathbb{F}_p$ is the least monic irreducible polynomial of that degree in $\mathbb{F}_p[x]$ under a specific lexicographical ordering. To the best of my knowledge, their primary use is to provide a consistent standard for the arithmetic of $\mathbb{F}_{p^k}$ for portability across different computer algebra systems. Since these are the ``standard convention'' for Galois Fields, we try them first. </p><p>Sadly, they let us down. The Conway polynomial of degree 4 for $\mathbb{F}_2$ is $\mu^4 + \mu + 1$. Using a computer algebra system (sage), I found that $H^0( \mu^4 + \mu + 1) = \langle x^8 + x^4 + x^2 + x \rangle_{\mathbb{F}_2}$. </p><p>In addition to those polynomials, John Conway has also described the algebraic closure of $\mathbb{F}_2$ as a subfield of $\text{On}_2$, the set of all ordinals with a field structure imposed by his ``nim-arithmetic''. The description of this field, from his ``On Numbers and Games'', does not treat $\overline{\mathbb{F}_2}$ as a direct limit over various simple extensions. Instead, he shows that the ``quadratic closure'' of $\mathbb{F}_2$ lies in $\text{On}_2$, and then extends the quadratic closure to a cubic closure, also in $\text{On}_2$, et cetera. </p><p>This means that for us to use his description, we first have to calculate things like the minimal polynomials for $\omega$ and $\omega^\omega$ (where $\omega$ is the least infinite ordinal) over $\mathbb{F}_2$; these calculations are quite difficult (I haven't been able to do a single one). In fact, it's quite a bit of work just figuring out which ordinal corresponds to which $\mathbb{F}_{2^k}$. </p><p>But fortunately for us, the quadratic closure only relies on finite ordinals, namely $2^k$. 
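As an aside, the Sage computation for the degree-4 Conway polynomial can be double-checked with a few lines of standalone code. This sketch (the bit-encoding of $\mathbb{F}_{16} = \mathbb{F}_2[\mu]/(\mu^4 + \mu + 1)$ is my own) confirms that $x^8 + x^4 + x^2 + x$ is invariant under $x \mapsto x + \mu$, while $x^4 + x$ is not:

```python
# Sketch: invariance check over F_16 = F_2[mu]/(mu^4 + mu + 1), with
# field elements encoded as 4-bit integers (bit i = coefficient of mu^i).
MOD = 0b10011              # mu^4 + mu + 1

def gf_mul(u, v):
    # carry-less multiplication reduced modulo MOD
    r = 0
    while v:
        if v & 1:
            r ^= u
        v >>= 1
        u <<= 1
        if u & 0b10000:
            u ^= MOD
    return r

def poly_mul(p, q):
    # polynomials over F_16 as coefficient lists (index = degree)
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] ^= gf_mul(pi, qj)
    return out

def shift_by_mu(p):
    # compute p(x + mu) by Horner's rule
    res = [p[-1]]
    for c in reversed(p[:-1]):
        res = poly_mul(res, [0b0010, 1])   # multiply by (x + mu)
        res[0] ^= c
    return res

f = [0, 1, 1, 0, 1, 0, 0, 0, 1]    # x + x^2 + x^4 + x^8
assert shift_by_mu(f) == f          # invariant: lies in H^0

g = [0, 1, 0, 0, 1]                 # x^4 + x is NOT invariant here
assert shift_by_mu(g) != g
```

Note that $x^4 + x$, which spans $H^0$ in the degree-2 case, fails the invariance test here; this is the sense in which the Conway polynomial "lets us down".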
With some help from the <a href="http://math.stackexchange.com/questions/166804/what-simple-extensions-can-be-naturally-embedded-into-conways-algebraic-closure">internet</a>, we have polynomials $m_{2^k}(\mu)$ describing Conway's quadratic closure. They are given by the recursive relations $m_{2^k}(\mu) = m_{2^{k-1}} ( \mu^2 + \mu)$ with $m_2(\mu) = \mu^2 + \mu + 1$, obviously.</p><p>So $m_4(\mu)$ corresponds to the Conway polynomial (this is not the case in general), and we've already used Sage to show that this polynomial doesn't have the cohomology we're looking for.</p><h3>3. So what next? </h3><p>Well, for one, we <i>could</i> just define the polynomials $m_k(\mu)$ to be such that $H^0 = \langle x^{2^k} -x \rangle$. But is this choice unique? </p><p>Turns out no: again using Sage, I have that both $H^0(1 + \mu + \mu^2 + \mu^3 + \mu^4)$ and $H^0(1+\mu^3+\mu^4)$ are $\langle x^{16} -x \rangle$. </p><p>So let's revisit the requirement on $H^0$. I was probably being a dunce insisting on it in the first place. After all, if $H^0$ really is a functor (as it should be, though I've not tried to prove it, or even defined the categories), then the projective system is always well-defined, and we really just need to calculate the limit.</p><p>In fact, what we really want is that the chain of polynomial subalgebras $H^0(m_k(x))$ becomes coarser as $k$ tends towards larger integers (thus $m_k(x)$ tends towards larger degrees), and that we have sane maps between $H^0(m_j(x))$ and $H^0(m_k(x))$ whenever $k$ divides $j$. </p><p>If my assumption that $H^0$ is always ``spanned'' by a single polynomial is correct, and if my sketch of a proof of $H^0 = \langle f(x) \rangle$ always works, then we might be able to find said map. Say that $H^0(m_k(x)) = \langle \tilde{m}_k(x) \rangle$. 
We know that $x^{2^k} - x \in H^0(m_k(x))$, so that $\langle x^{2^k} - x \rangle \subset \langle \tilde{m}_k(x) \rangle$; we also know that $\langle x^{2^j} - x \rangle \subset \langle x^{2^k} -x \rangle$ as, from before:</p><p align=center>$ \displaystyle x^{p^j} - x = \sum_{i=0}^{q-1} \left(x^{p^k} - x\right)^{p^{k(q-i-1)}} $</p><p>Since $x^{2^j} -x \in H^0(m_j(x))$, we know that $f(\tilde{m}_j(x)) = x^{2^j} - x$ for some polynomial $f \in \mathbb{F}_2[x]$. Let $g(x) \in H^0(m_j(x))$, then </p><p align=center>$ \displaystyle g(x) = \tilde{m}_j(x)^s + g_{s-1} \cdot \tilde{m}_j(x)^{s-1} + \ldots + g_0$</p><p> by definition. Define a map from $H^0(m_j(x)) \rightarrow H^0(m_k(x))$ by </p><p align=center>$ \displaystyle g(x) \mapsto f(\tilde{m}_j(x))^s + g_{s-1} \cdot f(\tilde{m}_j(x))^{s-1} + \ldots + g_0$</p><p>Assuming that makes sense, and I haven't made any other stupid errors (a big if!), then I guess that leaves the following for a TODO to show that $H^0$ goes to $\mathbb{F}_2$ for <b>any</b> algebraic closure: <ol> <li> Check that $H^0$ is indeed ``spanned'' by a single polynomial, id est, that $H^0(m(x)) = \langle f(x) \rangle$ for all $m$. <li> Define the categories for which $H^0$ is a functor, check that the above map works (or find a new one?) <li> Prove that the subalgebras $H^0$ become coarser, as stated above. (Intuitively, it makes sense that they do, but I haven't thought about a proof yet. Also, if they do become coarser, then it seems ``obvious'' that the limit is $\mathbb{F}_2$, as those are the only elements left in a series of smaller and smaller subalgebras.) </ol></p>Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-52156212821951397102012-04-16T16:04:00.000-07:002012-04-16T16:05:07.310-07:00You should re-read the stuff you think you know.I started my PhD in June 2011. 
While I've only worked on a few problems since then, a pattern is emerging: I always spend a couple months stuck on something that looks obvious and silly (both in fore- and hindsight.) Most recently I was stuck on a problem involving basic arithmetic over finite fields. I felt that it was something I <em>should</em> be able to do, but nonetheless was stuck on it for a few weeks.<br /><br />Maybe it's that I'm doing the PhD thing wrongly, or maybe it's that I'm not spending enough time on my mathematics (I've got a job doing predictive modeling, too!). In any case, the story of how I got unstuck might interest some people:<br /><br />I've thought it important to continue doing maths reading independent of my research projects. But while there are several subjects I wish to learn about (and do research in), I have not engaged in much outside study. When I approach a subject, I have two separate reactions: Either I find that the subject is too advanced, and I give up quite early - not wanting to get lost in background material. Or I think the material I have repeats too much of stuff I already know, and I give up - not wanting to waste my time on things I've seen before. <br /><br />After doing this for a few months, I decided that I should read something that I think I already know - just to get in the habit of things. I started reading the field/Galois theory chapters in the abstract algebra book by Dummit and Foote. Pretty light reading as far as mathematics goes. It's stuff I <em>should</em> know. <br /><br />Those chapters contain some straightforward proofs of basic statements on algebraic field extensions. Exempli gratia, if $\mu$ is the root of a polynomial $f$, and $\sigma$ is an element of the Galois group, then one can prove that $\sigma(\mu)$ is also a root of the polynomial by observing the action of $\sigma$ on the polynomial $f$. 
Not particularly new or deep material, but it turned out to be just the stuff I needed...<br /><br />Now back to my problems with arithmetic over finite fields: I've spent the last few weeks trying to decompose an operator on polynomials into separate components. My supervisor and I both had our own ideas of writing out matrices for these operators. I spent a few late nights on it, wrote down some messy results, and went to bed hoping I'd have an epiphany on how to clean it up. I had been hoping for one for weeks! But soon after I started re-reading the basics I got it. I was just reading about applying the action of the Galois group to polynomials, and here I was staring at an operator on polynomials. So I abandoned the ``matrix approach'', naively started looking at the action of the Galois group, and a few minutes later I had the result I was looking for. It seemed so obvious in hindsight, but it <a href="http://mebassett.blogspot.co.uk/2011/06/you-dont-understand-something-until-you.html">often seems obvious only after you've understood it</a>. <br /><br />I should probably keep reading the basics.Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com1tag:blogger.com,1999:blog-6128955598718204808.post-16768803021935406232012-04-15T09:06:00.002-07:002012-04-16T07:23:27.370-07:00Cohomology of Finite Fields Part IIII gave a proof (with help from math.stackexchange) in my last blog post for the 0th cohomology of the field extension ${\mathbb{F}_4/\mathbb{F}_2}$. While I have sketches for descriptions of ${H^0(\mathbb{F}_{p^m} / \mathbb{F}_p)}$ for any prime ${p}$ and positive integer ${m}$, I've not yet been able to describe ${H^1}$ for even the simplest cases. 
I shall describe the problem in this post.<p>First, recall that I mentioned <a href="http://mebassett.blogspot.co.uk/2012/03/noncommutative-de-rham-cohomology-of.html">in my first post on NCG cohomologies of field extensions</a> that a first order differential calculus gives rise to a differential graded algebra. Let's briefly describe this:</p><p>Given our [possibly noncommutative] algebra ${A = \Omega^0}$, we've already described its first order differential calculus as an ${A}$-bimodule ${\Omega^1}$ together with a linear map ${d: \Omega^0 \rightarrow \Omega^1}$ that obeys the rules 1) ${d(ab) = a \cdot db + da \cdot b}$ (where ${\cdot}$ is the action of the algebra on the bimodule) and 2) ${\Omega^1 = \{a \cdot db : a, b \in \Omega^0\}}$. The differential graded algebra is then defined as a <a href="http://en.wikipedia.org/wiki/Graded_algebra">graded algebra</a> ${\Omega = \bigoplus_n \Omega^n}$ together with the map ${d : \Omega^n \rightarrow \Omega^{n+1}}$ and the rules: ${\Omega^0 = A}$, ${\Omega^1}$ as before, ${d^2 = 0}$, and ${\Omega}$ generated by ${\Omega^0}$ and ${\Omega^1}$. The product of the differential graded algebra is the <a href="http://en.wikipedia.org/wiki/Wedge_product">wedge product</a> ${\wedge}$, as in classical differential geometry.</p><p>Putting the above together, we get a complex ${A = \Omega^0 \xrightarrow{d^0} \Omega^1 \xrightarrow{d^1} \ldots}$. Here ${d^n}$ is the restriction ${d^n = d |_{\Omega^n}}$. The cohomology is then defined as ${H^n(A) = \text{ker}(d^n)/\text{Im}(d^{n-1})}$, with ${H^0(A) = \text{ker}(d^0)}$ as we saw in the last two posts.</p><p>In our case of the field extension ${\mathbb{F}_4/\mathbb{F}_2}$, we have as our algebra the polynomial ring ${A = \mathbb{F}_2[x]}$. We've already defined a differential calculus on it, the polynomial ring ${\Omega^1 = \mathbb{F}_4[x]}$, where ${\mathbb{F}_4 = \mathbb{F}_2(\mu)}$, together with the differential map ${df = \frac{f(x + \mu) - f(x)}{\mu}}$. 
Using this, how can we describe the differential graded algebra ${\Omega = \bigoplus_n \Omega^n}$ ?</p><p>First, recall that ${\mathbb{F}_4}$ is a two dimensional vector space over ${\mathbb{F}_2}$ with basis ${\{1, \mu\}}$. This means that the polynomial ring ${\Omega^1}$ can be ``spanned'' by ${\{1, \mu\}}$ over the polynomial ring ${\mathbb{F}_2[x]}$. In other words, every ${f \in \Omega^1}$ can be written as ${f = f_1 \cdot 1 + f_\mu \cdot \mu}$, with ${f_1, f_\mu \in \Omega^0}$. In order for us to get a graded algebra, ${\Omega^2}$ must have a basis ${\{1 \wedge \mu\}}$, hence it's a one dimensional space over ${\Omega^0}$ and thus is isomorphic to ${\Omega^0}$. ${\Omega^n = 0}$ for ${n > 2}$, as we're using a classical anticommutative wedge product.</p><p>But what is the map ${d^1: \Omega^1 \rightarrow \Omega^2}$ ? Well, from our description of ${\Omega^1}$, we can easily express the action of ${d}$ on ${\Omega^1}$ as an action on ${\Omega^0}$, which is well defined. Id est, for ${f \in \Omega^1}$,</p><p align=center>$\displaystyle d^1 f = d^0(f_1) \wedge 1 + f_1 \wedge d^0(1) + d^0(f_\mu) \wedge \mu + f_\mu \wedge d^0(\mu)$</p><br /><p>Now, ${d^0(a) = 0}$ for any constant polynomial, and ${d^0(g) \in \Omega^1}$ for any ${g \in \mathbb{F}_2[x]}$. So ${d^0g = \partial_1 g \cdot 1 + \partial_\mu g \cdot \mu}$. Hence we have linear maps ${\partial_1, \partial_\mu : \mathbb{F}_2[x] \rightarrow \mathbb{F}_2[x]}$. We return to ${d^1}$ using this idea:</p><p align=center>$\displaystyle d^1 f = (\partial_1 f_1 \cdot 1 + \partial_\mu f_1 \cdot \mu) \wedge 1 + (\partial_1 f_\mu \cdot 1+ \partial_\mu f_\mu \cdot \mu) \wedge \mu$</p> <p>The wedge product is anticommutative, so this leaves us with:</p><p align=center>$\displaystyle d^1 f = (\partial_1 f_\mu - \partial_\mu f_1) \cdot 1 \wedge \mu$</p><br /><p>And ${d^2 f = 0}$, as the rest of the ${\Omega^n}$'s are 0.
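(Another editorial aside, not in the original post: the rule ${d^1 \circ d^0 = 0}$, which we need for this to be a complex, can be checked mechanically. The Python sketch below is self-contained and uses an encoding of my own: ${a + b\mu \in \mathbb{F}_4}$ is stored as the integer ${a + 2b}$, polynomials as coefficient lists. It splits ${d^0 f}$ into its ${\partial_1}$ and ${\partial_\mu}$ components and evaluates ${d^1(d^0 f) = \partial_1 \partial_\mu f - \partial_\mu \partial_1 f}$ on a handful of monomials.)

```python
def f4_mul(u, v):  # F_4 = F_2(mu), mu^2 = mu + 1; element a + b*mu stored as a + 2*b
    a, b = u & 1, u >> 1
    c, e = v & 1, v >> 1
    return ((a*c + b*e) % 2) + 2 * ((a*e + b*c + b*e) % 2)

def p_add(f, g):  # addition in F_4[x] is coefficient-wise XOR
    n = max(len(f), len(g))
    return [(f[i] if i < len(f) else 0) ^ (g[i] if i < len(g) else 0) for i in range(n)]

def p_mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] ^= f4_mul(a, b)
    return out

def p_shift(f):  # f(x + mu), by Horner's rule with x + mu encoded as [2, 1]
    res = [0]
    for c in reversed(f):
        res = p_add(p_mul(res, [2, 1]), [c])
    return res

def trim(f):
    while f and f[-1] == 0:
        f = f[:-1]
    return f

def d0(f):  # (f(x + mu) - f(x)) * mu^{-1}, with mu^{-1} = mu + 1 = 3
    return [f4_mul(c, 3) for c in p_add(p_shift(f), f)]

def parts(omega):  # split omega in F_4[x] as f_1 + f_mu * mu
    return [c & 1 for c in omega], [c >> 1 for c in omega]

def d1(omega):  # coefficient of 1 /\ mu in d^1; char 2, so the minus sign is a plus
    f1, fmu = parts(omega)
    return trim(p_add(parts(d0(fmu))[0], parts(d0(f1))[1]))

for n in range(1, 8):
    assert d1(d0([0] * n + [1])) == []  # d^1 (d^0 x^n) = 0
```

By linearity this covers every polynomial supported on those monomials; it is a spot check, not a proof.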
Now back to the cohomologies: ${H^1(\mathbb{F}_4/\mathbb{F}_2) = \text{ker}(d^1)/\text{Im}(d^0)}$, where ${\text{ker}(d^1) = \{f \in \Omega^1 : \partial_1 f_\mu = \partial_\mu f_1 \}}$ and ${\text{Im}(d^0) = \{f_1 \cdot 1 + f_\mu \cdot \mu \in \Omega^1 : \exists f \in \mathbb{F}_2[x] \; \text{such that} \; df = f_1 \cdot 1 + f_\mu \cdot \mu\}}$. Hence we need to find a description of the ``partial derivatives'' ${\partial_1}$ and ${\partial_\mu}$. In other words, we need to describe ${d^0f}$ as:</p><p align=center>$\displaystyle \frac{f(x + \mu) -f(x)}{\mu} = \partial_1 f + \partial_\mu f \cdot \mu$</p><p>By noting that ${\mu^3 = 1}$, I was able to write down a formula for ${\partial_i x^{3n}}$, and then formulas for ${\partial_i x^{3n+1}}$ and ${\partial_i x^{3n+2}}$ in terms of ${\partial_i x^{3n}}$. But the binomial expansion makes these formulas just too messy to work with. I then tried finding a basis under which the maps would have a nice, neat formula, but I had no luck there. Next I tried looking at the action of the Galois group on $d$.</p><br /><p>Now, the only non-trivial element of $\text{Gal}(\mathbb{F}_4/\mathbb{F}_2)$ is the Frobenius automorphism $\sigma(x) = x^2$, exempli gratia, $\sigma\mu=\mu^2=\mu+1$. The action of the Galois group extends to an action on the polynomial ring - one that leaves $\mathbb{F}_2$ fixed. Hence we have</p><p align=center>$\displaystyle \sigma d f = \sigma(\partial_1 f) + \sigma(\partial_\mu f \cdot \mu) = \partial_1 f + \partial_\mu f \cdot (\mu + 1) = df + \partial_\mu f$</p><p>This allows us to write the ``partial derivatives'' in terms of $d$ and the Frobenius automorphism.
Id est,</p><p align=center>$\displaystyle \partial_\mu f = \sigma (d f) - d f$</p><p>and</p><p align=center>$\displaystyle \partial_1 f = df - \sigma(d f) \mu + (df) \mu$</p><p>It remains to be seen if these formulas for the partials will actually help me calculate the remaining cohomologies.</p>Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-11556889025247673102012-04-11T15:15:00.001-07:002012-04-11T15:17:19.313-07:00Cohomology of Finite Fields Part IILast time we described a theory of cohomology for finite field extensions, but we fell short of calculating ${H^0(\mathbb{F}_4 / \mathbb{F}_2)}$ as I had discovered a mistake in my proof shortly after I had posted the blog. Thanks to the <a href="http://math.stackexchange.com/questions/121833">kind folks on math.stackexchange</a> I have a correct proof (and result!). Let's go through it.<br /><p><a href="http://mebassett.blogspot.co.uk/2012/03/noncommutative-de-rham-cohomology-of.html">Recall</a> that ${H^0(\mathbb{F}_{4} / \mathbb{F}_2) = \text{ker}(d)}$, where ${\mathbb{F}_{4} = \mathbb{F}_2(\mu)}$, ${\mu}$ a root of a monic irreducible polynomial of degree ${2}$ over ${\mathbb{F}_2}$, and </p><p align=center>$\displaystyle d: \mathbb{F}_2[x] \rightarrow \mathbb{F}_{4}[x] \; \text{by} \; df = \frac{f(x + \mu) - f(x)}{\mu}$</p><br /><p> Hence ${H^0(\mathbb{F}_{4}/\mathbb{F}_2)}$ is the set of all polynomials ${f \in \mathbb{F}_2[x]}$ such that ${f(x + \mu) = f(x)}$. To find it, first note that if ${f(x) \in H^0}$, then so is ${g(f(x))}$ for any ${g \in \mathbb{F}_2[x]}$, as ${g(f(x + \mu)) = g(f(x))}$.
If we can show that some polynomial ${f}$ is the unique polynomial of smallest degree in ${H^0}$, and further that every ${h \in H^0}$ has degree divisible by the degree of ${f}$, then we have that ${\{f(x)^n : n\in \mathbb{N}\}}$ is the basis for ${H^0}$.</p><p>Notice also that ${f(x) = x^{4} + x \in H^0}$. This follows from the fact that ${\mathbb{F}_{4}^{\ast}}$ is the cyclic multiplicative group of order ${3}$, hence ${\mu^{3} = 1}$. So we have </p><p align=center>$\displaystyle f(x + \mu) = (x + \mu)^{4} + x + \mu = x^{4} + \mu^{4} + x + \mu = x^{4} + x + \mu + \mu = f(x)$</p><br /><p>It's not hard to show (exempli gratia, by exhaustion) that ${x^4 + x}$ is the smallest degree non-constant polynomial in ${H^0}$, and that, up to adding a constant, it's the only polynomial in ${H^0}$ of degree ${4}$. </p><p>Now let ${f \in H^0}$, and let ${\text{deg}f = n}$, say ${f(x) = x^n + a_1 x^{n-1} + \ldots + a_n }$. Clearly ${n}$ cannot be odd, as then ${\binom{n}{1}}$ is also odd, so the expansion </p><p align=center>$\displaystyle f(x + \mu) = (x+\mu)^n + \ldots + a_n = x^n + \mu x^{n-1} + \ldots + \text{lower order terms}$</p><br /><p> has a ${\mu}$ as the coefficient of the ${x^{n-1}}$ term, so it can't possibly equal ${f(x)}$.</p><p>Now, we want to show that ${4 | \text{deg}f}$. Say ${\text{deg}f = n = 4k + 2}$. Let's look at the coefficient of ${x^{4k}}$ in ${f(x + \mu)}$. We have ${(x+\mu)^{4k + 2} = (x+\mu)^{4k}(x+\mu)^2 = (x^4 + \mu^4)^k(x^2 + \mu^2) = x^{4k+2} + \mu^2 x^{4k} + \text{lower order terms...}}$. Also ${(x+\mu)^{4k + 1} = (x^4 + \mu^4)^k (x + \mu) = x^{4k+1} + \mu x^{4k} + \text{lower order terms...}}$. Hence in ${f(x + \mu) - f(x)}$, ${x^{4k}}$ has a nonzero coefficient (${\mu^2}$ or ${\mu^2 + \mu}$, depending on whether ${f}$ has an ${x^{4k+1}}$ term). Thus, if the degree of ${f}$ is not divisible by 4, then ${f}$ is not in ${H^0}$.</p><p>So say ${f \in H^0}$ has degree ${4k}$. Then ${f(x) - (x^4 + x)^k}$ is also in ${H^0}$ and has strictly smaller degree, which must again be divisible by 4.
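(Editorial aside, not part of the original argument: the small cases here can be brute-forced in a few lines of Python. The encoding is my own toy convention — ${a + b\mu \in \mathbb{F}_4}$ stored as the integer ${a + 2b}$, polynomials as coefficient lists — and the check enumerates every ${f \in \mathbb{F}_2[x]}$ of degree at most ${4}$.)

```python
def f4_mul(u, v):  # F_4 = F_2(mu), mu^2 = mu + 1; element a + b*mu stored as a + 2*b
    a, b = u & 1, u >> 1
    c, e = v & 1, v >> 1
    return ((a*c + b*e) % 2) + 2 * ((a*e + b*c + b*e) % 2)

def p_add(f, g):  # addition in F_4[x] is coefficient-wise XOR
    n = max(len(f), len(g))
    return [(f[i] if i < len(f) else 0) ^ (g[i] if i < len(g) else 0) for i in range(n)]

def p_mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] ^= f4_mul(a, b)
    return out

def p_shift(f):  # f(x + mu), by Horner's rule with x + mu encoded as [2, 1]
    res = [0]
    for c in reversed(f):
        res = p_add(p_mul(res, [2, 1]), [c])
    return res

def trim(f):
    while f and f[-1] == 0:
        f = f[:-1]
    return f

def d(f):  # df = (f(x + mu) - f(x)) / mu, with mu^{-1} = mu + 1 = 3
    return [f4_mul(c, 3) for c in p_add(p_shift(f), f)]

# every f in F_2[x] of degree <= 4, tested for f(x + mu) = f(x)
kernel = set()
for bits in range(32):
    f = [(bits >> i) & 1 for i in range(5)]
    if trim(d(f)) == []:
        kernel.add(tuple(trim(f)))

# only the constants, x^4 + x, and x^4 + x + 1 survive
assert kernel == {(), (1,), (0, 1, 0, 0, 1), (1, 1, 0, 0, 1)}

t = [0, 1, 0, 0, 1]                 # t = x^4 + x
t3 = p_mul(p_mul(t, t), t)          # t^3 = x^12 + x^9 + x^6 + x^3
assert trim(d(t3)) == []            # powers of t lie in H^0 too
```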
In this way we see that polynomials in ${t = x^4 + x}$ form ${H^0}$.</p>Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-9242086841129193552012-03-12T17:42:00.014-07:002012-03-18T16:46:11.467-07:00Noncommutative de Rham cohomology of finite fieldsAs my lack of blogs might suggest, I've not made much progress on the Ph.D. front for the past several weeks. I've been reading papers by Drinfeld, et al., trying to crack the Grothendieck-Teichmüller group. I've not understood much. While I'm not giving up on those ideas, I am taking my supervisor's suggestion to work more closely with him on a smaller project. I'm computing the Noncommutative de Rham cohomology of extensions of finite fields.<br /><br /><h2>Finite Fields and Differential Calculi </h2><br />As a specific example of this, let's look at the finite field ${\mathbb{F}_2}$ and a field extension of degree 2. Readers familiar with basic field theory should know that, in order to extend this field, we need a degree 2 irreducible polynomial in ${\mathbb{F}_2}$, say ${\mu^2 + \mu + 1}$. This polynomial will generate a prime ideal in the polynomial ring ${\mathbb{F}_2[\mu]}$, and the quotient ring ${\mathbb{F}_2[\mu] / (\mu^2 + \mu + 1)}$ will be our new field. So what does this field look like? Essentially, elements of this field are polynomials with the condition that ${\mu^2 = \mu + 1}$. The reader should see that all the elements of the new field are ${0, 1, \mu, \; \text{and}\; \mu+1}$. What we've really done here is adjoined the root of the polynomial ${\mu^2 + \mu + 1}$ to our field. Call this new field ${\mathbb{F}_4}$. It can also be thought of as a vector space over ${\mathbb{F}_2}$ with basis ${\{1, \mu \}}$. Our ``space'' is the finite line, which we're modeling with ${A = \mathbb{F}_2[x]}$. Now how do we get to noncommutative de Rham cohomology? First, we need the differential calculus of 1-forms.
Recall from my <a href="http://mebassett.blogspot.com/2011/10/starting-my-phd-noncommutative-geometry.html">earlier post on NCG black holes</a> that we can define a differential calculus over an algebra ${A}$ as the pair ${(d, \Omega^1)}$, with ${\Omega^1}$ a bimodule over ${A}$ and ${d: A \rightarrow \Omega^1}$ a linear map that obeys the product rule ${d(ab) = d(a)\cdot b + a\cdot d(b)}$. Additionally, we require that the set ${\{a \cdot d(b) : a, b \in A\}}$ spans our calculus ${\Omega^1}$. Now, ${\mathbb{F}_2[x]}$ has an obvious action on the polynomial ring ${\mathbb{F}_4[x]}$, so we can think of ${\Omega^1 = \mathbb{F}_4[x]}$ as our calculus of 1-forms. For an ${f \in A = \mathbb{F}_2[x]}$, the derivative looks like:<p align=center>$\displaystyle df = \left( f(x + \mu) - f(x) \right) \mu^{-1}$</p><br />We also have an NCG notion of an <a href="http://en.wikipedia.org/wiki/Exterior_algebra">exterior algebra</a>, which gives us spaces of n-forms extended from ${\Omega^1}$. This is how we get a de Rham cohomology. In our case we have a complex:<p align=center>$\displaystyle \mathbb{F}_2[x] = \Omega^0 \xrightarrow{d^0} \Omega^1 = \mathbb{F}_4[x] \xrightarrow{d^1} \Omega^2 \xrightarrow{d^2} \ldots $</p><br />Where ${d^n = d |_{\Omega^n}}$ is the derivative restricted to the calculus of n-forms (exempli gratia, ${d^0}$ is the ${d}$ we defined above.) The n-th de Rham cohomology is then ${H^n(A) = \text{ker}(d^n) / \text{Im}(d^{n-1})}$. But what are ${\Omega^2, d^1}$, et cetera? We'll save that topic for another post. The zeroth cohomology group is just ${H^0 = \text{ker}(d^0)}$ and for now we'll just worry about calculating that.<br /><br /><h2>Finding ${H^0}$ </h2><br />So ${H^0}$ will consist of exactly the polynomials ${f \in \mathbb{F}_2[x]}$ such that:<p align=center>$\displaystyle f(x+\mu) - f(x) = 0$</p><br />Obviously constants fit this. Because we're working in a field of characteristic ${p=2}$, we have the Frobenius identity ${(x + a)^p = x^p + a^p}$.
This causes me to suspect that only polynomials of the form:<p align=center>$\displaystyle f_n(x) = x^{2^n} + a_{n-1} x^{2^{n-1}} + \ldots + a_0 x$</p><br />can lie in the kernel. We'll prove this later, but first let's see what exactly happens with such a polynomial: <p align=center>$\displaystyle f_n(x + \mu) = (x+\mu)^{2^n} + a_{n-1} (x+\mu)^{2^{n-1}} + \ldots + a_0(x+\mu) $ <br/> $<br /> = x^{2^n} + \mu^{2^n} + a_{n-1}( x^{2^{n-1}} + \mu^{2^{n-1}}) + \ldots + a_0 x + a_0 \mu $</p>Hence we have<p align=center>$\displaystyle f_n(x + \mu) - f_n(x) = \mu^{2^n} + a_{n-1} \mu^{2^{n-1}}+ \ldots + a_0 \mu$</p><br />From earlier, we have that ${\mu^2 = \mu + 1 = \mu^{-1}}$. So ${\mu^3 = 1}$, more explicitly, <p align=center>$\displaystyle \mu^r = \begin{cases} 1 & r \equiv 0 \mod 3 \\ \mu & r \equiv 1 \mod 3 \\ \mu^{-1} & r \equiv 2 \mod 3 \end{cases} $</p><br />Moreover, we have that ${2^r \equiv 2 \mod 3}$ when ${r}$ is odd, and ${1 \mod 3}$ when ${r}$ is even. Combining all this, we have that for even ${n}$:<p align=center>$\displaystyle f_n(x+\mu) - f_n(x) = \mu (1 + a_{n-2} + \ldots + a_0) + \mu^{-1} (a_{n-1} + a_{n-3} + \ldots + a_1)$</p><br />Hence ${df_{2k} = 0}$ when ${\sum_{i=1}^{k} a_{2(k-i) + 1} = 0}$ and ${\sum_{i=1}^{k} a_{2(k-i)} = 1}$. Similarly, when ${n= 2k + 1}$, we have that ${df_{2k+1} = 0}$ when ${\sum_{i=1}^{k} a_{2(k-i) + 1} = 1}$ and ${\sum_{i=0}^{k} a_{2(k-i)} = 0}$. We can use these formulas to write down a basis for some things in the kernel by concentrating on the shortest polynomials with the highest powers. The above formulas say that the shortest possible polynomial in the kernel will have two terms of the form ${x^{2^i}}$ with the indices ${i}$ of the same parity. Hence our basis is ${\{ x^{2^n} + x^{2^{n-2}} : n \geq 2\}}$. Now we need to prove that such things are the only things in the kernel. <strike>To see this, let's calculate the action of...</strike><br /><br /><h2>Update 18 Mar 2012</h2><p>The proof I originally posted here was wrong.
I made two mistakes: first, my calculation of $f(x + \mu) - f(x)$ neglected constant terms. Second, I never bothered to check that the coefficient of $x^k$ is zero when $f$ isn't spanned by our powers-of-two basis (I only checked to see if there is a nonzero term, but since we're working in a field of characteristic 2, the sum of nonzero terms can still be zero.) I spent several days trying to correct this proof, but I can't. The statement itself is wrong. ${\{ x^{2^n} + x^{2^{n-2}} : n \geq 2\}}$ <strong>is not the basis</strong>. I have several counterexamples, exempli gratia $f(x) = x^{12} + x^9 + x^6 + x^3$ and $f(x) = x^{20} + x^{17} + x^5 + x^2$ and I believe I can find polynomials in the kernel that have 8, 16, et cetera terms that aren't the sum of smaller things in the kernel. So... I still have some work to do before I can find $H^0$. </p>Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-37381968425538893602012-01-18T14:24:00.001-08:002012-01-18T14:26:05.191-08:00Don't Censor the Web.Congress is considering two Orwellian-named laws, SOPA and PIPA, that are threatening free speech, internet security, and innovation.<br /><br />This is a reminder to <a href="http://projects.propublica.org/sopa/">call your representatives in Congress</a> and/or donate to the <a href="http://www.eff.org">EFF</a> today to help stop internet censorship.Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-57652521018021472982012-01-07T19:05:00.000-08:002012-01-07T19:11:10.501-08:00Constructing the Grothendieck-Teichmuller Group<p>So for the past <b>six or seven months</b> I've been trying to get a copy of the paper ``On quasitriangular Quasi-Hopf algebras and a group closely connected with ${\text{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})}$'' by V.G. Drinfel'd.
My online search uncovered only a Russian copy (which, unfortunately, I don't read). My library searches were equally unfruitful: the British library's copy is ``unavailable'' and I was unable to find a copy at another UK institution. My supervisor was sure he had a copy, but when he discovered that it mysteriously disappeared he encouraged me to request an inter-library loan with my university. I did, and my university said they found a copy in Leeds, but the last ten pages are missing. So - if any readers have a copy of the aforementioned paper: I'd greatly appreciate your lending it to me!</p><br /><p>In the meanwhile, I've been reading ``On Associators and the Grothendieck-Teichmuller Group'' by Dror Bar-Natan. Among other things, that paper described the construction of said group via a ``Parenthesized Braid Category''. I'll discuss that construction in this post. </p><br /><h2>1. Braids and Categories </h2><br /><p>I really don't want to give a precise definition of a braid (I've not yet seen one that doesn't make my head hurt), but hopefully I won't be too wrong if I propose the following:</p><br /><blockquote><b>Definition 1</b> <em> A <i>braid on ${n}$ strands</i> is a set of ${n}$ oriented curves with distinct starting and ending points, where the starting points all lie on one line, the ending points all lie on another, and no curve contains a loop. </em></blockquote><br /><p>For example, the following is a braid: </p><br /><p align=center> <a href="http://2.bp.blogspot.com/-HHmKJw8kA5s/TwkIEYrt5kI/AAAAAAAAACQ/9kVTzQeCG7U/s1600/braid1.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 105px; height: 112px;" src="http://2.bp.blogspot.com/-HHmKJw8kA5s/TwkIEYrt5kI/AAAAAAAAACQ/9kVTzQeCG7U/s320/braid1.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5695092075225015874" /></a> </p><br /><p>Clearly we can enumerate the starting and ending points, and treat a braid as a permutation on them.
In this way, braids form a category where the braid itself is the morphism and the objects are simply sets to be permuted. </p><br /><p>We're going to change our class of objects slightly - we're going to consider <i>parenthesizations</i> of sets. Id est, pretend that the sets have some underlying multiplicative structure. We want to order the sets and denote the order of multiplication. Exempli gratia, the ordering ${(1 \, 2 \, 3)}$ has two parenthesizations: ${((1 \, 2) \, 3)}$ and ${(1 \, (2 \, 3))}$. Our braid morphisms are allowed to act on any parenthesizations. </p><br /><p>We aren't done complicating things yet: we're going to modify the morphisms themselves so that they become formal sums:</p><br /><p align=center>$\displaystyle \sum_{i=1}^{k} \beta_i B_i$</p><br /><p>Where each ${B_i}$ is a braid on ${n}$ strands, all ${B_i}$'s have the same effective permutation, and ${\beta_i}$ belongs to some ${\mathbb{Q}}$-algebra, usually ${\mathbb{Q}}$ or ${\mathbb{C}}$. We're forming the ``algebroid'' over ${\mathbb{Q}}$ or ${\mathbb{C}}$ on the set of braids on ${n}$ strands, with composition defined to be a bilinear map in the same way you would do for a group algebra. <p>This category is our <b>Parenthesized Braid Category</b> ${\textbf{PaB}}$. </p><br /><br /><h2>2. Fibred Linear Categories </h2><br /><br /><p>Our category is <i>fibred</i>. To see this, consider the category ${\textbf{PaP}}$ of parenthesized objects as before, but with simple permutations as the morphisms. Then consider the functor ${\textbf{S} : \textbf{PaB} \rightarrow \textbf{PaP}}$ that maps each object to itself, but takes each formal sum of braids to that one effective permutation.
Clearly the set of morphisms in ${\textbf{PaB}}$ from ${O_1}$ to ${O_2}$ is: <p/><br /><p align=center>$\displaystyle \text{Hom}_\textbf{PaB}(O_1, O_2) = \textbf{S}^{-1} \left( \text{Hom}_\textbf{PaP}(O_1, O_2) \right)$</p><br /><p> Where ${\textbf{S}^{-1}}$ denotes the preimage under the functor ${\textbf{S}}$.</p><br /><p>One might also note that for a fixed permutation ${P}$, the fibre of morphisms ${\textbf{S}^{-1}(P)}$ is a vector space (actually, it's an algebra, because we can compose morphisms as our product). This allows us to define some very ``linear'' notions, for instance:</p><br /><p>We can define <i>subcategories</i>: a category is a subcategory of ${\textbf{PaB}}$ if it shares the same collection of objects and each ${\text{Hom}(O_1, O_2)}$ is a subspace of the corresponding ${\text{Hom}_\textbf{PaB}(O_1, O_2)}$.</p><br /><p>A subcategory is an <i>ideal</i> if whenever one of two composable morphisms ${B_1}$ or ${B_2}$ belongs to it then the composition belongs to it, too. Exempli gratia, let ${\textbf{I}}$ be the ideal where all morphisms ${\sum \beta_i B_i}$ have ${\sum \beta_i = 0}$. Then we can define the <i>quotient</i> ${\textbf{PaB}/\textbf{I}}$ in an obvious way: each set of morphisms ${\text{Hom}(O_1, O_2)}$ is the corresponding quotient in the bigger set of morphisms.</p><br /><p>Additionally, we can define ideal powers: ${\textbf{I}^n}$ has morphisms that can be written as compositions of ${n}$ morphisms in ${\textbf{I}}$. We can continue this and take an inverse system with an inverse limit; in fact, we can even do the ${\textbf{I}}$-adic completion: </p><br /><p align=center>$\displaystyle \widehat{\textbf{PaB}} = \varprojlim_n \textbf{PaB}/\textbf{I}^n$</p><br /><p>Finally, we can do a tensor product of ${\textbf{PaB}}$ with itself (in the same way we would do tensor powers of an algebra).
For ${\textbf{PaB}^{2\otimes}}$ we define the morphisms as the disjoint union: </p><br /><p align=center>$\displaystyle \text{Hom}_{\textbf{PaB}^{2\otimes}}(O_1,O_2) = \coprod_{P \in \text{Hom}_{\textbf{PaP}}(O_1, O_2)} \textbf{S}^{-1}(P) \otimes \textbf{S}^{-1}(P)$</p><br /><p> Now we can even do a coproduct ${\Delta: \textbf{PaB} \rightarrow \textbf{PaB}^{2\otimes}}$ by ${B \mapsto B \otimes B}$. This coproduct is a functor. (I suspect that there is a bialgebra/Hopf algebra structure hidden somewhere in here but I've not thought about it enough yet. Not for this post.)</p><br /><br /><h2>3. Some Functors </h2><br /><br /><p>In addition to ${\textbf{S} : \textbf{PaB} \rightarrow \textbf{PaP}}$, we also have a few functors ${\textbf{PaB} \rightarrow \textbf{PaB}}$. We can describe these by their action on a single braid and one can expand them as a linear action on the formal sums we use. First, we have the extension functors ${d_0}$ and ${d_{n+1}}$ that add a single straight strand to the left or right of the braid, respectively. </p><br /><p>Second, we have cabling functors ${d_i}$ for ${1 \leq i \leq n}$ on a braid of n strands. This functor simply doubles the ${i}$-th strand.</p><br /><p>Finally, we have a strand-removal functor ${s_i}$, which, as you might guess, removes the ${i}$-th strand.</p><br /><p>We can now define the Grothendieck-Teichmuller group ${\widehat{\text{GT}}}$. This is the group of all invertible functors ${a: \textbf{PaB} \rightarrow \textbf{PaB}}$ such that ${\textbf{S} \circ a = \textbf{S}}$, ${d_i \circ a = a \circ d_i}$, ${s_i \circ a = a \circ s_i}$, ${\Delta \circ a = (a \otimes a) \circ \Delta}$.
We also require that $a$ fixes the following braid:</p><br /><p align=center> <a href="http://2.bp.blogspot.com/-7qL2Kf9tvmU/TwkINeT5NoI/AAAAAAAAACc/74igkgVlmdI/s1600/braid2.png"><img style="cursor:pointer; cursor:hand;width: 55px; height: 65px;" src="http://2.bp.blogspot.com/-7qL2Kf9tvmU/TwkINeT5NoI/AAAAAAAAACc/74igkgVlmdI/s320/braid2.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5695092231354529410" /></a> </p><br /><p>Right! There we are. Sooner or later I'll actually do some maths, but for now I'm stuck trying to understand definitions; hence this post.</p><br /><br /><h2>4. Source </h2><br /><br /><p>This is all just a reiteration of the material in the aforementioned paper by Bar-Natan. </p>Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-68078021849527017392011-11-28T13:47:00.000-08:002011-11-28T14:05:43.555-08:00Climbing Out of Black Holes<p><a href="http://mebassett.blogspot.com/2011/10/confusion-in-black-holes.html">Last month</a> I published two questions that had me stuck inside Majid's ``almost commutative black hole''. We'll start today by answering those questions while trying to avoid insulting myself and my readers, and then move on to calculate the Grassmann connection on our Black Hole Algebra.</p><br /><h2>1. NCG Vector Bundles For the Schwarzschild Solution </h2><br /><p>Our ``space'' is the algebra $ {A = k[x_1, x_2, x_3, r, r^{-1}]}$, our ``vector bundle'' projective module $ {\mathcal{E}}$ is the image of the operator given by the idempotent matrix $ {P \in \text{M}_3(A)}$ with $ {(P)_{ij} = \delta_{ij} - \frac{x_i x_j}{r^2}}$, and our calculus of 1-forms $ {\Omega_1}$ is the bimodule generated by symbols $ {\text{d}x_i}$ for $ {i = 1, 2, 3}$.
The question was how can $ {\mathcal{E} \subset A^{\oplus 3}}$ be spanned by $ {\omega_i = \text{d}x_i - \frac{x_i \text{d}r}{r}}$ for $ {i = 1, 2, 3}$.<br/>The key here is to realize that $ {\Omega_1}$ is isomorphic to $ {A^{\oplus 3}}$. Consider $ {A^{\oplus 3}}$ to be the space of row 3-vectors $ {v = (a_1 \; a_2 \; a_3)}$, and give it the ``standard basis'' $ {e_i = (\delta_{1 i} \; \delta_{2 i} \; \delta_{3 i} )}$. In this case, the idempotent acts on the right, and its action on the standard basis is $ {e_i \cdot P = P_{ik}e_k}$. Under the isomorphism given by $ {e_i \mapsto \text{d}x_i}$, we see that $ {\mathcal{E}}$ is spanned by $ {e_i \cdot P \mapsto P_{ik}\text{d}x_k = \text{d}x_i - \frac{x_i \text{d}r}{r} = \omega_i}$ as desired. </p><br /><h2>2. Grassmann Connections in NCG </h2><br /><p>Last time we defined a connection on an NCG vector bundle $ {\mathcal{E}}$ as a linear map $ {\nabla:\mathcal{E} \rightarrow \Omega_1 \otimes_{A} \mathcal{E}}$ that obeys certain conditions. We also defined the ``Grassmann connection'', a map $ {\nabla_P = P \circ \text{d}}$, where $ {P}$ is the idempotent of the projective module. More explicitly, the Grassmann connection is, for a $ {v \in A^{\oplus n}}$</p> <p align=center>$ \displaystyle \nabla_P (v \cdot P) = \text{d} ( v \cdot P ) \cdot P = v \cdot \text{d} (P) \cdot P + \text{d}v \cdot P$</p><br /><p>Where $ {\text{d}}$ acts on $ {v}$ and $ {P}$ component-wise. We had asked how this definition makes sense, as its image appears to lie in $ {\Omega_1^{\oplus n}}$. <br/>My chief problem here was that I don't understand tensor products over a ring (or over an algebra, in this case).
The key out of this problem is to realize two facts about tensor product spaces: Given modules (over $ {A}$) $ {M}$, $ {N}$, and $ {R}$,</p><br /><ol> <li> $ {(M \oplus N) \otimes R}$ is isomorphic to $ {M \otimes R \oplus N \otimes R}$, by $ {(m, n) \otimes r \mapsto (m \otimes r, n \otimes r)}$, and <li> $ {M \otimes A}$ is isomorphic to $ {M}$ by $ {m \otimes a \mapsto ma}$. <br /></ol><br /><p>Given these two facts, it's not hard to see that $ {\Omega_1^{\oplus n} \cong (\Omega_1 \otimes A)^{\oplus n} \cong \Omega_1 \otimes A^{\oplus n}}$. Then it's easy to see that the map $ {P \circ \text{d}}$ maps to the right space.</p><br /><h2>3. The Grassmann Connection for our Black Hole Model </h2><br /><p>We'll calculate the action of the Grassmann connection on our spanning set $ {\omega_i}$. In this section, we'll identify (by the previously defined isomorphisms) the ``standard basis'' $ {e_i}$ with $ {\text{d}x_i}$, so that $ {\omega_i = e_i \cdot P = P_{ik}\text{d}x_k = \text{d}x_i - \frac{x_i \text{d}r}{r}}$. From the above definition, we have:</p><p align=center>$ \displaystyle \nabla_P (\omega_i) = \text{d}e_i \cdot P + e_i \text{d}P \cdot P $</p><br /><p>$ {\text{d}e_i = 0}$, as $ {e_i}$ is constant (or equivalently, because $ {\text{d}^2=0}$). So the bulk of the problem lies in calculating the matrix product $ {\text{d}P \cdot P}$. Let's write it out:</p><p align=center>$ \displaystyle (\text{d}P \cdot P)_{ij} = (\text{d}P)_{ik} P_{kj} = -\text{d}\left( \frac{x_i x_k}{r^2} \right) \left( \delta_{kj} - \frac{x_k x_j}{r^2} \right) = -\frac{x_i}{r^2} P_{kj}\text{d}x_k$</p><br /><p>So that $ {e_i \text{d}P \cdot P = -\frac{x_i}{r^2} P_{kl} P_{lj} dx_k \otimes dx_j = -\frac{x_i}{r^2} \omega_l \otimes \omega_l}$. Here I used the fact that P is idempotent and the generators of our algebra commute. This gives us our connection:</p><p align=center>$ \displaystyle \nabla_P (\omega_i) = -\frac{x_i}{r^2} \omega_l \otimes \omega_l$</p><br /><h2>4.
Next Steps </h2><br /><p>We won't actually be using this connection much. Rather, we're going to define a new one based on the idea that $ {\nabla (\text{d}x_i)}$ should be $ {0}$. From here, we'll begin discussing NCG metrics, and write down one for our black hole.</p><br /><h2>5. Sources </h2><br /><ol> <li> Section 4 of <a href="http://arxiv.org/abs/1009.2201">Almost commutative Riemannian geometry: wave operators</a> <li> Email conversations with Majid <li> Atiyah and MacDonald, Introduction to Commutative Algebra <br /></ol>Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-30225880111560495472011-10-31T15:29:00.000-07:002011-11-28T14:05:40.596-08:00Confusion in Black Holes<p>I've been trying to make my way through Majid's paper, ``Almost commutative Riemannian geometry: wave operators'', particularly the section where he constructs the model for a Schwarzschild black hole. I've not had much success. I'm meant to reconstruct the model over $ {\mathbb{F}_p}$, but I'm stuck on basic definitions. I'll discuss two of those things in this blog, first the vector bundle (aka projective module) and then the Grassmann connection.</p><br /><h2>1. NCG Vector Bundle for The Schwarzschild Solution </h2><br /><p>To construct the model, we start by reconsidering our notion of 3-dimensional space. Rather than thinking of coordinates $ {(x_1, x_2, x_3)}$, we're going to recast ``space'' as a ``coordinate algebra'', in particular, an algebra of polynomials $ {k[x_1, x_2, x_3]}$ over a field $ {k}$ (we'll let $ {k = \mathbb{R}}$ for now, but my task is to redo this section of the paper with $ {k=\mathbb{F}_p}$). Moreover, we're working in a sphere, so we also request that our algebra contain functions rational in $ {r}$, where $ {r^2 = x_1^2 + x_2^2 + x_3^2}$. Hence our ``space'' is the algebra $ {A = k[x_1, x_2, x_3, r, r^{-1}]}$ modded out by the aforementioned relation.
</p><br /><p>For such an NCG space (nevermind that $ {A}$ is actually commutative here), we define a vector bundle as a projective module. An easy way to get a projective module is to take the image of a few copies of $ {A}$ under an idempotent $ {E \in M_n(A)}$: e.g., let $ {E}$ be such an idempotent; then $ {\mathcal{E} = Im(E)}$ is our vector bundle. In this case we're taking the 3 by 3 matrix:</p><p align=center>$ \displaystyle E = \begin{pmatrix} 1 - \frac{x_1^2}{r^2} & - \frac{x_1 x_2}{r^2} & - \frac{x_1 x_3}{r^2} \\ - \frac{x_2 x_1}{r^2} & 1 - \frac{x_2^2}{r^2} & - \frac{x_2 x_3}{r^2} \\ - \frac{x_3 x_1}{r^2} & - \frac{x_3 x_2}{r^2} & 1 - \frac{x_3^2}{r^2} \end{pmatrix}$</p><br /><p>Thus our vector bundle is the subspace $ {\mathcal{E} = Im(E) \subset A^3}$. We expect elements of this vector bundle to be ``3-vectors'' with entries from $ {A}$. Yet in the paper, Majid states that $ {\omega_i = \text{d}x_i - \frac{x_i \text{d}r}{r}}$ for $ {i = 1, \, 2, \, 3}$ spans the 2-dimensional bundle $ {\mathcal{E}}$. But (judging by the $ {\text{d}x_i}$ and $ {\text{d}r}$ terms) each $ {\omega_i}$ is in our <a href="http://mebassett.blogspot.com/2011/10/starting-my-phd-noncommutative-geometry.html">bimodule of 1-forms</a> $ {\Omega_1}$. Where am I going wrong?</p><br /><h2>2. Grassmann Connections in NCG </h2><br /><p>Let's assume I'm not hopelessly confused about that vector bundle thing.
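(Editorial aside, not in the original post: one thing that can at least be checked mechanically is that $ {E}$ really is an idempotent, and that its trace is $ {2}$ — consistent with the bundle being 2-dimensional. The Python sketch below is my own: rather than proving the identity symbolically, it evaluates $ {E}$ in exact rational arithmetic at a few sample points, using the relation $ {r^2 = x_1^2 + x_2^2 + x_3^2}$.)

```python
from fractions import Fraction

def projector(x):
    # E_ij = delta_ij - x_i x_j / r^2, with r^2 = x1^2 + x2^2 + x3^2
    r2 = sum(c * c for c in x)
    return [[(Fraction(1) if i == j else Fraction(0)) - Fraction(x[i] * x[j], r2)
             for j in range(3)] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

for point in [(1, 2, 3), (2, -1, 5), (7, 0, 1)]:
    E = projector(point)
    assert matmul(E, E) == E                      # E is idempotent at each sample point
    assert sum(E[i][i] for i in range(3)) == 2    # trace 2: a rank-2 projector
```

This is only a spot check at sample points, but exact arithmetic means there is no floating-point fudging involved.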
<a href="http://mebassett.blogspot.com/2011/10/starting-my-phd-noncommutative-geometry.html">Recall</a> that a connection on an NCG vector bundle is a linear map $ {\nabla_{\mathcal{E}}: \mathcal{E} \rightarrow \Omega_1 \otimes \mathcal{E}}$ that obeys the following rule:</p> <p align=center>$ \displaystyle \nabla_{\mathcal{E}} (a s) = \text{d} a \otimes s + a\nabla_{\mathcal{E}}(s) \; \forall a \in A \; s \in \mathcal{E} $</p><br /><p> According to a proposition I've read in Majid's lecture notes, if $ {\mathcal{E} = Im(E)}$ for a projector $ {E \in M_n(A)}$, then we have a ``Grassmann connection''</p> <p align=center>$ \displaystyle \nabla_{\mathcal{E}} (E v) = E \text{d}(Ev) = E\left(\text{d}(E)v + E(\text{d}v)\right) = E\text{d}(E)v + E(\text{d}v)$</p><br /><p>Where $ {v \in A^n}$ and $ {\text{d}}$ acts on $ {v}$ and $ {E}$ component-wise. But the image of the connection is supposed to live in $ {\Omega_1 \otimes \mathcal{E}}$. $ {\text{d}v}$ lives in $ {\Omega_1^n}$, and $ {\text{d}E}$ lives in $ {M_n(\Omega_1)}$. How do we get to our tensor product space $ {\Omega_1 \otimes \mathcal{E}}$ ?</p><br /><h2>3. Sources </h2><br /><ol> <li> Section 4 of <a href="http://arxiv.org/abs/1009.2201">Almost commutative Riemannian geometry: wave operators</a> <br /></ol>Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-64001702171080623042011-10-11T14:19:00.001-07:002011-11-02T02:31:21.712-07:00Starting my PhD - Noncommutative Geometry and Black Holes<p>I've had a great summer holiday. I've done a few programming projects. I've restarted my position with Universal Pictures International. It's time to start writing about my mathematics consistently.
My PhD supervisor has given me a small project based on <a href="http://arxiv.org/abs/1009.2201">this paper</a>: he has this noncommutative geometric model of a black hole, and I'm supposed to see what happens if I re-do each step over a finite field. It's been two weeks and, as usual, I've made very little progress on it. I don't actually understand what's going on. So let's spend a moment or two trying to clear the fog and see what a ``noncommutative geometric model'' means.</p><br /><h1>1. Noncommutative Geometry Re-visited </h1><br /><p>Regular readers (ha! as if there are any...) will know that my previous experience with NCG was with the Bost-Connes system, a topic I hope to take up again soon. My supervisor's work, however, is considerably different. We keep the general philosophy of starting from an algebra and trying to construct a geometry from it, but instead of working over Operator Algebras we're sticking to less-analytic algebraic structures and imposing additional objects on them that are meant to emulate geometric notions. Let's go over this in detail and discuss some of these objects - keeping in mind that I don't know any geometry, cannot provide any motivation for these concepts, and generally don't have a clue what I'm doing.</p><br /><h1> 1.1. Differential Forms </h1><br /><p>Most readers will know that one can impose a <i>Differentiable Structure</i> on a topological manifold and use it to start doing some geometry. We'll dispose of the topological manifold and replace it with an algebra $ {A}$ and impose on it the notion of a 1-form. NCG 1-forms live in a ``differential calculus'' $ {\Omega_1}$, which we define to be a bimodule (id est, a module on both sides) over $ {A}$.
The ``differentiable structure'' comes in the form of a linear map $ {d: A \rightarrow \Omega_1}$ that obeys the product rule: <p align=center>$ \displaystyle d(ab) = d(a) \cdot b + a \cdot d(b) \; \forall a, b \in A$</p><br /> Additionally, we require that elements of the form $ {a \cdot d(b)}$ span the bimodule $ {\Omega_1}$.<br/>Some readers may wonder what sort of algebras $ {A}$ can have a differential calculus. Actually, each algebra $ {A}$ necessarily has one, but we won't discuss that in this post.</p><br /><h2> 1.2. Vector Bundles & Connections </h2><br /><p>My knowledge of modern geometry ends at this point and I am solely trusting the NCG literature. Initially, when seeing these terms, I think of the long definitions needed in Differential Geometry and start chasing down the terms in various textbooks. We don't need that here, and can describe these objects with simple algebraic ideas. That said, the next tool in our discussion is the NCG notion of a Vector Bundle. The definition here is a chain of algebraic structures: an NCG vector bundle is a finitely generated projective module over $ {A}$. A projective module is the image of a free module under a projection/idempotent. A free module is a module with basis vectors. E.g., $ {A^n}$ is a free module and if $ {E \in M_n(A)}$ is idempotent, then $ {E A^n}$ is a vector bundle. <br/>Now if $ {\mathcal{E}}$ is an NCG vector bundle, and $ {\Omega_1}$ a differential calculus, both over $ {A}$, we can also define the NCG notion of a connection: A connection is a linear map <p align=center>$ \displaystyle \nabla : \mathcal{E} \rightarrow \Omega_1 \otimes \mathcal{E}$</p><br /> with the following condition: <p align=center>$ \displaystyle \nabla (as) = d(a) \otimes s + a \nabla(s) \; \forall a\in A, \; s \in \mathcal{E}$</p><br /><br/>Next time we'll (Lord willing) discuss metrics, curvature, and how they all fit together to get a model for a black hole.<br/></p><br /><h1>2.
Sources </h1><br /><br /><ol> <li> LTCC Lecture notes in NCG by S Majid <li> Section 4 of <a href="http://arxiv.org/abs/1009.2201">Almost commutative Riemannian geometry: wave operators</a> <li> Section 2 of <a href="http://arxiv.org/abs/1011.5898">Noncommutative Riemannian geometry on graphs</a> <br /></ol>Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-74113593305515194512011-06-20T16:44:00.001-07:002012-04-16T16:04:28.303-07:00You don't understand something until you think it's obvious.I learned a valuable life lesson in the course of completing my MSci degree. It's not so much about mathematics as it is about understanding mathematics or any other complicated subject.<br /><br />Often in studying maths, I would spend hours or days stuck on a problem, proof, or section of a textbook. After I finally found a solution or an epiphany in what I read, I would feel terrible. "I'm so stupid!" I'd exclaim. I would throw out or scratch out pages of notes, trying to blot out any evidence of the embarrassingly long time it took me to grasp the concept. <br /><br />I had the same feeling just a few hours ago - but not with mathematics, with programming. I had spent the past three days trying to figure out the macro system in the Racket dialect of Lisp. I managed to write a simple message-passing object system, based on the one used in Structure and Interpretation of Computer Programs, just to see if I could. I patched together pieces of code from various samples, cargo-cult programming, until I had something working. I spent the next two days trying to understand what I wrote - noting my progress in another blog. Bit by bit, things started to come together - lots of little epiphanies. "This is all so simple. It shouldn't have taken me three days to do this." <br /><br />I remembered all the other times I had this feeling. Most of them had to do with mathematics or coursework.
The most severe episode occurred as I wrote the final draft of my thesis. There was no sudden big epiphany. Rather, as I combed through my work, I recalled hundreds of little ones. "This is all so simple. It shouldn't have taken me a year. A *real mathematician* could have done it more quickly". I still have a tinge of embarrassment whenever I send it off to someone.<br /><br />It's a common malady for me and my friends who study mathematics to think that we're stupid because it's taken so long to understand something so simple. As I recalled all these occurrences I had a new epiphany: it suddenly seems simple because we suddenly understand it. <br /><br />I stopped that last thought and restarted: "This is all so simple. I'm glad I finally get these macros." It's probably a lesson I could have learned without paying all that money for tuition. It seems so simple, after all.<br /><br />Mathematical posts will resume shortly.Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com6tag:blogger.com,1999:blog-6128955598718204808.post-14419699010499905322011-03-14T17:15:00.000-07:002011-03-14T17:16:38.961-07:00MSci Presentation this WeekAnother hiatus in blog posts: I've been attending Prof Shahn Majid's course in Noncommutative Geometry at the <a href="http://www.ltcc.ac.uk/">London Taught Course Center</a>. Additionally, this whole month I've been busy writing my MSci project and preparing for my presentation. I still have so much to do, and little more than a week left! Exam revision comes afterwards, so I don't expect to have any more exciting maths posts during that period either. Though I am hoping to write a post or two about 1) Ergodic theory, 2) Cornelissen and Marcolli's paper on Bost-Connes systems and isomorphisms of number fields, 3) Bora Yaklinoglu's proof of a full arithmetic subalgebra for BC-systems (assuming I get a chance to see it!) and/or 4) Things I've learned from the LTCC course.
<br /><br />In the meantime, I thought I'd post a link to a draft of my <a href="http://mebassett.gegn.net/msci-pres.pdf">slides for my MSci presentation</a>.<br /><br />Happy Pi day!Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-35087282537102640992011-02-08T15:18:00.000-08:002011-02-08T15:18:43.023-08:00Fun and Games with X and f: A talk on Ergodic TheoryWhile I'm not running the UCL Undergrad colloquium anymore, I promised the new management that I could be their back-up speaker in case someone else drops out. Someone did, and I had an opportunity to give an undergrad-level talk on motivations for studying Ergodic theory. I tried to present it as a game: trying to determine orbits of increasingly complex systems. You can <a href="http://mebassett.gegn.net/FunAndGames.pdf">download my notes as a pdf</a> or just read them here:<br /><br /><br /><br /><b>1. First Game: Sets and Functions </b><br /><br /><br /><p>Let $X$ be an arbitrary set (e.g. points in space, animals in a zoo) and let $f: X \rightarrow X$ be some function on $X$. For some $x \in X$, we're gonna look at what happens to the set $\{ x, f(x), f(f(x)), \ldots \}$. We'll call this set the <i>orbit</i> of $x$, and denote it $\text{orb}(x)$. We'll write $f^0(x)$ for $x$, $f^2(x)$ for $f(f(x))$, $f^3(x)$ for $f(f(f(x)))$, et cetera. In this way we have an <i>action</i> of the natural numbers $\mathbb{N}$ on $X$. To be explicit, each $n \in \mathbb{N}$ acts on each $x \in X$ by <p align=center>$\displaystyle n \mapsto f^n(x)$</p> It might be worth pointing out that: <p align=center>$\displaystyle n+m \mapsto f^{n+m}(x) = f^n ( f^m(x))$</p> If $f$ is invertible, then we have an action of the group $\mathbb{Z}$ on $X$. In fact, we can play our game with any group $G$ that acts on $X$, but for now we're only considering $\mathbb{N}$. <br/>We're gonna play three games with the pair $(X, f)$.
The first has to do with sets and functions, the second with topological spaces and continuous functions, and the third with a measurable space $X$. Each game will get progressively harder, but progressively more rewarding. <br/>So here's the first game: given a set $X$, we have to find an $x \in X$ and an $f: X \rightarrow X$ so that as we cycle through $x$, $f(x)$, $f^2(x), \ldots$ we end up with all of $X$. Id est, $\text{orb}(x) = X$. <br/>At first glance it may seem impossible to tell without more information about $X$. But we can already exclude an entire category of sets $X$. Can you see it?<br/>That's right, uncountable sets won't play. $\text{orb}(x) = \{ f^n(x) \; : \; n \in \mathbb{N}\}$, thus it's always countable. Let's try it for a really easy set: the natural numbers $\mathbb{N}$ themselves. Can you find a number $n$ and a function $f: \mathbb{N} \rightarrow \mathbb{N}$ such that $\text{orb}(n) = \mathbb{N}$ ?<br/>I hope no one has trouble seeing that $n=0$ and $f(n) = n + 1$ does the trick. What about for the integers $X = \mathbb{Z}$? Can one find a function $f: \mathbb{Z} \rightarrow \mathbb{Z}$ and an integer $n$ such that $\text{orb}(n) = \mathbb{Z}$ ? How about for $\mathbb{Q}$? <br/>Now let's try to finish the game: Can we do this for an arbitrary countably infinite set $X$?<br/>Since $X$ is countably infinite, we know we have a bijection $\phi: X \rightarrow \mathbb{N}$. Let $x \in X$, and choose $\phi$ so that $\phi (x) = 0$. Now remember our function for the natural numbers? We'll rename it $p(n) = n+1$ here. We have something like this:<br/><p align=center>$\displaystyle X \xrightarrow{\phi} \mathbb{N} \xrightarrow{p} \mathbb{N} \xrightarrow{\phi^{-1}} X$</p><br/>So we pass into the natural numbers, use the orbit there, and then pass back into $X$. Id est, let $f : X \rightarrow X$ by $f(x)= \phi^{-1} \circ p \circ \phi (x)$. Then $\text{orb}(x) = X$.
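Here is the construction above made concrete for $X = \mathbb{Z}$ (my own sketch, not from the talk): conjugate the successor map $p(n) = n+1$ on $\mathbb{N}$ through the usual zig-zag bijection $\phi: \mathbb{Z} \rightarrow \mathbb{N}$ with $\phi(0) = 0$, and watch the orbit of $0$ sweep out all of $\mathbb{Z}$:

```python
# phi sends 0, 1, -1, 2, -2, ... to 0, 1, 2, 3, 4, ...  (so phi(0) = 0)
def phi(z):
    return 2 * z - 1 if z > 0 else -2 * z

def phi_inv(n):
    return (n + 1) // 2 if n % 2 else -(n // 2)

def f(z):                        # f = phi^{-1} . p . phi, with p(n) = n + 1
    return phi_inv(phi(z) + 1)

orbit, z = [], 0
for _ in range(9):
    orbit.append(z)
    z = f(z)

print(sorted(orbit))             # -> [-4, -3, -2, -1, 0, 1, 2, 3, 4]
```

The orbit visits $0, 1, -1, 2, -2, \ldots$, so every integer eventually appears.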
As it turns out, the condition that there exists an $f: X \rightarrow X$ and an $x \in X$ such that $\text{orb}(x) = X$ is exactly the same condition that $X$ is countably infinite, as $\text{orb}(x)$ puts $X$ in a bijection with the natural numbers. This game was rigged so that we could win!<br/>But while we're at it, let's try a slightly harder game: given a countably infinite set $X$, can we find a function $f: X \rightarrow X$ such that $\text{orb}(x) = X$ for all $x \in X$ ?<br/>No! Can you see why not? Let $x, y \in X$. Since $\text{orb}(x) = \text{orb}(y) = X$, there exist numbers $n, m$ such that $f^n(x) = y$ and $f^m(y) = x$. Hence $f^m(f^n(x)) = x$. In other words, $f^{n+m}(x) = x$, meaning the action of the natural numbers on $X$ is periodic, and $|\text{orb}(x)| \leq n+m$. The orbits of an element tell us something special about that element, and unless we're in a finite set, only a few elements can visit the entire set.<br/><br /><br /></p><br /><b>2. Second Game: Orbits for Topological Spaces </b><br /><br /><br /><p>So as we stated, the last game (finding a function and element such that the orbit is the entire set) was rigged so that we always win. But it's helpful to see just how badly it's rigged. In the <i>category</i> of sets, the only ``structure'' we have to preserve is the cardinality of the set. Bijective functions obviously preserve cardinality. Since countably infinite sets are all bijective with the natural numbers by definition, they are, in a sense, all the same. The category of countably infinite sets has essentially only one object, $\mathbb{N}$. Anything we prove about $\mathbb{N}$ <i>as a countably infinite set</i> is automatically true for all other countably infinite sets.<br/>So to make the game a bit more interesting, we're going to have to play in a more exciting category. Instead of looking at countably infinite sets, we're going to look at <i>topological spaces</i>.
Loosely speaking, a topological space is our most general notion of a space. It gives us a mathematical language for describing when points are ``near each other'', or ``in a neighborhood''. If we say that $X$ is a topological space, we mean that there exists a collection of <i>open subsets</i> $U \subset X$, and these subsets must behave in a particular way. (For more information, see a decent book on topology, like those by Munkres or Armstrong, or talk to students in our General Topology study group.) <br/>Instead of dealing with arbitrary functions $f: X \rightarrow X$, we'll also want a new condition on $f$. Since $X$ is a topological space, we want functions that live on topological spaces, in the same way a linear map lives on vector spaces, or group homomorphisms live on groups. These functions are the ``continuous functions''. We require $f: X \rightarrow X$ to be continuous, that is, if $U \subset X$ is open, then $f^{-1} (U) = \{ x \in X : f(x) \in U \}$ must be open, too. <br/>Now that we have $X$ a topological space and $f$ a continuous function (we'll call the pair $(X, f)$ a <i>topological dynamical system</i>), we can get back to our game. When is $\text{orb}(x) = X$ ?<br/>Almost never. Topological spaces (like $\mathbb{R}$ or $\mathbb{C}$) are almost always uncountable, and as we already discussed, this is impossible. So we have to modify our game a bit. If $\text{orb}(x) \neq X$, what is the next best thing? (If you've not taken measure theory, functional analysis, or general topology, you may be forgiven for not knowing.)<br/>In this game, we want $\text{orb}(x)$ to be <i>dense</i> in $X$. If you've never seen dense sets before, think of the rationals $\mathbb{Q}$ sitting in $\mathbb{R}$. In $\mathbb{R}$, every non-empty open set contains an open interval. And any open interval in $\mathbb{R}$ intersects $\mathbb{Q}$. If you've had more analysis, another way to say this is that a subset of $X$ is dense in $X$ if its closure is all of $X$.
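To have a concrete dense orbit in hand (my example, not from the talk): the circle rotation $f(x) = x + \sqrt{2} \bmod 1$ on $X = \mathbb{R}/\mathbb{Z}$ is a topological dynamical system whose every orbit is dense, because $\sqrt{2}$ is irrational. Numerically, the first 1000 points of $\text{orb}(0)$ already leave no gap of width $0.01$ on the circle:

```python
import math

# Orbit of 0 under the irrational circle rotation f(x) = x + sqrt(2) mod 1.
alpha = math.sqrt(2)
orbit = sorted((n * alpha) % 1.0 for n in range(1000))

# Largest arc left uncovered by these orbit points (wrapping around at 1).
gaps = [b - a for a, b in zip(orbit, orbit[1:])]
max_gap = max(max(gaps), orbit[0] + 1.0 - orbit[-1])
assert max_gap < 0.01   # every point of the circle is within 0.01 of the orbit
```

Taking more orbit points shrinks the maximal gap further, which is density in action.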
<br/>So this is the game: given a pair $(X,f)$, $X$ a topological space and $f$ a continuous function, can you find an element $x \in X$ such that $\overline{\text{orb}(x)} = X$ (id est, the closure of $\text{orb}(x)$ is $X$).<br/>This game is a lot harder, and it'll require some new tools. Let me give you a definition and a few lemmas.<br/><br /><blockquote><b>Def 1</b> <em> Let $(X, f)$ be a topological dynamical system. We call the system <b>minimal</b> if given a closed set $V \subset X$ such that $f(V) = V$, then $V = X$ or $V = \emptyset$. (That is, if the only invariant closed sets are the whole space or the null set.) </em></blockquote><br /></p><br /><p>Minimality allows us to win the game. Actually, it's equivalent to winning; consider the following proposition:<br/> Let $(X, f)$ be a topological dynamical system. The system is minimal if and only if $\overline{\text{orb}(x)} = X$ for all $x \in X$. <br/><em>Proof:</em> Assume that $(X, f)$ is minimal. $\overline{\text{orb}(x)}$ is clearly closed, invariant under $f$, and non-empty. Thus it must be $X$. Conversely, assume $X$ is not minimal, and let $V$ be a closed, non-empty, proper $f$-invariant subset of $X$. Let $x \in V$. Since $V$ is $f$-invariant, $\text{orb}(x) \subset V$, and hence its closure can't be all of $X$. $\Box$</p><br /><p>This game is called <i>Topological Dynamics</i>. Another lemma, not stated here, says that any topological dynamical system $(X, f)$ with $X$ compact has a minimal subsystem. The proof is a simple application of Zorn's lemma. We can actually state (and prove!) a key theorem in the subject:<br/><br /><blockquote><b>Thrm 2 (Birkhoff Recurrence Theorem)</b> <em> Let $(X,f)$ be a minimal topological dynamical system. Then there is some $x \in X$ and a sequence $1 \leq n_1 < n_2 < \ldots $ of natural numbers such that $f^{n_k}(x) \rightarrow x$ as $k \rightarrow \infty$. </em></blockquote><br /></p><br /> <em>Proof:</em> Let $x \in X$. Since the system is minimal, we know that $\text{orb}(x)$ is dense in $X$.
If there is some $n$ such that $f^n(x)=x$, then we're done. Otherwise, $\text{orb}(x)$ dense means that its closure is the whole space, which means that we can find some sequence $y_k \in \text{orb}(x)$ such that $y_k \rightarrow x$. Naturally, each $y_k = f^{n_k}(x)$, and passing to a subsequence if necessary, we may take $n_1 < n_2 < \ldots$. So we're done. $\Box$<br /><br /><p>Who else plays this game? It's actually been used successfully to prove theorems in number theory. More specifically, it's been used to prove a Ramsey-type problem about colouring the integers. I'll state it.<br/><br /><blockquote><b>Thrm 3 (Van der Waerden's Theorem)</b> <em> Suppose that the integers are coloured in $r$ colours. Then for every $k \geq 2$ there is a monochromatic arithmetic progression of length $k$. </em></blockquote><br /></p><br /><p>The proof uses a generalization of the above Recurrence Theorem. It also uses compact metric spaces instead of general topological spaces. But, of course, compact metric spaces form a subcategory of topological spaces, and they're much easier to work with (general topological spaces can be quite pathological.) <br/>So we've seen how to make the game more interesting by looking at more exciting categories. We've stated an equivalent condition to winning the game (minimality), and though I haven't shown you any specific examples of the game, I've given you some of the rewards (e.g. Van der Waerden's Theorem) of winning. Let's look at the third game:<br/><br /><br /></p><br /><b>3. Third game: Measurable Systems </b><br /><br /><br /><p>Now we're going to look at another category of sets and functions. This time $X$ must be a compact metric space. But we want a bit more structure. $X$ must be <i>measurable</i>, that is, for certain ``well behaved'' subsets $U \subset X$, we have a function, called a measure, that assigns a ``size'' to $U$, $\mu(U) \in [0,1]$, and we require the size of the total space to be 1, id est, $\mu(X) = 1$. The subsets for which $\mu$ is defined are called the <i>measurable sets</i>.
If you've taken measure theory or probability, then you should recognize that we want $X$ to be a probability space. You can check those courses for more details on this category.<br/>The function $f: X \rightarrow X$ must meet new requirements, too. It must ``live'' on measurable spaces, id est, $f$ must be a <i>measurable map</i>. The definition is similar to that for continuous functions: $f$ is measurable if for all measurable subsets $U \subset X$, $f^{-1}(U)$ is also a measurable set. The measuring function $\mu$ must also be $f$-invariant. That is, $\mu(f^{-1}(U)) = \mu(U)$.<br/>We call the triple $(X, \mu, f)$ a measure-preserving system. Like in the last two games, we want to know what happens to $\text{orb}(x)$. But things get very subtle here. Like in the last game, we had a condition (minimal systems) that told us when we were onto something. We have a similar condition here:<br/><br /><blockquote><b>Def 4</b> <em> We call the measure preserving system $(X, \mu, f)$ <b>ergodic</b> if the only $f$-invariant sets have measure 0 or measure 1. </em></blockquote><br /></p><br /><p>This game is called <b>Ergodic Theory</b>. Who do you think plays it and why? Can you prove a result similar to our proposition on minimality for ergodicity? <br/>At the core, this game is concerned with the statistical study of the paths of motion of points in some space, whether it be the phase space of a Hamiltonian in physics, or the state spaces of Markov chains (random processes) in statistics. <br/>The idea is to think of the action of each $n \in \mathbb{N}$ as a particular time, and then $\text{orb}(x) = \{ f^n(x) : n \in \mathbb{N}\}$ is the time-path of state $x$. This allows us to get a ``time average'' and a ``space average'' of functions on the measure preserving system. Let $\phi: X \rightarrow \mathbb{C}$ be an integrable function. <br/><br /><blockquote><b>Def 5</b> <em> Let $(X, \mu, f)$ be a measure preserving system.
The <b>time average</b> of $\phi$ starting from $x \in X$ is denoted <p align=center>$\displaystyle <\phi>_x = \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=0}^{n-1} \phi(f^i(x))$</p> </em></blockquote><br /></p><br /><blockquote><b>Def 6</b> <em> The <b>space average</b> of $\phi$ is denoted <p align=center>$\displaystyle \overline{\phi} = \int \phi(x) d\mu$</p> (This is the integral with respect to the measure $\mu$. If you've not seen this before, talk to someone who has taken measure theory.) </em></blockquote><br /></p><br /><p>To get a feel for these definitions, consider a subset $U \subset X$, and the indicator function on $U$ (that's the function $I: X \rightarrow \mathbb{C}$, $I(x) = 1$ if $x \in U$, otherwise $I(x)=0$). The time average $<I>_x$ is the proportion of time that the orbit of $x$ spends in $U$, while the space average is the probability that a random state $x$ is in $U$. <br/>One of the most important results of ergodic theory is the following theorem:<br/><br /><blockquote><b>Thrm 7</b> <em> For an ergodic measure preserving system $(X, \mu, f)$ and $\phi: X \rightarrow \mathbb{C}$ an integrable function, the limit $<\phi>_x$ exists and is equal to $\overline{\phi}$ for almost all $x \in X$. </em></blockquote><br /></p><br /><p>Since the two are almost always equal, almost all paths cover the state space in the same way. In other words, this theorem tells us that for a sufficiently large amount of time (id est, a sufficiently large sample) we can learn information about the entire system (or entire population.) Think ``law of large numbers''.<br/>While we can't play too many ergodic games, I do want to give an example.<br/>Let $X = \mathbb{R}^2/\mathbb{Z}^2$, that is, let $X$ be a torus, and let $\alpha \in \mathbb{R}^2$. Let $f: X \rightarrow X$ be the function $f(x) = x + \alpha$. With a bit of work from measure theory, you can show that this is a measure preserving system, and that it is ergodic exactly when $1$ and the entries of $\alpha$ are linearly independent over $\mathbb{Q}$.
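Theorem 7 is easy to watch in action. Here is a numerical sketch (mine, using the one-dimensional cousin of the example above: the ergodic circle rotation $f(x) = x + \sqrt{2} \bmod 1$, with $\phi$ the indicator of $U = [0, \tfrac14)$, so the space average is $\mu(U) = \tfrac14$):

```python
import math

# Time average of the indicator of U = [0, 1/4) along the orbit of 0
# under the ergodic rotation f(x) = x + sqrt(2) mod 1.
alpha = math.sqrt(2)
x, hits, n = 0.0, 0, 100_000
for _ in range(n):
    hits += x < 0.25          # phi(f^i(0)): 1 if the orbit point is in U
    x = (x + alpha) % 1.0

time_avg = hits / n           # <phi>_0
space_avg = 0.25              # integral of phi d(mu) = mu(U)
assert abs(time_avg - space_avg) < 0.01
```

The orbit spends almost exactly a quarter of its time in $U$, as the ergodic theorem predicts.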
<br/>When $\alpha = (\sqrt{2},\sqrt{3})$ and $x = (0,0)$, then $\text{orb}(x)$ is dense in $X$.<br/>If $\alpha = (\frac{1}{3},\frac{2}{5})$ and $x = (0,0)$, then $\overline{\text{orb}(x)}$ is a finite set of fifteen points.<br/>If $\alpha = (\sqrt{2},\sqrt{2})$ and $x = (0,0)$, then $\text{orb}(x)$ is dense in the subspace $\{ (x,x) : x \in \mathbb{R}/\mathbb{Z} \}$.<br/>These weird facts pop out of a much deeper theorem, <i>Ratner's theorem</i>, which I cannot even state here, but which roughly says that orbit closures are ``algebraic sets''. Ratner proved several related theorems, which figure in the proof of the Oppenheim conjecture. The Oppenheim conjecture is an important statement in analytic number theory about real quadratic forms in several variables. It was proven using ergodic theorems on Lie groups. The proof of this theorem could actually make a third-year project.<br/><br /><br /></p><br /><b>4. Sources </b><br /><br /><p><br />I lifted material for this talk from several sources, most prominently:<br /><br /><ol><li> <a href="http://www.dpmms.cam.ac.uk/~bjg23/ergodic-theory.html">Ben Green's notes for Ergodic theory at Cambridge.</a> <li> <a href="http://terrytao.wordpress.com/category/teaching/254a-ergodic-theory/page/2/">Terrence Tao's Blog</a> <li> Wikipedia <li> <a href="http://cscs.umich.edu/~crshalizi/notabene/ergodic-theory.html">Cosma Shalizi's notebook</a> <br /></ol></p>Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0tag:blogger.com,1999:blog-6128955598718204808.post-9248080579869246732011-01-30T07:28:00.000-08:002011-01-31T02:01:02.115-08:00The Global Artin Map for $\mathbb{Q}$<em>I'm discussing the global Artin homomorphism $\theta : \mathcal{C}_\mathbb{Q} \rightarrow \text{Gal}(\mathbb{Q}^\text{ab}/\mathbb{Q})$ used in our key statement about the Bost-Connes system, as well as the Adeles, Ideles, and other components needed to consider it.</em><br /><br />In my <a
href="http://mebassett.blogspot.com/2011/01/class-field-theory-bost-connes-system.html">last significant post</a> I stated the global Artin map for the rationals intertwined with the Galois action on the values of Bost-Connes KMS states. I wanted to talk about the Artin map in a bit more detail. The general case for any number field $\mathbb{K}$ is a bit too complicated for me to discuss right now, but the case for $\mathbb{Q}$ isn't bad at all. Let's start with $\mathcal{C}_\mathbb{Q} = \mathbb{A}^{\ast}_\mathbb{Q}/\mathbb{Q}^{\ast}$, the Idele class group of $\mathbb{Q}$.<br />First recall the definition of the Adeles, $\mathbb{A}_\mathbb{Q}$. <p align=center>$ \displaystyle \mathbb{A}_\mathbb{Q} = \mathbb{R} \times \prod_{p \; \text{prime}}^{\prime} \mathbb{Q}_p $</p> where $\mathbb{Q}_p$ is the completed field of $p$-adic numbers. The idea here is that we're looking at all possible completions of the number field $\mathbb{Q}$, including the standard absolute value and all $p$-adic valuations. Each of these completed fields is a ``local'' field, containing information about $\mathbb{Q}$ at a particular prime, and the ring of Adeles combines all that information into one giant ring. The $\prime$ on the product $\prod^{\prime}$ indicates that this product is <i>restricted</i> over $\mathbb{Z}_p$, id est, it has the condition that <p align=center>$ \displaystyle \mathbb{R} \times \prod_{p \; \text{prime}}^{\prime} \mathbb{Q}_p = \{ a=(a_\infty,a_2,a_3,\ldots) \; : \; a_p \in \mathbb{Z}_p \; \text{ for all but finitely many } p \}$</p> $\mathbb{A}^{\ast}_\mathbb{Q}$ is the set of all invertible elements in the ring; we can write it as $\mathbb{R}^\ast \times \prod_p^\prime \mathbb{Q}_p^\ast$. As $\mathbb{Q}_p$ is the field of fractions for $\mathbb{Z}_p$, we have $\mathbb{Q}_p \cong \mathbb{Z}_p[\frac{1}{p}]$ and an isomorphism $\mathbb{Q}_p^\ast \cong p^\mathbb{Z} \times \mathbb{Z}_p^\ast$.
If we identify $p^\mathbb{Z}$ with $\mathbb{Z}$ we can write $\mathbb{Q}_p^\ast \cong \mathbb{Z} \times \mathbb{Z}^\ast_p$ and <br /><p align=center>$ \displaystyle \mathbb{A}_\mathbb{Q}^\ast = \mathbb{R}^\ast \times \prod_{p \; \text{prime}}^{\prime} \mathbb{Q}_p^\ast \cong \{\pm1\} \times \mathbb{R}_{>0} \times \prod_p \mathbb{Z}_p^\ast \times \bigoplus_p \mathbb{Z}$</p><br />Now for an $r \in \mathbb{Q}^\ast$, we can write $r = \text{sgn}(r) \prod_p p^{n(p)}$, so we can take $\mathbb{Q}^\ast$ to $\{\pm1\} \times \bigoplus_p \mathbb{Z}$ by $r \mapsto (\text{sgn}(r),(n(p))_p)$. Noting that $\mathbb{R}_{>0} \cong \mathbb{R}$ via the logarithm, we can write: <p align=center>$ \displaystyle \mathbb{A}_\mathbb{Q}^\ast \cong \mathbb{Q}^\ast \times \mathbb{R} \times \prod_p \mathbb{Z}_p^\ast$</p> Now recall our favourite profinite group $\hat{\mathbb{Z}} = \varprojlim_k \mathbb{Z}/k\mathbb{Z} \cong \text{End}(\mathbb{Q}/\mathbb{Z})$. Since we can write each $k$ as $p_1^{n(p_1)} \ldots p_l^{n(p_l)}$, we also have <p align=center>$ \displaystyle \mathbb{Z}/k\mathbb{Z} = \prod_{i=1}^{l} \mathbb{Z}/p_i^{n(p_i)} \mathbb{Z}$</p> After passing to the inverse limit we have: <p align=center>$ \displaystyle \hat{\mathbb{Z}} = \varprojlim_k \mathbb{Z}/k\mathbb{Z} = \prod_p \varprojlim_n \mathbb{Z}/p^n\mathbb{Z} = \prod_p \mathbb{Z}_p$</p> Thus $ \mathbb{A}_\mathbb{Q}^\ast \cong \mathbb{Q}^\ast \times \mathbb{R} \times \hat{\mathbb{Z}}^\ast $ and finally: <p align=center>$ \displaystyle \mathcal{C}_\mathbb{Q} \cong \mathbb{R} \times \hat{\mathbb{Z}}^\ast$</p> Now <b>I cannot prove it</b>, but class field theory tells us that the map $\theta:\mathcal{C}_\mathbb{Q} \rightarrow \text{Gal}(\mathbb{Q}^\text{ab}/\mathbb{Q})$ is surjective, and the kernel is the connected component of the identity on $\mathcal{C}_\mathbb{Q}$, which we denote $\mathcal{D}_\mathbb{Q} \cong \mathbb{R}$.
Hence $\mathcal{C}_\mathbb{Q} / \mathcal{D}_\mathbb{Q} \cong \hat{\mathbb{Z}}^\ast$ and we have an isomorphism: <p align=center>$ \displaystyle \theta : \hat{\mathbb{Z}}^\ast \rightarrow \text{Gal}(\mathbb{Q}^\text{ab}/\mathbb{Q})$</p> We can see that these two groups are isomorphic assuming only the Kronecker-Weber theorem (which says that $\mathbb{Q}^\text{ab} = \mathbb{Q}^\text{cycl}$) and some facts about inverse limits and Galois theory. $\mathbb{Q}^\text{cycl} = \bigcup_n \mathbb{Q}(\zeta_n)$, where $\zeta_n$ is a primitive $n$-th root of unity. Recall that $\text{Gal}(\mathbb{Q}(\zeta_n)/\mathbb{Q}) \cong (\mathbb{Z}/n\mathbb{Z})^\ast$. Hence we have <p align=center>$ \displaystyle \text{Gal}(\mathbb{Q}^\text{cycl}/\mathbb{Q}) = \varprojlim_n \text{Gal}(\mathbb{Q}(\zeta_n)/\mathbb{Q}) \cong \varprojlim_n (\mathbb{Z}/n\mathbb{Z})^\ast = \hat{\mathbb{Z}}^\ast$</p><br />Thanks to the following sources for help on this post: <br /><br /><ul><li> Conversation with Prof Minhyong Kim <li> Lenstra, <a href="http://websites.math.leidenuniv.nl/algebra/Lenstra-Profinite.pdf">Profinite groups</a> and <a href="http://websites.math.leidenuniv.nl/algebra/Lenstra-Idele.pdf">The Idele Class Group</a> <li> Fields and Galois Theory, JS Milne <li> p-adic Numbers, Fernando Gouvea <br /></ul>Matthew Eric Bassetthttp://www.blogger.com/profile/10576806213820001905noreply@blogger.com0
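As a footnote to the post above: the decomposition $r = \text{sgn}(r)\prod_p p^{n(p)}$ behind the map $r \mapsto (\text{sgn}(r),(n(p))_p)$ is easy to compute for any given rational. A small sketch (the helper name `valuations` is my own, not anything standard):

```python
from fractions import Fraction
from math import prod

def valuations(r):
    """Decompose a nonzero rational r as sgn(r) * prod_p p**n(p).

    Returns (sign, {p: n(p)}) with a finite dict; n(p) = 0 for all other p.
    """
    r = Fraction(r)
    sign = 1 if r > 0 else -1
    vals = {}
    # factor |numerator| (exponent +1) and denominator (exponent -1)
    for m, s in ((abs(r.numerator), 1), (r.denominator, -1)):
        p = 2
        while p * p <= m:
            while m % p == 0:
                vals[p] = vals.get(p, 0) + s
                m //= p
            p += 1
        if m > 1:
            vals[m] = vals.get(m, 0) + s
    return sign, vals

sign, vals = valuations(Fraction(-40, 27))
print(sign, vals)   # -1 {2: 3, 5: 1, 3: -3}, i.e. -40/27 = -(2^3)(3^-3)(5)
assert sign * prod(Fraction(p) ** e for p, e in vals.items()) == Fraction(-40, 27)
```

Only finitely many exponents $n(p)$ are nonzero, which is exactly the restricted-product condition on the ideles.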