# Young Tableaux

I am currently working through the book on Young Tableaux by Fulton, and I find it a very nice read - in a prose¹ kind of way. As you might notice from the general sound of it, I am getting into representation theory. However, this book is more about the combinatorial aspects of the field. Since combinatorics is a very hands-on kind of math, I really think I should do a certain number of exercises. I am only skimming through the book since, ultimately, I want to get back to abstract nonsense really badly, but I will write down my solutions for any exercise I do².

# Solutions for §2

Exercise 2. We clearly have $\mu\trianglelefteq\lambda$ $\Rightarrow$ $\mu\le\lambda$ $\Rightarrow$ $K_{\lambda\mu}\ne 0$. For the converse, assume that $K_{\lambda\mu}\ne 0$, so there exists some sequence $\emptyset=\lambda^{(0)}\subset\lambda^{(1)}\subset\ldots\subset\lambda^{(\ell)}=\lambda$ such that $|\lambda^{(i)}/\lambda^{(i-1)}|=\mu_i$ and each skew diagram $\lambda^{(i)}/\lambda^{(i-1)}$ has no two boxes in the same column. We now claim that $|\lambda^{(i)}|\ge\mu_1+\ldots+\mu_i$ for all $i$. The proof is by induction on $i$ and trivial for $i=1$. In the induction step, we simply use that $|\lambda^{(i)}|-\mu_i=|\lambda^{(i-1)}|\ge\mu_1+\ldots+\mu_{i-1}$. Now note that $\lambda^{(i)}$ has at most $i$ rows: since $\lambda^{(i)}/\lambda^{(i-1)}$ has no two boxes in the same column, the maximal length of a column can increase by at most one in each step. Thus, $\lambda_1+\ldots+\lambda_i\ge|\lambda^{(i)}|\ge\mu_1+\ldots+\mu_i$ and we have verified $\mu\trianglelefteq\lambda$.
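Incidentally, the chain-of-partitions description of $K_{\lambda\mu}$ used above also gives a brute-force way to compute Kostka numbers. A throwaway sketch (my own code, nothing from the book): build all chains of horizontal strips with step sizes $\mu_i$ and count those ending at $\lambda$.

```python
def horizontal_strips(inner, size):
    """All partitions obtained from `inner` by adding a horizontal strip of
    `size` boxes, i.e. no two added boxes in the same column."""
    inner = list(inner)
    out = []
    def build(i, rows, remaining):
        if i == len(inner) + 1:  # a strip can add at most one new row
            if remaining == 0:
                out.append(tuple(r for r in rows if r))
            return
        lo = inner[i] if i < len(inner) else 0
        # strip condition: row i of the result may reach at most inner[i-1]
        hi = min(inner[i - 1] if i > 0 else lo + remaining, lo + remaining)
        for r in range(lo, hi + 1):
            build(i + 1, rows + [r], remaining - (r - lo))
    build(0, [], size)
    return out

def kostka(lam, mu):
    """Number of chains of horizontal strips with step sizes mu_i ending at lam."""
    shapes = [()]
    for part in mu:
        shapes = [s for sh in shapes for s in horizontal_strips(sh, part)]
    return sum(1 for sh in shapes if sh == tuple(lam))
```

For example, `kostka((2,1), (1,1,1))` returns $2$ (the two standard tableaux on $(2,1)$), while `kostka((2,2), (3,1))` returns $0$, matching $(3,1)\ntrianglelefteq(2,2)$.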

# Solutions for §3

Exercise 1. Perform the same handwaving as he did on page 32, just use the fact that $w_{\mathrm{row}}=w_{\mathrm{col}}$ and argue similarly.

Exercise 2. This is silly. Why do exercises like this even exist? It's a waste of space. Seriously! This should be a remark. As a remark, I would like it. But this just feels like a really pitiful pat on the head, rather than an exercise.

Exercise 3. Pick $w=(1,4,2,5,3,6)$, which contains the increasing subsequence $(1,2,3,6)$ of length $4$, and we can choose $(1,2,3)$ together with $(4,5,6)$ to obtain $L(w,2)=6$. However, the increasing subsequence $(1,2,5,6)$ also has maximal length $4$, and the remaining letters $(4,3)$ contain no increasing subsequence of length $2$ - so not every longest increasing subsequence can be completed to a pair of disjoint increasing subsequences realizing $L(w,2)$.
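Just to convince myself about the numbers in Exercise 3: the $L(w,k)$ can be read off the shape of the insertion tableau of $w$ (Greene's theorem, which the book relates to this material - if I recall correctly). A quick sketch of plain row insertion, my own code:

```python
import bisect

def insertion_shape(word):
    """Row-insert the letters of `word` (Schensted bumping) and
    return the shape of the resulting tableau."""
    rows = []
    for x in word:
        for row in rows:
            i = bisect.bisect_right(row, x)  # leftmost entry strictly larger than x
            if i == len(row):
                row.append(x)
                break
            row[i], x = x, row[i]  # bump that entry into the next row
        else:
            rows.append([x])  # x fell past every row: start a new one
    return [len(row) for row in rows]

shape = insertion_shape((1, 4, 2, 5, 3, 6))  # → [4, 2]
```

The shape $(4,2)$ gives $L(w,1)=4$ and $L(w,2)=4+2=6$, as claimed above.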

# Solutions for §5

Exercise 1. Consider the set of reverse lattice words $t\cdot u$ with

* $t=w(T)$, $u=w(U)$ and $T$, $U$ of shape $\lambda$ and $\mu$ respectively,
* $t\cdot u$ contains exactly $\nu_j$ instances of the number $j$.

By Lemma 1, the above conditions are equivalent to $T\cdot U=U(\nu)$, where $U(\nu)$ is the tableau on $\nu$ whose $k$-th row contains only the letter $k$. Indeed, one definition of $T\cdot U$ is by rectifying the skew tableau $T\ast U$, whose row word is $t\cdot u$. Hence, it is clear that the above set is in bijection with $\mathcal{T}(\mu,\lambda,U(\nu))$, whose cardinality is $c_{\lambda\mu}^\nu$ by Corollary 2.

Exercise 2. We consider the set $\mathcal{S}(\nu/\lambda,U(\mu))$ under the additional condition $\mu_i = \nu_i - \lambda_i$. In other words, we are considering skew tableaux $S$ on the shape $\nu/\lambda$ with as many boxes in each row as their rectification. During rectification, an empty box can only be moved to the right or down. If an empty box were ever moved down, the nonempty box below it would move up, increasing the number of nonempty boxes in its new row with no chance of that number ever decreasing again. Hence every empty box only moves to the right, and $S$ can only be $U(\mu)$ with a padding of $\lambda_j$ empty boxes in row $j$. In particular, the set contains only this one element, and by Corollary 2, this means $c_{\lambda\mu}^\nu=1$.

Exercise 3. We need to show that $T\cdot U=U(\nu)$ with $T$ and $U$ of shape $\lambda$ and $\mu$ respectively implies $U=U(\mu)$. Then, we are done by Corollary 2 (ii) with $V_0=U(\nu)$. By Lemma 1, we know that $T\ast U$ is a Littlewood-Richardson skew tableau, and from the definition of $T\ast U$, it follows that $U$ must also be a Littlewood-Richardson skew tableau, but one that is a tableau itself. Hence, using the other direction of Lemma 1, we are done.

Exercise 4. Let us write $x_{i+k}:=y_i$.
We have to show that $\sum_{\mathrm{shape}(V)=\nu} x^V = \sum_{\lambda\subset\nu,~\mathrm{shape}(S)=\nu/\lambda,~\mathrm{shape}(T)=\lambda} x^T\cdot y^{\mathrm{Rect}(S)},$ where $y^{\mathrm{Rect}(S)}=y^S$, since the definition depends only on the content of $S$. Pick any tableau $V$ of shape $\nu$; then it suffices to show that the boxes with entries less than or equal to $k$ form a subtableau $\lambda\subset\nu$. However, this is easy to see: in each row, there can be no box with an entry larger than $k$ before the last box with an entry less than or equal to $k$. Also, there can be no box with an entry less than or equal to $k$ below a box whose entry is larger than $k$.

Comment. For the following exercise, I had to look up the fact that $f^\mu$ denotes the number of standard tableaux on $\mu$, because I skipped that part of the book - and at this point, I have to emphasize the greatness of lists of symbols, in addition to a normal index. Every written text on mathematics should have one. If your written texts on mathematics do not have one, add one now. To each of them.

Exercise 5. We know that $s_{\nu/\lambda}(x_1,\ldots,x_k) = \sum_\mu c_{\lambda\mu}^\nu s_\mu(x_1,\ldots,x_k)$ and hence, the coefficient of the monomial $x_1x_2\cdots x_k$ is $\sum_\mu c_{\lambda\mu}^\nu f^\mu$, since the coefficient of $x_1\cdots x_k$ in $s_\mu$ counts the tableaux of shape $\mu$ with content $(1,\ldots,1)$, i.e. the standard ones.
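These coefficients are easy to sanity-check numerically by expanding Schur polynomials over semistandard tableaux. A small sketch (entirely my own code), verifying the instance $s_{(1)}s_{(1)}=s_{(2)}+s_{(1,1)}$ - i.e. $c_{\lambda\mu}^{\nu}=1$ for $\lambda=\mu=(1)$ and $\nu\in\{(2),(1,1)\}$, consistent with Exercise 2 in the case $\nu=(2)$:

```python
from collections import Counter

def ssyt(shape, maxval):
    """Yield all semistandard tableaux of `shape` with entries in 1..maxval."""
    cells = [(i, j) for i, row_len in enumerate(shape) for j in range(row_len)]
    def fill(idx, tab):
        if idx == len(cells):
            yield [row[:] for row in tab]
            return
        i, j = cells[idx]
        lo = 1
        if j > 0:
            lo = max(lo, tab[i][j - 1])      # weakly increasing along rows
        if i > 0:
            lo = max(lo, tab[i - 1][j] + 1)  # strictly increasing down columns
        for v in range(lo, maxval + 1):
            tab[i].append(v)
            yield from fill(idx + 1, tab)
            tab[i].pop()
    yield from fill(0, [[] for _ in shape])

def schur(shape, nvars):
    """Schur polynomial s_shape(x_1..x_nvars) as a Counter {exponents: coefficient}."""
    out = Counter()
    for t in ssyt(shape, nvars):
        exp = [0] * nvars
        for row in t:
            for v in row:
                exp[v - 1] += 1
        out[tuple(exp)] += 1
    return out

def mult(f, g):
    """Multiply two polynomials given as Counters of exponent vectors."""
    out = Counter()
    for a, ca in f.items():
        for b, cb in g.items():
            out[tuple(x + y for x, y in zip(a, b))] += ca * cb
    return out

lhs = mult(schur((1,), 3), schur((1,), 3))
rhs = schur((2,), 3) + schur((1, 1), 3)
assert lhs == rhs  # s_(1) * s_(1) = s_(2) + s_(1,1) in three variables
```

Adding more variables or larger shapes only costs patience, not code.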

# Solutions for §6

For reference, let us recall that $\begin{array}{rcl} h_n(x_1,\ldots,x_m) = & \sum_{|\alpha|=n} & x^\alpha \\ e_n(x_1,\ldots,x_m) = & \sum_{|\alpha|=n,\,\alpha_i\le 1} & x^\alpha \\ m_\lambda(x_1,\ldots,x_m) = & \sum_{\exists\sigma\in\mathfrak{S}_m:~\alpha_i=\lambda_{\sigma(i)}} & x^\alpha \\ s_\lambda(x_1,\ldots,x_m) = & \sum_{\mathrm{shape}(T)=\lambda} & x^T \end{array}$

Exercise 1. Sounds like a typical exercise in "insert the definitions", but there's actually some fun combinatorics to be had here. I will show Newton's identities $n\cdot e_n = \sum_{k=1}^n (-1)^{k+1} p_ke_{n-k} \quad\text{and}\quad n\cdot h_n = \sum_{k=1}^n p_kh_{n-k}.$ For the first part, we are considering the sum $\sum_{k=1}^n (-1)^{k+1} p_k e_{n-k}$, which is the sum over all $(-1)^{k+1} \cdot x_{i_1}\cdots x_{i_{n-k}}\cdot x_j^k$ for $1\le k\le n$ and $1\le j\le m$. Fix $k>1$ and $j$, and consider among these monomials those that contain the factor $x_j^k$. Each of them appears exactly twice, once with a negative sign and once with a positive sign: more precisely, the expression appears once as $(-1)^{k+1} x^\alpha x_j^k$ with $\alpha_j=0$ and once as $(-1)^k x^\beta x_j^{k-1}$ with $\beta_j=1$, where $\alpha$ and $\beta$ coincide in all other positions. Hence, all of these monomials sum to zero and we are left with the monomials $x^\alpha x_j=x^\beta$ for all $1\le j\le m$ and all $\alpha\in\{0,1\}^m$ with $\alpha_j=0$ and $|\alpha|=n-1$. Consider now the $\beta\in\{0,1\}^m$ with $|\beta|=n$. For each such $\beta$, there are precisely $n$ ways to choose an $\alpha$ and a $j$ in our sum to obtain this $\beta$. Consequently, our sum is $n$ times the sum over all these $\beta$, i.e. equal to $n\cdot e_n$. For the second identity, we consider the sum $\sum_{k=1}^n p_k h_{n-k}$, which is the sum over all $x^\alpha x_j^k=x^\beta$ for $1\le k\le n$, $1\le j\le m$ and $|\alpha|=n-k$. Each monomial $x^\beta$ with $|\beta|=n$ arises exactly once for every pair $(j,k)$ with $1\le k\le\beta_j$ (take $\alpha=\beta-k\cdot e_j$), i.e. exactly $\sum_j\beta_j=n$ times, so the sum equals $n\cdot h_n$.
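Both identities are also easy to sanity-check numerically; a throwaway sketch (my own code) that evaluates $e_n$, $h_n$ and $p_k$ by brute-force expansion in a handful of variables:

```python
import itertools
import random
from math import prod

def e(n, xs):
    """Elementary symmetric polynomial e_n evaluated at the values xs."""
    return sum(prod(c) for c in itertools.combinations(xs, n))

def h(n, xs):
    """Complete homogeneous symmetric polynomial h_n evaluated at xs."""
    return sum(prod(c) for c in itertools.combinations_with_replacement(xs, n))

def p(k, xs):
    """Power sum p_k evaluated at xs."""
    return sum(x ** k for x in xs)

xs = [random.randint(-9, 9) for _ in range(5)]
for n in range(1, 5):
    # n*e_n = sum_{k=1}^{n} (-1)^{k+1} p_k e_{n-k}
    assert n * e(n, xs) == sum((-1) ** (k + 1) * p(k, xs) * e(n - k, xs) for k in range(1, n + 1))
    # n*h_n = sum_{k=1}^{n} p_k h_{n-k}
    assert n * h(n, xs) == sum(p(k, xs) * h(n - k, xs) for k in range(1, n + 1))
```

Since both sides are polynomial identities, agreeing on enough random integer points is already quite convincing.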

# Solutions for §7

Exercise 1. Part (a) follows directly from the fact that $R(T)$ and $C(T)$ are subgroups: $\begin{array}{rcl} p'a_T = \sum_{p\in R(T)} p'p &=& \sum_{p\in p'R(T)} p = \sum_{p\in R(T)} p = a_T, \\ a_Tp' = \sum_{p\in R(T)} pp' &=& \sum_{p\in R(T)p'} p = \sum_{p\in R(T)} p = a_T, \\ q'b_T = \sum_{q\in C(T)} \mathrm{sgn}(q)\,q'q &=& \mathrm{sgn}(q')\sum_{q\in C(T)} \mathrm{sgn}(q'q)\,q'q = \mathrm{sgn}(q')\sum_{q\in q'C(T)} \mathrm{sgn}(q)\,q = \mathrm{sgn}(q')\,b_T, \end{array}$ and similarly $b_Tq' = \mathrm{sgn}(q')\,b_T$. Part (b) then follows immediately from part (a) since $a_Ta_T=\sum_{p\in R(T)}a_Tp=\sum_{p\in R(T)}a_T=\sharp R(T)\, a_T$ and analogously for $b_T$.

Exercise 2. For once, I really like Fulton's solution to this one.

Exercise 3. By Equation (1), we have $\begin{array}{rcl} \sigma\cdot v_T &=& \sum_{q\in C(T)} \mathrm{sgn}(q) \{\sigma\cdot q\cdot T\} \\ &=& \sum_{q\in C(\sigma\cdot T)} \mathrm{sgn}(\sigma^{-1}q\sigma) \{\sigma\cdot \sigma^{-1}q\sigma\cdot T\} \\ &=& \sum_{q\in C(\sigma\cdot T)} \mathrm{sgn}(q) \{q\cdot\sigma\cdot T\} = v_{\sigma\cdot T}. \end{array}$

Exercise 4. Let $T$ be any numbering of $\lambda$. In the case $\lambda=(n)$, simply note that $R(T)=S_n$ and $C(T)=\{\mathrm{id}\}$. This means that there is only one tabloid $\{T\}$ and $M^\lambda = \mathbb{C}\cdot\{T\}$. Also, $\sigma\cdot\{T\}=\{T\}$ for all $\sigma\in S_n=R(T)$. In the case $\lambda=(1^n)$, we have $C(T)=S_n$ instead and $R(T)=\{\mathrm{id}\}$. Hence, by Exercise 1, $\sigma\cdot v_T=\sigma\cdot b_T\cdot \{T\} = \mathrm{sgn}(\sigma)\cdot b_T\cdot\{T\} = \mathrm{sgn}(\sigma)\cdot v_T.$ Furthermore, $M^\lambda$ is freely generated by the numberings $T$. For each numbering $T$, there exists some $\sigma\in S_n$ with $\sigma\cdot T$ standard; set $\mathrm{sgn}(T):=\mathrm{sgn}(\sigma)$. Then, $v_T=\sum_{q\in S_n} \mathrm{sgn}(q)\cdot qT = \pm\sum_{\mathrm{shape}(T')=\lambda} \mathrm{sgn}(T')T'=: \pm v$ and $S^\lambda=\mathbb{C}v$.

Comment on (5a) and (5b). I found these two lines quite confusing.
I would have found it more understandable had it been written as $\begin{array}{rcl} b_TS^\lambda &\subseteq& b_TM^\lambda = \mathbb{C}v_T \subseteq b_TS^\lambda \\ b_TS^{\lambda'} &\subseteq& b_TM^{\lambda'} = 0 \end{array}$ where the last equality actually requires an application of Lemma 1: given any numbering $T'$ on $\lambda'$, since $\lambda\ne\lambda'$, it follows that two distinct integers must exist that occur in a row of $T'$ and a column of $T$. Then by Lemma 2, we can conclude $b_T\{T'\}=0$.

Comment on (∗). Again, I think he could have been more precise. He denotes by $r_T(p,q)$ the number of cycles of length $q$ in the cycle decomposition of $\sigma$ with entries in the $p$-th row of the tabloid $T$. He also meant to say that $\sigma\in C(\lambda)$, i.e. $\lambda=(\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_\ell>0)$ is a partition of $n$ such that the cycles of $\sigma$ have lengths $\lambda_1,\ldots,\lambda_\ell$. Then, the conditions below (∗) state that

* each row of $T$ consists of the elements from certain cycles of $\sigma$,
* there is a total of $m_q$ cycles of length $q$ in $\sigma$.

The sum in (∗) is then over all $T$ satisfying these conditions. Under these conditions, we then count the number of ways in which we can choose the cycles of $\sigma$ to occur in the rows of $T$, which is expressed by (∗): there are $\prod_{q=1}^n r_T(p,q)!$ ways to arrange the cycles in row $p$ once they have been chosen, and there are $m_q!$ ways to choose a cycle of length $q$.

Exercise 5. We already know that any such $\lambda'$ arises from $\lambda$ by adding $m$ boxes. Now, number these boxes with $1$ to $m$ in the order they were added and consider the skew diagram given by these boxes, which is of the shape $\lambda'/\lambda$. We want to show that this numbering has to be a skew tableau. This is easily done by induction on $m$: for $m=1$, it is trivial. In the induction step, new boxes are always added at the end of a row (or column), so the numbering remains a tableau.
On the other hand, removing the box with the largest entry reverses this process, so there is a bijection between the ways in which we can add $m$ boxes to a shape $\lambda$ and the skew tableaux on $\lambda'/\lambda$ numbered $1$ to $m$.

Exercise 6. We use (7) from §6.1, which says that $s_\lambda = \frac{\det((x_j^{\lambda_i+k-i})_{1\le i,j\le k})}{\det((x_j^{k-i})_{1\le i,j\le k})}.$ Then, let $\mu\vdash n$. Since $\prod_{1\le i<j\le k}(x_i-x_j)=\det((x_j^{k-i})_{1\le i,j\le k})$, we calculate $\begin{array}{rcl} \prod_{1 \le i < j \le k} (x_i-x_j)\cdot p_\mu &=& \prod_{1 \le i < j \le k} (x_i-x_j) \cdot \sum_{\lambda \vdash n} \chi_\mu^\lambda s_\lambda \\ &=& \sum_{\lambda\vdash n} \chi_\mu^\lambda\cdot\det((x_j^{\lambda_i+k-i})_{1\le i,j\le k}) \\ &=& \sum_{\lambda\vdash n} \chi_\mu^\lambda\cdot\sum_{\sigma\in S_k} \mathrm{sgn}(\sigma) \prod_{i=1}^k x_{\sigma(i)}^{\lambda_i+k-i} \end{array}$ and then we give up.

Comment on Proposition 3. It was not quite clear to me why the character of $\mathbb{U}_n$ on $C(\mu)$ was $\prod_{i=1}^k (-1)^{\mu_i-1}.$ Basically, one has to show that for a permutation $\sigma$ with cycles of length $\mu_1,\ldots,\mu_k$, this is the formula for its signum. For some reason, I only knew the grotesque formula $\mathrm{sgn}(\sigma) = \prod_{1\le i < j \le k}\frac{\sigma(i)-\sigma(j)}{i-j}$ for any $\sigma\in S_k$. Hence, it might be reassuring for you to know that the above is also true. It can be shown easily: any transposition has signum $-1$, and a cycle of length $n$ is just the composition of $n-1$ transpositions: it can be visualized as "moving" the last element of an $n$-tuple in $n-1$ steps from the last to the first position by successively switching with its neighbour to the left.

Exercise 13.
For (a), write $\mathbb{U}=\mathbb{C}v$ and define a map $\begin{array}{rcl} \phi: \tilde{M}^\lambda & \longrightarrow & \mathbb{C}[S_n]\otimes_{\mathbb{C}[C(T)]} \mathbb{U} \\ \left[\sigma T\right] & \longmapsto & \sigma\otimes v. \end{array}$ It is well-defined since for any $q\in C(\sigma T)$, we have $\sigma^{-1}q\sigma\in C(T)$ and hence $\phi[q\sigma T] = q\sigma\otimes v = \sigma(\sigma^{-1}q\sigma)\otimes v = \sigma\otimes (\sigma^{-1}q\sigma)v = \mathrm{sgn}(q)\cdot(\sigma\otimes v).$ It is an isomorphism because it has an obvious inverse.
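Exercise 1 of this section is also easy to check by brute force in a tiny group algebra. A throwaway sketch (my own code) in $\mathbb{C}[S_3]$, for the numbering of shape $(2,1)$ with rows $\{0,1\}$ and $\{2\}$ (written $0$-indexed):

```python
from itertools import permutations

def compose(p, q):
    """Composition (p after q) of permutations given as tuples of images."""
    return tuple(p[q[i]] for i in range(len(q)))

def sign(p):
    """Sign of a permutation via its inversion count."""
    n = len(p)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return (-1) ** inv

def mult(a, b):
    """Product in the group algebra C[S_n]; elements are dicts perm -> coefficient."""
    out = {}
    for p, cp in a.items():
        for q, cq in b.items():
            r = compose(p, q)
            out[r] = out.get(r, 0) + cp * cq
    return {k: c for k, c in out.items() if c != 0}

# Numbering T of shape (2,1) with rows {0,1}, {2} and columns {0,2}, {1}.
R = [p for p in permutations(range(3)) if {p[0], p[1]} == {0, 1}]  # row group R(T)
C = [p for p in permutations(range(3)) if p[1] == 1]               # column group C(T)
a_T = {p: 1 for p in R}
b_T = {q: sign(q) for q in C}

assert mult(a_T, a_T) == {p: len(R) for p in R}            # a_T a_T = #R(T) a_T
assert mult(b_T, b_T) == {q: len(C) * sign(q) for q in C}  # b_T b_T = #C(T) b_T
```

The same dictionaries also make it painless to check Exercise 1 (a), e.g. that `mult({p: 1}, a_T) == a_T` for each `p` in `R`.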
1. Attention Nikolai: Do not touch this book.
2. I know that the book already contains "solutions", but I'd rather call them "hints".