Polynomials and Representations XV

Tableaux and Words

In our context, a word is a sequence of positive integers; concatenation of words is denoted by w * w'. Given a skew SSYT T, the corresponding word w(T) is obtained by reading the entries of each row from left to right, taking the rows from bottom to top. For example, the following skew SSYT gives w(T) = (1, 4, 2, 4, 5, 2, 3, 4, 5, 1, 1, 4, 6).

[Figure: a skew SSYT and its word w(T)]

The following are quite easy to verify.

  • If T is an SSYT, then it can be recovered from w(T).
    • We pick the longest weakly increasing run starting from the first letter; this forms the bottom row. Repeat with the next letter to obtain the second row from the bottom, etc. E.g. given (4, 5, 3, 3, 4, 6, 1, 2, 3, 3, 5, 7), we get a three-row SSYT whose rows, from top to bottom, are (1, 2, 3, 3, 5, 7), (3, 3, 4, 6) and (4, 5). (A code sketch of this recovery appears after this list.)
  • Not every word occurs as w(T) from some SSYT T.
    • E.g. (1, 2, 1, 2) does not come from an SSYT: the weakly increasing runs (1, 2) and (1, 2) would have to form two stacked rows, violating the strict increase down columns.
  • However, every word occurs as w(T) for some skew SSYT. E.g. place the n letters of the word along an antidiagonal, with the i-th letter in row n+1-i and column i; each row and column then contains a single entry, and the reading word is exactly the given sequence.
  • In general, a skew SSYT T cannot be uniquely recovered from w(T).
    • E.g. for (1, 2), we can either take the one-row SSYT (1, 2), or the skew diagram (2,1)/(1) with 2 in the first row and 1 in the second.
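The recovery procedure described in the first bullet is easy to mechanise. Here is a minimal Python sketch (my own illustration, not from the original post; the function names are made up): it splits a word into maximal weakly increasing runs, which are the candidate rows read bottom to top, and then checks the shape and column conditions to decide whether the word is w(T) for some SSYT T.

```python
def rows_from_word(word):
    """Split a word into maximal weakly increasing runs; return them as rows, top row first."""
    rows, current = [], [word[0]]
    for x in word[1:]:
        if x >= current[-1]:
            current.append(x)
        else:                       # a strict descent marks the start of the next (higher) row
            rows.append(current)
            current = [x]
    rows.append(current)
    return rows[::-1]               # the runs were produced bottom-to-top; reverse them

def is_ssyt_word(word):
    """Decide whether the candidate rows actually form an SSYT."""
    rows = rows_from_word(word)
    for upper, lower in zip(rows, rows[1:]):        # consecutive rows, from top to bottom
        if len(upper) < len(lower):                 # row lengths must weakly decrease downwards
            return False
        if any(upper[j] >= lower[j] for j in range(len(lower))):
            return False                            # columns must strictly increase downwards
    return True

print(rows_from_word([4, 5, 3, 3, 4, 6, 1, 2, 3, 3, 5, 7]))
# [[1, 2, 3, 3, 5, 7], [3, 3, 4, 6], [4, 5]]
print(is_ssyt_word([1, 2, 1, 2]))    # False
```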


Words and Row Insertion

Next we formulate rules such that row-inserting (bumping) an entry m into an SSYT T is equivalent to concatenation:

w(T \leftarrow m) \equiv w(T) * (m).

Suppose we have a row (v_1, \ldots, v_n) in the SSYT with m added to the end, and assume

v_1 \le v_2 \le \ldots \le v_{i-1} \le m < v_{i} \le \ldots \le v_n,

so that m bumps off v_{i} and we get (v_1, \ldots, v_n, m) \mapsto (v_i, v_1,\ldots, v_{i-1}, m, v_{i+1}, \ldots, v_n). Break this down into a series of steps:

\begin{aligned} (v_1, \ldots, \boxed{v_{n-1}, v_n, m}) &\mapsto (v_1, \ldots, v_{n-1}, m, v_n) \\&\ldots \\&\mapsto (v_1, \ldots, v_{i-1}, v_i, m, \ldots, v_n).\end{aligned}

From this we obtain the first rule:

Rule R1. For any consecutive (a,b,c) in the word satisfying c <a \le b, we switch (a,b,c) \mapsto (a,c,b).

Next, to shift v_i to the beginning of the row, we break it down as follows:

\begin{aligned}(v_1, \ldots, \boxed{v_{i-1}, v_i, m}, \ldots, v_n) &\mapsto(v_1, \ldots, v_{i}, v_{i-1}, m, \ldots, v_n) \\ &\ldots\\ &\mapsto (v_i, v_1, \ldots, v_{i-1}, m, \ldots, v_n).\end{aligned}

From here, we obtain the second rule:

Rule R2. For any consecutive (a,b,c) in the word satisfying a \le c <b, we switch (a,b,c) \mapsto (b,a,c).

Example

Suppose we insert (1, 2, 3, 4, 5, 5) \leftarrow 3, i.e. we start with the word (1, 2, 3, 4, 5, 5, 3). Applying R1 and R2 repeatedly gives:

\begin{aligned} &(1, 2, 3, 4, \overset{R1}{\boxed{5, 5, 3}}) \mapsto (1, 2, 3, \overset{R1}{\boxed{4, 5, 3}}, 5)\mapsto (1, 2, \overset{R2}{\boxed{3, 4, 3}}, 5, 5)\\ \mapsto &(1, \overset{R2}{\boxed{2, 4, 3}}, 3, 5, 5) \mapsto (\overset{R2}{\boxed{1, 4, 2}}, 3, 3, 5, 5)\mapsto (4, 1, 2, 3, 3, 5, 5).\\ \end{aligned}
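The net effect of such a chain of moves is just the word-level description of row insertion derived above. As a sanity check, here is a minimal Python sketch (my own, not from the post) of that map:

```python
def row_insert_word(row, m):
    """Word of (row <- m): if m bumps v_i, return (v_i, v_1, ..., v_{i-1}, m, v_{i+1}, ..., v_n)."""
    for i, v in enumerate(row):
        if v > m:                          # v_i is the leftmost entry strictly greater than m
            return [v] + row[:i] + [m] + row[i + 1:]
    return row + [m]                       # nothing is bumped; m simply sits at the end

print(row_insert_word([1, 2, 3, 4, 5, 5], 3))   # [4, 1, 2, 3, 3, 5, 5], matching the example
```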

Since we also wish to reverse the insertion process, we consider the inverses of R1 and R2 as well, denoted by R1′ and R2′ respectively.

Summary of Rules

Given a consecutive triplet (a,b,c) in the word, we consider the following distinct cases:

  • a \le b \le c or a > b > c: no change;
  • c < a \le b: switch to (a, c, b) by R1;
  • a \le c < b: switch to (b, a, c) by R2;
  • b < a \le c: switch to (a, c, b) by R1′;
  • b \le c < a: switch to (b, a, c) by R2′.

One checks that these cases are disjoint and cover all possibilities. Two words are said to be Knuth-equivalent, written w\equiv w', if one can be obtained from the other by a finite sequence of the above four operations.
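These four local moves are easy to implement. Below is a minimal Python sketch (my own illustration; the function names are made up) that lists the words reachable by one elementary move and decides Knuth equivalence by breadth-first search; since the moves only permute letters, the equivalence class of a word is finite and the search terminates.

```python
from collections import deque

def elementary_moves(word):
    """Yield every word obtained from `word` by a single move R1, R2, R1' or R2'."""
    w = list(word)
    for i in range(len(w) - 2):
        a, b, c = w[i:i + 3]
        if c < a <= b or b < a <= c:           # R1 / R1': (a,b,c) -> (a,c,b)
            yield tuple(w[:i] + [a, c, b] + w[i + 3:])
        if a <= c < b or b <= c < a:           # R2 / R2': (a,b,c) -> (b,a,c)
            yield tuple(w[:i] + [b, a, c] + w[i + 3:])

def knuth_equivalent(w1, w2):
    """Breadth-first search through the (finite) Knuth class of w1."""
    start, target = tuple(w1), tuple(w2)
    seen, queue = {start}, deque([start])
    while queue:
        w = queue.popleft()
        if w == target:
            return True
        for nxt in elementary_moves(w):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(knuth_equivalent([1, 2, 3, 4, 5, 5, 3], [4, 1, 2, 3, 3, 5, 5]))   # True, as in the example
```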

Mnemonic

The following diagram helps us recall R1 and R2.

\begin{aligned} c < a\le b\ : \quad &\left(\ \boxed{\begin{matrix}a & b\end{matrix}}\ \leftarrow\ c\ \right)\ = \boxed{\begin{matrix} c & b \\ a\end{matrix}} \\ a\le c < b\ :\quad &\left(\ \boxed{\begin{matrix}a & b\end{matrix}}\ \leftarrow\ c\ \right)\ = \boxed{\begin{matrix} a & c \\ b\end{matrix}} \end{aligned}.


Consequences of Equivalence

From our construction of R1 and R2, we immediately have:

Proposition 1. w(T \leftarrow m) \equiv w(T) * (m).

The following result is less trivial.

Proposition 2. If T' is obtained by sliding an inner corner of a skew SSYT T, then w(T') \equiv w(T).

Proof

Clearly, horizontal sliding does not change the word, so we will only consider vertical sliding. By trimming off excess elements in the two rows:

[Figure: the two rows involved in a vertical slide]

we may assume the two rows have the same length. Concretely, write the lower row as (m_1, \ldots, m_p, x, n_1, \ldots, n_q) and the upper row as (u_1, \ldots, u_p, \bullet, v_1, \ldots, v_q), where \bullet is the empty square directly above x; the vertical slide moves x up into \bullet. We will prove that the words before and after the slide are Knuth-equivalent via induction on p. Write \vec n for (n_1, \ldots, n_q), \vec m for (m_1, \ldots, m_p) etc., so we must show \vec m x \vec n \vec u \vec v \equiv \vec m \vec n \vec u x \vec v. First consider the case p=0, i.e. we need to show x \vec n \vec v \equiv \vec n x \vec v.

  • Imagine performing row insertion of v_1 into the row x \vec n. Since x \le v_1 (this is exactly the condition for the slide at this square to be vertical rather than horizontal) and v_1 < n_1 (columns strictly increase), this bumps out n_1 and we transform x \vec n v_1 into n_1 x v_1 n_2 n_3 \ldots.
  • Next, inserting v_2 into the row x v_1 n_2 \ldots n_q bumps out n_2, and we obtain n_1 n_2 x v_1 v_2 n_3 \ldots. Continuing in this way, we eventually obtain \vec n x \vec v as desired.

Now assume p>0. Write \vec u' = (u_2, \ldots, u_p) and  \vec m' = (m_2, \ldots, m_p) so that \vec u = (u_1) * \vec u' and \vec m = (m_1) * \vec m'. We need to show

m_1 \vec m' \boxed{x \vec n u_1 \vec u'} \vec v\ \overset{?}\equiv\ m_1 \vec m' \boxed{\vec n u_1 \vec u' x} \vec v.

Inserting u_1 into the row m_1 \vec m' x \vec n bumps off m_1 (since u_1 < m_1 by column strictness), which gives m_1 u_1 \vec m' x \vec n. Thus the LHS is equivalent to m_1 u_1 \vec m' x \vec n \vec u' \vec v. By the induction hypothesis for p-1, applied to the rows \vec m', x, \vec n and \vec u', \bullet, \vec v, this is equivalent to m_1 u_1 \vec m' \vec n \vec u' x \vec v.

On the other hand, for the RHS we insert u_1 into the row m_1 \vec m' \vec n, which again bumps off m_1 and gives m_1 u_1 \vec m' \vec n. Hence the RHS is also equivalent to m_1 u_1 \vec m' \vec n \vec u' x \vec v. This completes the proof. ♦
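To see the proposition in action on a small made-up example (my own illustration, reusing the knuth_equivalent sketch from earlier): take lower row (2, 2, 4) and upper row (1, _, 3), so the vertical slide moves x = 2 up into the empty square.

```python
def vertical_slide_words(m_vec, x, n_vec, u_vec, v_vec):
    """Words before/after sliding x up: lower row (m, x, n), upper row (u, _, v) with _ above x."""
    before = list(m_vec) + [x] + list(n_vec) + list(u_vec) + list(v_vec)
    after = list(m_vec) + list(n_vec) + list(u_vec) + [x] + list(v_vec)
    return before, after

before, after = vertical_slide_words([2], 2, [4], [1], [3])
print(before, after)                     # [2, 2, 4, 1, 3] [2, 4, 1, 2, 3]
print(knuth_equivalent(before, after))   # True, as Proposition 2 guarantees
```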

Corollary. Every word w is Knuth-equivalent to w(T) for some SSYT T.

Proof

Indeed, write w = w(T') for some skew SSYT T' (every word arises this way, as noted above). Apply the sliding algorithm to T' to obtain an SSYT T. By Proposition 2, each slide preserves the word up to Knuth equivalence, so w = w(T') \equiv w(T). ♦

This T will later turn out to be unique. 🙂
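Proposition 1 already gives one concrete way to produce such a T: insert the letters of w one at a time into an initially empty tableau, since each insertion replaces the current word followed by the new letter with a Knuth-equivalent word. A minimal Python sketch of this (my own illustration; the names are made up):

```python
def insert(tableau, m):
    """Row-insert m into an SSYT, given as a list of rows (top row first); return a new tableau."""
    rows = [list(r) for r in tableau]
    for row in rows:
        for j, v in enumerate(row):
            if v > m:                  # m bumps the leftmost entry strictly greater than it,
                row[j], m = m, v       # ... and the bumped entry is inserted into the next row
                break
        else:
            row.append(m)              # m fits at the end of this row; insertion stops here
            return rows
    rows.append([m])                   # the last bumped entry starts a new row at the bottom
    return rows

def word(tableau):
    """Read each row from left to right, bottom row first."""
    return [x for row in reversed(tableau) for x in row]

def equivalent_ssyt(w):
    """An SSYT T with w(T) Knuth-equivalent to w, built by repeated row insertion."""
    T = []
    for m in w:
        T = insert(T, m)
    return T

T = equivalent_ssyt([1, 4, 2, 4, 5, 2, 3, 4, 5, 1, 1, 4, 6])
print(T)           # the rows of T, top to bottom
print(word(T))     # a word Knuth-equivalent to the input, by Proposition 1
```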


Additional Result

Finally, the following result is interesting but will not be used.

Lemma. If the skew tableau T is cut into T' and T'' by a horizontal or vertical line (where T' is respectively the lower or the left piece), then w(T) \equiv w(T') * w(T'').

Proof

In the case of a horizontal cut, we in fact have the equality w(T) = w(T') * w(T''), since the rows of the lower piece T' are read first; so the result is trivial. Now suppose we have a vertical cut as follows.

[Figure: a skew tableau split by a vertical line]

Start removing the inner corners from T', starting from the rightmost column. E.g. in the above diagram, we remove the corner labelled a, then the one above it, etc. Note that upon removing a, the column beneath it gets shifted up one unit. Eventually, we obtain the LHS diagram. By Proposition 2, the two tableaux in the diagram have equivalent words, and the tableau T_1 on the right satisfies w(T_1) = w(T') * w(T''). ♦
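As a quick illustration of the vertical-cut case (my own made-up example, reusing the knuth_equivalent sketch from earlier): cut the SSYT with rows (1, 2) and (2, 3) by a vertical line between its two columns.

```python
# w(T) = (2, 3, 1, 2); the left column T' has word (2, 1) and the right column T'' has word (3, 2).
w_T = [2, 3, 1, 2]
w_cut = [2, 1] + [3, 2]                # w(T') * w(T'')
print(knuth_equivalent(w_T, w_cut))    # True: a single R1 move (2,3,1) -> (2,1,3) suffices
```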

