INVESTIGATING THE QUANTUM CASE OF HORN’S QUESTION
AUTHOR: DORIAN EHRLICH, COLLABORATORS: DOCTOR ELIZABETH BEAZLEY
1. Abstract
Given two weakly decreasing sequences of real numbers, α = (α1 ≥ α2 ≥ · · · ≥ αk) and
β = (β1 ≥ β2 ≥ · · · ≥ βk), Alfred Horn asked in his 1963 paper for which sequences of real
numbers γ = (γ1 ≥ γ2 ≥ · · · ≥ γk) there exist Hermitian matrices A, B and C = A + B whose
eigenvalues are α, β, and γ respectively [10]. Alexander Klyachko provided the first solution to this
query in 1999 [11], and since then a number of alternative solutions have been published as well.
The solution to Horn's Question we focus on was first presented by Knutson and Tao in 1999
[14], who provide a solution for when α, β and γ ∈ Z^k_{≥0}. Knutson and Tao's solution uses
Littlewood-Richardson coefficients to determine intersections of subvarieties of the Grassmannian
known as Schubert Varieties. We look to internalize Knutson and Tao's method of proof as a means
of tackling the question with which this thesis is ultimately concerned: whether Horn's Question
has a solution if, rather than intersections of Schubert Varieties, we consider the existence of a
smooth curve that passes through triples of Schubert Varieties. This variant on Horn's Question is
known as the quantum case of Horn's Question, and is currently an open problem.
Date: April 27, 2014.
2. Introduction
The type of matrix with which Horn's Question is concerned, the "Hermitian matrix," may be
unfamiliar, so we formally define this concept below.

Definition 2.1 (Hermitian Matrix). A matrix A is Hermitian if it is equal to its own conjugate
transpose, i.e. Ā^T = A.
The only nontrivial concepts involved in the original statement of Horn's Question are Hermitian
matrices and eigenvalues, so we are in fact ready to state this question formally.
Question 2.2 (Horn's Question). [10]
Given two weakly decreasing sequences of arbitrary real numbers,

α = (α1 ≥ · · · ≥ αk)
β = (β1 ≥ · · · ≥ βk)

for which sequences

γ = (γ1 ≥ · · · ≥ γk)

do there exist Hermitian matrices A, B and A + B = C such that their eigenvalues are α, β and γ
respectively?
A concern about the statement of Horn’s Question quickly arises: Given that Hermitian matrices
are equal to their own conjugate transpose, Hermitian matrices may have complex entries, so it’s
not clear that we necessarily have real eigenvalues. It is the case, however, that the eigenvalues of
a Hermitian matrix are real, and we offer a cute proof below [7].
Proposition 2.3. Given a Hermitian matrix A, if there exists a nonzero vector x⃗ ∈ Cᵏ such that
Ax⃗ = λx⃗ for some scalar λ, it follows that λ ∈ R.

Proof. Given A Hermitian, and Ax⃗ = λx⃗ for some nonzero x⃗ ∈ Cᵏ and scalar λ, consider multiplying
both sides of the equation on the left by the conjugate transpose x̄^T of x⃗:

(2.1)   x̄^T A x⃗ = x̄^T λ x⃗ = λ x̄^T x⃗

Looking at the (dot) product x̄^T x⃗, since x⃗ ∈ Cᵏ, we can express the jth entry of x⃗ as aj + bj i
for some real numbers aj and bj, so the jth entry of x̄ is aj − bj i. Computing, we have

x̄^T x⃗ = (a1 − b1i)(a1 + b1i) + (a2 − b2i)(a2 + b2i) + · · · + (ak − bki)(ak + bki)
      = (a1² + b1²) + (a2² + b2²) + · · · + (ak² + bk²) = Σ_{j=1}^{k} (aj² + bj²).

This means x̄^T x⃗ is a sum of real numbers, which is a real number, say S; moreover S > 0 since
x⃗ ≠ 0. Now, let's take the conjugate transpose of both sides of equation (2.1) (note that a 1 by 1
matrix is its own transpose), and recall that A is Hermitian:

x̄^T Ā^T x⃗ = λ̄ S
⇐⇒ x̄^T A x⃗ = λ̄ S

According to equation (2.1), however, we also have

x̄^T A x⃗ = λ S,

which, since S ≠ 0, means λ = λ̄, and thus λ ∈ R. □
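The computation above can also be sanity-checked numerically. The following sketch (not part of the thesis; it assumes NumPy is available) builds a random Hermitian matrix and confirms its eigenvalues are real up to floating-point error:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T                        # A equals its own conjugate transpose
eigenvalues = np.linalg.eigvals(A)
print(np.max(np.abs(eigenvalues.imag)))   # ~1e-15: the eigenvalues are real
```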
Although Horn's Question may, at first glance, appear to be a challenging linear algebra exercise,
a prerequisite for even posing Horn's Question in the quantum case is an understanding of the
mathematically complicated object known as the cohomology ring of the Grassmannian. This
cohomology ring, denoted H*(Gr(k, n)), is where we determine intersections of triples of Schubert
Varieties of the Grassmannian, the latter of which we denote Gr(k, n). Schubert Varieties are what
are known as the Zariski Closures of a specific choice of (n choose k) disjoint subspaces of the
Grassmannian called Schubert Cells, which are obtained by partitioning the group of invertible
matrices over the complex numbers, GLn(C), into (n choose k) pieces [7].
Knutson and Tao's solution to Horn's Question in the non-quantum, or classical, case involves
considering the operation known as the cup product of the elements of H*(Gr(k, n)), called Schubert
classes, that index the Schubert Varieties of Gr(k, n). The cup product of Schubert classes is
what computes the intersections of their corresponding Schubert Varieties. If we consider the
Schubert classes σα and σβ that represent the Schubert Varieties our original k-tuples α and β
index respectively, and their cup product σα · σβ, we have the following formula:
σα · σβ = Σ_{γ ∈ Pα,β} c^γ_{α,β} σγ

Pα,β = {γ = (γ1, . . . , γk) | Σ_{j=1}^{k} γj = Σ_{j=1}^{k} αj + Σ_{j=1}^{k} βj}.
In this sum, the terms c^γ_{α,β} are nonnegative integers known as the Littlewood-Richardson
coefficients corresponding to α, β and γ. Computing Littlewood-Richardson coefficients requires
understanding an algorithm known as the Littlewood-Richardson Rule, or equivalent algorithms
such as the Puzzle Rule, which we use for this thesis. Knutson and Tao's solution to Horn's Question
states that for some Schubert class σγ, where γ ∈ Pα,β [7],

∃ Hermitian matrices A, B, C with integer eigenvalues α, β and γ respectively ⇐⇒ c^γ_{α,β} ≠ 0.
Knutson and Tao only make this conclusion after first proving the following theorem regarding
Littlewood-Richardson coefficients [14].
Theorem 2.4 (The Saturation Conjecture). Given two weakly decreasing sequences of k positive
integers, λ and µ, and some weakly decreasing sequence ν ∈ Pλ,µ,

∃ N ∈ N such that c^{Nν}_{Nλ,Nµ} ≠ 0 ⇐⇒ c^ν_{λ,µ} ≠ 0

where λ = (λ1, . . . , λk) and Nλ = (Nλ1, . . . , Nλk), and similarly for Nµ and Nν.
The quantum case of Horn's Question, the focus of this thesis, looks at the quantum cup product
in the quantum cohomology ring of the Grassmannian, QH*(Gr(k, n)). The quantum cup product
determines whether three Schubert Varieties are linked by smooth curves, rather than whether they
contain an ordinary intersection. If we now look at the Schubert Varieties indexed by our k-tuples
α and β, and the quantum cup product, denoted σα ⋆ σβ, we have the formula

σα ⋆ σβ = Σ_{γ ∈ Pα,β} c^{d,γ}_{α,β} q^d σγ,

where c^{d,γ}_{α,β} is the quantum Littlewood-Richardson coefficient that determines how many
degree d curves pass through the triple of Schubert Varieties with Schubert classes σα, σβ and σγ
respectively, and q is an indeterminate.
To determine if in fact there exists the same connection between eigenvalues of Hermitian ma-
trices and quantum Littlewood-Richardson coefficients, we set out to prove a "quantum analogue"
of Theorem 2.4, which we call the Quantum Saturation Conjecture.

Conjecture 2.5 (The Quantum Saturation Conjecture).

∃ N ∈ N such that c^{Nd,Nν}_{Nλ,Nµ} ≠ 0 ⇐⇒ c^{d,ν}_{λ,µ} ≠ 0
3. Orbit-Stabilizer for the Complete Flag
Our first task for research into the Quantum Saturation Conjecture is to derive a key object of
study: The Schubert Cells of the Grassmannian. We build the intuition for deriving the Schubert
Cells of the Grassmannian by first building the analogous results for the complete flag. Now, let
us fix n ∈ N.
Definition 3.1. A complete flag, F, is an increasing sequence of n linear subspaces of Cⁿ [7]:

F : {0} = F0 ⊆ F1 ⊆ F2 ⊆ · · · ⊆ Fn−1 ⊆ Fn = Cⁿ,   dim(Fi) = i for every index 1 ≤ i ≤ n.
Remark 3.2. A partial flag is an increasing sequence of linear subspaces where unlike the complete
flag, there are fewer than n − 1 steps in the sequence. Gr(k, n) is an example of a partial flag.
Given a complete flag F, we know dim(F1) = 1, so we can express the subspace F1 as the span
of some basis vector:

F1 = span(v⃗1)

Looking now at F2, since F2 is a two-dimensional subspace, it can be expressed as the span of
two basis vectors. We know, however, that F1 ⊂ F2, which means that span(v⃗1) ⊂ F2. This means
there exists a basis for F2 in which one basis vector is v⃗1. We can express F2 as follows:

F2 = span(v⃗1, v⃗2)

where v⃗2 is some independent vector in Cⁿ.

For the next step, F3, since F2 ⊂ F3, we know span(v⃗1, v⃗2) ⊂ F3, so by the same argument used
above, there exists an independent vector v⃗3 ∈ Cⁿ such that

F3 = span(v⃗1, v⃗2, v⃗3).

Applying this argument at each step in F, we eventually have that

Fn = span(v⃗1, v⃗2, v⃗3, . . . , v⃗n)

for a set of n independent vectors in Cⁿ, {v⃗1, . . . , v⃗n}.

We can now reexpress our flag F:

F : {0} ⊂ span(v⃗1) ⊂ span(v⃗1, v⃗2) ⊂ · · · ⊂ span(v⃗1, v⃗2, . . . , v⃗n) = Cⁿ
As we will with Gr(k, n), we consider the group action GLn(C) ↷ {F | F is a complete flag}
given by matrix multiplication on the left. We define "multiplication on the left" of a complete flag
by taking each subspace Fk and multiplying each basis vector by an invertible matrix:

M ∈ GLn(C), Fk = span(v⃗1, . . . , v⃗k)
MFk = span(Mv⃗1, . . . , Mv⃗k)

Since we are multiplying by invertible matrices, we retain a k-dimensional basis after multiplying
each basis vector. Note that the action takes one complete flag to another complete flag:

MF : {0} = MF0 ⊆ MF1 ⊆ · · · ⊆ MFn = Cⁿ
We claim this group action is transitive.

Proposition 3.3. GLn(C) ↷ {F} is a transitive group action, i.e. for every two flags F and
F′ ∈ {F}, there exists a matrix M ∈ GLn(C) such that MF = F′.

Proof. Given two arbitrary flags F, F′ ∈ {F},

F : {0} = F0 ⊂ F1 ⊂ F2 ⊂ · · · ⊂ Fn−1 ⊂ Fn = Cⁿ
F′ : {0} = F′0 ⊂ F′1 ⊂ F′2 ⊂ · · · ⊂ F′n−1 ⊂ F′n = Cⁿ

there exist n vectors v⃗1, . . . , v⃗n and n vectors w⃗1, . . . , w⃗n such that

F : span(v⃗1) ⊂ span(v⃗1, v⃗2) ⊂ · · · ⊂ span(v⃗1, . . . , v⃗k) ⊂ · · · ⊂ span(v⃗1, . . . , v⃗n)
F′ : span(w⃗1) ⊂ span(w⃗1, w⃗2) ⊂ · · · ⊂ span(w⃗1, . . . , w⃗k) ⊂ · · · ⊂ span(w⃗1, . . . , w⃗n).

Since dim(Fn) = dim(F′n) = n, we can find a change of basis matrix M such that Mv⃗i = w⃗i for
every 1 ≤ i ≤ n. By the definition of our group action, this means MFn = F′n:

MFn = M span(v⃗1, v⃗2, . . . , v⃗n) = span(Mv⃗1, Mv⃗2, . . . , Mv⃗n) = span(w⃗1, w⃗2, . . . , w⃗n) = F′n

Now, given k < n,

MFk = M span(v⃗1, . . . , v⃗k) = span(Mv⃗1, . . . , Mv⃗k) = span(w⃗1, . . . , w⃗k) = F′k.

Since k < n was arbitrary, we have that

MF : {0} = MF0 ⊂ MF1 ⊂ · · · ⊂ MFn = {0} = F′0 ⊂ F′1 ⊂ · · · ⊂ F′n = F′. □
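The change of basis matrix in this proof is easy to realize concretely. A minimal sketch (illustrative only, assuming NumPy; random complex matrices are invertible with probability 1): if V and W have the v⃗i and w⃗i as columns, then M = WV⁻¹ satisfies Mv⃗i = w⃗i for every i.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
V = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # columns v_i
W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # columns w_i
M = W @ np.linalg.inv(V)              # the change of basis matrix
print(np.allclose(M @ V, W))          # True: M sends each v_i to w_i
```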
Now that we know our group action is transitive, by the Orbit-Stabilizer Theorem [1], we
have that for any flag F and its stabilizer StabF = {M ∈ GLn(C) | MF = F}, the quotient
GLn(C)/StabF will be in one-to-one correspondence with the entire set of complete flags. It thus
suffices to work only with the easiest flag, the standard flag, which we denote E:

E : {0} ⊂ span(e⃗1) ⊂ span(e⃗1, e⃗2) ⊂ · · · ⊂ span(e⃗1, e⃗2, . . . , e⃗n) = Cⁿ

GLn(C)/StabE ≅ {F}
We claim the stabilizer has a very recognizable form.
Proposition 3.4. The stabilizer of E under the group action GLn(C) ↷ {F} is the set of upper
triangular matrices.
Proof. To find the stabilizer, we need to establish which matrices M ∈ GLn(C) can multiply each
subspace span(e⃗1, . . . , e⃗k) in E so that M span(e⃗1, . . . , e⃗k) = span(e⃗1, . . . , e⃗k).

Given k ≤ n, we express span(e⃗1, . . . , e⃗k) as the column space of an n by k matrix in order to
carry out multiplication of a subspace:

span(e⃗1, e⃗2, . . . , e⃗k) = C( e⃗1 e⃗2 . . . e⃗k ) = C
[ 1 0 . . . 0 ]
[ 0 1 . . . 0 ]
[ ⋮ ⋮ ⋱  ⋮ ]
[ 0 0 . . . 1 ]
[ ⋮ ⋮     ⋮ ]
[ 0 0 . . . 0 ]

(the k by k identity sitting atop an (n − k) by k block of zeros).
To find the matrices that retain this subspace after left multiplication, given any k vectors
x⃗1, . . . , x⃗k ∈ span(e⃗1, . . . , e⃗k), we find a matrix M that takes span(e⃗1, . . . , e⃗k) to span(x⃗1, . . . , x⃗k):

M span(e⃗1, . . . , e⃗k) = span(x⃗1, . . . , x⃗k).

If we denote each x⃗i^T = ( x1i x2i . . . xki 0 . . . 0 ), we have the following matrix equality:

[ m11 m12 . . . m1k  m1k+1 . . . m1n ] [ 1 0 . . . 0 ]   [ x11 x12 . . . x1k ]
[ m21 m22 . . . m2k  m2k+1 . . . m2n ] [ 0 1 . . . 0 ]   [ x21 x22 . . . x2k ]
[  ⋮   ⋮        ⋮     ⋮          ⋮ ] [ ⋮ ⋮      ⋮ ] = [  ⋮   ⋮        ⋮ ]
[ mk1 mk2 . . . mkk  mkk+1 . . . mkn ] [ 0 0 . . . 1 ]   [ xk1 xk2 . . . xkk ]
[  ⋮   ⋮        ⋮     ⋮          ⋮ ] [ ⋮ ⋮      ⋮ ]   [  ⋮   ⋮        ⋮ ]
[ mn1 mn2 . . . mnk  mnk+1 . . . mnn ] [ 0 0 . . . 0 ]   [  0   0  . . .  0  ]

MEk = Xk.
Note that Ek and Xk have all 0's past the kth row, since the first k standard basis vectors only
span the first k dimensions.

Now, let m⃗i^T be the row vector that contains the entries in the ith row of M, so that
m⃗i^T = ( mi1 mi2 . . . min ). By the definition of matrix multiplication, we know that the entry in the
ith row and jth column of Xk is

m⃗i^T e⃗j = mi1·0 + mi2·0 + · · · + mij·1 + · · · + min·0 = mij.

Since for every i > k the ith row of Xk is a zero row vector, we need the first k entries in the
ith row of M to be 0. Looking now at every row i ≤ k, the entry in the jth column is the arbitrary
complex number xij. This means we need m⃗i^T e⃗j = xij, so we can set the entry in the ith row and
jth column of M equal to xij. Since there are only k columns in Xk, these results hold only for
entries mij where j ≤ k, which means we can leave the other entries of M untouched. Since we have
accounted for every xij in Xk, we can state the following result:
[ x11 x12 . . . x1k  m1k+1 . . . m1n ] [ 1 0 . . . 0 ]   [ x11 x12 . . . x1k ]
[ x21 x22 . . . x2k  m2k+1 . . . m2n ] [ 0 1 . . . 0 ]   [ x21 x22 . . . x2k ]
[  ⋮   ⋮        ⋮     ⋮          ⋮ ] [ ⋮ ⋮      ⋮ ] = [  ⋮   ⋮        ⋮ ]
[ xk1 xk2 . . . xkk  mkk+1 . . . mkn ] [ 0 0 . . . 1 ]   [ xk1 xk2 . . . xkk ]
[  ⋮   ⋮        ⋮     ⋮          ⋮ ] [ ⋮ ⋮      ⋮ ]   [  ⋮   ⋮        ⋮ ]
[  0   0  . . .  0   mnk+1 . . . mnn ] [ 0 0 . . . 0 ]   [  0   0  . . .  0  ]
In particular, the matrix M we constructed has 0's in every entry past the kth row and up to
and including the kth column, and arbitrary entries elsewhere, since each xij was arbitrary.

Recall that k ≤ n was arbitrary. This means that since we need a matrix M that retains each
step in E after matrix multiplication, we require that for every k ≤ n, this matrix have 0's past the
kth row and before the (k + 1)st column:
M =
[ m11 m12 . . . m1k   m1k+1   . . . m1n   ]
[  0  m22 . . . m2k   m2k+1   . . . m2n   ]
[  ⋮   ⋮  ⋱    ⋮      ⋮            ⋮    ]
[  0   0  . . . mkk   mkk+1   . . . mkn   ]
[  0   0  . . .  0   mk+1k+1  . . . mk+1n ]
[  ⋮   ⋮       ⋮      ⋮      ⋱     ⋮    ]
[  0   0  . . .  0      0     . . . mnn   ]
Since the equation ME = E is satisfied when M is upper triangular, we can state the following:
StabE = {U ∈ GLn(C) | U is upper triangular}
Using the stabilizer just derived, we can now reexpress the set of complete flags {F}:

{F} ≅ {M StabE | M ∈ GLn(C)}, where each coset M StabE is a subset of GLn(C).
Remark 3.5. Each element in {F} corresponds to a left coset of the set of upper triangular
matrices:
MStabE = {MU | U ∈ StabE}
Since StabE is not a normal subgroup, however, the cosets do not have a group structure on
them, i.e. the quotient is not a quotient group.
Remark 3.6. For the rest of this thesis, we will refer to StabE simply as U.
4. Bruhat Decomposition for the Complete Flag
We put aside our quotient momentarily and focus on the method of partitioning GLn(C) to
derive the Schubert Cells of {F} (and ultimately Gr(k, n)). It is known that we can express
GLn(C) as a disjoint union of cosets of the permutation group Sn using what is known as the
Bruhat Decomposition [5]. We provide the formal statement below.
Theorem 4.1 (Bruhat Decomposition (of GLn(C))).

GLn(C) ≅ ⨆_{σ∈Sn} UσU

Given σ ∈ Sn, we define the set UσU:

UσU = {U′σU′′ | U′, U′′ ∈ U, σ ∈ Sn}.

This means each element of UσU is a product of an upper triangular matrix, a permutation matrix,
and another upper triangular matrix. We view each set in the disjoint union as a "double coset,"
or product of sets UσU.
For this thesis, we use the so-called "window notation" for permutation matrices, where the
indices a1, a2, . . . , an ∈ {1, 2, . . . , n}, and the element (a1 a2 . . . an) ∈ Sn defines the following
permutation:

a1 → 1, a2 → 2, . . . , an → n

Permutation matrices act on other matrices with right multiplication by permuting columns, and
with left multiplication by permuting rows. We illustrate with an example.
Examples 4.2. Let σ = (2 1 3). By the mapping described above, the corresponding matrix
has a 1 in the second row, first column; first row, second column; and third row, third column:

(2 1 3) →
[ 0 1 0 ]
[ 1 0 0 ]
[ 0 0 1 ]

Given some matrix
[ a b c ]
[ d e f ]
[ g h i ],
note the permutation of columns by right multiplication, and the permutation of rows by left
multiplication:

[ a b c ] [ 0 1 0 ]   [ b a c ]
[ d e f ] [ 1 0 0 ] = [ e d f ]
[ g h i ] [ 0 0 1 ]   [ h g i ]

[ 0 1 0 ] [ a b c ]   [ d e f ]
[ 1 0 0 ] [ d e f ] = [ a b c ]
[ 0 0 1 ] [ g h i ]   [ g h i ]
Recall now the isomorphism established in the last section:

{F} ≅ GLn(C)/U

Using the Bruhat Decomposition, we can derive a new isomorphism:

{F} ≅ GLn(C)/U ≅ ⨆_{σ∈Sn} UσU/U ≅ ⨆_{σ∈Sn} Uσ

By this result, we can partition the set of all complete flags into n! disjoint subsets of
GLn(C), where each piece of the partition is the set of upper triangular matrices with the columns
permuted. These disjoint pieces are a crucial part of this thesis, and we provide them with a formal
definition.

Definition 4.3. We call each disjoint set Uσ a Schubert Cell of the set of complete flags [7].
5. Stabilizer of the Grassmannian
Intuition under our belt, we are now ready to derive the Schubert Cells of the Grassmannian.
We begin by deriving the necessary stabilizer.
Definition 5.1. Given k ≤ n, the Grassmannian, denoted Gr(k, n), is the set of all k-dimensional
subspaces of Cⁿ [7]:

Gr(k, n) = {L ⊆ Cⁿ | dim(L) = k}
We will use the same group action GLn(C) ↷ Gr(k, n) given by left multiplication, and look to
find the stabilizer using a well-chosen element of Gr(k, n). The group action is once again transitive;
the proof of this follows a similar argument to the case of the complete flag but will be omitted.
We choose the following Grassmannian element to derive the stabilizer:

LE = span(e⃗1, e⃗2, . . . , e⃗k) ∈ Gr(k, n)
To find StabLE, we consider which matrices M are LE-invariant. As with the complete flag case,
the stabilizer has a very nice algebraic form.

Proposition 5.2. StabLE = {G ∈ GLn(C) | G =
[ A ∗ ]
[ 0 B ]
, A ∈ GLk(C), B ∈ GLn−k(C)}, i.e. the set of "block" matrices.
For the purpose of this thesis, we derive the stabilizer StabLE using Gr(2, 4) as a readily
understood example rather than proving the result for the general Gr(k, n); the proof would otherwise
assume a structure too similar to the proof for complete flags.
Examples 5.3. Since we elect to work in Gr(2, 4), we have k = n − k = 2. We can view LE as
both the xy-plane and a 4 by 2 matrix whose columns are e⃗1, e⃗2:

LE =
[ 1 0 ]
[ 0 1 ]
[ 0 0 ]
[ 0 0 ]

For a matrix M to multiply LE and retain the xy-plane, M must have zeroes in the last two rows
of its first two columns:

M
[ 1 0 ]   [ x w ]        [ x w ∗ ∗ ]
[ 0 1 ]   [ y z ]        [ y z ∗ ∗ ]
[ 0 0 ] = [ 0 0 ]  ⇒ M = [ 0 0 a b ]
[ 0 0 ]   [ 0 0 ]        [ 0 0 c d ]
Since M is an invertible matrix, and there are zeroes in the bottom-left "square," we need an
invertible 2 by 2 matrix in the bottom-right "square" to keep M invertible. We are left with the
following result:

StabLE = {G ∈ GL4(C) | G =
[ A ∗ ]
[ 0 B ]
, A ∈ GL2(C), B ∈ GL2(C)},

where ∗ denotes arbitrary entries in the upper-right "square."
We can now use the Orbit-Stabilizer theorem to find, this time, the quotient isomorphic to
Gr(k, n):

GLn(C)/StabLE ≅ Gr(k, n)
Remark 5.4. For ease of notation, we let StabLE = G for the rest of this thesis.
6. “Bruhat” Decomposition for the Grassmannian
Since U is a proper subset of G (upper triangular matrices are block matrices, but block
matrices are not necessarily upper triangular), partitioning GLn(C) with the quotient involving G
via the traditional Bruhat Decomposition of §4, ∪_{σ∈Sn} GσG, creates unions with nontrivial
intersections.
Let Qkn denote the quotient of permutations Sn/(Sk × Sn−k), where an element of Sk permutes
the first k components, and an element of Sn−k permutes the last n − k components. We claim the
following variant of the Bruhat Decomposition properly partitions GLn(C):

Proposition 6.1. GLn(C) = ⨆_{σ∈Qkn} UσG

First, we prove a lemma regarding "unnecessary" permutations.

Lemma 6.2. ⋃_{σ∈Sn} UσG = ⋃_{σ∈Qkn} UσG
Proof. Since the difference between the left-hand and right-hand sides is the set over which the
permutations in σG range, it suffices to show that

⋃_{σ∈Sn} σG = ⋃_{σ∈Qkn} σG

Since the above equality holds provided that, given σ ∈ Sk × Sn−k, the coset σG is the same as
the set G, it also suffices to show that [7]

⋃_{σ∈Sk×Sn−k} σG = G.

Since G = 1G, where 1 denotes the identity permutation, G ⊆ ⋃_{σ∈Sk×Sn−k} σG.

Conversely, given G ∈ G and a permutation σ ∈ Sk × Sn−k, since σ is in the product group
Sk × Sn−k, σ permutes the first k and last n − k items among themselves. Since σ multiplies a
matrix G on the left, this means σ rearranges the first k and last n − k rows of G among themselves.
Now, recall that G has the following form:

G =
[ A ∗ ]
[ 0 B ]
A ∈ GLk(C), B ∈ GLn−k(C)

If we express σ as a product of two permutations from Sk and Sn−k, i.e. σ = σ1 × σ2, we can
compute σG as follows:

σG = (σ1 × σ2)G = (σ1 × σ2)
[ A ∗ ]   [ σ1A  σ1∗ ]
[ 0 B ] = [ σ2·0 σ2B ]

Since A and B are invertible matrices, σ1A = A′ and σ2B = B′, where A′ ∈ GLk(C) and
B′ ∈ GLn−k(C). Since the entries ∗ are arbitrary, so is their permutation σ1∗. Since permuting
a 0 matrix recovers the 0 matrix, σ2·0 = 0. This means

[ σ1A  σ1∗ ]   [ A′ ∗ ]
[ σ2·0 σ2B ] = [ 0 B′ ] ∈ G,

which means ⋃_{σ∈Sk×Sn−k} σG ⊆ G, and thus

⋃_{σ∈Sk×Sn−k} σG = G. □
Now, we can prove Proposition 6.1.

Proof. (Proposition 6.1) The containment ⋃_{σ∈Qkn} UσG ⊆ GLn(C) is obvious, since given a matrix
UσG, we know that each of U, σ, and G is invertible, and a product of invertible matrices is
invertible.

The reverse containment can be shown using algebra. Recall the original Bruhat Decomposition
of GLn(C):

GLn(C) = ⋃_{σ∈Sn} UσU

Since upper triangular matrices are block matrices, we have that U ⊆ G. Now, using the lemma
we just proved, we can state the following:

⋃_{σ∈Sn} UσU ⊆ ⋃_{σ∈Sn} UσG = ⋃_{σ∈Qkn} UσG

This means that GLn(C) ⊆ ⋃_{σ∈Qkn} UσG, thus proving the necessary equality. □
Remark 6.3. Since our proof does not show that the union of GLn(C) using this decomposition
is necessarily disjoint, we credit the fact that we have a disjoint union to William Fulton, who
provides this fact in his text, Young Tableaux, through an example on page 147 [9]. Note that the
example in this text involves using a completely different means of expressing Schubert Cells.
Remark 6.4. We note the cardinality of our quotient of permutations:

|Qkn| = n!/(k!(n − k)!) = (n choose k).
Now that we are only working with a quotient of Sn, we will want to work with one particular
representative of each coset. Before picking our favorite representative, we need to make precise the
equivalence relation among the permutations, a corollary of having just established that a per-
mutation that permutes the first k rows of G among themselves, and the last n − k rows among
themselves, is equivalent to the identity. We state and prove the equivalence relation in our quotient
Qkn.
Corollary 6.5. σ ∼ σ′ ⇐⇒ σ and σ′ have the same window notation after shuffling the first k
and last n − k positions.

Proof. First, assume σ ∼ σ′. That means σ and σ′ are in the same coset, so there exists a
permutation π ∈ (Sk × Sn−k),

π = (a1 a2 . . . ak ak+1 . . . an),  a1, . . . , ak ∈ {1, . . . , k},  ak+1, . . . , an ∈ {k + 1, . . . , n},

such that σ = σ′π. Since π ∈ (Sk × Sn−k), we know that π permutes the first k and last n − k
entries of σ′. Since σ and σ′ differ only by a permutation that shuffles the first k and last n − k
columns, σ has the same window notation as σ′ after the first k and last n − k entries of σ′ undergo
the permutation given by π.

Now, assume the converse, namely that σ and σ′ have the same window notation after shuffling
the first k and last n − k entries in one, say σ′. Then the two permutations differ by a permutation
factor that shuffles the first k and last n − k columns, which means σ = σ′π for some π ∈ (Sk × Sn−k).
Since σ and σ′ differ by a factor of some element of (Sk × Sn−k), we have that σ and σ′ are in the
same coset, and thus σ ∼ σ′. □

We can now say that for the coset [σ] ∈ Qkn, we will use the element that has the first k and last
n − k entries in ascending order to represent all of [σ]:

σ = (a1 a2 . . . ak | ak+1 . . . an),  a1 < a2 < · · · < ak,  ak+1 < · · · < an.
Remark 6.6. For this thesis, we visually separate the first k and last n − k entries to remind the
reader of the ordering among the n indices ai.
7. Orbit-Stabilizer and Schubert Cells of the Grassmannian
Having split GLn(C) into disjoint unions involving the stabilizer of the Grassmannian, we now
use the Orbit-Stabilizer theorem to express the Schubert Cells of the Grassmannian. Recall the
stabilizer of the Grassmannian:

StabLE = G = {G ∈ GLn(C) | G =
[ A ∗ ]
[ 0 B ]
, A ∈ GLk(C), B ∈ GLn−k(C)}

Since our group action was transitive, by the Orbit-Stabilizer Theorem,

Gr(k, n) ≅ GLn(C)/StabLE = GLn(C)/G = ⨆_{σ∈Qkn} UσG/G

Since in each coset of UσG/G every matrix G ∈ G acts as the identity, we can also establish the
following isomorphism:

⨆_{σ∈Qkn} UσG/G ≅ ⨆_{σ∈Qkn} Uσ
Definition 7.1. We define the Schubert Cells of the Grassmannian to be each partition Uσ, where
σ ranges over Qkn.
Remark 7.2. Whereas there were n! Schubert Cells of {F}, there are only (n choose k) Schubert
Cells of Gr(k, n), since the permutations only range over the quotient Qkn.
The Schubert Cells of the Grassmannian are cosets, and as we did with the quotient Qkn, we need
to find an element of each coset that allows for the simplest computations. We will choose these
coset representatives by taking an arbitrary upper triangular matrix U, and then constructing a
particular matrix G that reduces Uσ as much as possible. Since every G is equivalent to the identity
matrix in Uσ by the isomorphism above, the matrix UσG is a member of the coset Uσ.
We omit the proof of which matrix is the most reduced for each Uσ for Gr(k, n) in favor of the
more illustrative process of deriving the Schubert Cells for Gr(2, 3), and generalizing the result.
Since k = 2 and n − k = 1, our quotient is S3/(S2 × S1) = Q23. As we computed in a previous
example, we have the following choices of coset representatives:

Q23 = {[σ1] ∼ (1 2 | 3), [σ2] ∼ (1 3 | 2), [σ3] ∼ (2 3 | 1)}
We now take the matrix representations of each coset representative:

[σ1] ∼ (1 2 | 3) →
[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]

[σ2] ∼ (1 3 | 2) →
[ 1 0 0 ]
[ 0 0 1 ]
[ 0 1 0 ]

[σ3] ∼ (2 3 | 1) →
[ 0 0 1 ]
[ 1 0 0 ]
[ 0 1 0 ]
Proposition 7.3. Given an arbitrary U ∈ U, the following matrices are the most reduced elements
of their respective Schubert Cells, i.e. they have 1's in all non-arbitrary nonzero entries, and more
0's than any other matrix in the cell:

[ 1 0 0 ]
[ 0 1 0 ] ∈ Uσ1
[ 0 0 1 ]

[ 1 0 0 ]
[ 0 ∗ 1 ] ∈ Uσ2
[ 0 1 0 ]

[ ∗ ∗ 1 ]
[ 1 0 0 ] ∈ Uσ3
[ 0 1 0 ]
Remark 7.4. In the perspective of the disjoint union,

(Uσ1 ⊔ Uσ2 ⊔ Uσ3) ≅ Gr(2, 3).
Proof. Given U ∈ U:

U =
[ a b d ]
[ 0 c e ]
[ 0 0 f ],   a, c, f ≠ 0

We construct G ∈ G:

G =
[ x w r ]
[ y z s ]
[ 0 0 t ],   where either x, z, t ≠ 0, or y, w, t ≠ 0 and xz − yw ≠ 0

The choice of which entries are nonzero in G actually does not matter, since the permutation
that switches the first two rows and the identity are equivalent in Q23. For ease of computation,
we choose x, z, t ≠ 0. We construct the G that reduces Uσ as much as possible by letting the
entries of G act as variables, and choosing each entry accordingly. Since σ1 is the identity, while
σ2 and σ3 permute the rows of G by left multiplication, we have:

Uσ1G = U
[ x w r ]   [ ax + by  aw + bz  ar + bs + dt ]
[ y z s ] = [   cy       cz        cs + et   ]
[ 0 0 t ]   [    0        0          ft      ]

Uσ2G = U
[ x w r ]   [ ax + dy  aw + dz  ar + ds + bt ]
[ 0 0 t ] = [   ey       ez        es + ct   ]
[ y z s ]   [   fy       fz          fs      ]

Uσ3G = U
[ 0 0 t ]   [ bx + dy  bw + dz  br + ds + at ]
[ x w r ] = [ cx + ey  cw + ez     cr + es   ]
[ y z s ]   [   fy       fz          fs      ]
To find each reduced form, i.e. to reduce as many entries to 0 or 1 as possible, we start with Uσ1G:

Uσ1G =
[ ax + by  aw + bz  ar + bs + dt ]
[   cy       cz        cs + et   ]
[    0        0          ft      ]

Since f and t are nonzero, ft must be nonzero. The same is true of cz, which then forces ax + by
to be nonzero. The first clear selection for the variable entries of G is t = f⁻¹:

∼
[ ax + by  aw + bz  ar + bs + df⁻¹ ]
[   cy       cz        cs + ef⁻¹   ]
[    0        0            1       ]

Since cz ≠ 0, we also need z = c⁻¹:

∼
[ ax + by  aw + bc⁻¹  ar + bs + df⁻¹ ]
[   cy        1          cs + ef⁻¹   ]
[    0        0              1       ]

Now, we have cy in an entry we wish to make 0, so we can simply take y = 0:

∼
[ ax  aw + bc⁻¹  ar + bs + df⁻¹ ]
[  0      1         cs + ef⁻¹   ]
[  0      0             1       ]

These substitutions leave us with ax in a nonzero position, so we need x = a⁻¹:

∼
[ 1  aw + bc⁻¹  ar + bs + df⁻¹ ]
[ 0      1         cs + ef⁻¹   ]
[ 0      0             1       ]

We are trying to make the rest of the entries 0; we can start by setting w = −bc⁻¹/a and
s = −ef⁻¹/c, since

a(−bc⁻¹/a) + bc⁻¹ = 0,   c(−ef⁻¹/c) + ef⁻¹ = 0:

∼
[ 1  0  ar + bs + df⁻¹ ]
[ 0  1        0        ]
[ 0  0        1        ]

For ease of notation, s was not substituted. We can complete the reduction by setting
r = −(bs + df⁻¹)/a, since

a(−(bs + df⁻¹)/a) + bs + df⁻¹ = 0:

∼
[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]
We now reduce the next product, Uσ2G:

Uσ2G =
[ ax + dy  aw + dz  ar + ds + bt ]
[   ey       ez        es + ct   ]
[   fy       fz          fs      ]

We have that ft ≠ 0, since again f and t are nonzero. Although ax ≠ 0 by the same logic, the
fact that ax + dy ≠ 0 is not obvious. As we begin to construct G, however, the necessity that
ax + dy ≠ 0 becomes clear. Immediately, we see that we need z = f⁻¹ and y = s = 0:

∼
[ ax  aw + df⁻¹  ar + bt ]
[  0    ef⁻¹       ct    ]
[  0      1         0    ]

Since we need what are now ax = ct = 1, we can set x = a⁻¹, t = c⁻¹:

∼
[ 1  aw + df⁻¹  ar + bc⁻¹ ]
[ 0    ef⁻¹         1     ]
[ 0      1          0     ]

The only variables left are w and r, and we can set w = −df⁻¹/a, r = −bc⁻¹/a, canceling the
entries in the top row:

∼
[ 1   0    0 ]
[ 0  ef⁻¹  1 ]
[ 0   1    0 ]

Since we have no choice but to set x = a⁻¹, z = f⁻¹, t = c⁻¹, this is the most we can reduce
Uσ2G. Since e, f ∈ C are arbitrary, the middle entry is left arbitrary in the reduced form, so we
have the following result:

∼
[ 1 0 0 ]
[ 0 ∗ 1 ]
[ 0 1 0 ]
Finally, we reduce Uσ3G, where fz is necessarily nonzero. This forces z = f⁻¹:

Uσ3G =
[ bx + dy  bw + dz  br + ds + at ]       [ bx + dy  bw + df⁻¹  br + ds + at ]
[ cx + ey  cw + ez     cr + es   ]   ∼   [ cx + ey  cw + ef⁻¹     cr + es   ]
[   fy       fz          fs      ]       [   fy        1            fs      ]

We set s = y = 0 to get 0's in the last row:

∼
[ bx  bw + df⁻¹  br + at ]
[ cx  cw + ef⁻¹    cr    ]
[  0      1         0    ]

Next, we need x = c⁻¹ and r = 0:

∼
[ bc⁻¹  bw + df⁻¹  at ]
[  1    cw + ef⁻¹   0 ]
[  0        1       0 ]

Clearly, we need t = a⁻¹. Yet, since df⁻¹ and ef⁻¹ are arbitrary complex numbers, there is no
choice of w that reduces both bw + df⁻¹ and cw + ef⁻¹ to 0, so for purposes of convention, we
choose w = −ef⁻¹/c:

∼
[ bc⁻¹  bw + df⁻¹  1 ]       [ ∗ ∗ 1 ]
[  1        0      0 ]   =   [ 1 0 0 ]
[  0        1      0 ]       [ 0 1 0 ] □
Remark 7.5. Note that since we are constructing a matrix G, and each Uσ ≅ UσG/G, by choosing
variables at each step in the proof we do not obtain equality but rather an equivalence, as we
are finding new matrices in the same coset Uσ. This is why we state "∼" rather than "=" after we
assign variables for our constructed G.
Looking now at the general case of Gr(k, n), with (n choose k) permutations, we derive (n choose k)
distinct reduced matrices that correspond to (n choose k) disjoint Schubert Cells. We state our
result formally:
Theorem 7.6. Given σ ∈ Qkn,

σ = (a1 a2 . . . ak | ak+1 . . . an),  a1 < a2 < · · · < ak,  ak+1 < · · · < an,

and U ∈ U, the most reduced choice of UσG has the following schematic form (rows of the two
kinds interleave according to the window of σ):

[ ∗ ∗ . . . ∗ 1 0 . . . 0 ]
[ ⋮ ⋮      ⋮ ⋮ ⋮      ⋮ ]
[ 1 0 . . . 0 0 0 . . . 0 ]
[ 0 ∗ . . . ∗ 0 0 . . . 0 ]
[ ⋮ ⋮      ⋮ ⋮ ⋮      ⋮ ]
[ 0 1 . . . 0 0 0 . . . 0 ]
[ ⋮ ⋮      ⋮ ⋮ ⋮      ⋮ ]
[ 0 0 . . . ∗ 0 0 . . . 0 ]
[ 0 0 . . . 1 0 0 . . . 0 ]

The position of the 1's in UσG corresponds to the permutation given by the window notation in
the following manner: for each index 1 ≤ i ≤ n, there is a 1 in the aith row and ith column.
Furthermore, every entry in the aith row in a column j > i, and every entry in the ith column in
a row r > ai, is 0; the entries not forced to be 0 or 1 by these conditions are arbitrary.
Examples 7.7. Let n = 5, k = 3, and π = (1 3 5 | 2 4). This means our "double coset"
representative UπG has the following form, where the 1 corresponding to each window entry ai
sits in the aith row and ith column:

Uπ ∼ UπG =
[ 1 0 0 0 0 ]
[ 0 ∗ ∗ 1 0 ]
[ 0 1 0 0 0 ]
[ 0 0 ∗ 0 1 ]
[ 0 0 1 0 0 ]

Note that the equivalence notation Uπ ∼ UπG is used because every matrix in Uπ is equivalent
to the matrix UπG.
8. Young Tableau Diagrams and Indexing Schubert Cells
As we continue to build the foundations for understanding how to compute intersections of
Schubert Varieties in H*(Gr(k, n)), we now transition from algebra to combinatorics, which we
use to index our Schubert Cells and, ultimately, our Schubert Varieties. The transition is neces-
sary because operations on these indices are what calculate intersections of Schubert Varieties in
H*(Gr(k, n)).
We now define our combinatorial object of choice.
Definition 8.1 (Young Tableau). [7]
Given k < n, and a weakly ordered sequence of nonnegative integers λ = (λ1, λ2, . . . , λn−k),
where λ1 ≥ · · · ≥ λn−k, and λ1 ≤ k, the Young Tableau diagram corresponding to λ is a diagram
of contiguous 1 by 1 boxes assembled in rows such that there are λi boxes in the ith row of the
diagram, and each diagram fits inside of a larger, (height) n − k by (length) k box.
Remark 8.2. For this thesis, weakly ordered sequences and Young Tableau diagrams will be in-
terchangeable.
We fix one piece of notation regarding the sets of Young Tableau diagrams:
{λ | λ fits inside a box of height r, length m} = r × m
Examples 8.3. Let λ = (5, 5, 4, 2, 1). The Young Tableau diagram corresponding to λ has rows
of 5, 5, 4, 2 and 1 boxes:

□ □ □ □ □
□ □ □ □ □
□ □ □ □
□ □
□
To index each Schubert Cell Uσ using Young Tableau diagrams, we need to establish a bijection
between these two objects. There is in fact a natural correspondence between the coset representa-
tives UσG and diagrams that fit in an n − k by k box, determined by the placement of the arbitrary
entries ∗. The diagram is obtained by pushing the rows containing ∗'s together vertically, and then
sliding the ∗'s in each row all the way to the left.
We provide a familiar example to demonstrate.
Examples 8.4. Recall from the last example in §7:

Uπ ∼ UπG =
[ 1 0 0 0 0 ]
[ 0 ∗ ∗ 1 0 ]
[ 0 1 0 0 0 ]
[ 0 0 ∗ 0 1 ]
[ 0 0 1 0 0 ]

To find which diagram λ ⊆ 2 × 3 indexes the Schubert Cell Uπ, we vertically "push" each row
of arbitrary entries ∗ together, and "slide" them until each row is completely left adjusted. This
algorithm creates a diagram with a row of two boxes above a row of one box, as seen below:

Uπ → λ = (2, 1) =
□ □
□
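In other words, the row lengths of the indexing diagram are simply the counts of ∗ entries in the rows of UσG, read top to bottom. A minimal sketch of this reading (the string encoding of the matrix is our own, hypothetical device):

```python
# Rows of U_pi G from Example 8.4, with "*" marking arbitrary entries.
M = [["1", "0", "0", "0", "0"],
     ["0", "*", "*", "1", "0"],
     ["0", "1", "0", "0", "0"],
     ["0", "0", "*", "0", "1"],
     ["0", "0", "1", "0", "0"]]
lam = [row.count("*") for row in M if "*" in row]
print(lam)   # [2, 1], i.e. the indexing diagram lambda = (2, 1)
```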
Remark 8.5. When establishing correspondences with Young Tableau diagrams, slight disagree-
ments often occur about the convention for forming the Young Tableau diagram corresponding to
the matrices UσG of Schubert Cells. Frequently, the convention is to deal with k by n − k Tableau
diagrams, where λ = (λ1, . . . , λk). This is because the Schubert Cells being indexed are usually
indexed by counting the codimension of their corresponding Schubert Variety, or the size of the
part of the Grassmannian that lies outside the closure of some cell. Looking at how each permutation
σ orients the arbitrary entries in UσG, we noticed a much more natural correspondence with
n − k by k diagrams. This means our sequences λ are instead indexed up to n − k. Taking
partitions this way, and working instead with n − k by k boxes, means that we are instead counting
the dimension, or size, of the Schubert Variety itself that we are indexing. Later in this paper, when
we discuss Knutson and Tao's Puzzle diagrams for computing Littlewood-Richardson Coefficients,
we will have to switch to the "usual" convention.
To prove the correspondence between Young Tableau diagrams and Schubert Cells, or matrices
UσG, we introduce a formula that computes the corresponding diagram λ ⊆ n − k × k for each
UσG. First, we present some unfamiliar notation. Let σ = (a1 . . . ak | ak+1 . . . an), where
a1 < a2 < · · · < ak, ak+1 < · · · < an, and define a0 = 0. We denote:

Rσ = {1, 2, . . . , n} \ {a1, a2, . . . , ak} = {r1, r2, . . . , rn−k} = {integers from 1 to n excluding a1, . . . , ak}

Aσi = {ai−1, ai−1 + 1, . . . , ai} = {integers between ai−1 and ai}, where i ≤ k

The sets Rσ and Aσi depend on the permutation σ. Note the following cardinality:

|Aσi| = ai − ai−1 + 1
We now prove a small lemma regarding the significance of the set Rσ.

Lemma 8.6. The elements of Rσ are precisely the indices of the rows of UσG that have arbitrary
entries ∗.

Proof. Given σ ∈ Qkn,

σ = (a1 . . . ak | ak+1 . . . an),  a1 < a2 < · · · < ak,  ak+1 < · · · < an,

by definition, the set Rσ consists of the integers ai in the window of σ such that i > k. Since our
coset representative UσG is either the identity matrix or somewhere contains arbitrary entries ∗, it
suffices to show those entries do not occur in the rows ai where i ≤ k.

Given an index i such that i ≤ k, by construction of UσG, the 1 in the aith row occurs in the
ith column, and every entry to the right of that 1 in the aith row is 0:

( ∗ ∗ . . . 1 0 . . . 0 )

Now, either a1 = 1, and there are no entries to the left of the 1, or a1 > 1. In this case, we know
there are also 1's in the ajth row, jth column for every index j < i. Since j < i, we know aj < ai, so
for each column to the left of the ith column, there is a 1 in some row above the aith row. Since every
entry below a 1 in the same column is 0, and now every entry to the left of the 1 in the aith row
has a 1 above it, that entry must be 0. Thus, every entry in the aith row is either 1 or 0. □
We now propose the following formula as a valid tool for readily counting the arbitrary entries in
each UσG; it will ultimately help prove the bijection between Schubert Cells Uσ and diagrams
in n − k × k.

Proposition 8.7. For λ = (λ1, λ2, . . . , λn−k), each λj = Σ_{i=1}^{k} (k − i + 1)|{rj} ∩ Aσi|, where
the set {rj} is the singleton set composed of the jth element of the set Rσ.

Remark 8.8. Akin to an indicator function, |{rj} ∩ Aσi| is either 1 or 0 depending on whether or
not the element rj is in that particular Aσi.
This claim carries a lot of the unfamiliar notation just introduced, so before the formal proof, we
carry out a computation to demonstrate what the formula means and how it computes the diagram
corresponding to a reduced matrix.
Examples 8.9. We will look at how the formula computes λ = (2, 1) from the ongoing previous
example, where k = 3, n = 5, our Young Tableau diagram λ ⊆ 2 × 3, and π = (1 3 5 | 2 4).

Uπ ∼
[ 1 0 0 0 0 ]
[ 0 ∗ ∗ 1 0 ]
[ 0 1 0 0 0 ]
[ 0 0 ∗ 0 1 ]
[ 0 0 1 0 0 ]

Applying the constructions just defined, we have the following sets:

Rπ = {2, 4},  Aπ1 = {0, 1},  Aπ2 = {1, 2, 3},  Aπ3 = {3, 4, 5}

Note how Rπ is a list of the rows of the coset representative that have arbitrary entries. We now
use the formula we just introduced to compute each component of the partition λ that determines
our Young Tableau:

λ1 = Σ_{i=1}^{3} (3 − i + 1)|{r1} ∩ Aπi| = 3|{2} ∩ Aπ1| + 2|{2} ∩ Aπ2| + 1|{2} ∩ Aπ3| = 3(0) + 2(1) + 1(0) = 2

λ2 = Σ_{i=1}^{3} (3 − i + 1)|{r2} ∩ Aπi| = 3|{4} ∩ Aπ1| + 2|{4} ∩ Aπ2| + 1|{4} ∩ Aπ3| = 3(0) + 2(0) + 1(1) = 1

λ = (λ1, λ2) = (2, 1) =
□ □
□
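The computation in Example 8.9 is mechanical enough to automate. A short sketch (the function name is ours, not the thesis's) implementing the formula of Proposition 8.7:

```python
def schubert_partition(window, k, n):
    """lambda_j = sum_{i=1}^{k} (k - i + 1) * |{r_j} ∩ A_{sigma_i}|."""
    a = (0,) + tuple(window[:k])                      # prepend a_0 = 0
    R = [r for r in range(1, n + 1) if r not in a]    # rows with * entries
    A = [set(range(a[i - 1], a[i] + 1)) for i in range(1, k + 1)]
    return [sum((k - i + 1) * (r in A[i - 1]) for i in range(1, k + 1))
            for r in R]

print(schubert_partition((1, 3, 5, 2, 4), 3, 5))      # [2, 1]
```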
We now formally prove our claim about the computational formula for λj.

Proof. (Proposition 8.7)
Let σ ∈ Qkn. Since the shape of the indexing diagram λ is determined by the placement of the
arbitrary entries ∗ in the coset representative UσG, we determine the number of entries ∗ in each
row. By Lemma 8.6, either UσG = I, and the corresponding diagram λ is empty, or the rows of UσG
with arbitrary entries are given by Rσ.

Given an index i ∈ {1, . . . , k}, looking at the entry ai, we know there are ai − ai−1 − 1 rows
with arbitrary entries between the ai−1th and aith rows. Since the first i − 1 columns have 1's
above the rows in question, and the last n − k columns contain either the 1 of such a row itself, a 1
above, or entries to the right of that 1 (which are 0), there are k − (i − 1) = k − i + 1 arbitrary
entries in each of these rows, so we need to append an (ai − ai−1 − 1) by (k − i + 1) box to our
diagram.

Letting ai−1 < m < ai range, by Lemma 8.6, we have that each m = rj for some index j. Since
ai−1 < rj < ai, by the definition of Aσi, we have rj ∈ Aσi. We compute each λj in question using
the summation:

λj = Σ_{i=1}^{k} (k − i + 1)|{rj} ∩ Aσi|
   = (k − 1 + 1)|{rj} ∩ Aσ1| + (k − 2 + 1)|{rj} ∩ Aσ2| + · · · + (k − i + 1)|{rj} ∩ Aσi| + · · · + (k − k + 1)|{rj} ∩ Aσk|
   = 0 + 0 + · · · + (k − (i − 1)) + 0 + · · · + 0 = k − (i − 1).

Since there are ai − ai−1 − 1 values of m that satisfy the range, we have ai − ai−1 − 1 corresponding
indices j, so we have appended a box with the appropriate dimensions to λ, i.e. ai − ai−1 − 1
components λj = k − (i − 1). □
Remark 8.10. The formula used in the proof above is original to this thesis.
Using this computational formula, we can now prove the bijection between Schubert Cells and
n − k by k diagrams. Since each Schubert Cell is in one-to-one correspondence with the permutations
σ ∈ Qkn, it suffices instead to prove a bijection between Qkn and n − k × k.

Given σ ∈ Qkn, define φ : Qkn → n − k × k as follows:

φ(σ) = λ = (λ1, . . . , λn−k),

such that λj = Σ_{i=1}^{k} (k − i + 1)|{rj} ∩ Aσi| for every index j ≤ n − k
Proposition 8.11. φ is a bijection.

Proof. Given two permutations σ, π ∈ Qkn, where σ ≠ π:

σ = (a1 . . . ak | ak+1 . . . an),  π = (b1 . . . bk | bk+1 . . . bn)

We first show that φ is injective by proving φ(σ) ≠ φ(π). Let φ(σ) = λ and φ(π) = µ. Since
σ ≠ π, for at least one index j ≤ k we have that the entries aj ≠ bj. Consider the smallest such
index, i. Since ai ≠ bi, it follows that Aσi ≠ Aπi. Then either |Aσi| ≠ |Aπi|, so there are a different
number of rows with (k − i + 1) boxes in λ and µ, necessarily making the partitions different, or
|Aσi| = |Aπi|. In this case, since we still have Aσi ≠ Aπi, it must be that ai−1 ≠ bi−1, but then there
is an index smaller than i where aj ≠ bj, which is a contradiction. This means that with ai ≠ bi,
we must have |Aσi| ≠ |Aπi|, so there are in fact a different number of rows with (k − i + 1)
boxes, which means the diagrams λ and µ are not the same.

To prove φ is surjective, note that there are (n choose k) diagrams that fit in an n − k by k box [13], so

|Qkn| = (n choose k) = |{λ | λ ⊆ n − k × k}|.

Since the cardinality of the two sets is the same, and φ is injective, the mapping must also be
surjective. Thus, φ is a bijection between Qkn and n − k by k diagrams. □
9. Schubert Varieties and the cohomology ring
In this section, we (finally) discuss the geometry of the Schubert Varieties of Gr(k, n), and
introduce the cohomology ring H*(Gr(k, n)) so that we may use the Young Tableau diagrams that
index our Schubert Cells (or varieties) to determine triple intersections of Schubert Varieties in §10.
To obtain Schubert Varieties, we look at zero-sets in Cⁿ that contain their respective Schubert
Cells. We first require a formal definition of "zero-set."
Definition 9.1. Given a set of polynomials S = {f1, f2, . . . } ⊆ C[x1, x2, . . . , xn], the variety
corresponding to the polynomials in S, denoted V(S), is the following set:

V(S) = {p ∈ Cⁿ | f(p) = 0 ∀ f ∈ S}
We need one more definition to understand the link between Schubert Varieties and
Schubert Cells.

Definition 9.2 (Zariski Closure). Given a subspace X ⊆ Cⁿ, we define its Zariski Closure, denoted
X̄, to be the variety V such that X ⊆ V, and for any variety W with X ⊆ W, we have X̄ ⊆ W. If
X̄ = X, we say X is Zariski closed, while if X ⊂ X̄, we say X is Zariski open.
Remark 9.3. The “Zariski Closure” refers to a particular topology known as the Zariski Topology,
which counts closed sets as varieties. Although crucial to the field of algebraic geometry, we do not
explore the Zariski Topology in depth for this thesis.
We now define Schubert Varieties [7].

Definition 9.4. We define the Schubert Variety corresponding to the permutation σ (or equiva-
lently indexed by λ) to be the Zariski Closure of the Schubert Cell Uσ, denoted Ūσ.

In general, given a Schubert Cell Uσ, the Schubert Variety Ūσ ≠ Uσ. This is to say, Schubert
Cells are not typically Zariski closed. The following theorem formalizes this statement [7].

Theorem 9.5. Each Schubert Cell of Gr(k, n) is isomorphic to the intersection of a Zariski open
and a Zariski closed subspace.
Theorem 9.5 offers a correspondence between the algebra we use to define Schubert Cells and
the geometry of Gr(k, n). We thus need to formally define a map between geometry and
algebra. There is in fact a natural isomorphism ϕ between n by n matrices and points in C^{n²}:

M =
[ a11 a12 . . . a1n ]
[ a21 a22 . . . a2n ]
[  ⋮   ⋮  ⋱    ⋮ ]
[ an1 an2 . . . ann ],   aij ∈ C

ϕ(M) = (a11, a12, . . . , ann) ∈ C^{n²}.
The map ϕ “coordinatizes” matrices, i.e. takes an arbitrary n by n matrix to a point in space.
Proving this theorem requires implementing tools inaccessible given the scope of this thesis, so
we instead demonstrate the phenomenon with the following example [7].
Examples 9.6. Let n = 3 and k = 2, so that we are working in Gr(2, 3). Now, let σ = (2 3 | 1),
which gives for the Schubert Cell Uσ the representative

Uσ ∼ UσG =
[ ∗ ∗ 1 ]
[ 1 0 0 ]
[ 0 1 0 ].

Note that the coset representative UσG is in fact a set of matrices, so ϕ(UσG) maps to a set of
points in C^{3²} = C⁹. We are not quite ready to apply this map, however.

Although the coset representatives we derived have 1's in the nonzero entries, these matrices are
equivalent to a matrix with any nonzero entry where there are currently 1's. This is because, given
three arbitrary nonzero complex numbers x, y and z, there exists a matrix G′ ∈ G such that

UσG ∼ UσGG′ =
[ ∗ ∗ x ]
[ y 0 0 ]
[ 0 z 0 ].

The equivalence is retained because G′ acts as the identity in Uσ.
We now are ready to compute ϕ(UσG):

ϕ(UσG) = ϕ(
[ ∗ ∗ 1 ]
[ 1 0 0 ]
[ 0 1 0 ]
) ∼ ϕ(
[ ∗ ∗ x ]
[ y 0 0 ]
[ 0 z 0 ]
) = {(a11, a12, . . . , a33) ∈ C⁹ | a13, a21, a32 ≠ 0, a22 = a23 = a31 = a33 = 0}

Note that this set denotes points in C⁹ subject to constraints that certain components aij are
zero and certain components aij are nonzero. Treating the entries aij of matrices in Uσ as
monomials, this set describes the intersection of the variety of a22, a23, a31, a33 with everything
not in the variety of a13, a21, a32. We reexpress our set accordingly:

Uσ ≅ V(a22, a23, a31, a33) ∩ (C⁹ \ V(a13, a21, a32))
We are now ready to discuss the cohomology ring.

Definition 9.7. [7] The cohomology ring of the Grassmannian, H*(Gr(k, n)), is a commutative
ring with unity whose basis elements correspond to the Schubert Varieties of Gr(k, n).

Definition 9.8. [7]
Given a Schubert Cell Uσ, its Schubert Variety Ūσ and index λ, we call the representative of
Ūσ in H*(Gr(k, n)) its Schubert class, denoted σλ.
The cohomology ring operation with which this thesis is concerned is the cup product. The cup
product of two Schubert classes σλ and σµ is denoted σλ · σµ, and is given by the following familiar
formula:

σλ · σµ = Σ_{ν∈Pλ,µ} c^ν_{λ,µ} σν

Pλ,µ = {ν = (ν1, . . . , νn−k) | Σ_{j=1}^{n−k} νj = Σ_{j=1}^{n−k} λj + Σ_{j=1}^{n−k} µj}

c^ν_{λ,µ} ∈ Z≥0

The cup product computes intersections among the varieties represented by σλ, σµ and σν for
each ν ∈ Pλ,µ [7]. In the cup product appear the nonnegative integers c^ν_{λ,µ}, which depend on the
diagrams ν. Computing these integers, known as Littlewood-Richardson Coefficients, requires the
rather involved processes that are fully explained in the next section.
10. Computing Littlewood-Richardson Coefficients
To compute the Littlewood-Richardson coefficients c^ν_{λ,µ}, we take the approach outlined in
Knutson and Tao's paper published in 2001, known as the "Puzzle Rule" [15].
Remark 10.1. This is the point in this paper where we momentarily switch conventions and instead
work with diagrams in k by n − k boxes. Although we use the k by n − k convention in this section,
and in general for making computations related to the Puzzle Rule, we retain the n − k by k
convention when referring to Schubert Cells, as there is a natural link between diagrams of those
particular dimensions and the arbitrary entries in a matrix UσG.
Since the Puzzle Rule requires using binary strings rather than Young Tableau diagrams to
represent partitions, our first task is to convert our diagrams into binary strings. Fix k and n
so that we are working with diagrams that fit in a k by n − k box. The "conversion" process is
described as follows: beginning in the upper right corner of the k by n − k box, we assign a 0 and
shift one box to the left if there is no box in that position, or assign a 1 and shift down a box if
there is. Repeating this process of either assigning 0 and moving left, or assigning 1 and moving
down, traces the diagram within the larger box. We demonstrate with an example.
Examples 10.2. Let n = 11, k = 6, λ = (5, 4, 3, 3, 1):

λ =
□ □ □ □ □
□ □ □ □
□ □ □
□ □ □
□

For the tracing process, we begin in the upper right corner. Since there is in fact a box there, we
trace down one box and append a 1 to our string. Now, there is no box in our current position,
so we trace left and append a 0 after the first 1. We then trace down, and then to the
left, appending 1 then 0, giving so far 1010. Next, we trace down two more boxes, giving us 101011;
left two boxes, giving 10101100; and then down, left and finally down again. The final string is
10101100101, i.e.

λ = (5, 4, 3, 3, 1) → 10101100101
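The tracing algorithm is straightforward to implement; a minimal sketch (our own helper, not from the thesis) that reproduces the string above:

```python
def diagram_to_string(lam, k, n):
    """Trace lam inside a k-by-(n-k) box from the upper-right corner:
    append 1 and move down if a box occupies the current position,
    otherwise append 0 and move left."""
    lam = list(lam) + [0] * (k - len(lam))    # pad to k rows
    row, col, s = 0, n - k, ""
    for _ in range(n):                        # k down-steps + (n-k) left-steps
        if row < k and lam[row] >= col:       # a box (or the left wall) is here
            s += "1"; row += 1
        else:
            s += "0"; col -= 1
    return s

print(diagram_to_string((5, 4, 3, 3, 1), 6, 11))   # 10101100101
```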
Now, we have to establish a one-to-one correspondence between strings and Young Tableau
diagrams to ensure there is in fact one string for every partition.

Proposition 10.3. There is a bijection ψ : {s | s is a binary string with k ones and n − k zeros} →
k × n − k, given by the rule ψ(s) = λ = (λ1, . . . , λk), where for each index j ≤ k,
λj = (n − k) − (the number of 0's before the jth 1).

Proof. First, we show ψ is injective. Given two strings of k ones and n − k zeros, say s1 and s2,
where s1 ≠ s2, let ψ(s1) = λ = (λ1, . . . , λk) and ψ(s2) = µ = (µ1, . . . , µk). We know that there are
a1 0's before the first 1 in s1, where 0 ≤ a1 ≤ n − k, and b1 0's before the first 1 in s2, where
0 ≤ b1 ≤ n − k. This means, by construction of λ and µ, λ1 = (n − k) − a1 and µ1 = (n − k) − b1.
We have that either a1 ≠ b1, so λ1 ≠ µ1, or a1 = b1. If a1 = b1, we know that λ1 = µ1, and we then
consider the number of 0's before the second 1. As before, denote by a2 the number of 0's before
the second 1 in s1, and denote by b2 the number of 0's before the second 1 in s2, where
a1 ≤ a2 ≤ n − k and b1 ≤ b2 ≤ n − k. Once again, either a2 ≠ b2, so λ2 ≠ µ2, or λ2 = µ2. If
λ2 = µ2, we repeat this process and check the number of 0's before the third 1. If we repeat this
process until we check the kth 1 and have ai = bi for every index, then that contradicts s1 ≠ s2.
Thus λ = (λ1, . . . , λk) ≠ (µ1, . . . , µk) = µ, so the mapping is injective.

Since there are n − k zeros and k 1's, there are (n choose k) possible strings, and since there are
(n choose k) Young Tableau diagrams that fit in a k by n − k box, the cardinality of the sets is the
same, so an injective mapping implies a bijective mapping. □
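The inverse direction ψ is just as direct to implement; a sketch of Proposition 10.3's rule (again our own helper, not from the thesis):

```python
def string_to_diagram(s, k, n):
    """psi(s): lambda_j = (n - k) - (number of 0's before the j-th 1)."""
    lam, zeros = [], 0
    for ch in s:
        if ch == "0":
            zeros += 1
        else:
            lam.append((n - k) - zeros)
    return lam

print(string_to_diagram("10101100101", 6, 11))
# [5, 4, 3, 3, 1, 0] (a trailing 0 is an empty row of the diagram)
```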
Now that there is a clear mapping from diagrams to strings, we can move on to describing the
Puzzle Rule. Let λ, µ and ν be binary strings with n − k zeros and k ones. Puzzles are constructed
as follows: for our fixed n, we begin by creating an equilateral triangle with (n + 1) vertices on
each side. This creates n edges on each side. We label each edge with a 1 or a 0 and, reading from
left to right, take the labelings on the edges as the binary strings. We write λ and µ on the upright
sides, and ν on the bottom. Then we fill in n² equilateral triangles inside the large, labeled one.
These interior triangles will also have labels on them. There are three different choices of labeling,
or three different pieces that are used to complete puzzles: a triangle may have all 0's, all 1's or,
moving counterclockwise, a 2, a 0 and a 1. The idea is to see, using only those pieces and given
the three labelings on the edges of the large triangle, how many ways the interior edges can be
labeled.
We are now ready to formally present the Puzzle Rule.
Theorem 10.4 (Puzzle Rule). The number of different valid puzzles using λ, µ and ν as the left,
right and bottom edges respectively is precisely c^ν_{λ,µ} [15].

Remark 10.5. Our version of the Puzzle Rule is slightly modified, as the original puzzles incorpo-
rated both rhombi and triangles, while we only use triangles.
Examples 10.6. Let n = 4, k = 2, λ = (2, 1), µ = (1, 0), ν = (2, 2):

λ = (2, 1) =
□ □
□

µ = (1, 0) =
□

ν = (2, 2) =
□ □
□ □

By the tracing algorithm, we have the following corresponding binary strings:

λ = 1010, µ = 0101, ν = 1100

Using λ as the labels on the left edges, µ as the labels on the right edges, and ν as the labels on
the bottom edge, there is one possible puzzle that can be completed [2]:

[Figure: the unique completed puzzle, with border labels λ = 1010 (left), µ = 0101 (right) and
ν = 1100 (bottom), and interior edges labeled by 0's, 1's and 2's.]

⇐⇒ c^ν_{λ,µ} = 1
Although these puzzles can be computed by hand, doing so can take a while for larger n’s. We
instead reserve our computations of puzzles for Sage.
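For instance, inside a Sage session the coefficient in Example 10.6 can be double-checked through Schur functions, whose products expand with the same Littlewood-Richardson coefficients (a sketch; the puzzle enumeration itself is done separately in Sage):

```python
# Run inside a Sage session.
s = SymmetricFunctions(QQ).schur()
product = s[2, 1] * s[1]
print(product)                        # s[2, 1, 1] + s[2, 2] + s[3, 1]
print(product.coefficient([2, 2]))    # 1, agreeing with the puzzle count above
```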
11. Understanding the Saturation Conjecture
We now understand what the Schubert Varieties of the Grassmannian are, and how to determine
their intersections by computing cup products and determining Littlewood-Richardson coefficients;
we are ready to formally state the theorem Knutson and Tao prove that answers Horn's Question
[7].

Theorem 11.1. There exist Hermitian matrices A, B and C, where A + B = C, such that α are
the eigenvalues of A, β are the eigenvalues of B, and γ are the eigenvalues of C, where α, β and
γ ∈ Z^k_{≥0}, if and only if, looking at the varieties

Ūσ ↔ σα, Ūπ ↔ σβ, Ūρ ↔ σγ,

we have that in the cup product σα · σβ, c^γ_{α,β} ≠ 0.
To build up to this statement, which links cohomology all the way back to eigenvalues, Knutson
and Tao first prove the Saturation Conjecture (Theorem 2.4), a key statement about cohomology.
We provide the statement of the Saturation Conjecture below for convenience:

∃ N ∈ N such that c^{Nν}_{Nλ,Nµ} ≠ 0 ⇐⇒ c^ν_{λ,µ} ≠ 0

λ = (λ1, . . . , λk), Nλ = (Nλ1, . . . , Nλk)
Note that one direction is trivial, since the statement is true for N = 1. To prove the other
direction of the Saturation Conjecture, Knutson and Tao make use of two devices, the honeycomb
model and the hive model [14]. Although understanding the honeycomb model reaches beyond the
scope of this thesis, the hive model is quite accessible, and we explore it in detail to gain the
necessary intuition for tackling the Quantum Saturation Conjecture [6].
For the hive model, we start with an equilateral triangle with m + 1 vertices along each edge, for
some m ∈ N. Then, creating vertices and connecting each vertex, we create m² equilateral triangles
inside the larger triangle. For a triangle with m = 3, there are 3 + 1 = 4 vertices on each side of
the border, and 3² = 9 small equilateral triangles within the larger original one.
We now introduce a new round of definitions to explain this “triangle construction.”
Definition 11.2. In our current construction, we assign each vertex a real number called the label
of the vertex.
Definition 11.3. Looking at the union of two equilateral triangles adjoined along either a northeast
or northwest edge, which we appropriately call a rhombus: if the sum of the labels on the obtuse
angles is greater than or equal to the sum of the labels on the acute angles, we say we have satisfied
a rhombus inequality.
We are ready to formally define this object we have developed.
Definition 11.4. A hive is a diagram where every rhombus inequality is satisfied.
Definition 11.5. We say that a hive or a border is integral if every vertex label is an integer.
To introduce a bit of notation, given a fully labeled (not necessarily hive) diagram of the form described above, we denote the set of its vertices by H. While H is just the set of vertices, which could be indexed for convenience, R^H is the space of assignments of real numbers to the vertices in H. Elements h ∈ R^H are simply vectors in R^M, where M is the number of vertices, i.e.

M = Σ_{k=1}^{m+1} k = (m + 1)(m + 2)/2.

For the example above, where m = 3, we determine M = 10, so elements of R^H are vectors in R^{10}.
We will analogously denote the set of border vertices by B, and the labelings of these border vertices with real numbers by R^B. Elements b ∈ R^B are likewise vectors in R^{3m}. We will also refer to integral hives and integral borders as belonging to Z^H and Z^B, respectively, where instead of being labeled by real numbers, vertices are labeled only by integers.
We refer to the subset C ⊂ R^H as the set consisting of all M-tuples of vertex labels that form a valid hive. According to Buch, this subset C forms a convex polyhedral cone [6].
Finally, we define a (restriction) map ρ : R^H → R^B by the following rule:

ρ(h) = b = the border labels of h.

Also according to Buch, for the border b ∈ R^B of a fully-labeled diagram, we have the following equality:

ρ^{-1}(b) ∩ C = {h ∈ R^H | h is a hive with the border b}
The set ρ−1(b) ∩ C is important enough to win its own definition.
Definition 11.6. We call the set ρ−1(b) ∩ C the hive polytope over b.
Remark 11.7. Every hive polytope forms a compact polytope [6].
We formally state a key claim used in Buch's paper [6].

Proposition 11.8. Given a labeled diagram h ∈ R^H with border ρ(h) = b,

ρ^{-1}(b) ∩ C ≠ ∅ ⟺ every rhombus inequality in h is satisfied.
The point of hives is to compute Littlewood-Richardson coefficients. The following theorem, named Theorem 1 in Buch's paper, states how hives compute c^ν_{λ,µ}.

Theorem 11.9. [6] Given λ, µ and ν ⊆ k × n − k, take a triangle with the label 0 at the top; label the jth vertex down along the northeast border with Σ_{i=1}^{j} λ_i; label the jth vertex from the bottom right with Σ_{i=1}^{k} λ_i + Σ_{i=1}^{j} µ_i; and label the jth vertex down along the northwest border with Σ_{i=1}^{j} ν_i. Then c^ν_{λ,µ} is the number of integral hives formed from this choice of b.
We now need one last definition, which follows from the statement of this theorem used to prove the Saturation Conjecture.

Definition 11.10. Given the diagrams λ, µ and ν, if the border of a hive h satisfies the construction in Theorem 11.9, we say that border, b, is defined by the triple (λ, µ, ν).

An example is necessary.
Examples 11.11. Examples of "interesting" hives are very hard to construct, so to retain simplicity, we borrow the example Buch provides after the statement of this theorem [6]. Let λ = µ = (2, 1) and ν = (3, 2, 1).

[Hive diagram omitted: the nine border vertices carry the labels of the vector b below, and the lone interior vertex, x, is momentarily undetermined.]
The border b is the vector in R^9, b = (0, 2, 3, 3, 5, 6, 6, 5, 3). We can determine the hive polytope by looking at which values of x satisfy the rhombus inequalities:
x + 2 ≥ 3 + 3
x + 3 ≥ 5 + 2
x + 6 ≥ 5 + 5
x + 5 ≥ 6 + 3
5 + 3 ≥ x + 3
5 + 6 ≥ x + 6
Assessing the inequalities, we have that x is necessarily at least 4 and at most 5. We then take the hive polytope to be {(0, 2, 3, 3, 5, 6, 6, 5, 3, x) ∈ R^{10} | x ∈ [4, 5]}, which is naturally isomorphic to the compact subset [4, 5] ⊂ R. Since there are two choices for x (namely x = 4 and x = 5) that allow for an integral hive, the theorem says that c^ν_{λ,µ} = 2.
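Since this hive polytope is cut out by six explicit inequalities, its integral points can be found by a brute-force scan. A minimal pure-Python sketch of the computation above (the search range 0-9 is an arbitrary window wide enough to contain the polytope):

# The six rhombus inequalities of Example 11.11 for the interior label x.
ineqs = [
    lambda x: x + 2 >= 3 + 3,
    lambda x: x + 3 >= 5 + 2,
    lambda x: x + 6 >= 5 + 5,
    lambda x: x + 5 >= 6 + 3,
    lambda x: 5 + 3 >= x + 3,
    lambda x: 5 + 6 >= x + 6,
]
# Integer labels completing an integral hive over b = (0,2,3,3,5,6,6,5,3):
integral = [x for x in range(10) if all(f(x) for f in ineqs)]
print(integral)  # [4, 5], recovering c^nu_{lambda,mu} = 2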
The purpose of this section is to fully internalize the development and proof of the Saturation Conjecture using the hive model, and to adapt this method to begin our exploration into proving the Quantum Saturation Conjecture. Although Buch's proof of the Saturation Conjecture requires arguments too advanced for this thesis, internalizing the proof is crucial to attempting to adapt this treatment to tackle the Quantum Saturation Conjecture, so we instead provide a rough outline of the strategy.
The argument used in proving the Saturation Conjecture consists of proving a particular sufficient condition [6]. Fix λ, µ and ν such that the border b of a hive is defined by (λ, µ, ν), and the hive polytope ρ^{-1}(b) ∩ C satisfies the condition

ρ^{-1}(b) ∩ C ∩ Z^H ≠ ∅.

This condition says that there is some integral hive contained in the hive polytope over the border defined by (λ, µ, ν).
Buch first claims that if we then scale the components of the hive polytope over b up by some N, we are still left with a nonempty polytope, i.e. if we let

Nb = the border defined by (Nλ, Nµ, Nν),

then we have the following:

ρ^{-1}(Nb) ∩ C ≠ ∅.
Intuitively, this is because, looking at the original rhombus inequalities from each h ∈ ρ^{-1}(b) ∩ C, scaling up the inequalities only results in shifting and scaling up the solution set. If we then check that the border of this scaled polytope is in Z^B, i.e. is an integral border, Buch proves we are guaranteed that the hive polytope over (Nλ, Nµ, Nν) does contain an element of Z^H, an integral hive; this is the sufficient condition needed to complete the Saturation Conjecture.

It is not immediately obvious that simply because Nb is an integral border, there is some integral hive in the set ρ^{-1}(Nb) ∩ C. This is the heart of the proof Buch provides; he argues that a particular hive he calls a "maximal" hive in the polytope ρ^{-1}(Nb) ∩ C must have all of its vertex labels expressible as Z-linear combinations of the border labels, which gives us an integral hive [6].
We provide a hive computation example that shows the Saturation Conjecture in action.
Examples 11.12. Per our last example, let λ = µ = (2, 1) and ν = (3, 2, 1). The Saturation Conjecture says that since c^ν_{λ,µ} ≠ 0, we can find some natural number N ∈ N such that c^{Nν}_{Nλ,Nµ} ≠ 0. In this example, N = 2 satisfies the Saturation Conjecture:

Nλ = Nµ = (4, 2), Nν = (6, 4, 2).
These diagrams give rise to the hive below.

[Hive diagram omitted: the border carries the scaled labels Nb = (0, 4, 6, 6, 10, 12, 12, 10, 6), with the interior vertex labeled y.]
This border creates the following rhombus inequalities where, of note, with y = Nx, the new inequalities are the original inequalities scaled by N = 2:
y + 4 ≥ 6 + 6 ⇐⇒ N(x + 2 ≥ 3 + 3)
y + 6 ≥ 10 + 4 ⇐⇒ N(x + 3 ≥ 5 + 2)
y + 12 ≥ 10 + 10 ⇐⇒ N(x + 6 ≥ 5 + 5)
y + 10 ≥ 12 + 6 ⇐⇒ N(x + 5 ≥ 6 + 3)
10 + 6 ≥ y + 6 ⇐⇒ N(5 + 3 ≥ x + 3)
10 + 12 ≥ y + 12 ⇐⇒ N(5 + 6 ≥ x + 6)
If we simultaneously evaluate the six inequalities with respect to y, we find that y ∈ [8, 10], which is the original interval stretched by N, i.e. [4N, 5N].
Remark 11.13. We noticed that, in general, scaling up the sequences λ, µ and ν stretches the solution set, which gives c^{Nν}_{Nλ,Nµ} > c^ν_{λ,µ}. The exception is when c^ν_{λ,µ} = 1; in this case, whenever c^{Nν}_{Nλ,Nµ} ≠ 0, we have always found that c^{Nν}_{Nλ,Nµ} = c^ν_{λ,µ} = 1. We are not the first to notice this trend, however. Buch mentions an unproven conjecture, Fulton's Conjecture [6], which states that if c^ν_{λ,µ} = 1 and c^{Nν}_{Nλ,Nµ} ≠ 0, it is necessarily the case that c^{Nν}_{Nλ,Nµ} = 1 as well.
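The growth noted in this remark is easy to test for the running example: the hive interval [4N, 5N] contains N + 1 integers, so we expect c^{Nν}_{Nλ,Nµ} = N + 1. A sketch, assuming a Sage session:

s = SymmetricFunctions(QQ).schur()
lam, nu = [2, 1], [3, 2, 1]
for N in range(1, 5):
    f = s([N * p for p in lam]) * s([N * p for p in lam])
    # prints N followed by c^{N nu}_{N lam, N lam}; expect 2, 3, 4, 5
    print(N, f.coefficient(Partition([N * p for p in nu])))

In particular, the coefficient never returns to 0 once it is nonzero, consistent with the Saturation Conjecture, and it exceeds 1 for every N here because the original coefficient was already 2.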
12. The Quantum Case
The goal of this thesis is to consider a quantum analogue of Horn's Question by investigating a possible solution to the Quantum Saturation Conjecture; we now make the necessary transition from the classical cohomology ring, H^*(Gr(k, n)), to the quantum cohomology ring, denoted QH^*(Gr(k, n)).

Like its classical counterpart, QH^*(Gr(k, n)) is a commutative ring with unity whose basis elements are the Schubert classes that represent the Schubert Varieties of Gr(k, n). The operation analogous to the cup product in QH^*(Gr(k, n)) is called the quantum cup product, denoted σ_λ ⋆ σ_µ. Like the classical cup product, the quantum cup product requires an involved theorem to fully compute. Before introducing this theorem, we provide two definitions [7].
Definition 12.1. We call a contiguous chain of n boxes along the rightmost edge of some diagram
an n-hook.
The idea of n-hooks may seem arbitrary, but it is an essential part of computing the quantum cup product; obtaining quantum cup products requires first removing as many n-hooks from a diagram as possible, and then considering the remainder, which we formally define.
Definition 12.2. Given some Young Tableau diagram, λ ⊆ k × n − k, if we remove all possible
n-hooks from λ and are left with a diagram also in the set k × n − k, we call this new diagram the
n-core of λ, denoted c(λ).
We are now ready to introduce the following theorem regarding computing σ_λ ⋆ σ_µ [4].

Theorem 12.3 (Rimhook Rule). Fix n, k ∈ N. Given λ, µ ⊆ k × n − k, we compute the quantum cup product σ_λ ⋆ σ_µ ∈ QH^*(Gr(k, n)) by first taking σ_λ · σ_µ = Σ_{P_{λ,µ}} c^ν_{λ,µ} σ_ν ∈ H^*(Gr(k, 2n − k)), and then applying the following map to each term in the resulting sum:

σ_ν ↦ (−1)^{Σ_i (n − k − ht(R_i))} q^d σ_{c(ν)} if c(ν) ⊆ k × n − k, and σ_ν ↦ 0 otherwise,

where ht(R_i) is the number of rows in the ith rimhook removed from the original diagram ν, and d is the total number of rimhooks removed from ν.
Remark 12.4. The resulting sum is denoted

σ_λ ⋆ σ_µ = Σ_{P_{λ,µ}} c^{d,ν}_{λ,µ} q^d σ_ν,

where c^{d,ν}_{λ,µ} is the quantum Littlewood-Richardson coefficient; up to the sign prescribed in Theorem 12.3, it is the classical coefficient c^{ν'}_{λ,µ} of the diagram ν' whose n-core is ν.
This process is hard to internalize without an example, so we provide one below.

Examples 12.5. We first practice removing n-hooks. Let ν = (5, 4, 3). If we were to remove 5-hooks, we would begin at the rightmost box of the first row and trace to the left and down, remaining on the edge at all times. Removing the first 5-hook leaves the diagram (3, 2, 2). Looking again at the upper-rightmost part of the remaining diagram, we see there are still 5 boxes along the edge, so we can remove another 5-hook, leaving (1, 1). Since there are not even 5 boxes left, there are no more 5-hooks to remove. This leaves us with the 5-core of ν, c(ν) = (1, 1).
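The bookkeeping of n-hook removal is conveniently done with beta-numbers (first-column hook lengths): removing an n-hook corresponds to subtracting n from one beta-number, provided the result is nonnegative and not already a beta-number. A minimal pure-Python sketch under that standard encoding (the function name is ours):

def n_core(partition, n):
    # beta-numbers for the partition, one per part
    k = len(partition)
    beta = [partition[i] + (k - 1 - i) for i in range(k)]
    progress = True
    while progress:
        progress = False
        for i, b in enumerate(beta):
            if b - n >= 0 and (b - n) not in beta:
                beta[i] = b - n  # remove one n-hook
                progress = True
    beta.sort(reverse=True)
    core = [beta[i] - (k - 1 - i) for i in range(k)]
    return [p for p in core if p > 0]

print(n_core([5, 4, 3], 5))  # [1, 1], matching the 5-core found above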
Examples 12.6. Let n = 4, k = 2, λ = (2, 2) and µ = (2, 0). To compute the quantum cup product, we first need to take the classical product in H^*(Gr(2, 6)):

σ_λ · σ_µ = σ_{(2,2)} · σ_{(2)} = σ_{(4,2)} = σ_ν

We now look to remove 4-hooks from ν. Beginning from the upper right hand corner, tracing along the edge removes an inverted L, leaving us with c(ν) = (1, 1). We then multiply σ_{c(ν)} by q^d, where d = 1 rimhook was removed, and by (−1)^{n−k−ht(R_1)}, where n − k = 2 and ht(R_1) = 2:

σ_ν ↦ (−1)^{2−2} q σ_{(1,1)} = q σ_{(1,1)}

Thus, we have computed the quantum cup product

σ_λ ⋆ σ_µ = q σ_{c(ν)}, c(ν) = (1, 1).
Remark 12.7. Although we specified our order of removing n-hooks, in general the order does not matter, since c(ν) is independent of the choice of n-hooks removed.
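Putting the pieces together, the rimhook rule can itself be sketched in code: take the classical product, then send each term through the n-core map while tracking the number d of hooks removed and their heights. This is our own sketch of Theorem 12.3, assuming a Sage session; core_data extends the n_core sketch above to also report d and the total height:

def core_data(nu, n, k):
    # beta-numbers; subtracting n removes an n-hook whose height is one
    # more than the number of beta-values it jumps over
    beta = [(nu[i] if i < len(nu) else 0) + (k - 1 - i) for i in range(k)]
    d, heights, progress = 0, 0, True
    while progress:
        progress = False
        for i, b in enumerate(beta):
            if b - n >= 0 and (b - n) not in beta:
                heights += 1 + sum(1 for c in beta if b - n < c < b)
                beta[i] = b - n
                d += 1
                progress = True
    beta.sort(reverse=True)
    core = [beta[i] - (k - 1 - i) for i in range(k)]
    return [p for p in core if p > 0], d, heights

def quantum_product(lam, mu, k, n):
    # classical product in H^*(Gr(k, 2n-k)), then the rimhook map
    s = SymmetricFunctions(QQ).schur()
    result = {}
    for nu, c in s(lam) * s(mu):
        if len(nu) > k:
            continue  # not a class of H^*(Gr(k, 2n-k))
        core, d, heights = core_data(list(nu), n, k)
        if core and core[0] > n - k:
            continue  # c(nu) does not fit in k x (n-k); the term maps to 0
        sign = (-1) ** ((d * (n - k) - heights) % 2)
        result[(tuple(core), d)] = result.get((tuple(core), d), 0) + sign * c
    return result

print(quantum_product([2, 2], [2], 2, 4))  # {((1, 1), 1): 1}

The printed result reproduces Example 12.6: the single term q σ_{(1,1)}.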
13. Current Progress
We now share our work on investigating a method of proving the Quantum Saturation Conjecture (Conjecture 2.5), restated below for convenience:

∃ N ∈ N such that c^{Nd,Nν}_{Nλ,Nµ} ≠ 0 ⟺ c^{d,ν}_{λ,µ} ≠ 0.
Unfortunately, doubts that arise as soon as we test the Quantum Saturation Conjecture with Puzzle Rule computations hinder any significant progress into this investigation. Although there already exists a proof of Horn's Question in more general cases, including the quantum case [3], the statement in that proof is far more advanced than a quantum replica of the Saturation Conjecture, and it does not relieve the concerns that arise from our toy computations, such as the one found below.
Examples 13.1. Let n = 8 and k = 3, so that we are computing the quantum product in QH^*(Gr(3, 8)) and applying the rimhook rule in H^*(Gr(3, 13)). Let λ = (3, 3), and consider σ_λ ⋆ σ_λ:

σ_λ ⋆ σ_λ ∈ QH^*(Gr(3, 8))
= σ_{(3,3)} · σ_{(3,3)} ∈ H^*(Gr(3, 13))
= σ_{(6,6)} + σ_{(6,5,1)} + σ_{(6,4,2)} + σ_{(6,3,3)} ∈ H^*(Gr(3, 13))
= q σ_{(4)} + q σ_{(3,1)} + q σ_{(2,2)} ∈ QH^*(Gr(3, 8))

(the term σ_{(6,6)} maps to 0, since (6,6) contains no 8-hook and does not fit in 3 × 5). Now, consider σ_{(1/3)λ} ⋆ σ_{(1/3)λ}, and recall that we should have a result where, if σ_{(1/3)λ} ⋆ σ_{(1/3)λ} ≠ 0, then for every term σ_ν in the original product we would have σ_{(1/3)ν} in the new product. The computation is as follows:

σ_{(1/3)λ} ⋆ σ_{(1/3)λ} ∈ QH^*(Gr(3, 8))
= σ_{(1,1)} · σ_{(1,1)} = σ_{(2,2)} + σ_{(2,1,1)} ∈ H^*(Gr(3, 13))
= σ_{(2,2)} + σ_{(2,1,1)} ∈ QH^*(Gr(3, 8))
Our concern is that the techniques used to prove the Saturation Conjecture would not have predicted the result of the product σ_{(1/3)λ} ⋆ σ_{(1/3)λ} under our naive understanding of the Quantum Saturation Conjecture. The result of the classical products involved having each index ν scaled by 1/3 as well, whereas our result does not involve scaling in the familiar sense, but rather adding and subtracting degrees from the intersecting smooth curve indexed by q^d. This is to say, although we did in fact find an N ∈ N such that the quantum Littlewood-Richardson coefficient c^{d,(1/N)ν}_{(1/N)λ,(1/N)µ} ≠ 0, and thus satisfied even our "naive" understanding, there is no way to account for the terms q^d in the quantum cup product using either the current Hive Model or Puzzle Rule. Without a way of calculating the appearance of the terms q^d given our current knowledge of computing the quantum cup product, we feel that actually proving the Quantum Saturation Conjecture falls too far out of the scope of this thesis.
14. A Few Original Observations
In our computations investigating the Quantum Saturation Conjecture, we noticed a few interesting facts about the behavior of Schubert classes in both H^*(Gr(k, n)) and QH^*(Gr(k, n)) that do not appear in the current literature. We formally state and prove these observations as lemmas in this section.
Our results make heavy use of Pieri's formula [9], formally stated below.

Theorem 14.1 (Pieri's Formula). Given any Schubert class σ_λ in the classical cohomology ring H^*(Gr(k, n)), let (m)^k denote the Young Tableau diagram with k rows of length m, so that (1)^r is a single column of r boxes. For any positive integer r, we have for the cup product

σ_λ · σ_{(1)^r} = Σ_{λ' ∈ Λ} σ_{λ'},

Λ = {λ' ⊆ k × n − k | λ' is obtained from λ by appending r boxes, no two appended to the same row}.
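Pieri's formula is easy to mechanize: the terms of σ_λ · σ_{(1)^r} are exactly the diagrams obtained from λ by adding r boxes, no two in the same row, that remain partitions inside the ambient box. A minimal pure-Python sketch (the function name and box parameters are ours):

from itertools import combinations

def pieri_column(lam, r, k, cols):
    # append one box to each of r distinct rows of lam, keeping the
    # result a partition inside the k x cols box
    lam = list(lam) + [0] * (k - len(lam))
    terms = []
    for rows in combinations(range(k), r):
        new = lam[:]
        for i in rows:
            new[i] += 1
        if new[0] <= cols and all(new[i] <= new[i - 1] for i in range(1, k)):
            terms.append(tuple(p for p in new if p))
    return terms

print(pieri_column([2, 1, 1], 2, 5, 5))
# [(3, 2, 1), (3, 1, 1, 1), (2, 2, 2), (2, 2, 1, 1), (2, 1, 1, 1, 1)]

The five printed terms are precisely the product σ_{(2,1,1)} · σ_{(1)^2} that reappears in Section 15.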
Fix k ∈ N and let m ≥ 3k. We now state our first lemma.

Lemma 14.2. (σ_{(1)^k})^k = σ_{(k)^k} in the classical cohomology ring H^*(Gr(k, m)).
Proof. We can expand the term (σ_{(1)^k})^k:

(σ_{(1)^k})^k = σ_{(1)^k} · σ_{(1)^k} · · · · · σ_{(1)^k} = (σ_{(1)^k} · σ_{(1)^k}) · · · · · σ_{(1)^k}

Looking at the term in parentheses, Pieri's formula states that

σ_{(1)^k} · σ_{(1)^k} = Σ_{λ' ∈ Λ} σ_{λ'},
Λ = {λ' ⊆ k × m − k | λ' is obtained from (1)^k by appending k boxes, no two to the same row}.

Since we cannot have diagrams in our sum with more than k rows, and (1)^k already has k rows, every valid diagram λ' has exactly k rows. Since Pieri's formula also states that at most one box may be appended to each row, and we are appending k boxes to k rows, the cup product has one term:

σ_{(1)^k} · σ_{(1)^k} = σ_{(2)^k}

Consider now σ_{(2)^k} · σ_{(1)^k}. Since (2)^k has k rows, no new rows may be added to any term in the cup product. By another application of Pieri's formula, the product is forced into only one term:

σ_{(2)^k} · σ_{(1)^k} = σ_{(3)^k}

By iterating through the rest of the terms in the expansion of (σ_{(1)^k})^k and applying the same argument, at each step i we add one box to each row of (i)^k, so that

σ_{(i)^k} · σ_{(1)^k} = σ_{(i+1)^k}

Looking back at the expansion of (σ_{(1)^k})^k, and using the associativity of the cup product, we can state the following equalities:

(σ_{(1)^k})^k = σ_{(1)^k} · σ_{(1)^k} · · · · · σ_{(1)^k}
= (σ_{(1)^k} · σ_{(1)^k}) · · · · · σ_{(1)^k}
= σ_{(2)^k} · σ_{(1)^k} · · · · · σ_{(1)^k}
= σ_{(3)^k} · σ_{(1)^k} · · · · · σ_{(1)^k}
...
= σ_{(k−1)^k} · σ_{(1)^k} = σ_{(k)^k}
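Iterating the pieri_column sketch from the start of this section gives a quick check of Lemma 14.2 for small k: each Pieri step with r = k admits exactly one valid diagram. For instance, with k = 3 in a box wide enough (cols = 6):

terms = [(1, 1, 1)]
for _ in range(2):  # the remaining k - 1 = 2 factors of sigma_{(1)^3}
    terms = [t for lam in terms for t in pieri_column(lam, 3, 3, 6)]
print(terms)  # [(3, 3, 3)], i.e. the single class sigma_{(k)^k}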
Having proved Lemma 14.2, the proof of this next lemma is very straightforward. The significance of the next lemma is that there is a specific result for the quantum cup product of the Schubert classes of k by k squares.

Lemma 14.3. σ_{(k)^k} ⋆ σ_{(k)^k} = q^k in the quantum cohomology ring QH^*(Gr(k, 2k)).
Proof. We compute the quantum cup product σ_{(k)^k} ⋆ σ_{(k)^k} in QH^*(Gr(k, 2k)) by first taking the classical product σ_{(k)^k} · σ_{(k)^k} in H^*(Gr(k, 3k)). Recall that by Lemma 14.2,

σ_{(k)^k} = σ_{(1)^k} · σ_{(1)^k} · · · · · σ_{(1)^k}.

Applying this result, we have

σ_{(k)^k} · σ_{(k)^k} = σ_{(1)^k} · σ_{(1)^k} · · · · · σ_{(1)^k} · σ_{(k)^k}
= σ_{(1)^k} · σ_{(1)^k} · · · · · σ_{(1)^k} · σ_{(k+1)^k}
...
= σ_{(1)^k} · σ_{(2k−1)^k} = σ_{(2k)^k}

Since the rimhook rule calls for removing 2k-hooks from σ_{(2k)^k}, and since (2k)^k has k rows of length 2k, we remove k rimhooks of length 2k, and thus

σ_{(2k)^k} ↦ q^k ∈ QH^*(Gr(k, 2k))
We offer one last lemma, a generalization of the "product of squares" in Lemma 14.3 to a product of any two "rectangles" of height k in H^*(Gr(k, m)).

Lemma 14.4. σ_{(r)^k} · σ_{(ℓ)^k} = σ_{(r+ℓ)^k} in the classical cohomology ring H^*(Gr(k, m)).

Having practiced the argument involving Pieri's formula in the last two lemmas, the proof of Lemma 14.4 is once again rather straightforward.
Proof. We simply compute the cup product in question:

σ_{(r)^k} · σ_{(ℓ)^k} = σ_{(1)^k} · σ_{(1)^k} · · · · · σ_{(1)^k} · σ_{(ℓ)^k}

There are r terms σ_{(1)^k} in this product, so we iterate the recurring argument r times:

σ_{(1)^k} · σ_{(1)^k} · · · · · σ_{(1)^k} · σ_{(ℓ)^k}
= σ_{(1)^k} · σ_{(1)^k} · · · · · σ_{(1)^k} · σ_{(ℓ+1)^k}
...
= σ_{(1)^k} · σ_{(r+ℓ−1)^k} = σ_{(r+ℓ)^k}
Corollary 14.5. In QH^*(Gr(k, 2k)), if r + ℓ = 2k, then σ_{(r)^k} ⋆ σ_{(ℓ)^k} = q^k.

Proof. Since σ_{(r)^k} · σ_{(ℓ)^k} = σ_{(r+ℓ)^k} = σ_{(2k)^k}, we compute σ_{(r)^k} ⋆ σ_{(ℓ)^k} ∈ QH^*(Gr(k, 2k)) by taking

σ_{(r)^k} · σ_{(ℓ)^k} = σ_{(2k)^k} ∈ H^*(Gr(k, 3k)),

and by removing k rimhooks of length 2k, we obtain

q^k ∈ QH^*(Gr(k, 2k)).
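As a sanity check on these lemmas, the quantum_product sketch from Section 12 (assuming a Sage session) reproduces both Lemma 14.3 and Corollary 14.5 for k = 3, n = 2k = 6:

print(quantum_product([3, 3, 3], [3, 3, 3], 3, 6))  # {((), 3): 1}, i.e. q^3
print(quantum_product([2, 2, 2], [4, 4, 4], 3, 6))  # {((), 3): 1}: r + l = 2k

In both cases the classical product collapses to the single class σ_{(2k)^k} = σ_{(6,6,6)}, from which three 6-hooks are removed, leaving q^3 times the empty diagram.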
15. The Next (Small) Steps
Other than continuing to determine whether there is a positive answer to the Quantum Saturation Conjecture, the next step for us is to generalize our lemmas to see if there are formulas that, like Pieri's formula, predict the cup product of certain Schubert classes. Having proved the products of rectangles in H^*(Gr(k, m)) and QH^*(Gr(k, 2k)), we hope these results, or the methods of proving these results, apply to proving a formula for a product of this type:

σ_λ · σ_{(2)^r}

for some r ∈ N. This product is intuitively a "double" Pieri product, where instead of taking a column with r entries, we take two consecutive columns with r entries. We focused heavily on the following example of a product of this type.

Let λ = (2, 1, 1) and consider σ_λ · σ_{(2)^2} in a sufficiently large Grassmannian:

σ_λ · σ_{(2)^2} = σ_{(4,3,1)} + σ_{(4,2,1,1)} + σ_{(3,3,2)} + σ_{(3,3,1,1)} + σ_{(3,2,2,1)} + σ_{(3,2,1,1,1)} + σ_{(2,2,2,1,1)}

We compare this to the product σ_λ · σ_{(1)^2}, which Pieri's formula predicts:
σ_λ · σ_{(1)^2} = σ_{(3,2,1)} + σ_{(3,1,1,1)} + σ_{(2,2,2)} + σ_{(2,2,1,1)} + σ_{(2,1,1,1,1)}
The relevance of this example is to show that, to prove a formula for products of type σ_λ · σ_{(2)^r}, we need arguments more creative than the ones we incorporate in this thesis. If we were to hypothesize, for example, that every term in σ_λ · σ_{(2)^2} can be found by taking the product of each term in σ_λ · σ_{(1)^2} with another σ_{(1)^2} (we refer to such "stacking" of elements σ_{(1)^2} as products of type P for convenience), we would be off by many terms.

Of note, there are a number of products of type P that have more than 5 rows, but interestingly enough, no terms in σ_λ · σ_{(2)^2} have more than 5 rows. We realize this is most likely the result of Pieri's formula for rows [9], which has the analogous statement to the Pieri's formula used thus far with every instance of "row" and "column" swapped. Since every other product of type P occurs in σ_λ · σ_{(2)^2}, we would want to explore more examples to see whether applying both Pieri formulas would completely determine products of type σ_λ · σ_{(2)^r}, and perhaps of a general type σ_λ · σ_{(ℓ)^r}.
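Both sides of this comparison are quick to generate, which gives a fast way to test hypotheses like the type-P one above. A sketch, assuming a Sage session:

s = SymmetricFunctions(QQ).schur()
double = s([2, 1, 1]) * s([2, 2])               # sigma_lambda . sigma_{(2)^2}
stacked = s([2, 1, 1]) * s([1, 1]) * s([1, 1])  # iterated type-P products
print(sorted(tuple(nu) for nu, c in double))
# the seven terms listed above, none with more than 5 rows
print(sorted(tuple(nu) for nu, c in stacked if len(nu) > 5))
# type-P terms with more than 5 rows, which never appear in the double product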
References
[1] Artin, Michael. Algebra. Englewood Cliffs, NJ: Prentice Hall, 1991. Print.
[2] Beazley, Elizabeth, Anna Bertiger, and Kaisa Taipale. “An Equivariant Rim Hook Rule for Quantum Cohomology
of Grassmannians.” Discrete Mathematics and Theoretical Computer Science (2013): n. pag. Print.
[3] Belkale, Prakash. “Quantum Generalization of the Horn Conjecture.” Journal of the American Mathematical
Society 21.02 (2008): 365-409.
[4] Bertram, Aaron, Ionut Ciocan-Fontanine, and William Fulton. “Quantum Multiplication of Schur Polynomials.”
Journal of Algebra 219.2 (1999): 728-46.
[5] “Bruhat Decomposition.” Wikipedia. Wikimedia Foundation, 03 May 2014. Web. 11 Apr. 2014.
[6] Buch, Anders M. “The Saturation Conjecture (After A. Knutson and T. Tao).” American Mathematical Society
(1998)
[7] Conversations with Doctor Elizabeth Beazley.
[8] Fulton, William. “Eigenvalues, Invariant Factors, Highest Weights and Schubert Calculus.” The American Math-
ematical Society, 05 Apr. 2000. Web. 05 Oct. 2013.
[9] Fulton, William. Young Tableaux: With Applications to Representation Theory and Geometry. Cambridge:
Cambridge UP, 1997. pp. 145-47.
[10] Horn, Alfred. Eigenvalues of sums of Hermitian matrices. Pacific Journal of Mathematics 12 (1962), no. 1,
225–241.
[11] Klyachko, Alexander A. “Stable Bundles, Representation Theory and Hermitian Operators.” Selecta Mathematica 4 (1998): 419-445. Birkhäuser Verlag, Basel. Web. 09 Oct. 2013.
[12] Strang, Gilbert. Introduction to Linear Algebra. Wellesley, MA: Wellesley-Cambridge, 2009. Print.
[13] Stanley, Richard P. Enumerative Combinatorics, Vol. 1 (2nd edition). Cambridge UP (2012), p. 60.
[14] Tao, Terence, and Allen Knutson. “Honeycombs and Sums of Hermitian Matrices.” American Mathematical
Society (2000): http://arxiv.org/abs/math/0009048.
[15] Tao, Terence, and Allen Knutson. “Puzzles and (equivariant) Cohomology of Grassmannians.” Duke Mathemat-
ical Journal 119.2 (2003): 221-60. Print.
More Related Content

What's hot

The 2 Goldbach's Conjectures with Proof
The 2 Goldbach's Conjectures with Proof The 2 Goldbach's Conjectures with Proof
The 2 Goldbach's Conjectures with Proof
nikos mantzakouras
 
CMSC 56 | Lecture 12: Recursive Definition & Algorithms, and Program Correctness
CMSC 56 | Lecture 12: Recursive Definition & Algorithms, and Program CorrectnessCMSC 56 | Lecture 12: Recursive Definition & Algorithms, and Program Correctness
CMSC 56 | Lecture 12: Recursive Definition & Algorithms, and Program Correctness
allyn joy calcaben
 
Solve Equations
Solve EquationsSolve Equations
Solve Equations
nikos mantzakouras
 
New Formulas for the Euler-Mascheroni Constant
New Formulas for the Euler-Mascheroni Constant New Formulas for the Euler-Mascheroni Constant
New Formulas for the Euler-Mascheroni Constant
nikos mantzakouras
 
Analysis of algorithms
Analysis of algorithmsAnalysis of algorithms
Analysis of algorithms
S.Shayan Daneshvar
 
A Proof of the Riemann Hypothesis
A Proof of the Riemann  HypothesisA Proof of the Riemann  Hypothesis
A Proof of the Riemann Hypothesis
nikos mantzakouras
 
Hypothesis of Riemann's (Comprehensive Analysis)
 Hypothesis of Riemann's (Comprehensive Analysis) Hypothesis of Riemann's (Comprehensive Analysis)
Hypothesis of Riemann's (Comprehensive Analysis)
nikos mantzakouras
 
Chapter-3: DIRECT PROOF AND PROOF BY CONTRAPOSITIVE
Chapter-3: DIRECT PROOF AND PROOF BY CONTRAPOSITIVEChapter-3: DIRECT PROOF AND PROOF BY CONTRAPOSITIVE
Chapter-3: DIRECT PROOF AND PROOF BY CONTRAPOSITIVE
nszakir
 
Reciprocity Law For Flat Conformal Metrics With Conical Singularities
Reciprocity Law For Flat Conformal Metrics With Conical SingularitiesReciprocity Law For Flat Conformal Metrics With Conical Singularities
Reciprocity Law For Flat Conformal Metrics With Conical SingularitiesLukasz Obara
 
Boolean Programs and Quantified Propositional Proof System -
Boolean Programs and Quantified Propositional Proof System - Boolean Programs and Quantified Propositional Proof System -
Boolean Programs and Quantified Propositional Proof System - Michael Soltys
 
Discrete Math Lecture 03: Methods of Proof
Discrete Math Lecture 03: Methods of ProofDiscrete Math Lecture 03: Methods of Proof
Discrete Math Lecture 03: Methods of Proof
IT Engineering Department
 
G03201034038
G03201034038G03201034038
G03201034038
inventionjournals
 
CMSC 56 | Lecture 8: Growth of Functions
CMSC 56 | Lecture 8: Growth of FunctionsCMSC 56 | Lecture 8: Growth of Functions
CMSC 56 | Lecture 8: Growth of Functions
allyn joy calcaben
 
Local Volatility 1
Local Volatility 1Local Volatility 1
Local Volatility 1
Ilya Gikhman
 
Introduction to Real Analysis 4th Edition Bartle Solutions Manual
Introduction to Real Analysis 4th Edition Bartle Solutions ManualIntroduction to Real Analysis 4th Edition Bartle Solutions Manual
Introduction to Real Analysis 4th Edition Bartle Solutions Manual
DawsonVeronica
 
Admission in india 2015
Admission in india 2015Admission in india 2015
Admission in india 2015
Edhole.com
 
Math induction
Math inductionMath induction
Math inductionasel_d
 
Method of direct proof
Method of direct proofMethod of direct proof
Method of direct proof
Abdur Rehman
 

What's hot (20)

The 2 Goldbach's Conjectures with Proof
The 2 Goldbach's Conjectures with Proof The 2 Goldbach's Conjectures with Proof
The 2 Goldbach's Conjectures with Proof
 
CMSC 56 | Lecture 12: Recursive Definition & Algorithms, and Program Correctness
CMSC 56 | Lecture 12: Recursive Definition & Algorithms, and Program CorrectnessCMSC 56 | Lecture 12: Recursive Definition & Algorithms, and Program Correctness
CMSC 56 | Lecture 12: Recursive Definition & Algorithms, and Program Correctness
 
Solve Equations
Solve EquationsSolve Equations
Solve Equations
 
New Formulas for the Euler-Mascheroni Constant
New Formulas for the Euler-Mascheroni Constant New Formulas for the Euler-Mascheroni Constant
New Formulas for the Euler-Mascheroni Constant
 
Analysis of algorithms
Analysis of algorithmsAnalysis of algorithms
Analysis of algorithms
 
A Proof of the Riemann Hypothesis
A Proof of the Riemann  HypothesisA Proof of the Riemann  Hypothesis
A Proof of the Riemann Hypothesis
 
Hypothesis of Riemann's (Comprehensive Analysis)
 Hypothesis of Riemann's (Comprehensive Analysis) Hypothesis of Riemann's (Comprehensive Analysis)
Hypothesis of Riemann's (Comprehensive Analysis)
 
Chapter-3: DIRECT PROOF AND PROOF BY CONTRAPOSITIVE
Chapter-3: DIRECT PROOF AND PROOF BY CONTRAPOSITIVEChapter-3: DIRECT PROOF AND PROOF BY CONTRAPOSITIVE
Chapter-3: DIRECT PROOF AND PROOF BY CONTRAPOSITIVE
 
Reciprocity Law For Flat Conformal Metrics With Conical Singularities
Reciprocity Law For Flat Conformal Metrics With Conical SingularitiesReciprocity Law For Flat Conformal Metrics With Conical Singularities
Reciprocity Law For Flat Conformal Metrics With Conical Singularities
 
Boolean Programs and Quantified Propositional Proof System -
Boolean Programs and Quantified Propositional Proof System - Boolean Programs and Quantified Propositional Proof System -
Boolean Programs and Quantified Propositional Proof System -
 
Problem Set 1
Problem Set 1Problem Set 1
Problem Set 1
 
Discrete Math Lecture 03: Methods of Proof
Discrete Math Lecture 03: Methods of ProofDiscrete Math Lecture 03: Methods of Proof
Discrete Math Lecture 03: Methods of Proof
 
G03201034038
G03201034038G03201034038
G03201034038
 
CMSC 56 | Lecture 8: Growth of Functions
CMSC 56 | Lecture 8: Growth of FunctionsCMSC 56 | Lecture 8: Growth of Functions
CMSC 56 | Lecture 8: Growth of Functions
 
Local Volatility 1
Local Volatility 1Local Volatility 1
Local Volatility 1
 
Introduction to Real Analysis 4th Edition Bartle Solutions Manual
Introduction to Real Analysis 4th Edition Bartle Solutions ManualIntroduction to Real Analysis 4th Edition Bartle Solutions Manual
Introduction to Real Analysis 4th Edition Bartle Solutions Manual
 
Igv2008
Igv2008Igv2008
Igv2008
 
Admission in india 2015
Admission in india 2015Admission in india 2015
Admission in india 2015
 
Math induction
Math inductionMath induction
Math induction
 
Method of direct proof
Method of direct proofMethod of direct proof
Method of direct proof
 

Similar to Thesis 6

Lemh1a1
Lemh1a1Lemh1a1
Imc2020 day1&amp;2 problems&amp;solutions
Imc2020 day1&amp;2 problems&amp;solutionsImc2020 day1&amp;2 problems&amp;solutions
Imc2020 day1&amp;2 problems&amp;solutions
Christos Loizos
 
Mth3101 Advanced Calculus Chapter 1
Mth3101 Advanced Calculus Chapter 1Mth3101 Advanced Calculus Chapter 1
Mth3101 Advanced Calculus Chapter 1
saya efan
 
Calculus Homework Help
Calculus Homework HelpCalculus Homework Help
Calculus Homework Help
Math Homework Solver
 
Calculus Assignment Help
Calculus Assignment HelpCalculus Assignment Help
Calculus Assignment Help
Maths Assignment Help
 
Differential Equations Assignment Help
Differential Equations Assignment HelpDifferential Equations Assignment Help
Differential Equations Assignment Help
Maths Assignment Help
 
matrix theory and linear algebra.pptx
matrix theory and linear algebra.pptxmatrix theory and linear algebra.pptx
matrix theory and linear algebra.pptx
Maths Assignment Help
 
Differential Equations Homework Help
Differential Equations Homework HelpDifferential Equations Homework Help
Differential Equations Homework Help
Math Homework Solver
 
The Probability that a Matrix of Integers Is Diagonalizable
The Probability that a Matrix of Integers Is DiagonalizableThe Probability that a Matrix of Integers Is Diagonalizable
The Probability that a Matrix of Integers Is Diagonalizable
Jay Liew
 
Quaternion algebra
Quaternion algebraQuaternion algebra
Quaternion algebravikash0001
 
Cs229 cvxopt
Cs229 cvxoptCs229 cvxopt
Cs229 cvxoptcerezaso
 
SMB_2012_HR_VAN_ST-last version
SMB_2012_HR_VAN_ST-last versionSMB_2012_HR_VAN_ST-last version
SMB_2012_HR_VAN_ST-last versionLilyana Vankova
 
Gallians solution
Gallians solutionGallians solution
Gallians solution
yunus373180
 
solution
solutionsolution
solution
yunus373180
 
Math Assignment Help
Math Assignment HelpMath Assignment Help
Math Assignment Help
Math Homework Solver
 
Andrei rusu-2013-amaa-workshop
Andrei rusu-2013-amaa-workshopAndrei rusu-2013-amaa-workshop
Andrei rusu-2013-amaa-workshop
Andries Rusu
 

Similar to Thesis 6 (20)

Lemh1a1
Lemh1a1Lemh1a1
Lemh1a1
 
Imc2020 day1&amp;2 problems&amp;solutions
Imc2020 day1&amp;2 problems&amp;solutionsImc2020 day1&amp;2 problems&amp;solutions
Imc2020 day1&amp;2 problems&amp;solutions
 
Final
Final Final
Final
 
Thesis
ThesisThesis
Thesis
 
Mth3101 Advanced Calculus Chapter 1
Mth3101 Advanced Calculus Chapter 1Mth3101 Advanced Calculus Chapter 1
Mth3101 Advanced Calculus Chapter 1
 
Calculus Homework Help
Calculus Homework HelpCalculus Homework Help
Calculus Homework Help
 
04_AJMS_330_21.pdf
04_AJMS_330_21.pdf04_AJMS_330_21.pdf
04_AJMS_330_21.pdf
 
Calculus Assignment Help
Calculus Assignment HelpCalculus Assignment Help
Calculus Assignment Help
 
Differential Equations Assignment Help
Differential Equations Assignment HelpDifferential Equations Assignment Help
Differential Equations Assignment Help
 
matrix theory and linear algebra.pptx
matrix theory and linear algebra.pptxmatrix theory and linear algebra.pptx
matrix theory and linear algebra.pptx
 
Differential Equations Homework Help
Differential Equations Homework HelpDifferential Equations Homework Help
Differential Equations Homework Help
 
Matching
MatchingMatching
Matching
 
The Probability that a Matrix of Integers Is Diagonalizable
The Probability that a Matrix of Integers Is DiagonalizableThe Probability that a Matrix of Integers Is Diagonalizable
The Probability that a Matrix of Integers Is Diagonalizable
 
Quaternion algebra
Quaternion algebraQuaternion algebra
Quaternion algebra
 
Cs229 cvxopt
Cs229 cvxoptCs229 cvxopt
Cs229 cvxopt
 
SMB_2012_HR_VAN_ST-last version
SMB_2012_HR_VAN_ST-last versionSMB_2012_HR_VAN_ST-last version
SMB_2012_HR_VAN_ST-last version
 
Gallians solution
Gallians solutionGallians solution
Gallians solution
 
solution
solutionsolution
solution
 
Math Assignment Help
Math Assignment HelpMath Assignment Help
Math Assignment Help
 
Andrei rusu-2013-amaa-workshop
Andrei rusu-2013-amaa-workshopAndrei rusu-2013-amaa-workshop
Andrei rusu-2013-amaa-workshop
 

Thesis 6

  • 1. INVESTIGATING THE QUANTUM CASE OF HORN’S QUESTION AUTHOR: DORIAN EHRLICH, COLLABORATORS: DOCTOR ELIZABETH BEAZLEY 1. Abstract Given the weakly decreasing sequences of real numbers, α = (α1 ≥ α2 ≥ · · · ≥ αk), and β = (β1 ≥ β2 ≥ · · · ≥ βk), Alfred Horn asks in his 1963 paper for what arbitrary sequence of real numbers γ = (γ1 ≥ γ2 ≥ · · · ≥ γk) do there exist Hermitian matrices A, B and C = A + B whose eigenvalues are α, β, and γ respectively [10]. Alexander Klyachko provided the first solution to this query in 1999 [11], and since then a number of alternative solutions have been released as well. The solution to Horn’s Question we focus on was first incorporated by Knutson and Tao in 1999 [14], who provide a solution for when α, β and γ ∈ Zk ≥0. Knutson and Tao’s solution involves using Littlewood-Richardson coefficients to determine intersections of subvarieties of the Grassmannian known as Schubert Varieties. We look to internalize Knutson and Tao’s method of proof as a means of tackling the question with which this thesis is ultimately concerned: Whether Horn’s Question has a solution if rather than intersections of Schubert Varieties, we consider the existence of a smooth curve that passes through triples of Schubert Varieties. This variant on Horn’s Question is known as the quantum case of Horn’s Question, and is currently an open problem. Date: April 27, 2014. 1
  • 2. 2. Introduction The type of matrices with which Horn’s Question is concerned, “Hermitian matrices, may be unfamiliar, so we formally define this concept below. Definition 2.1 (Hermitian Matrix). A matrix A is Hermitian if it is equal to its own conjugate transpose, i.e. AT = A. The only nontrivial concepts involved in the original statement of Horn’s Question are Hermeiian matrices and eigenvalues, so we are in fact ready to state this question formally. Question 2.2 (Horn’s Question). [10] Given two weakly decreasing sequences of arbitrary real numbers, α = (α1 ≥ · · · ≥ αk) β = (β1 ≥ · · · ≥ βk) for which sequences γ = (γ1 ≥ · · · ≥ γk) do there exist Hermitian matrices A, B and A + B = C such that their eigenvalues are α, β and γ respectively. A concern about the statement of Horn’s Question quickly arises: Given that Hermitian matrices are equal to their own conjugate transpose, Hermitian matrices may have complex entries, so it’s not clear that we necessarily have real eigenvalues. It is the case, however, that the eigenvalues of a Hermitian matrix are real, and we offer a cute proof below [7]. Proposition 2.3. Given a Hermitian matrix A, if there exists a vector −→x ∈ Ck such that A−→x = λ−→x for some scalar λ, it follows that λ ∈ R. Proof. Given A Hermitian, and A−→x = λ−→x for some −→x ∈ Ck, and scalar λ, consider multiplying both sides of the equation by −→x T on the left: −→x T A−→x = −→x T λ−→x = λ−→x T −→x(2.1) Looking at the (dot) product −→x T −→x , since −→x ∈ Ck, we can express the jth entry of −→x as aj +bji, for some real numbers ai and bi. Computing, we have −→x T −→x = (a1 + b1i) (a2 + b2i) . . . (ak + bki)      a1 + b1i a2 + b2i ... ak + bki      = (a1 + b1i)(a1 + b1i) + (a2 + b2i)(a2 + b2i) + · · · + (ak + bki)(ak + bki) = (a1 − b1i)(a1 + b1i) + (a2 − b2i)(a2 + b2i) + · · · + (ak − bki)(ak + bki) (a2 1 + b2 1) + (a2 2 + b2 2) + · · · + (a2 k + b2 k) = k j=1 a2 j + b2 j . This means −→x T −→x is a sum of real numbers, which is a real number, say S. Now, let’s take the conjugate transpose of equation (2.1), and recall that A is Hermitian: (−→x T A−→x )T = (λ−→x T −→x )T 2
  • 3. ⇐⇒ −→x T AT −→x T T = λ (S) = λS ⇐⇒ −→x T A−→x = λS According to equation (2.1), however, we also have −→x T A−→x = λS which means λ = λ, and thus λ ∈ R. Although Horn’s Question may at first glance, appear to be a challenging linear algebra exercise, a prerequisite for even posing Horn’s Question in the quantum case is an understanding of the mathematically complicated object known as the cohomology ring of the Grassmannian. This cohomology ring, denoted H∗(Gr(k, n)), is where we determine intersections of triples of Schubert Varieties of the Grassmannian, the latter which we denote Gr(k, n). Schubert Varieties are what’s known as the Zariski Closures of a specific choice of n k disjoint subspaces of Grassmannian called Schubert Cells, which are obtained by creating n k partitions of the group of invertible matrices over the complex numbers, GLn(C) [7]. Knutson and Tao’s solution to Horn’s Question in the non-quantum, or classical case, involves considering the operation known as the cup product of elements in H∗(Gr(k, n)) that index the Schubert Varieties of Gr(k, n) called Schubert classes . The cup product of Schubert classes is what computes the intersections of their corresponding Schubert Varieties. If we consider the Schubert classes σα and σβ that represent the Schubert Varieties our original k-tuples α and β index respectively, , and their cup product σα · σβ, we have the following formula: σα · σβ = Pα,β cγ α,βσγ Pα,β = {γ = (γ1, . . . , γk) | k j=1 γj = k j=1 αj + k j=1 βj}. In this sum, the terms cγ α,β are positive integers known as the Littlewood-Richardson coefficients corresponding to α, β and γ. Computing Littlewood-Richradson coefficients requires understanding an algorithm known as the Littlewood-Richardson Rule, or equivalent algorithms such as the Puzzle Rule, which we use for this thesis. Knutson and Tao’s solution to Horn’s Question states that for some Schubert class σγ, where γ ∈ Pα,β [7], ∃ Hermitian matrices A, B, C with integer eigenvalues α, β and γ respectively ⇐⇒ cγ α,β = 0. Knutson and Tao only make this conclusion after first proving the following theorem regarding Littlewood-Richardson coefficients [14]. Theorem 2.4 (The Saturation Conjecture). Given two weakly decreasing sequences of k positive integers, λ and µ, and some weakly decreasing sequence ν ∈ Pλ,µ, ∃ N ∈ N such that cNν Nλ,Nµ = 0 ⇐⇒ cν λ,µ = 0 where λ = {λ1, . . . , λn−k}, Nλ = {Nλ1, . . . , Nλn−k} 3
  • 4. The quantum case of Horn’s Question, the focus of this thesis, looks at the quantum cup product in the quantum cohomology ring of the Grassmannian QH∗(Gr(k, n)). The quantum cup product determines if three Schubert Varieties are linked by smooth curves, rather than if they contain an ordinary intersection. If we now look at the Schubert Varieties indexed by our k-tuples α and β, and the quantum cup product, denoted σα σβ, we have the formula σα σβ = Pα,β cd,γ α,βqd σγ, where cd,γ α,β is the quantum Littlewood-Richardson coefficient that determines how many degree d curve pass through the triple of Schubert Varieties with Schubert classes σα, σβ and σγ respectively, and q is an indeterminate. To determine if in fact, there exists the same connection between eigenvalues of Hermitian ma- trices and quantum Littlewood-Richardson coefficients, we set out to prove a “quantum analogue” to Theroem 2.3, which we call the quantum Saturation Conjecture. Conjecture 2.5 (The Quantum Saturation Conjecture). ∃ N ∈ N such that cNd,Nν Nλ,Nµ = 0 ⇐⇒ cd,ν λ,µ = 0 3. Orbit-Stabilizer for the Complete Flag Our first task for research into the Quantum Saturation Conjecture is to derive a key object of study: The Schubert Cells of the Grassmannian. We build the intuition for deriving the Schubert Cells of the Grassmannian by first building the analogous results for the complete flag. Now, let us fix n ∈ N. Definition 3.1. A complete flag, F, is an increasing sequence of n linear subspaces of Cn [7] F : {0} = F0 ⊆ F1 ⊆ F2 · · · ⊆ Fn−1 ⊆ Fn = Cn dim(Fi) = i, for every index 1 ≤ i ≤ n Remark 3.2. A partial flag is an increasing sequence of linear subspaces where unlike the complete flag, there are fewer than n − 1 steps in the sequence. Gr(k, n) is an example of a partial flag. Given a complete flag F, we know dim(F1) = 1, so we can express the subspace F1 as the span of some basis vector: F1 = span(−→v1) Looking now at F2, since F2 is a two-dimensional subspace, it can be expressed as the span of two basis vectors. We know, however, that F1 ⊂ F2, which means that span(−→v1) ⊂ F2. This means there exists a basis for F2 where one basis vector is −→v1. We can express F2 as follows: F2 = span(−→v1, −→v2) where −→v2 is some independent vector in Cn. For the next step, F3, since F2 ⊂ F3, we know span(−→v1, −→v2) ⊂ F3, so by the same argument used above, there exists an independent vector −→v3 ∈ Cn such that F3 = span(−→v1, −→v2, −→v3). Applying this argument at each step in F, we eventually have that Fn = span(−→v1, −→v2, −→v3, . . . , −→v n). 4
  • 5. for a set of n independent vectors in Cn, {−→v1, . . . , −→vn}. We can now reexpress our flag F: F : {0} ⊂ span(−→v1) ⊂ span(−→v1, −→v2) ⊂ · · · ⊂ span(−→v1, −→v2 . . . , −→vn) = Cn As we will with Gr(k, n), we consider the group action GLn(C) {F | F is a complete flag} given by matrix multiplication on the left. We define “multiplication on the left” of a complete flag by taking each subspace Fk, and multiplying each basis vector by an invertible matrix: M ∈ GLn(C), Fk = span(−→v1, . . . , −→vk) MFk = span(M−→v1, . . . , M−→vk) Since we are multiplying by invertible matrices, we retain a k-dimensional basis after multiplying each basis vector. Note that the action takes one complete flag to another complete flag: MF : {0} = MF0 ⊆ MF1 ⊆ · · · ⊆ MFn = Cn We claim this group action is transitive. Proposition 3.3. GLn(C) {F} is a transitive group action, i.e. for every two flags F and F ∈ {F}, there exists a matrix M ∈ GLn(C) such that MF = F . Proof. Given two arbitrary flags, F, F ∈ {F}, F : {0} = F0 ⊂ F1 ⊂ F2 · · · ⊂ Fn−1 ⊂ Fn = Cn F : {0} = F0 ⊂ F1 ⊂ F2 · · · ⊂ Fn−1 ⊂ Fn = Cn there exist n vectors −→v1, . . . , −→vn, and n vectors −→w1, . . . , −→wn such that F : span(−→v1) ⊂ span(−→v1, −→v2) ⊂ · · · ⊂ span(−→v1, −→v2, . . . , −→vk) ⊂ span(−→v1, −→v2, . . . , −→vk, . . . , −→vn) F : span(−→w1) ⊂ span(−→w1, −→w2) ⊂ · · · ⊂ span(−→w1, −→w2, . . . , −→wk) ⊂ span(−→w1, −→w2, . . . , −→wk, . . . , −→wn). Since dim(Fn) = dim(Fn) = n, we can find a change of basis matrix M such that M−→vi = −→wi for every 1 ≤ i ≤ n. By the definition of our group action, this means MFn = Fn. MFn = Mspan(−→v1, −→v2, . . . , −→vn) = span(M−→v1, M−→v2, . . . , M−→vn) = span(−→w1, −→w2, . . . , −→wn) = Fn Now, given k < n, MFk = Mspan(−→v1, −→v2, . . . , −→vk) = span(M−→v1, M−→v2, . . . , M−→vk) = span(−→w1, −→w2, . . . , −→wk) = Fk Since k < n was arbitrary, we have that MF : {0} = MF0 ⊂ MF1 ⊂ MF2 · · · ⊂ MFn−1 ⊂ MFn = {0} = F0 ⊂ F1 ⊂ F2 · · · ⊂ Fn−1 ⊂ Fn = F . Now that we know our group action is transitive, by the Orbit-Stabilizer Theorem [1], we have that for any flag F and its stabilizer StabF = {M ∈ GLn(C) | MF = F}, the quotient GLn(C)/StabF will be in one-to-one correspondence with the entire set of complete flags. It thus suffices to work only with the easiest flag, the standard flag, which we denote as E: E : {0} ⊂ span(−→e1) ⊂ span(−→e1, −→e2) ⊂ · · · ⊂ span(−→e1, −→e2, . . . −→en) = Cn GLn(C)/StabE ∼= {F} 5
  • 6. We claim the stabilizer has a very recognizable form. Proposition 3.4. The stabilizer of E under the group action GLn(C) {F} is the set of upper triangular matrices. Proof. To find the stabilizer, we need to establish what matrices M ∈ GLn(C) can multiply each subspace in E, span(−→e1, . . . , −→ek), so that Mspan(−→e1, . . . , −→ek) = span(−→e1, . . . , −→ek). Given k ≤ n, we express span(−→e1, . . . , −→ek) as the column space of an n by k matrix to carry out multiplication of a subspace: span(−→e1, −→e2, . . . , −→ek) = C −→e1 −→e2 . . . −→ek = C                             1 0 . . . 0 . . . 0 0 1 . . . 0 . . . 0 ... ... ... ... ... ... 0 0 . . . 1 . . . 0 ... ... ... ... ... ... 0 0 . . . 0 . . . 1 ... ... ... ... ... ... 0 0 . . . 0 . . . 0                             To find the matrices that retain this subspace after left multiplication, given any k vectors −→x1, . . . , −→xk ∈ span(−→e1, . . . , −→ek), we find a matrix M that takes span(−→e1, . . . , −→ek) to span(−→x1, . . . , −→xk) Mspan(−→e1, . . . , −→ek) = span(−→x1, . . . , −→xk). If we denote each −→xi T = x1i x2i . . . xki 0 . . . 0 , we have the following matrix equality:          m11 m12 . . . m1k m1k+1 . . . m1n m21 m22 . . . m2k m2k+1 . . . m2n ... ... ... ... ... ... ... mk1 mk2 . . . mkk mkk+1 . . . mkn ... ... ... ... ... ... ... mn1 mn2 . . . mnk mnk+1 . . . mnn                   1 0 . . . 0 0 1 . . . 0 ... ... ... ... 0 0 . . . 1 ... ... ... ... 0 0 . . . 0          =          x11 x12 . . . x1k x21 x22 . . . x2k ... ... ... ... xk1 xk2 . . . xkk ... ... ... ... 0 0 . . . 0          MEk = Xk. Note that Ek and Xk have all 0’s past the kth row, since the first k standard basis vectors only span the first k dimensions. Now, let −→mi T be the row vector that contains the entries in the ith row of M so that −→mi T = mi1 mi2 . . . min . By the definition of matrix multiplication, we know that the entry in the ith row and jth column of Xk is defined as follows: −→mi T −→ej = mi10 + mi20 + · · · + mij1 + · · · + min0 = mij Since for every i > k, the ith row of Xk is a zero row vector, we need the first k entries in the ith row of M to be 0. Looking now at every row i ≤ k, the entry in the jth column is the arbitrary complex number xij. This means we need −→mi T −→ej = xij, so we can set the entry in the ith row and jth column of M equal to xij. Since there are only k columns in Xk, these results hold only for every mij where j ≤ k, which means we can leave the other entries in M untouched. Since we have accounted for every xij in Xk, we can state the following result: 6
  • 7.          x11 x12 . . . x1k m1k+1 . . . m1n x21 x22 . . . x2k m2k+1 . . . m2n ... ... ... ... ... ... ... xk1 xk2 . . . xkk mkk+1 . . . mkn ... ... ... ... ... ... ... 0 0 . . . 0 mnk+1 . . . mnn                   1 0 . . . 0 0 1 . . . 0 ... ... ... ... 0 0 . . . 1 ... ... ... ... 0 0 . . . 0          =          x11 x12 . . . x1k x21 x22 . . . x2k ... ... ... ... xk1 xk2 . . . xkk ... ... ... ... 0 0 . . . 0          In particular, the matrix M we constructed has 0’s in every entry past the kth row and up to and including the k column, and arbitrary entries elsewhere, since each xij was arbitrary. Recall that k ≤ n was arbitrary. This means that since we need a matrix M that retains each step in E after matrix multiplication, we require that for every k ≤ n, this matrix have 0’s past the kth row and before the (k + 1)st column: M =            m11 m12 . . . m1k m1k+1 . . . m1n 0 m22 . . . m2k m2k+1 . . . m2n ... ... ... ... ... ... ... 0 0 . . . mkk mkk+1 . . . mkn 0 0 . . . 0 mk+1k+1 . . . mk+1n ... ... ... ... ... ... ... 0 0 . . . 0 0 . . . mnn            Since the equation ME = E is satisfied when M is upper triangular, we can state the following: StabE = {U ∈ GLn(C) | U is upper triangular} Using the stabilizer just derived, we can now reexpress the set of complete flags {F}: {F} ∼= {MStabE | M ∈ GLn(C)} ⊂ GLn(C) Remark 3.5. Each element in {F} corresponds to a left coset of the set of upper triangular matrices: MStabE = {MU | U ∈ StabE} Since StabE is not a normal subgroup, however, the cosets do not have a group structure on them i.e. the quotient is not a quotient group. Remark 3.6. For the rest of this thesis, we will refer to StabE simply as U. 4. Bruhat Decomposition for the Complete Flag We put aside our quotient momentarily and focus on the method of partitioning GLn(C) to derive the Schubert Cells of {F} (and ultimately Gr(k, n)). It is known that we can express GLn(C) as a disjoint union of cosets of the permutation group Sn using what is known as the Bruhat Decomposition [5]. We provide the formal statement below. Theorem 4.1 (Bruhat Decomposition (of GLn(C))). GLn(C) ∼= σ∈Sn UσU 7
  • 8. Given σ ∈ Sn, we define the set UσU: UσU = {UσU | U and U ∈ U, σ ∈ Sn} . This means each element UσU is a product of an upper triangular matrix, a permutation matrix, and another upper triangular matrix. We view each set in the disjoint union as a “double coset”, or product of sets UσU For this thesis, we use the so-called “window notation” for permutation matrices, where the indices a1, a2, . . . an ∈ {1, 2, . . . , n}, and the element a1 a2 . . . an ∈ Sn defines the following permutation: a1 → 1, a2 → 2, . . . , an → n Permutation matrices act on other matrices with right multiplication by permuting columns, and with left multiplication by permuting rows. We illustrate with an example. Examples 4.2. Let σ = 2 1 3 . By the mapping described above, the corresponding matrix has a 1 in the second row, first column, first row, second column and third row, third column: 2 1 3 →   0 1 0 1 0 0 0 0 1   Given some matrix   a b c d e f g h i  , note the permutation of columns by right multiplication, and permutation of rows by left multiplication:   a b c d e f g h i     0 1 0 1 0 0 0 0 1   =   b a c e d f h g i     0 1 0 1 0 0 0 0 1     a b c d e f g h i   =   d e f a b c g h i   Recall now, the isomorphism established in the last section: {F} ∼= GLn(C)/U Using the Bruhat Decomposition, we can derive a new isomorphism: {F} ∼= GLn(C)/U ∼= σ∈Sn UσU/U ∼= σ∈Sn Uσ By this result, we can partition the set of all complete flags into n! disjoint unions of a subset of GLn(C), where each partition is the set of upper triangular matrices with the columns permuted. These disjoint unions are a crucial piece of this thesis, and we provide them with a formal definition. Definition 4.3. We call each disjoint set Uσ a Schubert Cell of the set of complete flags [7]. 8
  • 9. 5. Stabilizer of the Grassmannian Intuition under our belt, we are now ready to derive the Schubert Cells of the Grassmannian. We begin by deriving the necessary stabilizer. Definition 5.1. Given k ≤ n, the Grassmannian, denoted Gr(k, n), is the set of all k-dimensional subspaces of Cn [7]: Gr(k, n) = {L ⊆ Cn |dim(L) = k} We will use the same group action GLn(C) Gr(k, n) given by left multiplication, and look to find the stabilizer using a well-chosen element of Gr(k, n). The group action is once again transitive; the proof of this follows a similar argument to the case of the complete flag but will be omitted. We choose the following Grassmannian element to derive the stabilizer: LE = span(−→e1, −→e2, . . . −→ek) ∈ Gr(k, n) To find StabLE , we consider what matrices M are LE-invariant. As with the complete flag case, the stabilizer has a very nice algebraic form. Proposition 5.2. StabLE = {G ∈ GLn(C) | G = A ∗ 0 B , A ∈ GLk(C), B ∈ GLn−k(C)} i.e. the set of “block” matrices. For the purpose of this thesis, we derive the stabilizer, StabLE , using Gr(2, 4) as a readily understood example rather than prove the result for the general Gr(k, n); the proof would otherwise also assume a structure too similar to the proof for complete flags. Examples 5.3. Since we elect to work in Gr(2, 4), we have k = n − k = 2. We can view LE as both the xy-plane and a 4 by 2 matrix whose columns are −→e1, −→e2 : LE =     1 0 0 1 0 0 0 0     For a matrix M to multiply LE and retain the xy-plane, M must have zeroes in the first two columns and last two rows: M     1 0 0 1 0 0 0 0     =     x w y z 0 0 0 0     ⇒ M =     x w ∗ ∗ y z ∗ ∗ 0 0 a b 0 0 c d     Since M is an invertible matrix, and there are zeroes in the bottom-left “square,” we need an invertible 2 by 2 matrix in the bottom-right “square” to keep M invertible. We are left with the following result: StabLE = {G ∈ GLn(C) | G = A ∗ 0 B , A ∈ GL2(C), B ∈ GL2(C)}. where ∗ denotes arbitrary entries in the upper-right “square.” We can now use the Orbit-Stabilizer theorem to find this time, the quotient isomorphic to Gr(k, n): GLn(C)/StabLE ∼= Gr(k, n) 9
  • 10. Remark 5.4. For ease of notation, we let StabLE = G for the rest of this thesis. 6. “Bruhat” Decomposition for the Grassmannian Since U is a proper subset of G, since upper triangular matrices are block matrices but block matrices are not necessarily upper triangular, partitioning GLn(C) with the quotient involving G using the traditional Bruhat Decomposition used in §4, ∪σ∈Sn GσG, creates unions with nontrivial intersections. Let Qkn denote the quotient of permutations Sn/(Sk × Sn−k), where an element in Sk permutes the first k components, and an element in Sn−k permutes the last n − k components. We claim the following variant of the Bruhat Decomposition properly partitions GLn(C): Proposition 6.1. GLn(C) = σ∈Qkn UσG First, we prove a lemma regarding “unnecessary” permutations. Lemma 6.2. σ∈Sn UσG = σ∈Qkn UσG Proof. Since the difference in the left hand and right hand side sets is the set over which the permutations in σG range, it suffices to show that σ∈Sn σG = σ∈Qkn σG Since the above equality implies that given σ ∈ Sk × Sn−k, the coset σG is the same as the set G, it also suffices to show that [7] σ∈Sk×Sn−k σG = G. Since G = 1G, where 1 denotes the identity permutation, G ⊆ ∪σ∈Sk×Sn−k G. Conversely, given G ∈ G, and a permutation σ ∈ Sk × Sn−k, since σ is in the product group Sk × Sn−k,σ permutes the first k and last n − k items among themselves. Since σ is multiplying a matrix G on the left, this means σ rearranges the first k and last n−k rows of G among themselves. Now, recall that G has the following form: G = A ∗ 0 B A ∈ GLk(C), B ∈ GLn−k(C) If we express σ as a product of two permutations from Sk and Sn−k, i.e. σ = σ1 × σ2, we can compute σG as follows: σG = (σ1 × σ2)G = (σ1 × σ2) A ∗ 0 B = σ1A σ1∗ σ20 σ2B Since A and B are invertible matrices, σ1A = A and σ2B = B , where A ∈ GLk(C) and B ∈ GLn−k(C). Since ∗ are arbitrary entries, so is the permutation of them σ1∗. Since permuting a 0 matrix recovers the 0 matrix, σ20 = 0. This means σ1A σ1∗ σ20 σ2B = A ∗ 0 B ∈ G, which means σ∈Sk×Sn−k σG ⊆ G and thus: σ∈Sk×Sn−k σG = G. 10
  • 11. Now, we can prove Proposition 6.1. Proof. (Proposition 6.1) The containment σ∈Qkn UσG ⊆ GLn(C) is obvious, since given a matrix UσG, we know that each of U, σ, and G is invertible, and a product of invertible matrices is invertible. The reverse containment can be shown using algebra. Recall the original Bruhat Decomposition of GLn(C): GLn(C) = σ∈Sn UσU Since upper triangular matrices are block matrices, we have that U ⊆ G. Now, using the lemma we just proved, we can state the following: σ∈Sn UσU ⊆ σ∈Sn UσG = σ∈Qkn UσG This means that GLn(C) ⊆ σ∈Qkn UσG, thus proving the necessary equality. Remark 6.3. Since our proof does not show that the union of GLn(C) using this decomposition is necessarily disjoint, we credit the fact that we have a disjoint union to William Fulton, who provides this fact in his text, Young Tableaux, through an example on page 147 [9]. Note that the example in this text involves using a completely different means of expressing Schubert Cells. Remark 6.4. We note the cardinality of our quotient of permutations: |Qkn| = n! k!(n − k)! = n k . Now that we are only working with a quotient of Sn, we will want to work with one particular representative in each coset. Before picking our favorite representative, we need make the precise equivalence relation among the permutations, a corollary of having just established that a per- mutation that permutes the first k rows of G among themselves, and the last n − k rows among themselves is equivalent to the identity. We state and prove the equivalence relation in our quotient Qkn. Corollary 6.5. σ ∼ σ ⇐⇒ σ and σ have the same window notation after shuffling the first k and last n − k positions. Proof. First, assume σ ∼ σ . That means σ and σ are in the same coset, so there exists a permutation π ∈ (Sk × Sn−k) π = a1 a2 . . . ak ak+1 . . . an a1, . . . , ak ∈ {1, . . . , k} ak+1, . . . , an ∈ {k + 1, . . . , n}. such that σ = σ π. Since π ∈ (Sk × Sn−k), we know that π permutes the first k and last n − k entries of σ . Since σ and σ differ only by a permutation that shuffles the first k and last n − k columns, σ has the same window notation as σ after the first k and last n−k entries in σ undergo the permutation given by π. Now, assume the converse, namely that σ and σ have the same window notation after shuffling the first k and last n−k entries in one, say σ . Then, the two permutations differ by a permutation 11
  • 12. factor that shuffles the first k and last n−k columns, which means σ = σ π for some π ∈ (Sk×Sn−k). Since σ and σ differ by a factor of some element in (Sk × Sn−k), we have that σ, and σ are in the same coset and thus σ ∼ σ . We can now say that for the coset [σ] ∈ Qkn we will use the element that has the first k and last n − k entries in ascending order to represent all of [σ]: σ = a1 a2 . . . ak | ak+1 . . . an a1 < a2 < · · · < ak ak+1 < · · · < an. Remark 6.6. For this thesis, we visually separate the first k and last n − k entries to remind the reader of the ordering among the n indices ai. 7. Orbit-Stabilizer and Schubert Cells of the Grassmannian Having split GLn(C) into disjoint unions involving the stabilizer of the Grassmannian, we now use the Orbit-Stabilizer theorem to express the Schubert Cells of the Grassmannian. Recall that the stabilizer of the Grassmannian: StabLE = G = G ∈ GLn(C) G = A ∗ 0 B , Since our group action was transitive, by the Orbit-Stabilizer Theorem, Gr(k, n) ∼= GLn(C)/StabEG = GLn(C)/G = σ∈Qkn UσG/G Since each coset UσG/G has all matrices G as the identity, we also can establish the following isomorphism: σ∈Qkn UσG/G ∼= σ∈Qkn Uσ Definition 7.1. We define the Schubert Cells of the Grassmannian to be each partition Uσ, where σ ranges over Qkn. Remark 7.2. Whereas there were n! Schubert Cells of {F}, there are only n k Schubert Cells of Gr(k, n), since the permutations only range over the quotient Qkn. The Schubert Cells of the Grassmannian are cosets, and as we did with the quotient Qkn, we need to find an element of each coset that allows for the simplest computations. We will choose these coset representatives by taking an arbitrary upper triangular matrix U, and then constructing a particular matrix G that reduces Uσ as much as possible. Since every G is equivalent to the identity matrix in Uσ by the isomorphism above, the matrix UσG is a member of the coset Uσ. We omit the proof of which matrix is the most reduced for each Uσ for Gr(k, n) in favor of the more illustrative process of deriving the Schubert Cells for Gr(2, 3), and generalizing the result. Since k = 2, and n − k = 1, our quotient is S3/(S2 × S1) = Q23. As we computed in a previous example, we have the following choices for coset representatives: Q23 = {[σ1] ∼ 1 2 | 3 , [σ2] ∼ 1 3 | 2 , [σ3] ∼ 2 3 | 1 } We now take the matrix representations of each coset representative: 12
We now take the matrix representations of each coset representative:
$$[\sigma_1] \sim [\,1\ 2 \mid 3\,] \to \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad [\sigma_2] \sim [\,1\ 3 \mid 2\,] \to \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \quad [\sigma_3] \sim [\,2\ 3 \mid 1\,] \to \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$$

Proposition 7.3. Given an arbitrary $U \in \mathcal{U}$, the following matrices are the most reduced elements of their respective Schubert Cells, i.e. they have 1's in their nonzero pivot entries and more 0's than any other matrix in the cell:
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \in \mathcal{U}\sigma_1, \qquad \begin{pmatrix} 1 & 0 & 0 \\ 0 & * & 1 \\ 0 & 1 & 0 \end{pmatrix} \in \mathcal{U}\sigma_2, \qquad \begin{pmatrix} * & * & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \in \mathcal{U}\sigma_3$$

Remark 7.4. In the language of the disjoint union, $(\mathcal{U}\sigma_1 \sqcup \mathcal{U}\sigma_2 \sqcup \mathcal{U}\sigma_3) \cong \mathrm{Gr}(2,3)$.

Proof. Given $U \in \mathcal{U}$,
$$U = \begin{pmatrix} a & b & d \\ 0 & c & e \\ 0 & 0 & f \end{pmatrix}, \qquad a, c, f \neq 0,$$
we construct $G \in \mathcal{G}$:
$$G = \begin{pmatrix} x & w & r \\ y & z & s \\ 0 & 0 & t \end{pmatrix}, \qquad t \neq 0, \quad xz - yw \neq 0.$$
The choice of which entries of the upper-left block are nonzero does not actually matter, since the permutation that switches the first two rows is equivalent to the identity in $Q_{23}$. For ease of computation, we take $x, z, t \neq 0$. We construct the $G$ that reduces $U\sigma$ as much as possible by treating the entries of $G$ as variables and choosing each entry accordingly:
$$U\sigma_1 G = \begin{pmatrix} a & b & d \\ 0 & c & e \\ 0 & 0 & f \end{pmatrix}\begin{pmatrix} x & w & r \\ y & z & s \\ 0 & 0 & t \end{pmatrix} = \begin{pmatrix} ax+by & aw+bz & ar+bs+dt \\ cy & cz & cs+et \\ 0 & 0 & ft \end{pmatrix}$$
$$U\sigma_2 G = \begin{pmatrix} a & b & d \\ 0 & c & e \\ 0 & 0 & f \end{pmatrix}\begin{pmatrix} x & w & r \\ 0 & 0 & t \\ y & z & s \end{pmatrix} = \begin{pmatrix} ax+dy & aw+dz & ar+ds+bt \\ ey & ez & es+ct \\ fy & fz & fs \end{pmatrix}$$
$$U\sigma_3 G = \begin{pmatrix} a & b & d \\ 0 & c & e \\ 0 & 0 & f \end{pmatrix}\begin{pmatrix} 0 & 0 & t \\ x & w & r \\ y & z & s \end{pmatrix} = \begin{pmatrix} bx+dy & bw+dz & br+ds+at \\ cx+ey & cw+ez & cr+es \\ fy & fz & fs \end{pmatrix}$$
Here the middle matrix of each product is $\sigma_i G$, i.e. $G$ with its rows permuted by $\sigma_i$. To find each reduced form, i.e. to reduce as many entries to 0 or 1 as possible, we start with $U\sigma_1 G$:
$$U\sigma_1 G = \begin{pmatrix} ax+by & aw+bz & ar+bs+dt \\ cy & cz & cs+et \\ 0 & 0 & ft \end{pmatrix}$$
Since $f$ and $t$ are nonzero, $ft$ must be nonzero. The same is true of $cz$, which in turn forces $ax+by$ to be nonzero. The first clear selection for the variable entries of $G$ is $t = f^{-1}$:
$$\begin{pmatrix} ax+by & aw+bz & ar+bs+dt \\ cy & cz & cs+et \\ 0 & 0 & ft \end{pmatrix} \sim \begin{pmatrix} ax+by & aw+bz & ar+bs+df^{-1} \\ cy & cz & cs+ef^{-1} \\ 0 & 0 & 1 \end{pmatrix}$$
Since $cz \neq 0$, we also need $z = c^{-1}$:
$$\begin{pmatrix} ax+by & aw+bz & ar+bs+df^{-1} \\ cy & cz & cs+ef^{-1} \\ 0 & 0 & 1 \end{pmatrix} \sim \begin{pmatrix} ax+by & aw+bc^{-1} & ar+bs+df^{-1} \\ cy & 1 & cs+ef^{-1} \\ 0 & 0 & 1 \end{pmatrix}$$
Now we have $cy$ in an entry we wish to make 0, so we can simply take $y = 0$:
$$\begin{pmatrix} ax+by & aw+bc^{-1} & ar+bs+df^{-1} \\ cy & 1 & cs+ef^{-1} \\ 0 & 0 & 1 \end{pmatrix} \sim \begin{pmatrix} ax & aw+bc^{-1} & ar+bs+df^{-1} \\ 0 & 1 & cs+ef^{-1} \\ 0 & 0 & 1 \end{pmatrix}$$
These substitutions leave us with $ax$ in a nonzero position, so we need $x = a^{-1}$:
$$\begin{pmatrix} ax & aw+bc^{-1} & ar+bs+df^{-1} \\ 0 & 1 & cs+ef^{-1} \\ 0 & 0 & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & aw+bc^{-1} & ar+bs+df^{-1} \\ 0 & 1 & cs+ef^{-1} \\ 0 & 0 & 1 \end{pmatrix}$$
We are trying to make the remaining off-diagonal entries 0; we can start by setting $w = \frac{-bc^{-1}}{a}$ and $s = \frac{-ef^{-1}}{c}$, since
$$a\Bigl(\frac{-bc^{-1}}{a}\Bigr) + bc^{-1} = 0, \qquad c\Bigl(\frac{-ef^{-1}}{c}\Bigr) + ef^{-1} = 0:$$
$$\begin{pmatrix} 1 & aw+bc^{-1} & ar+bs+df^{-1} \\ 0 & 1 & cs+ef^{-1} \\ 0 & 0 & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & ar+bs+df^{-1} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
For ease of notation, $s$ was not substituted. We can complete the reduction by setting $r = \frac{-(bs+df^{-1})}{a}$, since
$$a\Bigl(\frac{-(bs+df^{-1})}{a}\Bigr) + bs + df^{-1} = 0:$$
$$\begin{pmatrix} 1 & 0 & ar+bs+df^{-1} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
We now reduce the next product, $U\sigma_2 G$:
$$U\sigma_2 G = \begin{pmatrix} ax+dy & aw+dz & ar+ds+bt \\ ey & ez & es+ct \\ fy & fz & fs \end{pmatrix}$$
Since $f$ and $t$ are nonzero, the entries they pivot cannot all vanish, and although $ax \neq 0$ by the same logic, the fact that $ax + dy \neq 0$ is not obvious; as we begin to construct $G$, however, the necessity that $ax + dy \neq 0$ becomes clear. Immediately, we see that we need $z = f^{-1}$ and $y = s = 0$:
$$\begin{pmatrix} ax+dy & aw+dz & ar+ds+bt \\ ey & ez & es+ct \\ fy & fz & fs \end{pmatrix} \sim \begin{pmatrix} ax & aw+df^{-1} & ar+bt \\ 0 & ef^{-1} & ct \\ 0 & 1 & 0 \end{pmatrix}$$
Since we need the entries now reading $ax$ and $ct$ to equal 1, we set $x = a^{-1}$ and $t = c^{-1}$:
$$\begin{pmatrix} ax & aw+df^{-1} & ar+bt \\ 0 & ef^{-1} & ct \\ 0 & 1 & 0 \end{pmatrix} \sim \begin{pmatrix} 1 & aw+df^{-1} & ar+bc^{-1} \\ 0 & ef^{-1} & 1 \\ 0 & 1 & 0 \end{pmatrix}$$
The only variables left are $w$ and $r$, and we can set $w = \frac{-df^{-1}}{a}$ and $r = \frac{-bc^{-1}}{a}$, canceling the entries in the top row:
$$\begin{pmatrix} 1 & aw+df^{-1} & ar+bc^{-1} \\ 0 & ef^{-1} & 1 \\ 0 & 1 & 0 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 0 \\ 0 & ef^{-1} & 1 \\ 0 & 1 & 0 \end{pmatrix}$$
Since we had no choice but to set $x = a^{-1}$, $z = f^{-1}$ and $t = c^{-1}$, this is the most we can reduce $U\sigma_2 G$. Since $e, f \in \mathbb{C}$ are arbitrary, the middle entry is left arbitrary in the reduced form, so we have the following result:
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & ef^{-1} & 1 \\ 0 & 1 & 0 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 0 \\ 0 & * & 1 \\ 0 & 1 & 0 \end{pmatrix}$$
Finally, we reduce $U\sigma_3 G$, where $fz$ is necessarily nonzero. This forces $z = f^{-1}$:
$$U\sigma_3 G = \begin{pmatrix} bx+dy & bw+dz & br+ds+at \\ cx+ey & cw+ez & cr+es \\ fy & fz & fs \end{pmatrix} \sim \begin{pmatrix} bx+dy & bw+df^{-1} & br+ds+at \\ cx+ey & cw+ef^{-1} & cr+es \\ fy & 1 & fs \end{pmatrix}$$
We set $s = y = 0$ to get 0's in the last row:
$$\begin{pmatrix} bx+dy & bw+df^{-1} & br+ds+at \\ cx+ey & cw+ef^{-1} & cr+es \\ fy & 1 & fs \end{pmatrix} \sim \begin{pmatrix} bx & bw+df^{-1} & br+at \\ cx & cw+ef^{-1} & cr \\ 0 & 1 & 0 \end{pmatrix}$$
Next, we need $x = c^{-1}$ and $r = 0$:
$$\begin{pmatrix} bx & bw+df^{-1} & br+at \\ cx & cw+ef^{-1} & cr \\ 0 & 1 & 0 \end{pmatrix} \sim \begin{pmatrix} bc^{-1} & bw+df^{-1} & at \\ 1 & cw+ef^{-1} & 0 \\ 0 & 1 & 0 \end{pmatrix}$$
Clearly, we need $t = a^{-1}$. Yet since $df^{-1}$ and $ef^{-1}$ are arbitrary complex numbers, there is no choice of $w$ that sends both $bw + df^{-1}$ and $cw + ef^{-1}$ to 0, so as a matter of convention we choose $w = \frac{-ef^{-1}}{c}$:
$$\begin{pmatrix} bc^{-1} & bw+df^{-1} & at \\ 1 & cw+ef^{-1} & 0 \\ 0 & 1 & 0 \end{pmatrix} \sim \begin{pmatrix} bc^{-1} & bw+df^{-1} & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} * & * & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \qquad \square$$

Remark 7.5. Note that since we are constructing a matrix $G$, and each $\mathcal{U}\sigma \cong \mathcal{U}\sigma\mathcal{G}/\mathcal{G}$, choosing values for the variables at each step of the proof does not produce equality but rather an equivalence, as we are finding new matrices in the same coset $\mathcal{U}\sigma$. This is why we write "$\sim$" rather than "$=$" after we assign values in our constructed $G$.

Looking now at the general case of $\mathrm{Gr}(k,n)$, with its $\binom{n}{k}$ permutations, we derive $\binom{n}{k}$ distinct reduced matrices that correspond to $\binom{n}{k}$ disjoint Schubert Cells. We state our result formally:
Theorem 7.6. Given $\sigma \in Q_{kn}$,
$$\sigma = [\,a_1\ a_2\ \dots\ a_k \mid a_{k+1}\ \dots\ a_n\,], \qquad a_1 < a_2 < \dots < a_k, \quad a_{k+1} < \dots < a_n,$$
and $U \in \mathcal{U}$, the most reduced choice of $U\sigma G$ is the $n$ by $n$ matrix determined as follows. For each index $1 \leq i \leq n$, there is a 1 in the $a_i$th row and $i$th column; every entry to the right of a 1 in its row, and every entry below a 1 in its column, is 0; and all remaining entries, which lie in the rows indexed by $a_{k+1}, \dots, a_n$, are arbitrary entries $*$.

Examples 7.7. Let $n = 5$, $k = 3$, and $\pi = [\,1\ 3\ 5 \mid 2\ 4\,]$. Then our "double coset" representative $U\pi G$ has the following form, where the row in which each 1 lands is determined by the corresponding entry of $\pi$:
$$\mathcal{U}\pi \sim U\pi G = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & * & * & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & * & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 \end{pmatrix}.$$
Note that the equivalence notation $\mathcal{U}\pi \sim U\pi G$ is used because every matrix in $\mathcal{U}\pi$ is equivalent to the matrix $U\pi G$.
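As a sanity check on Theorem 7.6, the following sketch in plain Python (the function name and the use of the string '*' for arbitrary entries are our own conventions) builds the reduced representative directly from window notation, using the rule that every entry to the right of a 1 in its row, or below a 1 in its column, is 0:

```python
def reduced_matrix(window):
    """Sketch: the reduced representative of Theorem 7.6 from window
    notation (1-indexed); '*' marks the arbitrary entries."""
    n = len(window)
    col_of_one = {window[i]: i + 1 for i in range(n)}   # row a_i -> column i
    row_of_one = {i + 1: window[i] for i in range(n)}   # column i -> row a_i
    M = [[0] * n for _ in range(n)]
    for r in range(1, n + 1):
        for c in range(1, n + 1):
            if col_of_one[r] == c:
                M[r - 1][c - 1] = 1
            elif c < col_of_one[r] and row_of_one[c] > r:
                M[r - 1][c - 1] = '*'    # neither right of nor below any 1
    return M

for row in reduced_matrix([1, 3, 5, 2, 4]):
    print(row)
# [1, 0, 0, 0, 0]
# [0, '*', '*', 1, 0]
# [0, 1, 0, 0, 0]
# [0, 0, '*', 0, 1]
# [0, 0, 1, 0, 0]
```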
8. Young Tableau Diagrams and Indexing Schubert Cells

As we continue to build the foundations for understanding how to compute intersections of Schubert Varieties in $H^*(\mathrm{Gr}(k,n))$, we now transition from algebra to combinatorics, which we use to index our Schubert Cells and, ultimately, our Schubert Varieties. The transition is necessary because operations on these indices are what calculate intersections of Schubert Varieties in $H^*(\mathrm{Gr}(k,n))$. We now define our combinatorial object of choice.

Definition 8.1 (Young Tableau). [7] Given $k < n$ and a weakly ordered sequence of nonnegative integers $\lambda = (\lambda_1, \lambda_2, \dots, \lambda_{n-k})$, where $\lambda_1 \geq \dots \geq \lambda_{n-k}$ and $\lambda_1 \leq k$, the Young Tableau diagram corresponding to $\lambda$ is a diagram of contiguous 1 by 1 boxes assembled in rows, with $\lambda_i$ boxes in the $i$th row, such that each diagram fits inside a larger box of height $n - k$ and length $k$.

Remark 8.2. For this thesis, weakly ordered sequences and Young Tableau diagrams will be interchangeable.

We fix one piece of notation regarding sets of Young Tableau diagrams:
$$\{\lambda \mid \lambda \text{ fits inside a box of height } r \text{ and length } m\} = r \times m.$$

Examples 8.3. Let $\lambda = (5, 5, 4, 2, 1)$. The Young Tableau diagram corresponding to $\lambda$ has rows of 5, 5, 4, 2 and 1 boxes.

To index each Schubert Cell $\mathcal{U}\sigma$ using Young Tableau diagrams, we need to establish a bijection between these two objects. There is in fact a natural correspondence between the coset representatives $U\sigma G$ and diagrams that fit in an $n-k$ by $k$ box, determined by the placement of the arbitrary entries $*$: the diagram is obtained by pushing the $*$'s in each row together, and then sliding each row of $*$'s as far left and as far up as possible. We provide a familiar example to demonstrate.

Examples 8.4. Recall from the last example in §7:
$$\mathcal{U}\pi \sim U\pi G = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & * & * & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & * & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 \end{pmatrix}$$
To find which diagram $\lambda \subseteq 2 \times 3$ indexes the Schubert Cell $\mathcal{U}\pi$, we vertically "push" the rows of arbitrary entries $*$ together, and "slide" them until each row is completely left-adjusted. This algorithm creates a diagram with a row of two boxes above a row of one box:
$$\mathcal{U}\pi \to \lambda = (2, 1).$$

Remark 8.5. When establishing correspondences with Young Tableau diagrams, slight disagreements often occur about the convention for forming the Young Tableau diagram corresponding to the matrices $U\sigma G$ of Schubert Cells. Frequently, the convention is to work with $k$ by $n-k$ diagrams, where $\lambda = (\lambda_1, \dots, \lambda_k)$. This is because the Schubert Cells being indexed are usually indexed by the codimension of their corresponding Schubert Variety, i.e. the size of the part of the Grassmannian outside the closure of the cell. Looking at how each permutation $\sigma$ orients the arbitrary entries in $U\sigma G$, we noticed a much more natural correspondence with $n-k$ by $k$ diagrams, so our sequences $\lambda$ are instead indexed up to $n-k$. Taking partitions this way, and working instead with $n-k$ by $k$ boxes, means that we are counting the dimension, or size, of the Schubert Variety itself. Later in this paper, when we discuss Knutson and Tao's Puzzle diagrams for computing Littlewood-Richardson Coefficients, we will switch to the "usual" convention.

To prove the correspondence between Young Tableau diagrams and Schubert Cells, i.e. matrices $U\sigma G$, we introduce a formula that computes the corresponding diagram $\lambda \subseteq n-k \times k$ for each $U\sigma G$. First, we present some new notation. Let $\sigma = [\,a_1\ \dots\ a_k \mid a_{k+1}\ \dots\ a_n\,]$, where $a_1 < a_2 < \dots < a_k$ and $a_{k+1} < \dots < a_n$, and define $a_0 = 0$. We denote:
$$R_\sigma = \{1, 2, \dots, n\} \setminus \{a_1, a_2, \dots, a_k\} = \{r_1, r_2, \dots, r_{n-k}\} = \{\text{integers from 1 to } n \text{ excluding } a_1, \dots, a_k\}$$
$$A_{\sigma_i} = \{a_{i-1}, a_{i-1}+1, \dots, a_i\} = \{\text{integers between } a_{i-1} \text{ and } a_i\}, \qquad i \leq k.$$
The sets $R_\sigma$ and $A_{\sigma_i}$ depend on the permutation $\sigma$. Note the following cardinality:
$$|A_{\sigma_i}| = a_i - a_{i-1} + 1.$$
We now prove a small lemma regarding the significance of the set $R_\sigma$.

Lemma 8.6. The elements of $R_\sigma$ are precisely the indices of the rows of $U\sigma G$ that contain arbitrary entries $*$.

Proof. Given $\sigma \in Q_{kn}$,
$$\sigma = [\,a_1\ \dots\ a_k \mid a_{k+1}\ \dots\ a_n\,], \qquad a_1 < a_2 < \dots < a_k, \quad a_{k+1} < \dots < a_n,$$
by definition the set $R_\sigma$ consists of the entries $a_i$ in the window of $\sigma$ with $i > k$. Since our coset representative $U\sigma G$ either is the identity matrix or contains arbitrary entries $*$ somewhere, it suffices to show that those entries do not occur in the rows $a_i$ with $i \leq k$. Given an index $i$ with $i \leq k$, by the construction of $U\sigma G$ the 1 in the $a_i$th row occurs in the $i$th column, and every entry to the right of that 1 is 0, so the row has the form
$$(\; * \;\; * \;\; \cdots \;\; 1 \;\; 0 \;\; \cdots \;\; 0 \;).$$
Now, either $i = 1$, in which case there are no entries to the left of the 1, or $i > 1$. In the latter case, there are also 1's in the $a_j$th row and $j$th column for every index $j < i$, and since $j < i$ implies $a_j < a_i$, each column to the left of the $i$th column has its 1 in some row above the $a_i$th row. Since every entry below a 1 in the same column is 0, every entry to the left of the 1 in the $a_i$th row must be 0. Thus every entry in the $a_i$th row is either 1 or 0. □

We now propose that the following formula is a valid tool for readily counting the arbitrary entries in each $U\sigma G$; it will ultimately help prove the bijection between Schubert Cells $\mathcal{U}\sigma$ and diagrams in $n-k \times k$.

Proposition 8.7. For $\lambda = (\lambda_1, \lambda_2, \dots, \lambda_{n-k})$, each
$$\lambda_j = \sum_{i=1}^{k} (k - i + 1)\,\bigl|\{r_j\} \cap A_{\sigma_i}\bigr|,$$
where $\{r_j\}$ is the singleton set containing the $j$th element of $R_\sigma$.

Remark 8.8. Akin to an indicator function, $|\{r_j\} \cap A_{\sigma_i}|$ is either 1 or 0 according to whether or not the element $r_j$ lies in that particular $A_{\sigma_i}$.

This claim carries a lot of the notation just introduced, so before the formal proof we carry out a computation demonstrating what the formula means and how it computes the diagram corresponding to a reduced matrix.

Examples 8.9. We look at how the formula computes $\lambda = (2, 1)$ from the ongoing example, where $k = 3$, $n = 5$, our Young Tableau diagram $\lambda \subseteq 2 \times 3$, and $\pi = [\,1\ 3\ 5 \mid 2\ 4\,]$:
$$\mathcal{U}\pi \sim \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & * & * & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & * & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 \end{pmatrix}$$
Applying the constructions just defined, we have the following sets:
$$R_\pi = \{2, 4\}, \qquad A_{\pi_1} = \{0, 1\}, \qquad A_{\pi_2} = \{1, 2, 3\}, \qquad A_{\pi_3} = \{3, 4, 5\}.$$
Note how $R_\pi$ lists the rows of the coset representative that have arbitrary entries. We now use the formula just introduced to compute each component of the partition $\lambda$ that determines our Young Tableau:
$$\lambda_1 = \sum_{i=1}^{3}(3-i+1)\,|\{r_1\} \cap A_{\pi_i}| = 3\,|\{2\} \cap A_{\pi_1}| + 2\,|\{2\} \cap A_{\pi_2}| + 1\,|\{2\} \cap A_{\pi_3}| = 3(0) + 2(1) + 1(0) = 2$$
$$\lambda_2 = \sum_{i=1}^{3}(3-i+1)\,|\{r_2\} \cap A_{\pi_i}| = 3\,|\{4\} \cap A_{\pi_1}| + 2\,|\{4\} \cap A_{\pi_2}| + 1\,|\{4\} \cap A_{\pi_3}| = 3(0) + 2(0) + 1(1) = 1$$
$$\lambda = (\lambda_1, \lambda_2) = (2, 1).$$
We now formally prove our claim about the computational formula for $\lambda_j$.

Proof. (Proposition 8.7) Let $\sigma \in Q_{kn}$. Since the shape of the indexing diagram $\lambda$ is determined by the placement of the arbitrary entries $*$ in the coset representative $U\sigma G$, we determine the number of entries $*$ in each row. By Lemma 8.6, either $U\sigma G = I$ and the corresponding diagram $\lambda$ is empty, or the rows of $U\sigma G$ with arbitrary entries are given by $R_\sigma$. Given an index $i \in \{1, \dots, k\}$ and looking at the entry $a_i$, there are $a_i - a_{i-1} - 1$ rows strictly between rows $a_{i-1}$ and $a_i$, all carrying arbitrary entries. The first $i - 1$ columns have their 1's above these rows, while columns $i$ through $k$ have their 1's below them, so each such row contains exactly $k - (i-1) = k - i + 1$ arbitrary entries; we therefore append an $(a_i - a_{i-1} - 1)$ by $(k - i + 1)$ block of boxes to our diagram. Letting $m$ range over $a_{i-1} < m < a_i$, by Lemma 8.6 each $m = r_j$ for some index $j$. Since $a_{i-1} < r_j < a_i$, by the definition of $A_{\sigma_i}$ we have $r_j \in A_{\sigma_i}$, and in no other $A_{\sigma_{i'}}$. We compute each $\lambda_j$ in question using the summation:
$$\lambda_j = \sum_{i'=1}^{k}(k-i'+1)\,|\{r_j\} \cap A_{\sigma_{i'}}| = 0 + 0 + \cdots + (k - i + 1) + \cdots + 0 = k - (i - 1).$$
Since there are $a_i - a_{i-1} - 1$ values of $m$ satisfying the range, there are $a_i - a_{i-1} - 1$ corresponding indices $j$, so we have appended a block of the appropriate dimensions to $\lambda$, i.e. $a_i - a_{i-1} - 1$ components with $\lambda_j = k - (i-1)$. □

Remark 8.10. The formula used in the proof above is original to this thesis.

Using this computational formula, we can now prove the bijection between Schubert Cells and $n-k$ by $k$ diagrams. Since each Schubert Cell is in one-to-one correspondence with the permutations $\sigma \in Q_{kn}$, it suffices to prove a bijection between $Q_{kn}$ and $n-k \times k$. Given $\sigma \in Q_{kn}$, define $\varphi : Q_{kn} \to n-k \times k$ as follows:
$$\varphi(\sigma) = \lambda = (\lambda_1, \dots, \lambda_{n-k}) \quad \text{such that} \quad \lambda_j = \sum_{i=1}^{k}(k-i+1)\,|\{r_j\} \cap A_{\sigma_i}| \quad \text{for every index } j \leq n-k.$$
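Since the formula in Proposition 8.7 is entirely elementary, it is easy to put to work computationally. The sketch below (plain Python; the function name is hypothetical) implements $\varphi$ literally: it builds $R_\sigma$, locates each $r_j$ in its unique $A_{\sigma_i}$, and records the row length $k - i + 1$, reproducing the ongoing example:

```python
def schubert_partition(window, k):
    """Sketch of phi: window notation for sigma in Q_kn (first k entries
    already ascending) -> the partition indexing the Schubert Cell."""
    n = len(window)
    a = [0] + window[:k]                             # a_0 = 0, a_1 < ... < a_k
    R = [r for r in range(1, n + 1) if r not in a]   # rows carrying * entries
    lam = []
    for r in R:                                      # r lies in exactly one A_sigma_i
        for i in range(1, k + 1):
            if a[i - 1] <= r <= a[i]:
                lam.append(k - i + 1)
                break
    return lam

print(schubert_partition([1, 3, 5, 2, 4], 3))   # [2, 1]
```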
Proposition 8.11. $\varphi$ is a bijection.

Proof. Given two permutations $\sigma, \pi \in Q_{kn}$ with $\sigma \neq \pi$,
$$\sigma = [\,a_1\ \dots\ a_k \mid a_{k+1}\ \dots\ a_n\,], \qquad \pi = [\,b_1\ \dots\ b_k \mid b_{k+1}\ \dots\ b_n\,],$$
we first show that $\varphi$ is injective by proving $\varphi(\sigma) \neq \varphi(\pi)$. Let $\varphi(\sigma) = \lambda$ and $\varphi(\pi) = \mu$. Since $\sigma \neq \pi$, for at least one index $j \leq k$ the entries satisfy $a_j \neq b_j$. Consider the smallest such index, $i$. Since $a_i \neq b_i$, it follows that $A_{\sigma_i} \neq A_{\pi_i}$. Then either $|A_{\sigma_i}| \neq |A_{\pi_i}|$, so there are a different number of rows with $k - i + 1$ boxes in $\lambda$ and $\mu$, necessarily making the partitions different, or $|A_{\sigma_i}| = |A_{\pi_i}|$. In the latter case, since we still have $A_{\sigma_i} \neq A_{\pi_i}$, it must be that $a_{i-1} \neq b_{i-1}$, but then there is an index smaller than $i$ where the entries differ, which is a contradiction. This means that with $a_i \neq b_i$ we must have $|A_{\sigma_i}| \neq |A_{\pi_i}|$, so there are in fact a different number of rows with $k - i + 1$ boxes, which means the diagrams $\lambda$ and $\mu$ are not the same. To prove $\varphi$ is surjective, note that there are $\binom{n}{k}$ diagrams that fit in an $n-k$ by $k$ box [13], so
$$|Q_{kn}| = \binom{n}{k} = |\{\lambda \mid \lambda \subseteq n-k \times k\}|.$$
Since the two sets have the same cardinality, and $\varphi$ is injective, the mapping must also be surjective. Thus $\varphi$ is a bijection between $Q_{kn}$ and $n-k$ by $k$ diagrams. □

9. Schubert Varieties and the cohomology ring

In this section we (finally) discuss the geometry of the Schubert Varieties of $\mathrm{Gr}(k,n)$, and introduce the cohomology ring $H^*(\mathrm{Gr}(k,n))$, so that we may use the Young Tableau diagrams indexing our Schubert Cells (or varieties) to determine triple intersections of Schubert Varieties in §10. To obtain Schubert Varieties, we look at zero-sets in $\mathbb{C}^n$ that contain their respective Schubert Cells. We first require a formal definition of "zero-set."

Definition 9.1. Given a set of polynomials $S = \{f_1, f_2, \dots\} \subseteq \mathbb{C}[x_1, x_2, \dots, x_n]$, the variety corresponding to the polynomials in $S$, denoted $V(S)$, is the set
$$V(S) = \{p \in \mathbb{C}^n \mid f(p) = 0 \ \forall f \in S\}.$$
We need one more definition to understand the link between Schubert Varieties and Schubert Cells.

Definition 9.2 (Zariski Closure). Given a subspace $X \subseteq \mathbb{C}^n$, we define its Zariski Closure, denoted $\overline{X}$, to be the variety $V$ such that $X \subseteq V$ and, for any variety $W$ with $X \subseteq W$, we have $V \subseteq W$. If $\overline{X} = X$, we say $X$ is Zariski closed, while if $X \subsetneq \overline{X}$, we say $X$ is Zariski open.

Remark 9.3. The "Zariski Closure" refers to a particular topology known as the Zariski Topology, whose closed sets are varieties. Although crucial to the field of algebraic geometry, we do not explore the Zariski Topology in depth in this thesis.

We now define Schubert Varieties [7].

Definition 9.4. We define the Schubert Variety corresponding to the permutation $\sigma$ (or, equivalently, indexed by $\lambda$) to be the Zariski Closure of the Schubert Cell $\mathcal{U}\sigma$, denoted $\overline{\mathcal{U}\sigma}$.

In general, given a Schubert Cell $\mathcal{U}\sigma$, the Schubert Variety $\overline{\mathcal{U}\sigma} \neq \mathcal{U}\sigma$; that is, Schubert Cells are not typically Zariski closed. The following theorem formalizes this statement [7].

Theorem 9.5. Each Schubert Cell of $\mathrm{Gr}(k,n)$ is isomorphic to the intersection of a Zariski open and a Zariski closed subspace.
Theorem 9.5 offers a correspondence between the algebra we use to define Schubert Cells and the geometry of $\mathrm{Gr}(k,n)$. We thus need a map between geometry and algebra. There is in fact a natural isomorphism $\phi$ between $n$ by $n$ matrices and points in $\mathbb{C}^{n^2}$:
$$M = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{pmatrix}, \quad a_{ij} \in \mathbb{C}, \qquad \phi(M) = (a_{11}, a_{12}, \dots, a_{nn}) \in \mathbb{C}^{n^2}.$$
The map $\phi$ "coordinatizes" matrices, i.e. takes an arbitrary $n$ by $n$ matrix to a point in space. Proving Theorem 9.5 requires tools inaccessible given the scope of this thesis, so we instead demonstrate the phenomenon with the following example [7].

Examples 9.6. Let $n = 3$ and $k = 2$, so that we are working in $\mathrm{Gr}(2,3)$. Now let $\sigma = [\,2\ 3 \mid 1\,]$, which gives for the Schubert Cell $\mathcal{U}\sigma$ the representative
$$\mathcal{U}\sigma \sim U\sigma G = \begin{pmatrix} * & * & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}.$$
Note that the coset representative $U\sigma G$ is in fact a set of matrices, so $\phi(U\sigma G)$ maps to a set of points in $\mathbb{C}^{3^2} = \mathbb{C}^9$. We are not quite ready to apply this map, however. Although the coset representatives we derived have 1's in the nonzero pivot entries, these matrices are equivalent to matrices with any nonzero entries in those positions: given three arbitrary nonzero complex numbers $x$, $y$ and $z$, there exists a matrix $G' \in \mathcal{G}$ such that
$$U\sigma G \sim U\sigma G G' = \begin{pmatrix} * & * & x \\ y & 0 & 0 \\ 0 & z & 0 \end{pmatrix}.$$
The equivalence is retained because $G'$ acts as the identity in $\mathcal{U}\sigma$. We are now ready to compute $\phi(U\sigma G)$:
$$\phi(U\sigma G) = \phi\begin{pmatrix} * & * & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \sim \phi\begin{pmatrix} * & * & x \\ y & 0 & 0 \\ 0 & z & 0 \end{pmatrix} = \{(a_{11}, a_{12}, \dots, a_{33}) \in \mathbb{C}^9 \mid a_{13}, a_{21}, a_{32} \neq 0, \ a_{22} = a_{23} = a_{31} = a_{33} = 0\}.$$
This set denotes points in $\mathbb{C}^9$ subject to constraints that certain components $a_{ij}$ are zero and others are nonzero. Treating the entries $a_{ij}$ of matrices in $\mathcal{U}\sigma$ as coordinate functions, the set describes the intersection of the variety of the vanishing coordinates with everything not in the variety of the nonvanishing ones. We re-express our set accordingly:
$$\mathcal{U}\sigma \cong V(a_{22}, a_{23}, a_{31}, a_{33}) \cap \bigl(\mathbb{C}^9 \setminus V(a_{13}, a_{21}, a_{32})\bigr).$$

We are now ready to discuss the cohomology ring.

Definition 9.7. [7] The cohomology ring of the Grassmannian, $H^*(\mathrm{Gr}(k,n))$, is a commutative ring with unity whose basis elements correspond to the Schubert Varieties of $\mathrm{Gr}(k,n)$.

Definition 9.8. [7] Given a Schubert Cell $\mathcal{U}\sigma$, its Schubert Variety $\overline{\mathcal{U}\sigma}$ and index $\lambda$, we call the representative of $\overline{\mathcal{U}\sigma}$ in $H^*(\mathrm{Gr}(k,n))$ its Schubert class, denoted $\sigma_\lambda$.
The cohomology ring operation with which this thesis is concerned is the cup product. The cup product of two Schubert classes $\sigma_\lambda$ and $\sigma_\mu$ is denoted $\sigma_\lambda \cdot \sigma_\mu$ and is given by the following formula:
$$\sigma_\lambda \cdot \sigma_\mu = \sum_{\nu \in P_{\lambda,\mu}} c^{\nu}_{\lambda,\mu}\,\sigma_\nu, \qquad P_{\lambda,\mu} = \Bigl\{\nu = (\nu_1, \dots, \nu_{n-k}) \;\Bigm|\; \sum_{j=1}^{n-k}\nu_j = \sum_{j=1}^{n-k}\lambda_j + \sum_{j=1}^{n-k}\mu_j\Bigr\}, \qquad c^{\nu}_{\lambda,\mu} \in \mathbb{Z}_{\geq 0}.$$
The cup product computes intersections among the varieties represented by $\sigma_\lambda$, $\sigma_\mu$ and $\sigma_\nu$ for each $\nu \in P_{\lambda,\mu}$ [7]. In the cup product appear the nonnegative integers $c^{\nu}_{\lambda,\mu}$, which depend on the diagrams $\nu$. Computing these integers, known as Littlewood-Richardson Coefficients, requires the rather involved process explained in the next section.

10. Computing Littlewood-Richardson Coefficients

To compute the Littlewood-Richardson coefficients $c^{\nu}_{\lambda,\mu}$, we take the approach outlined in the paper Knutson and Tao published in 2001, known as the "Puzzle Rule" [15].

Remark 10.1. This is the point in this paper where we momentarily switch conventions and work instead with diagrams in $k$ by $n-k$ boxes. Although we use the $k$ by $n-k$ convention in this section, and in general for computations related to the Puzzle Rule, we retain the $n-k$ by $k$ convention when referring to Schubert Cells, as there is a natural link between diagrams of those dimensions and the arbitrary entries in a matrix $U\sigma G$.

Since the Puzzle Rule requires binary strings rather than Young Tableau diagrams to represent partitions, our first task is to convert our diagrams into binary strings. Fix $k$ and $n$, so that we are working with diagrams that fit in a $k$ by $n-k$ box. The "conversion" process is as follows: beginning in the upper right corner of the $k$ by $n-k$ box, we assign a 0 and shift one box to the left if there is no box of the diagram in that position, or assign a 1 and shift down one box if there is. Repeating this process of either assigning 0 and moving left, or assigning 1 and moving down, traces the boundary of the diagram within the larger box. We demonstrate with an example.

Examples 10.2. Let $n = 11$, $k = 6$ and $\lambda = (5, 4, 3, 3, 1)$, whose Young Tableau diagram has rows of 5, 4, 3, 3 and 1 boxes inside a 6 by 5 box. For the tracing process, we begin in the upper right corner. Since there is in fact a box there, we trace down one box and append a 1 to our string. Now there is no box in our current position, so we trace left and append a 0 after the first 1. We then trace down and then left, appending 10 and giving 1010 so far. Next, we trace down two more boxes, giving 101011, left two boxes, giving 10101100, and then down, left, and finally down again. The final string is 10101100101.
Now we must establish a one-to-one correspondence between strings and Young Tableau diagrams, to ensure there is exactly one string for every partition.

Proposition 10.3. There is a bijection
$$\psi : \{s \mid s \text{ is a binary string with } k \text{ ones and } n-k \text{ zeros}\} \to k \times (n-k)$$
given by the rule $\psi(s) = \lambda = (\lambda_1, \dots, \lambda_k)$, where for each index $j \leq k$,
$$\lambda_j = (n - k) - (\text{the number of 0's before the } j\text{th } 1).$$

Proof. First, we show $\psi$ is injective. Given two strings of $k$ ones and $n-k$ zeros, say $s_1$ and $s_2$ with $s_1 \neq s_2$, let $\psi(s_1) = \lambda = (\lambda_1, \dots, \lambda_k)$ and $\psi(s_2) = \mu = (\mu_1, \dots, \mu_k)$. There are $a_1$ 0's before the first 1 in $s_1$, where $0 \leq a_1 \leq n-k$, and $b_1$ 0's before the first 1 in $s_2$, where $0 \leq b_1 \leq n-k$. By the construction of $\lambda$ and $\mu$, $\lambda_1 = (n-k) - a_1$ and $\mu_1 = (n-k) - b_1$. Either $a_1 \neq b_1$, so $\lambda_1 \neq \mu_1$, or $a_1 = b_1$. If $a_1 = b_1$, then $\lambda_1 = \mu_1$, and we consider the number of 0's before the second 1: as before, denote by $a_2$ and $b_2$ the number of 0's before the second 1 in $s_1$ and $s_2$ respectively, where $a_1 \leq a_2 \leq n-k$ and $b_1 \leq b_2 \leq n-k$. Once again, either $a_2 \neq b_2$, so $\lambda_2 \neq \mu_2$, or $\lambda_2 = \mu_2$, in which case we repeat this process and check the number of 0's before the third 1. If we repeat this process through the $k$th 1 and find $a_i = b_i$ for every index, that contradicts $s_1 \neq s_2$. Thus $\lambda = (\lambda_1, \dots, \lambda_k) \neq (\mu_1, \dots, \mu_k) = \mu$, so the mapping is injective. Since there are $n-k$ zeros and $k$ ones, there are $\binom{n}{k}$ possible strings, and since there are $\binom{n}{k}$ Young Tableau diagrams fitting in a $k$ by $n-k$ box, the two sets have the same cardinality; an injective mapping between them is therefore bijective. □

Now that there is a clear mapping from diagrams to strings, we can move on to describing the Puzzle Rule. Let $\lambda$, $\mu$ and $\nu$ be binary strings with $n-k$ zeros and $k$ ones. Puzzles are constructed as follows. For our fixed $n$, we begin by creating a large equilateral triangle with $n+1$ vertices, hence $n$ edges, on each side. We label each border edge with a 1 or a 0 and, reading from left to right, take the labelings along each side as binary strings; we write $\lambda$ and $\mu$ on the two upright sides and $\nu$ on the bottom. We then fill in $n^2$ small equilateral triangles inside the large labeled one; these interior triangles also carry labels on their edges. There are three different pieces that may be used to complete a puzzle: a triangle whose edges are all 0's, one whose edges are all 1's, and one whose edges, moving counterclockwise, are a 2, a 0 and a 1. The idea is to determine, using only those pieces, in how many ways the interior edges can be labeled, given the three labelings on the edges of the large triangle. We are now ready to formally present the Puzzle Rule.

Theorem 10.4 (Puzzle Rule). The number of distinct valid puzzles with $\lambda$, $\mu$ and $\nu$ as the left, right and bottom edges respectively is precisely $c^{\nu}_{\lambda,\mu}$ [15].

Remark 10.5. Our version of the Puzzle Rule is slightly modified: the original puzzles incorporate both rhombi and triangles, while we use only triangles.
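Before working a puzzle example, we record a small sketch of the tracing algorithm from the start of this section (plain Python; the function name is ours). The walk starts at the top-right corner of the $k \times (n-k)$ box and, at each of the $n$ steps, writes a 1 and moves down when a box of $\lambda$ meets the path, or writes a 0 and moves left otherwise:

```python
def partition_to_string(lam, k, n):
    """Trace the boundary of lam inside the k x (n-k) box, top-right to
    bottom-left: 1 for each down-step, 0 for each left-step."""
    lam = list(lam) + [0] * (k - len(lam))   # pad lam to k rows
    s, down, left = "", 0, 0
    for _ in range(n):
        col = (n - k) - left                 # column the path is about to cross
        if down < k and lam[down] >= col:    # a box meets the path: step down
            s, down = s + "1", down + 1
        else:                                # no box here: step left
            s, left = s + "0", left + 1
    return s

print(partition_to_string([5, 4, 3, 3, 1], 6, 11))   # 10101100101 (Example 10.2)
print(partition_to_string([2, 1], 2, 4))             # 1010
```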
Examples 10.6. Let $n = 4$, $k = 2$, $\lambda = (2, 1)$, $\mu = (1, 0)$ and $\nu = (2, 2)$. By the tracing algorithm, we have the corresponding binary strings
$$\lambda = 1010, \qquad \mu = 0101, \qquad \nu = 1100.$$
Using $\lambda$ as the labels on the left edges, $\mu$ as the labels on the right edges, and $\nu$ as the labels on the bottom edge, there is exactly one puzzle that can be completed [2] (figure omitted), so $c^{\nu}_{\lambda,\mu} = 1$.

Although these puzzles can be computed by hand, doing so can take a while for larger $n$; we instead reserve our puzzle computations for Sage.

11. Understanding the Saturation Conjecture

We now understand what the Schubert Varieties of the Grassmannian are, and how to determine their intersections by computing cup products and Littlewood-Richardson coefficients; we are ready to formally state the theorem Knutson and Tao prove that answers Horn's Question [7].

Theorem 11.1. There exist Hermitian matrices $A$, $B$ and $C = A + B$ such that $\alpha$, $\beta$ and $\gamma \in \mathbb{Z}^k_{\geq 0}$ are the eigenvalues of $A$, $B$ and $C$ respectively if and only if, looking at the varieties and their Schubert classes
$$\overline{\mathcal{U}\sigma} \leftrightarrow \sigma_\alpha, \qquad \overline{\mathcal{U}\pi} \leftrightarrow \sigma_\beta, \qquad \overline{\mathcal{U}\rho} \leftrightarrow \sigma_\gamma,$$
we have $c^{\gamma}_{\alpha,\beta} \neq 0$ in the cup product $\sigma_\alpha \cdot \sigma_\beta$.

To build up to this statement, which links cohomology all the way back to eigenvalues, Knutson and Tao first prove the Saturation Conjecture (Theorem 2.3), a key statement about cohomology. We restate the Saturation Conjecture below for convenience:
$$\exists\, N \in \mathbb{N} \text{ such that } c^{N\nu}_{N\lambda,N\mu} \neq 0 \iff c^{\nu}_{\lambda,\mu} \neq 0, \qquad \lambda = (\lambda_1, \dots, \lambda_k), \quad N\lambda = (N\lambda_1, \dots, N\lambda_k).$$
Note that one direction is trivial, since if $c^{\nu}_{\lambda,\mu} \neq 0$ the statement holds with $N = 1$. To prove the other direction of the Saturation Conjecture, Knutson and Tao make use of two devices, the honeycomb model and the hive model [14]. Although understanding the honeycomb model reaches beyond the scope of this thesis, the hive model is quite accessible, and we explore it in detail to gain the necessary intuition for tackling the Quantum Saturation Conjecture [6].

For the hive model, we start with an equilateral triangle with $m + 1$ vertices along each edge, for some $m \in \mathbb{N}$. Then, creating interior vertices and connecting each vertex, we create $m^2$ small equilateral triangles inside the larger triangle. For example, when $m = 3$ there are $3 + 1 = 4$ vertices on each side of the border and $3^2 = 9$ small equilateral triangles within the larger original one (figure omitted).
We now introduce a new round of definitions to explain this "triangle construction."

Definition 11.2. In our construction, we assign each vertex a real number called the label of the vertex.

Definition 11.3. Looking at the union of two small equilateral triangles adjoined along a northeast or northwest edge, which we appropriately call a rhombus, if the sum of the labels on the obtuse angles is greater than or equal to the sum of the labels on the acute angles, we say the rhombus inequality is satisfied.

We are ready to formally define the object we have developed.

Definition 11.4. A hive is a fully labeled diagram in which every rhombus inequality is satisfied.

Definition 11.5. We say that a hive or a border is integral if every vertex label is an integer.

To introduce a bit of notation: given a fully labeled (not necessarily hive) diagram of the form described above, we denote the set of its vertices by $H$. While $H$ is just the set of vertices, which may be indexed for convenience, $\mathbb{R}^H$ is the space of assignments of real numbers to the vertices in $H$. Elements $h \in \mathbb{R}^H$ are essentially vectors in $\mathbb{R}^M$, where $M$ is the number of vertices, i.e.
$$M = \sum_{j=1}^{m+1} j = \frac{(m+1)(m+2)}{2}.$$
For the example above, where $m = 3$, we get $M = 10$, so elements of $\mathbb{R}^H$ are vectors in $\mathbb{R}^{10}$. We analogously denote the set of border vertices by $B$, and the space of labelings of the border vertices with real numbers by $\mathbb{R}^B$; elements $b \in \mathbb{R}^B$ are likewise vectors in $\mathbb{R}^{3m}$. We also refer to integral hives and integral borders as belonging to $\mathbb{Z}^H$ and $\mathbb{Z}^B$ respectively, where vertices are labeled only by integers. We write $C \subset \mathbb{R}^H$ for the subset consisting of all $M$-tuples of vertex labels that form a valid hive; according to Buch, $C$ is a convex polyhedral cone [6]. Finally, we define the (restriction) map $\rho : \mathbb{R}^H \to \mathbb{R}^B$ by the rule
$$\rho(h) = b = \text{the border labels of } h.$$
Also according to Buch, for the border $b \in \mathbb{R}^B$ of a fully labeled diagram, we have the following equality:
$$\rho^{-1}(b) \cap C = \{h \in \mathbb{R}^H \mid h \text{ is a hive with the border } b\}.$$
The set $\rho^{-1}(b) \cap C$ is important enough to win its own definition.

Definition 11.6. We call the set $\rho^{-1}(b) \cap C$ the hive polytope over $b$.

Remark 11.7. Every hive polytope is a compact polytope [6].

We now formally state a key claim used in Buch's paper [6].
Proposition 11.8. Given a fully labeled diagram $h \in \mathbb{R}^H$ with border $\rho(h) = b$,
$$\rho^{-1}(b) \cap C \neq \emptyset \iff \text{every rhombus inequality in } h \text{ is satisfied}.$$

The point of hives is to compute Littlewood-Richardson Coefficients. The next theorem, named Theorem 1 in Buch's paper, states how hives compute $c^{\nu}_{\lambda,\mu}$.

Theorem 11.9. [6] Given $\lambda$, $\mu$ and $\nu \in k \times (n-k)$, take a triangle with the label 0 at the top vertex, $\sum_{i=1}^{j} \lambda_i$ at the $j$th vertex down the northeast border, $\sum_{i=1}^{k} \lambda_i + \sum_{i=1}^{j} \mu_i$ at the $j$th vertex from the bottom-right corner along the bottom border, and $\sum_{i=1}^{j} \nu_i$ at the $j$th vertex down the northwest border. Then $c^{\nu}_{\lambda,\mu}$ is the number of integral hives formed from this choice of $b$.

We now need one last definition, which follows from the statement of this theorem used to prove the Saturation Conjecture.

Definition 11.10. Given the diagrams $\lambda$, $\mu$ and $\nu$, if the border of a hive $h$ satisfies the construction in Theorem 11.9, we say that border $b$ is defined by the triple $(\lambda, \mu, \nu)$.

An example is necessary.

Examples 11.11. Examples of "interesting" hives are very hard to construct, so to retain simplicity we borrow the example Buch provides after the statement of this theorem [6]. Let $\lambda = \mu = (2, 1)$ and $\nu = (3, 2, 1)$. The corresponding diagram has the following border, with the lone interior vertex $x$ momentarily undetermined:

                0
              3   2
            5   x   3
          6   6   5   3

The border $b$ is the vector $(0, 2, 3, 3, 5, 6, 6, 5, 3) \in \mathbb{R}^9$. We can determine the hive polytope by finding which values of $x$ satisfy the rhombus inequalities:
$$x + 2 \geq 3 + 3 \qquad x + 3 \geq 5 + 2 \qquad x + 6 \geq 5 + 5$$
$$x + 5 \geq 6 + 3 \qquad 5 + 3 \geq x + 3 \qquad 5 + 6 \geq x + 6$$
Assessing the inequalities, we find that $x$ is necessarily at least 4 and at most 5. We then take the hive polytope to be $\{(0, 2, 3, 3, 5, 6, 6, 5, 3, x) \in \mathbb{R}^{10} \mid x \in [4, 5]\}$, which is naturally isomorphic to the compact subset $[4, 5] \subset \mathbb{R}$. Since there are two integer choices of $x$ that produce an integral hive, the theorem says that $c^{\nu}_{\lambda,\mu} = 2$.
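The hive polytope in Example 11.11 is small enough to check by brute force. The sketch below (plain Python, written only for this specific border) tests the six rhombus inequalities against integer candidates for the interior label $x$ and counts the integral hives:

```python
# The six rhombus inequalities of Example 11.11, checked for integer x.
def satisfies_rhombi(x):
    return (x + 2 >= 3 + 3 and x + 3 >= 5 + 2 and
            x + 6 >= 5 + 5 and x + 5 >= 6 + 3 and
            5 + 3 >= x + 3 and 5 + 6 >= x + 6)

integral_hives = [x for x in range(0, 12) if satisfies_rhombi(x)]
print(integral_hives)        # [4, 5]
print(len(integral_hives))   # 2, matching c^nu_{lambda,mu} = 2
```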
The purpose of this section is to fully internalize the development and proof of the Saturation Conjecture using the hive model, and to adapt this method to begin our exploration into proving the Quantum Saturation Conjecture. Although Buch's proof of the Saturation Conjecture requires arguments too advanced for this thesis, internalizing the proof is crucial to attempting to adapt this treatment to the Quantum Saturation Conjecture, so we instead provide a rough outline of the strategy. The argument consists of proving a particular sufficient condition [6]. Fix $\lambda$, $\mu$ and $\nu$ such that the border $b$ of a hive is defined by $(\lambda, \mu, \nu)$, and suppose the hive polytope $\rho^{-1}(b) \cap C$ satisfies the condition
$$\rho^{-1}(b) \cap C \cap \mathbb{Z}^H \neq \emptyset.$$
This condition says that there is some integral hive contained in the hive polytope over the border defined by $(\lambda, \mu, \nu)$. Buch first claims that if we scale the components of the hive polytope over $b$ up by some $N$, we are still left with a nonempty polytope; i.e., letting $Nb$ denote the border defined by $(N\lambda, N\mu, N\nu)$, we have
$$\rho^{-1}(Nb) \cap C \neq \emptyset.$$
Intuitively, this is because, looking at the original rhombus inequalities for each $h \in \rho^{-1}(b) \cap C$, scaling up the inequalities only shifts and stretches the solution set. If we then check that the border of this new triple is in $\mathbb{Z}^B$, i.e. is an integral border, Buch proves we are guaranteed that the hive polytope over $(N\lambda, N\mu, N\nu)$ contains an element of $\mathbb{Z}^H$, an integral hive; this is the sufficient condition needed to complete the Saturation Conjecture. It is not immediately obvious that, simply because $Nb$ is an integral border, there is some integral hive in the set $\rho^{-1}(Nb)$. This is the heart of the proof Buch provides: he argues that a particular hive he calls a "maximal" hive in the polytope $\rho^{-1}(Nb) \cap C$ must have all of its vertex labels given by $\mathbb{Z}$-linear combinations of the border labels, which yields an integral hive [6]. We provide a hive computation that shows the Saturation Conjecture in action.

Examples 11.12. Per our last example, let $\lambda = \mu = (2, 1)$ and $\nu = (3, 2, 1)$. The Saturation Conjecture says that since $c^{\nu}_{\lambda,\mu} \neq 0$, we can find some natural number $N \in \mathbb{N}$ such that $c^{N\nu}_{N\lambda,N\mu} \neq 0$. In this example, $N = 2$ satisfies the Saturation Conjecture, with $N\lambda = N\mu = (4, 2)$ and $N\nu = (6, 4, 2)$. These diagrams give rise to the hive below.
                0
              6   4
           10   y   6
         12  12  10   6

This creates the following rhombus inequalities where, notably, each new inequality is an original inequality scaled by $N = 2$ (with $y = Nx$):
$$y + 4 \geq 6 + 6 \iff N(x + 2 \geq 3 + 3) \qquad y + 6 \geq 10 + 4 \iff N(x + 3 \geq 5 + 2)$$
$$y + 12 \geq 10 + 10 \iff N(x + 6 \geq 5 + 5) \qquad y + 10 \geq 12 + 6 \iff N(x + 5 \geq 6 + 3)$$
$$10 + 6 \geq y + 6 \iff N(5 + 3 \geq x + 3) \qquad 10 + 12 \geq y + 12 \iff N(5 + 6 \geq x + 6)$$
Simultaneously evaluating the six inequalities in $y$, we find $y \in [8, 10]$, which is the original interval stretched and shifted by $N$, i.e. $[4N, 5N]$.

Remark 11.13. We noticed that, in general, scaling up the sequences $\lambda$, $\mu$ and $\nu$ stretches the solution set, which gives $c^{N\nu}_{N\lambda,N\mu} > c^{\nu}_{\lambda,\mu}$. The exception is when $c^{\nu}_{\lambda,\mu} = 1$: in this case, whenever $c^{N\nu}_{N\lambda,N\mu} \neq 0$, we have always found $c^{N\nu}_{N\lambda,N\mu} = c^{\nu}_{\lambda,\mu} = 1$. We are not the first to notice this trend, however; Buch mentions an unproven conjecture, Fulton's Conjecture [6], which states that if $c^{\nu}_{\lambda,\mu} = 1$ and $c^{N\nu}_{N\lambda,N\mu} \neq 0$, it is necessarily the case that $c^{N\nu}_{N\lambda,N\mu} = 1$ as well.

12. The Quantum Case

The goal of this thesis is to consider a quantum analogue of Horn's Question by investigating a possible solution to the Quantum Saturation Conjecture; we now make the necessary transition from the classical cohomology ring $H^*(\mathrm{Gr}(k,n))$ to the quantum cohomology ring, denoted $QH^*(\mathrm{Gr}(k,n))$. Like its classical counterpart, $QH^*(\mathrm{Gr}(k,n))$ is a commutative ring with unity whose basis elements are the Schubert classes that represent the Schubert Varieties of $\mathrm{Gr}(k,n)$. The operation analogous to the cup product in $QH^*(\mathrm{Gr}(k,n))$ is called the quantum cup product, denoted $\sigma_\lambda \star \sigma_\mu$. Like the classical cup product, the quantum cup product requires an involved theorem to compute fully. Before introducing this theorem, we provide two definitions [7].

Definition 12.1. We call a contiguous chain of $n$ boxes along the rightmost edge of a diagram an $n$-hook.

The idea of $n$-hooks may seem arbitrary, but it is an essential part of computing the quantum cup product: obtaining quantum cup products requires first removing as many $n$-hooks from a diagram as possible, and then considering the remainder, which we formally define.

Definition 12.2. Given a Young Tableau diagram $\lambda$, if we remove all possible $n$-hooks from $\lambda$ and are left with a diagram in the set $k \times (n-k)$, we call this new diagram the $n$-core of $\lambda$, denoted $c(\lambda)$.

We are now ready to introduce the following theorem for computing $\sigma_\lambda \star \sigma_\mu$ [4].
Theorem 12.3 (Rimhook Rule). Fix $n, k \in \mathbb{N}$. Given $\lambda, \mu \subseteq k \times (n-k)$, we compute the quantum cup product $\sigma_\lambda \star \sigma_\mu \in QH^*(\mathrm{Gr}(k,n))$ by first taking $\sigma_\lambda \cdot \sigma_\mu \in H^*(\mathrm{Gr}(k, 2n-k))$ and then applying the following map to each term of the resulting sum $\sigma_\lambda \cdot \sigma_\mu = \sum_{\nu \in P_{\lambda,\mu}} c^{\nu}_{\lambda,\mu}\sigma_\nu$:
$$\sigma_\nu \longmapsto \begin{cases} (-1)^{\sum_i (n-k-\mathrm{ht}(R_i))}\, q^d\, \sigma_{c(\nu)} & \text{if } c(\nu) \subseteq k \times (n-k), \\ 0 & \text{otherwise,} \end{cases}$$
where $\mathrm{ht}(R_i)$ is the number of rows of the $i$th rimhook removed from the original diagram $\nu$, and $d$ is the total number of rimhooks removed from $\nu$.

Remark 12.4. The resulting sum is denoted
$$\sigma_\lambda \star \sigma_\mu = \sum c^{d,\nu}_{\lambda,\mu}\, q^d\, \sigma_\nu,$$
where $c^{d,\nu}_{\lambda,\mu}$ is the quantum Littlewood-Richardson coefficient; by the Rimhook Rule it agrees, up to the sign $(-1)^{\sum_i(n-k-\mathrm{ht}(R_i))}$, with the classical Littlewood-Richardson coefficient of the term it came from.

This process is hard to internalize without an example, so we provide one below.

Examples 12.5. We first practice removing $n$-hooks. Let $\nu = (5, 4, 3)$. To remove 5-hooks, we begin at the rightmost box of the first row and trace to the left and down, remaining on the edge at all times. Removing the first 5-hook leaves the diagram $(3, 2, 2)$. Looking again at the upper-rightmost part of the remaining diagram, we see there are still 5 boxes along the edge, so we can remove another 5-hook, leaving $(1, 1)$. Since fewer than 5 boxes now remain, there are no more 5-hooks to remove. This leaves us with the 5-core of $\nu$: $c(\nu) = (1, 1)$.

Examples 12.6. Let $n = 4$, $k = 2$, $\lambda = (2, 2)$ and $\mu = (2, 0)$. To compute the quantum cup product, we first take the classical product in $H^*(\mathrm{Gr}(2,6))$:
$$\sigma_\lambda \cdot \sigma_\mu = \sigma_{(2,2)} \cdot \sigma_{(2,0)} = \sigma_{(4,2)} = \sigma_\nu.$$
We now look to remove 4-hooks from $\nu$. Beginning at the upper right corner, tracing along the edge removes an inverted L of four boxes, leaving us with $c(\nu) = (1, 1)$. We then multiply $\sigma_{c(\nu)}$ by $q^d$, where $d = 1$ rimhook was removed, and by $(-1)^{n-k-\mathrm{ht}(R_1)}$, where $n - k = 2$ and $\mathrm{ht}(R_1) = 2$:
$$(-1)^{2-2}\, q = q.$$
Thus, we have computed the quantum cup product
$$\sigma_\lambda \star \sigma_\mu = q\,\sigma_{c(\nu)}, \qquad c(\nu) = (1, 1).$$
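Rimhook removal is mechanical enough to automate. One standard trick, not used in the text above and flagged here as an outside device, encodes a partition by its first-column hook lengths ("beta-numbers"), under which removing an $n$-hook is exactly lowering one beta-number by $n$ without colliding with another. A sketch in plain Python:

```python
def n_core(lam, n):
    """Sketch: the n-core c(lam), computed via beta-numbers, where
    removing an n-hook = subtracting n from one beta-number."""
    m = len(lam)
    beta = [lam[i] + (m - 1 - i) for i in range(m)]   # strictly decreasing
    moved = True
    while moved:
        moved = False
        for j, b in enumerate(beta):
            if b >= n and (b - n) not in beta:        # an n-hook is removable
                beta[j], moved = b - n, True
    beta.sort(reverse=True)
    return [b - (m - 1 - i) for i, b in enumerate(beta) if b > (m - 1 - i)]

print(n_core([5, 4, 3], 5))   # [1, 1]  (Example 12.5)
print(n_core([4, 2], 4))      # [1, 1]  (Example 12.6)
```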
Remark 12.7. Although we specified an order for removing $n$-hooks, in general the order of removal does not matter: $c(\nu)$ forms independently of the choice of $n$-hooks removed.

13. Current Progress

We now share our work investigating a method of proving the Quantum Saturation Conjecture (Conjecture 2.4), restated below for convenience:
$$\exists\, N \in \mathbb{N} \text{ such that } c^{Nd,\,N\nu}_{N\lambda,\,N\mu} \neq 0 \iff c^{d,\nu}_{\lambda,\mu} \neq 0.$$
Unfortunately, doubts that quickly arise when merely testing the Quantum Saturation Conjecture with Puzzle Rule computations hinder any significant progress in this investigation. Although there already exists a proof of Horn's Question in more general settings, including the quantum case [3], the statement proved there is far more advanced than a quantum replica of the Saturation Conjecture, and it does not relieve the concerns raised by our toy computations, such as the one found below.

Examples 13.1. Let $n = 8$ and $k = 3$, so that we compute the quantum product in $QH^*(\mathrm{Gr}(3,8))$ by applying the Rimhook Rule in $H^*(\mathrm{Gr}(3,13))$. Let $\lambda = (3, 3)$, and consider $\sigma_\lambda \star \sigma_\lambda$:
$$\sigma_\lambda \star \sigma_\lambda: \quad \sigma_{(3,3)} \cdot \sigma_{(3,3)} = \sigma_{(6,6)} + \sigma_{(6,5,1)} + \sigma_{(6,4,2)} + \sigma_{(6,3,3)} + \sigma_{(5,5,2)} \in H^*(\mathrm{Gr}(3,13))$$
$$\longmapsto \; q\,\sigma_{(4)} + q\,\sigma_{(3,1)} + q\,\sigma_{(2,2)} + \sigma_{(5,5,2)} \in QH^*(\mathrm{Gr}(3,8)),$$
where the term $\sigma_{(6,6)}$ admits no 8-hook and does not fit in the $3 \times 5$ box, so it maps to 0. Now, consider $\sigma_{\frac{1}{3}\lambda} \star \sigma_{\frac{1}{3}\lambda}$, and recall that we would hope for a result where, if $\sigma_{\frac{1}{3}\lambda} \star \sigma_{\frac{1}{3}\lambda} \neq 0$, then for every term $\sigma_\nu$ in the original product we would have $\sigma_{\frac{1}{3}\nu}$ in the new product. The computation is as follows:
$$\sigma_{\frac{1}{3}\lambda} \star \sigma_{\frac{1}{3}\lambda}: \quad \sigma_{(1,1)} \cdot \sigma_{(1,1)} = \sigma_{(2,2)} + \sigma_{(2,1,1)} \in H^*(\mathrm{Gr}(3,13)) \;\longmapsto\; \sigma_{(2,2)} + \sigma_{(2,1,1)} \in QH^*(\mathrm{Gr}(3,8)).$$
Our concern is that the techniques used to prove the Saturation Conjecture would not have predicted the result of the product $\sigma_{\frac{1}{3}\lambda} \star \sigma_{\frac{1}{3}\lambda}$ under our naive understanding of the Quantum Saturation Conjecture. The result of scaling classical products involved having each index $\nu$ scaled by $\frac{1}{3}$ as well, whereas our result does not involve scaling in the familiar sense, but rather adds and subtracts degrees from the intersecting smooth curve indexed by $q^d$. That is to say, although we did in fact find an $N \in \mathbb{N}$ such that the quantum Littlewood-Richardson coefficient $c^{d,\frac{1}{N}\nu}_{\frac{1}{N}\lambda,\frac{1}{N}\mu} \neq 0$, and thus satisfied even our "naive" understanding, there is no way to account for the terms $q^d$ in the quantum cup product using either the current hive model or the Puzzle Rule. Without a way of predicting the appearance of the terms $q^d$ given our current knowledge of computing the quantum cup product, we feel that actually proving the Quantum Saturation Conjecture falls too far outside the scope of this thesis.

14. A Few Original Observations

In our computations investigating the Quantum Saturation Conjecture, we noticed a few interesting facts about the behavior of Schubert classes in both $H^*(\mathrm{Gr}(k,n))$ and $QH^*(\mathrm{Gr}(k,n))$ that do not appear in the current literature. We formally state and prove these observations as lemmas in this section. Our results make heavy use of Pieri's formula [9], stated below.

Theorem 14.1 (Pieri's Formula). Let $(m^k)$ denote the Young Tableau diagram with $k$ rows of $m$ boxes each (so that $(1^r)$ is a single column of height $r$). Given any Schubert class $\sigma_\lambda$ in the classical cohomology ring $H^*(\mathrm{Gr}(k,n))$ and any positive integer $r$, we have for the cup product
$$\sigma_\lambda \cdot \sigma_{(1^r)} = \sum_{\tilde\lambda \in \Lambda} \sigma_{\tilde\lambda}, \qquad \Lambda = \{\tilde\lambda \subseteq k \times (n-k) \mid \tilde\lambda \text{ is obtained from } \lambda \text{ by adding } r \text{ boxes, no two in the same row}\}.$$

Fix $k \in \mathbb{N}$ and let $m \geq 3k$. We now state our first lemma.

Lemma 14.2. $\sigma_{(1^k)}^{\,k} = \sigma_{(k^k)}$ in the classical cohomology ring $H^*(\mathrm{Gr}(k,m))$.

Proof. We can expand the power:
$$\sigma_{(1^k)}^{\,k} = \sigma_{(1^k)} \cdot \sigma_{(1^k)} \cdots \sigma_{(1^k)} = (\sigma_{(1^k)} \cdot \sigma_{(1^k)}) \cdots \sigma_{(1^k)}.$$
For the term in parentheses, Pieri's formula states that $\sigma_{(1^k)} \cdot \sigma_{(1^k)} = \sum_{\tilde\lambda \in \Lambda} \sigma_{\tilde\lambda}$, where $\Lambda$ consists of the diagrams obtained from $(1^k)$ by adding $k$ boxes, no two in the same row. Since our diagrams cannot have more than $k$ rows, and $(1^k)$ already has $k$ rows, and at most one box may be appended to each row, the only valid diagram adds one box to every row; the cup product therefore has one term:
$$\sigma_{(1^k)} \cdot \sigma_{(1^k)} = \sigma_{(2^k)}.$$
Consider now $\sigma_{(2^k)} \cdot \sigma_{(1^k)}$. Since $\sigma_{(2^k)}$ has $k$ rows, no new rows may be added to any term in the cup product. By another application of Pieri's formula, the product is forced into only one term:
$$\sigma_{(2^k)} \cdot \sigma_{(1^k)} = \sigma_{(3^k)}.$$
Iterating through the rest of the terms in the expansion of $\sigma_{(1^k)}^{\,k}$ and applying the same argument, at each step $i$ in the cup product $\sigma_{(i^k)} \cdot \sigma_{(1^k)}$ we add one box to each row of $\sigma_{(i^k)}$, so that
$$\sigma_{(i^k)} \cdot \sigma_{(1^k)} = \sigma_{((i+1)^k)}.$$
Looking back at the expansion of $\sigma_{(1^k)}^{\,k}$ and using the associativity of the cup product, we can state the following equalities:
$$\sigma_{(1^k)}^{\,k} = (\sigma_{(1^k)} \cdot \sigma_{(1^k)}) \cdots \sigma_{(1^k)} = \sigma_{(2^k)} \cdot \sigma_{(1^k)} \cdots \sigma_{(1^k)} = \sigma_{(3^k)} \cdot \sigma_{(1^k)} \cdots \sigma_{(1^k)} = \cdots = \sigma_{((k-1)^k)} \cdot \sigma_{(1^k)} = \sigma_{(k^k)}. \qquad \square$$

Having proved Lemma 14.2, the proof of the next lemma is very straightforward. Its significance is that there is a specific closed form for the quantum cup product of the Schubert classes of $k$ by $k$ squares.

Lemma 14.3. $\sigma_{(k^k)} \star \sigma_{(k^k)} = q^k$ in the quantum cohomology ring $QH^*(\mathrm{Gr}(k, 2k))$.

Proof. We compute the quantum cup product $\sigma_{(k^k)} \star \sigma_{(k^k)} \in QH^*(\mathrm{Gr}(k,2k))$ by first taking the classical product $\sigma_{(k^k)} \cdot \sigma_{(k^k)} \in H^*(\mathrm{Gr}(k,3k))$. Recall that by Lemma 14.2,
$$\sigma_{(k^k)} = \sigma_{(1^k)} \cdot \sigma_{(1^k)} \cdots \sigma_{(1^k)}.$$
Applying this result, we have
$$\sigma_{(k^k)} \cdot \sigma_{(k^k)} = \sigma_{(1^k)} \cdot \sigma_{(1^k)} \cdots \sigma_{(1^k)} \cdot \sigma_{(k^k)} = \sigma_{(1^k)} \cdots \sigma_{(1^k)} \cdot \sigma_{((k+1)^k)} = \cdots = \sigma_{(1^k)} \cdot \sigma_{((2k-1)^k)} = \sigma_{((2k)^k)}.$$
Since the Rimhook Rule calls for removing $2k$-hooks from $\sigma_{((2k)^k)}$, and since $\sigma_{((2k)^k)}$ has $k$ rows of length $2k$, we remove $k$ rimhooks of length $2k$, one full row at a time; each removed row has height 1, so the total sign is $(-1)^{k(2k-k-1)} = (-1)^{k(k-1)} = 1$, and thus
$$\sigma_{((2k)^k)} \longmapsto q^k \in QH^*(\mathrm{Gr}(k,2k)). \qquad \square$$

We offer one last lemma, a generalization of the "product of squares" in Lemma 14.3 to a product of any two "rectangles" of height $k$ in $H^*(\mathrm{Gr}(k,m))$.

Lemma 14.4. $\sigma_{(r^k)} \cdot \sigma_{(\ell^k)} = \sigma_{((r+\ell)^k)}$ in the classical cohomology ring $H^*(\mathrm{Gr}(k,m))$ (with $r + \ell \leq m - k$, so that the class on the right exists).
Having practiced the argument involving Pieri's formula in the last two lemmas, the proof of Lemma 14.4 is once again rather short.

Proof. We simply compute the cup product in question:
$$\sigma_{(r^k)} \cdot \sigma_{(\ell^k)} = \sigma_{(1^k)} \cdot \sigma_{(1^k)} \cdots \sigma_{(1^k)} \cdot \sigma_{(\ell^k)},$$
with $r$ factors $\sigma_{(1^k)}$, so we iterate the recurring argument $r$ times:
$$\sigma_{(1^k)} \cdots \sigma_{(1^k)} \cdot \sigma_{(\ell^k)} = \sigma_{(1^k)} \cdots \sigma_{(1^k)} \cdot \sigma_{((\ell+1)^k)} = \cdots = \sigma_{(1^k)} \cdot \sigma_{((r+\ell-1)^k)} = \sigma_{((r+\ell)^k)}. \qquad \square$$

Corollary 14.5. In $QH^*(\mathrm{Gr}(k,2k))$, if $r + \ell = 2k$, then $\sigma_{(r^k)} \star \sigma_{(\ell^k)} = q^k$.

Proof. Since $\sigma_{(r^k)} \cdot \sigma_{(\ell^k)} = \sigma_{((r+\ell)^k)} = \sigma_{((2k)^k)}$, we compute $\sigma_{(r^k)} \star \sigma_{(\ell^k)} \in QH^*(\mathrm{Gr}(k,2k))$ via $\sigma_{(r^k)} \cdot \sigma_{(\ell^k)} = \sigma_{((2k)^k)} \in H^*(\mathrm{Gr}(k,3k))$, and by removing $k$ rimhooks of length $2k$ we obtain $q^k \in QH^*(\mathrm{Gr}(k,2k))$. □

15. The Next (Small) Steps

Other than continuing to determine whether there is a positive answer to the Quantum Saturation Conjecture, our next step is to generalize our lemmas and see whether there are formulas that, like Pieri's formula, predict the cup product of certain Schubert classes. Having proved the products of rectangles in $H^*(\mathrm{Gr}(k,m))$ and $QH^*(\mathrm{Gr}(k,2k))$, we hope these results, or the methods used to prove them, apply to proving a formula for a product of the type $\sigma_\lambda \cdot \sigma_{(2^r)}$ for some $r \in \mathbb{N}$. This product is intuitively a "double" Pieri product, where instead of adjoining a single column of $r$ boxes, we adjoin two consecutive columns of height $r$. We focused heavily on the following example of a product of this type. Let $\lambda = (2, 1, 1)$ and consider $\sigma_\lambda \cdot \sigma_{(2^2)}$ in a sufficiently large Grassmannian:
$$\sigma_\lambda \cdot \sigma_{(2^2)} = \sigma_{(4,3,1)} + \sigma_{(4,2,1,1)} + \sigma_{(3,3,2)} + \sigma_{(3,3,1,1)} + \sigma_{(3,2,2,1)} + \sigma_{(3,2,1,1,1)} + \sigma_{(2,2,2,1,1)}.$$
We compare this to the product $\sigma_\lambda \cdot \sigma_{(1^2)}$, which Pieri's formula predicts:
$$\sigma_\lambda \cdot \sigma_{(1^2)} = \sigma_{(3,2,1)} + \sigma_{(3,1,1,1)} + \sigma_{(2,2,2)} + \sigma_{(2,2,1,1)} + \sigma_{(2,1,1,1,1)}.$$
The relevance of this example is to show that proving a formula for products of the type $\sigma_\lambda \cdot \sigma_{(2^r)}$ will require arguments more creative than the ones incorporated in this thesis. If we were to hypothesize, for example, that every term in $\sigma_\lambda \cdot \sigma_{(2^2)}$ can be found by taking the product of each term in $\sigma_\lambda \cdot \sigma_{(1^2)}$ with another $\sigma_{(1^2)}$ (we refer to such "stacked" products of $\sigma_{(1^2)}$'s as products of type P for convenience), we would be off by many terms. Of note, a number of products of type P have more than 5 rows, but interestingly enough, no term in $\sigma_\lambda \cdot \sigma_{(2^2)}$ has more than 5 rows. We believe this is most likely a consequence of Pieri's formula for rows [9], the statement analogous to the Pieri formula used thus far with every instance of "row" and "column" swapped. Since every other product of type P does occur in $\sigma_\lambda \cdot \sigma_{(2^2)}$, we would want to explore more examples to see whether applying both Pieri formulas completely determines products of the type $\sigma_\lambda \cdot \sigma_{(2^r)}$, and perhaps of the general type $\sigma_\lambda \cdot \sigma_{(m^r)}$.
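For experimenting with such products, the column Pieri rule itself is easy to mechanize. The sketch below (plain Python; the function name and the `max_rows` cutoff standing in for "a sufficiently large Grassmannian" are ours) generates the terms of $\sigma_\lambda \cdot \sigma_{(1^r)}$ and reproduces the five-term product above:

```python
from itertools import combinations

def pieri_column(lam, r, max_rows):
    """Column Pieri rule: terms of sigma_lam * sigma_(1^r), i.e. all ways
    of adding r boxes to lam with no two boxes in the same row."""
    lam = list(lam) + [0] * (max_rows - len(lam))
    terms = []
    for rows in combinations(range(max_rows), r):        # r distinct rows
        mu = [lam[i] + (1 if i in rows else 0) for i in range(max_rows)]
        if all(mu[i] >= mu[i + 1] for i in range(max_rows - 1)):
            terms.append(tuple(p for p in mu if p > 0))  # still a partition
    return terms

print(pieri_column([2, 1, 1], 2, 5))
# [(3, 2, 1), (3, 1, 1, 1), (2, 2, 2), (2, 2, 1, 1), (2, 1, 1, 1, 1)]
```

Iterating `pieri_column` twice generates exactly the products of type P discussed above, which can then be compared against the seven terms of $\sigma_\lambda \cdot \sigma_{(2^2)}$.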
References

[1] Artin, Michael. Algebra. Englewood Cliffs, NJ: Prentice Hall, 1991.
[2] Beazley, Elizabeth, Anna Bertiger, and Kaisa Taipale. "An Equivariant Rim Hook Rule for Quantum Cohomology of Grassmannians." Discrete Mathematics and Theoretical Computer Science (2013).
[3] Belkale, Prakash. "Quantum Generalization of the Horn Conjecture." Journal of the American Mathematical Society 21.2 (2008): 365-409.
[4] Bertram, Aaron, Ionut Ciocan-Fontanine, and William Fulton. "Quantum Multiplication of Schur Polynomials." Journal of Algebra 219.2 (1999): 728-746.
[5] "Bruhat Decomposition." Wikipedia. Wikimedia Foundation. Web. 11 Apr. 2014.
[6] Buch, Anders S. "The Saturation Conjecture (After A. Knutson and T. Tao)." American Mathematical Society (1998).
[7] Conversations with Doctor Elizabeth Beazley.
[8] Fulton, William. "Eigenvalues, Invariant Factors, Highest Weights, and Schubert Calculus." American Mathematical Society, 05 Apr. 2000. Web. 05 Oct. 2013.
[9] Fulton, William. Young Tableaux: With Applications to Representation Theory and Geometry. Cambridge: Cambridge UP, 1997. pp. 145-147.
[10] Horn, Alfred. "Eigenvalues of Sums of Hermitian Matrices." Pacific Journal of Mathematics 12.1 (1962): 225-241.
[11] Klyachko, Alexander A. Birkhäuser Verlag, Basel, 1998. Web. 09 Oct. 2013.
[12] Strang, Gilbert. Introduction to Linear Algebra. Wellesley, MA: Wellesley-Cambridge, 2009.
[13] Stanley, Richard P. Enumerative Combinatorics, Vol. 1 (2nd edition). Cambridge (2012), p. 60.
[14] Tao, Terence, and Allen Knutson. "Honeycombs and Sums of Hermitian Matrices." American Mathematical Society (2000): http://arxiv.org/abs/math/0009048.
[15] Tao, Terence, and Allen Knutson. "Puzzles and (Equivariant) Cohomology of Grassmannians." Duke Mathematical Journal 119.2 (2003): 221-260.