Computer algebra
▶ Equations are everywhere: differential equations, linear or polynomial
equations or inequalities, recurrences, equations in groups, algebras or
categories, tensor equations, etc.
▶ There are two ways of solving such equations: approximately or exactly
▶ Oversimplified: numerical analysis studies efficient ways to get approximate
solutions; computer algebra wants exact solutions
A primer on computer algebra Or: Faster than expected April 2023 2 / 5
To get started, an example from chemistry
Watch out for three (very typical but usually highly nontrivial) steps:
▶ create a mathematical model
▶ “solve” the model (enter e.g. computer algebra)
▶ interpret the solution
We will see C6H12 (cyclohexane) next, so here it is:
based on a conjecture of Sachse ∼1890 and a solution by Levelt ∼1997
Computer algebra
[Figures: the chair conformation and one boat conformation]
▶ C6H12 occurs in incongruent conformations: chair (one) and boats (many), mod mirrors
▶ The chair occurs far more frequently than the boats
▶ The chair is stiff while the boats can twist into one another
Mathematical modeling is difficult, so it often happens that one needs to
go back and modify the approach
This happened here, as the first solution attempt was not helpful
Computer algebra
▶ They then modeled the bods as vectors ai and ai ⋆ aj =inner product
▶ Model Sij = ai ⋆ aj as variables
▶ One gets polynomial variables subject to the relations above ⇒ get solution
via Gröbner bases
A primer on computer algebra Or: Faster than expected April 2023 2 / 5
One gets that the inflexible chair solution is an isolated point,
while the boats lie on a curve:
the chair won’t move, and the boats can be twisted when built from tubes
Gröbner bases are an essential part of computer algebra,
so this is a fabulous example of the use of computer algebra
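The Gröbner-basis step can be tried in a few lines with SymPy. This is a toy system for illustration only; the actual cyclohexane equations in the Sij are longer and are not reproduced here:

```python
from sympy import groebner, symbols

# Toy polynomial system: the circle x^2 + y^2 = 1 intersected with the line x = y.
x, y = symbols('x y')
G = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')

# A lex Groebner basis eliminates x, leaving a univariate condition on y,
# from which the (finitely many) solutions can be read off.
print(list(G))
```

The same idea, with the Sij as variables, lets one see whether the solution set is a finite set of points or a positive-dimensional curve.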
What we are using throughout is worst-case analysis:
f ∈ O(g) : there are C > 0 and n0 such that f(n) ≤ C·g(n) for all n ≥ n0
Careful This is different from:
▶ average-case analysis
▶ computational implications due to overhead (≈ the part before n0)
Fast multiplication
▶ Given two polynomials f and g of degree < n; we want fg
▶ Classical polynomial multiplication needs n² multiplications and (n − 1)²
additions; thus mult(poly) ∈ O(n²)
▶ It doesn’t appear that we can do faster
Fast multiplication
▶ Karatsuba ∼1960 It gets faster!
▶ First, reduce multiplication cost even at the price of potentially increasing addition cost
▶ Second, apply divide-and-conquer
For f = a·x^(n/2) + b and g = c·x^(n/2) + d we compute ac, bd, u = (a + b)(c + d), v = ac + bd, and u − v
with 3 multiplications and 4 additions = 7 operations; then fg = ac·xⁿ + (u − v)·x^(n/2) + bd
The total has increased, but a recursive application will drastically reduce the overall cost
Upshot We only need 3 multiplications, not 4
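The three-multiplication trick can be sketched in Python (a minimal illustration, not code from the slides; polynomials are coefficient lists, lowest degree first):

```python
def poly_add(p, q):
    # Coefficientwise sum, padding the shorter list with zeros.
    if len(p) < len(q):
        p, q = q, p
    return [c + (q[i] if i < len(q) else 0) for i, c in enumerate(p)]

def karatsuba_poly(f, g):
    # Multiply coefficient lists with 3 recursive multiplications instead
    # of 4: split f = f1*x^m + f0 and g = g1*x^m + g0 at half the length.
    n = max(len(f), len(g))
    if n <= 1:
        return [f[0] * g[0]]
    m = n // 2
    f = f + [0] * (n - len(f))
    g = g + [0] * (n - len(g))
    f0, f1 = f[:m], f[m:]
    g0, g1 = g[:m], g[m:]
    ac = karatsuba_poly(f1, g1)                             # "ac"
    bd = karatsuba_poly(f0, g0)                             # "bd"
    u = karatsuba_poly(poly_add(f0, f1), poly_add(g0, g1))  # "(a+b)(c+d)"
    mid = poly_add(u, [-c for c in poly_add(ac, bd)])       # u - v
    # Assemble ac*x^(2m) + (u - v)*x^m + bd.
    res = [0] * (2 * n - 1)
    for i, c in enumerate(bd):
        res[i] += c
    for i, c in enumerate(mid):
        res[m + i] += c
    for i, c in enumerate(ac):
        res[2 * m + i] += c
    return res

# (x^3 + x^2 + x + 1)^2 = x^6 + 2x^5 + 3x^4 + 4x^3 + 3x^2 + 2x + 1:
print(karatsuba_poly([1, 1, 1, 1], [1, 1, 1, 1]))  # [1, 2, 3, 4, 3, 2, 1]
```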
Fast multiplication
Example
f = g = x³ + x² + x + 1 is equal to F1·x² + F0 with F1 = F0 = x + 1
F0² = F1² = (x + 1)² and (2x + 2)(2x + 2): each product needs 7 ops ⇒ 21 ops
To get fg we then need two more ops ⇒ 23 ops
Classically we need 4² + (4 − 1)² = 25 ops
This applies recursively, so we actually save a lot:
Theorem (Karatsuba ∼1960)
For n = 2ᵏ we have mult(poly) ∈ O(n^1.59) (1.59 ≈ log(3); throughout: log = log₂)
There is also a version for general n but the analysis is somewhat more involved
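Where the exponent comes from: a standard unrolling of the recurrence, assuming one multiplication of size n costs three multiplications of size n/2 plus a linear number of additions:

```latex
T(n) = 3\,T(n/2) + c\,n
     = 3^{2}\,T(n/4) + c\,n\left(1 + \tfrac{3}{2}\right)
     = \cdots
     = 3^{k}\,T(1) + c\,n \sum_{i=0}^{k-1} \left(\tfrac{3}{2}\right)^{i}
     \in O\bigl(3^{\log_2 n}\bigr) = O\bigl(n^{\log_2 3}\bigr) \subseteq O\bigl(n^{1.59}\bigr)
```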
This is much faster than before: [log plot comparing n² and n^1.59]
Fast multiplication
k = 2: replace xᵏ by e.g. 2ᵏ and do the same as before
▶ Karatsuba ∼1960 Using k-adic expansion, this works for numbers as well
▶ Theorem (Karatsuba ∼1960) For n = 2ᵏ (n = #digits) we have mult ∈ O(n^1.59)
▶ Multiplication is everywhere, so this is fabulous
My silly 5 minute Python code:
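The screenshot of the code did not survive this export; a minimal 5-minute-style sketch of Karatsuba for nonnegative integers (an illustration, not the original code) could look like:

```python
def karatsuba_int(x, y):
    # Karatsuba for nonnegative integers: split each factor at half its
    # digit count (base 10 here), i.e. replace x^k by 10^k as on the slide.
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    B = 10 ** m
    a, b = divmod(x, B)   # x = a*B + b
    c, d = divmod(y, B)   # y = c*B + d
    ac = karatsuba_int(a, c)
    bd = karatsuba_int(b, d)
    u = karatsuba_int(a + b, c + d)   # (a+b)(c+d)
    # (u - ac - bd) is the middle coefficient a*d + b*c.
    return ac * B * B + (u - ac - bd) * B + bd

print(karatsuba_int(1234, 5678))   # 7006652 == 1234 * 5678
```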
My silly 5 minute code is still much slower than SageMath’s:
Why is that? Well: (1) I am stupid ⇒ too much overhead
(2) Nowadays computer algebra systems have beefed-up versions of Karatsuba’s algorithm built in
Actually in use today:
Toom–Cook algorithm ∼1963 with O(n^1.46) (1.46 ≈ log(5)/log(3))
Schönhage–Strassen algorithm ∼1971 with O(n log n log log n)
Toom–Cook generalizes Karatsuba; Schönhage–Strassen is based on FFT
Maybe in use soon (?):
Harvey–van der Hoeven algorithm ∼2019 with O(n log n)
Conjecture (Schönhage–Strassen ∼1971) O(n log n) is the best possible
So maybe that’s it!
This is fantastic for large numbers: [log plot]
Do not try for small numbers due to overhead
Honorable mentions These also run on divide-and-conquer:
Binary search (Babylonians ∼200BCE, Mauchly ∼1946, many others as well) ∈ O(log n)
Euclidean algorithm (Euclid ∼300BCE) ∈ O(log min(a, b))
Fast Fourier transform (Gauss ∼1805, Cooley–Tukey ∼1965) ∈ O(n log n)
Merge sort (von Neumann ∼1945) ∈ O(n log n)
For example, binary search does much better than linear search
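As a minimal illustration (the standard textbook version, not from the slides), binary search halves the search window in each step:

```python
def binary_search(xs, target):
    # Return an index of target in the sorted list xs, or -1 if absent.
    # Each iteration halves [lo, hi], so the loop runs O(log n) times,
    # versus O(n) comparisons for a linear scan.
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 9))   # 4
```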
Discrete and fast Fourier transform
▶ Assume that there is an operation DFTω such that:
fg = DFTω⁻¹( DFTω(f)·DFTω(g) )
with DFTω, DFTω⁻¹ and the product DFTω(f)·DFTω(g) all being cheap
▶ Then computing fg for polynomials f and g is “cheap”
▶ In the following we need primitive nth roots of unity ω in some field R
▶ You can always assume R = ℂ and ω = exp(2πi/n)
The R-linear map
DFTω(f) = ( f(1), f(ω), f(ω²), ..., f(ω^(n−1)) )
that evaluates a polynomial at the powers ωⁱ is called the discrete Fourier transform (DFT)
Theorem (Fast Fourier transform (FFT), Cooley–Tukey ∼1965)
DFTω can be computed in O(n log n)
Theorem (FFT and Vandermonde ∼1770?)
DFTω⁻¹ can be computed in O(n log n)
The Vandermonde matrix Vω, the matrix of the multipoint evaluation map DFTω,
is easy to invert: Vω·V(ω⁻¹) = n·Id
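Why the inversion is easy: a one-line check using that ω is a primitive nth root of unity, so the geometric sum over powers of ω^(i−k) vanishes for i ≠ k:

```latex
\bigl(V_{\omega} V_{\omega^{-1}}\bigr)_{ik}
  = \sum_{j=0}^{n-1} \omega^{ij}\,\omega^{-jk}
  = \sum_{j=0}^{n-1} \bigl(\omega^{i-k}\bigr)^{j}
  = \begin{cases} n & \text{if } i = k,\\ 0 & \text{if } i \neq k, \end{cases}
\qquad\text{hence } V_{\omega} V_{\omega^{-1}} = n\,\mathrm{Id}.
```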
Discrete and fast Fourier transform
Cyclic convolution of f = f_{n−1}x^{n−1} + ... and g = g_{n−1}x^{n−1} + ... is
h = f ∗ₙ g = Σ_{0≤l<n} h_l·xˡ , where h_l = Σ_{j+k≡l mod n} f_j·g_k
We see in a second why this is cyclic
Slogan Convolution = area obtained by sliding f through g
We have a cyclic version of this
Discrete and fast Fourier transform
Example Take f = x³ + 1 and g = 2x³ + 3x² + x + 1
fg = 2x⁶ + 3x⁵ + x⁴ + 3x³ + 3x² + x + 1
= (2x² + 3x + 1)(x⁴ − 1) + 3x³ + 5x² + 4x + 2 ≡ f ∗₄ g mod (x⁴ − 1)
In general, fg ≡ f ∗ₙ g mod (xⁿ − 1)
Thus, fg = f ∗ₙ g if deg(fg) < n
Computation of fg is computation of f ∗ₙ g
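The worked example can be checked directly from the definition of ∗ₙ (a naive O(n²) sketch for illustration; coefficient lists are lowest degree first):

```python
def cyclic_convolution(f, g, n):
    # h_l = sum of f_j * g_k over all j + k congruent to l mod n.
    f = (f + [0] * n)[:n]   # pad/truncate to length n
    g = (g + [0] * n)[:n]
    h = [0] * n
    for j in range(n):
        for k in range(n):
            h[(j + k) % n] += f[j] * g[k]
    return h

# f = x^3 + 1 and g = 2x^3 + 3x^2 + x + 1 as on the slide:
print(cyclic_convolution([1, 0, 0, 1], [1, 1, 3, 2], 4))
# [2, 4, 5, 3], i.e. 3x^3 + 5x^2 + 4x + 2
```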
Discrete and fast Fourier transform
Final lemma we need:
DFTω(f ∗ₙ g) = DFTω(f) ·pointwise DFTω(g)
Theorem (Cooley–Tukey ∼1965)
Computing fg is in O(n log n) for deg(fg) < n
“Proof”
Take n so that deg(fg) < n
Then fg = f ∗ₙ g, so it remains to show that computing f ∗ₙ g is in O(n log n)
But f ∗ₙ g = DFTω⁻¹( DFTω(f) ·pointwise DFTω(g) ),
and DFTω and DFTω⁻¹ are in O(n log n)
This is much faster than before: [log plot]
The overhead is however pretty large
Discrete and fast Fourier transform
What is FFT in this context?
▶ Assume n = 2ᵏ and note that, using Euclid’s algorithm, writing
f = q₀(x^(n/2) − 1) + r₀ = q₁(x^(n/2) + 1) + r₁ gives
f(ω^even) = r₀(ω^even) , f(ω^odd) = r₁(ω^odd)
▶ Writing r₁*( ) = r₁(ω · ) we can use divide-and-conquer since ω² is a primitive
(n/2)th root of unity:
r₀(ω^even) and r₁*(ω^even) are DFTs of order n/2 ⇒ make a recursive call
Theorem (Cooley–Tukey ∼1965) FFT runs in O(n log n)
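A minimal recursive FFT sketch over ℂ (the standard even/odd-index Cooley–Tukey formulation, which is equivalent to the remainder version above; an illustration, not from the slides):

```python
import cmath

def fft(coeffs):
    # Evaluate the polynomial with the given coefficients (lowest degree
    # first) at 1, w, w^2, ..., w^(n-1) for w = exp(2*pi*i/n); n must be 2^k.
    n = len(coeffs)
    if n == 1:
        return coeffs[:]
    w = cmath.exp(2j * cmath.pi / n)
    # The two half-size problems use w^2, a primitive (n/2)th root of unity.
    even = fft(coeffs[0::2])
    odd = fft(coeffs[1::2])
    out = [0] * n
    for k in range(n // 2):
        t = w**k * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t   # since w^(k + n/2) = -w^k
    return out

# Evaluations of x^3 + x^2 + x + 1 at the 4th roots of unity:
vals = fft([1, 1, 1, 1])   # approximately [4, 0, 0, 0]
```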
There is still much to do...
Thanks for your attention!
Toxic effects of heavy metals : Lead and Arsenic
sanjana502982
 
ISI 2024: Application Form (Extended), Exam Date (Out), Eligibility
ISI 2024: Application Form (Extended), Exam Date (Out), EligibilityISI 2024: Application Form (Extended), Exam Date (Out), Eligibility
ISI 2024: Application Form (Extended), Exam Date (Out), Eligibility
SciAstra
 
platelets_clotting_biogenesis.clot retractionpptx
platelets_clotting_biogenesis.clot retractionpptxplatelets_clotting_biogenesis.clot retractionpptx
platelets_clotting_biogenesis.clot retractionpptx
muralinath2
 
原版制作(carleton毕业证书)卡尔顿大学毕业证硕士文凭原版一模一样
原版制作(carleton毕业证书)卡尔顿大学毕业证硕士文凭原版一模一样原版制作(carleton毕业证书)卡尔顿大学毕业证硕士文凭原版一模一样
原版制作(carleton毕业证书)卡尔顿大学毕业证硕士文凭原版一模一样
yqqaatn0
 
SAR of Medicinal Chemistry 1st by dk.pdf
SAR of Medicinal Chemistry 1st by dk.pdfSAR of Medicinal Chemistry 1st by dk.pdf
SAR of Medicinal Chemistry 1st by dk.pdf
KrushnaDarade1
 
20240520 Planning a Circuit Simulator in JavaScript.pptx
20240520 Planning a Circuit Simulator in JavaScript.pptx20240520 Planning a Circuit Simulator in JavaScript.pptx
20240520 Planning a Circuit Simulator in JavaScript.pptx
Sharon Liu
 
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
Sérgio Sacani
 
In silico drugs analogue design: novobiocin analogues.pptx
In silico drugs analogue design: novobiocin analogues.pptxIn silico drugs analogue design: novobiocin analogues.pptx
In silico drugs analogue design: novobiocin analogues.pptx
AlaminAfendy1
 
Introduction to Mean Field Theory(MFT).pptx
Introduction to Mean Field Theory(MFT).pptxIntroduction to Mean Field Theory(MFT).pptx
Introduction to Mean Field Theory(MFT).pptx
zeex60
 
Chapter 12 - climate change and the energy crisis
Chapter 12 - climate change and the energy crisisChapter 12 - climate change and the energy crisis
Chapter 12 - climate change and the energy crisis
tonzsalvador2222
 
Nutraceutical market, scope and growth: Herbal drug technology
Nutraceutical market, scope and growth: Herbal drug technologyNutraceutical market, scope and growth: Herbal drug technology
Nutraceutical market, scope and growth: Herbal drug technology
Lokesh Patil
 

Recently uploaded (20)

Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...
 
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...
 
如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样
如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样
如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样
 
Orion Air Quality Monitoring Systems - CWS
Orion Air Quality Monitoring Systems - CWSOrion Air Quality Monitoring Systems - CWS
Orion Air Quality Monitoring Systems - CWS
 
Deep Software Variability and Frictionless Reproducibility
Deep Software Variability and Frictionless ReproducibilityDeep Software Variability and Frictionless Reproducibility
Deep Software Variability and Frictionless Reproducibility
 
THEMATIC APPERCEPTION TEST(TAT) cognitive abilities, creativity, and critic...
THEMATIC  APPERCEPTION  TEST(TAT) cognitive abilities, creativity, and critic...THEMATIC  APPERCEPTION  TEST(TAT) cognitive abilities, creativity, and critic...
THEMATIC APPERCEPTION TEST(TAT) cognitive abilities, creativity, and critic...
 
DERIVATION OF MODIFIED BERNOULLI EQUATION WITH VISCOUS EFFECTS AND TERMINAL V...
DERIVATION OF MODIFIED BERNOULLI EQUATION WITH VISCOUS EFFECTS AND TERMINAL V...DERIVATION OF MODIFIED BERNOULLI EQUATION WITH VISCOUS EFFECTS AND TERMINAL V...
DERIVATION OF MODIFIED BERNOULLI EQUATION WITH VISCOUS EFFECTS AND TERMINAL V...
 
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...
 
PRESENTATION ABOUT PRINCIPLE OF COSMATIC EVALUATION
PRESENTATION ABOUT PRINCIPLE OF COSMATIC EVALUATIONPRESENTATION ABOUT PRINCIPLE OF COSMATIC EVALUATION
PRESENTATION ABOUT PRINCIPLE OF COSMATIC EVALUATION
 
Toxic effects of heavy metals : Lead and Arsenic
Toxic effects of heavy metals : Lead and ArsenicToxic effects of heavy metals : Lead and Arsenic
Toxic effects of heavy metals : Lead and Arsenic
 
ISI 2024: Application Form (Extended), Exam Date (Out), Eligibility
ISI 2024: Application Form (Extended), Exam Date (Out), EligibilityISI 2024: Application Form (Extended), Exam Date (Out), Eligibility
ISI 2024: Application Form (Extended), Exam Date (Out), Eligibility
 
platelets_clotting_biogenesis.clot retractionpptx
platelets_clotting_biogenesis.clot retractionpptxplatelets_clotting_biogenesis.clot retractionpptx
platelets_clotting_biogenesis.clot retractionpptx
 
原版制作(carleton毕业证书)卡尔顿大学毕业证硕士文凭原版一模一样
原版制作(carleton毕业证书)卡尔顿大学毕业证硕士文凭原版一模一样原版制作(carleton毕业证书)卡尔顿大学毕业证硕士文凭原版一模一样
原版制作(carleton毕业证书)卡尔顿大学毕业证硕士文凭原版一模一样
 
SAR of Medicinal Chemistry 1st by dk.pdf
SAR of Medicinal Chemistry 1st by dk.pdfSAR of Medicinal Chemistry 1st by dk.pdf
SAR of Medicinal Chemistry 1st by dk.pdf
 
20240520 Planning a Circuit Simulator in JavaScript.pptx
20240520 Planning a Circuit Simulator in JavaScript.pptx20240520 Planning a Circuit Simulator in JavaScript.pptx
20240520 Planning a Circuit Simulator in JavaScript.pptx
 
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
 
In silico drugs analogue design: novobiocin analogues.pptx
In silico drugs analogue design: novobiocin analogues.pptxIn silico drugs analogue design: novobiocin analogues.pptx
In silico drugs analogue design: novobiocin analogues.pptx
 
Introduction to Mean Field Theory(MFT).pptx
Introduction to Mean Field Theory(MFT).pptxIntroduction to Mean Field Theory(MFT).pptx
Introduction to Mean Field Theory(MFT).pptx
 
Chapter 12 - climate change and the energy crisis
Chapter 12 - climate change and the energy crisisChapter 12 - climate change and the energy crisis
Chapter 12 - climate change and the energy crisis
 
Nutraceutical market, scope and growth: Herbal drug technology
Nutraceutical market, scope and growth: Herbal drug technologyNutraceutical market, scope and growth: Herbal drug technology
Nutraceutical market, scope and growth: Herbal drug technology
 

A primer on computer algebra

  • 1.
  • 2. Computer algebra ▶ Equations are everywhere: differential equations, linear or polynomial equations or inequalities, recurrences, equations in groups, algebras or categories, tensor equations etc. ▶ There are two ways of solving such equations: approximately or exactly ▶ Oversimplified, numerical analysis studies efficient ways to get approximate solutions; computer algebra wants exact solutions A primer on computer algebra Or: Faster than expected April 2023 2 / 5
  • 3. Computer algebra ▶ Equations are everywhere: differential equations, linear or polynomial equations or inequalities, recurrences, equations in groups, algebras or categories, tensor equations etc. ▶ There are two ways of solving such equations: approximately or exactly ▶ Oversimplified, numerical analysis studies efficient ways to get approximate solutions; computer algebra wants exact solutions To get started, an example from chemistry Watch out for three (very typical but usually highly nontrivial) steps: ▶ create a mathematical model ▶ “solve” the model (enter e.g. computer algebra) ▶ interpret the solution We will see C6H12 next, so here it is: Based on a conjecture of Sachse ∼1890 and a solution by Levelt ∼1997
  • 4. Computer algebra chair: one boat: ▶ C6H12 occurs in incongruent conformations: chair (one) and boats (many) mod mirrors ▶ Chair occurs far more frequently than the boats ▶ Chair is stiff while the boats can twist into one another
  • 5. Computer algebra chair: one boat: ▶ C6H12 occurs in incongruent conformations: chair (one) and boats (many) mod mirrors ▶ Chair occurs far more frequently than the boats ▶ Chair is stiff while the boats can twist into one another Mathematical modeling is difficult so it often happens that one needs to go back and modify the approach This happened here as the first solution attempt was not helpful
  • 6. Computer algebra ▶ They then modeled the bonds as vectors a_i and a_i ⋆ a_j = inner product ▶ Model S_ij = a_i ⋆ a_j as variables ▶ One gets variables subject to polynomial relations ⇒ get solution via Gröbner bases
  • 7. Computer algebra ▶ They then modeled the bonds as vectors a_i and a_i ⋆ a_j = inner product ▶ Model S_ij = a_i ⋆ a_j as variables ▶ One gets variables subject to polynomial relations ⇒ get solution via Gröbner bases One gets that the inflexible solution chair is an isolated point while the boats lie on a curve: Chair won’t move and boats can be twisted when built from tubes
  • 8. Computer algebra ▶ They then modeled the bonds as vectors a_i and a_i ⋆ a_j = inner product ▶ Model S_ij = a_i ⋆ a_j as variables ▶ One gets variables subject to polynomial relations ⇒ get solution via Gröbner bases One gets that the inflexible solution chair is an isolated point while the boats lie on a curve: Chair won’t move and boats can be twisted when built from tubes Gröbner bases are an essential part of computer algebra so this is a fabulous example of the use of computer algebra
  • 9. Computer algebra ▶ They then modeled the bonds as vectors a_i and a_i ⋆ a_j = inner product ▶ Model S_ij = a_i ⋆ a_j as variables ▶ One gets variables subject to polynomial relations ⇒ get solution via Gröbner bases
  • 10. Computer algebra ▶ They then modeled the bonds as vectors a_i and a_i ⋆ a_j = inner product ▶ Model S_ij = a_i ⋆ a_j as variables ▶ One gets variables subject to polynomial relations ⇒ get solution via Gröbner bases What we are using throughout is worst-case analysis using f ∈ O(g): Careful This is different from: ▶ Average-case analysis ▶ Computational implications due to overhead (≈ the part before n_0)
  • 11. Fast multiplication ▶ Given two polynomials f and g of degree < n; we want fg ▶ Classical polynomial multiplication needs n^2 multiplications and (n − 1)^2 additions; thus mult(poly) ∈ O(n^2) ▶ It doesn’t appear that we can do faster
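The classical scheme can be sketched in a few lines of Python (a toy sketch with coefficient lists, lowest degree first; the function name is mine, not from the talk):

```python
def schoolbook_mult(f, g):
    """Classical polynomial multiplication in O(n^2) coefficient operations."""
    h = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] += fi * gj  # n^2 multiplications, ~(n-1)^2 additions
    return h

# (1 + x)(2 + x) = 2 + 3x + x^2
print(schoolbook_mult([1, 1], [2, 1]))  # [2, 3, 1]
```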
  • 12. Fast multiplication ▶ Karatsuba ∼1960 It gets faster! ▶ First, reduce multiplication cost even when potentially increasing addition cost ▶ Second, apply divide-and-conquer
  • 13. Fast multiplication ▶ Karatsuba ∼1960 It gets faster! ▶ First, reduce multiplication cost even when potentially increasing addition cost ▶ Second, apply divide-and-conquer We compute ac, bd, u = (a + b)(c + d), v = ac + bd, u − v with 3 multiplications and 4 additions = 7 operations The total has increased, but a recursive application will drastically reduce the overall cost Upshot We only have 3 multiplications, not 4
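One level of the three-multiplication trick on f = ax + b, g = cx + d can be sketched as follows (a hypothetical helper, not the author's code; it returns the coefficients of x^2, x, 1):

```python
def karatsuba_step(a, b, c, d):
    """One level of Karatsuba: (ax + b)(cx + d) with 3 multiplications."""
    ac = a * c
    bd = b * d
    u = (a + b) * (c + d)   # third and last multiplication
    middle = u - ac - bd    # equals ad + bc, with no extra multiplication
    return ac, middle, bd

# (2x + 3)(4x + 5) = 8x^2 + 22x + 15
print(karatsuba_step(2, 3, 4, 5))  # (8, 22, 15)
```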
  • 14. Fast multiplication Example f = g = x^3 + x^2 + x + 1 is equal to F_1x^2 + F_0 = (x + 1)x^2 + (x + 1) F_0^2 = F_1^2 = (x + 1)^2 and (2x + 2)(2x + 2) need 7 ops each = 21 ops To get fg we then need two more ops = 23 ops Classically we need 4^2 + (4 − 1)^2 = 25 ops
  • 15. Fast multiplication Example f = g = x^3 + x^2 + x + 1 is equal to F_1x^2 + F_0 = (x + 1)x^2 + (x + 1) F_0^2 = F_1^2 = (x + 1)^2 and (2x + 2)(2x + 2) need 7 ops each = 21 ops To get fg we then need two more ops = 23 ops Classically we need 4^2 + (4 − 1)^2 = 25 ops This applies recursively, so we actually save a lot:
  • 16. Fast multiplication Example f = g = x^3 + x^2 + x + 1 is equal to F_1x^2 + F_0 = (x + 1)x^2 + (x + 1) F_0^2 = F_1^2 = (x + 1)^2 and (2x + 2)(2x + 2) need 7 ops each = 21 ops To get fg we then need two more ops = 23 ops Classically we need 4^2 + (4 − 1)^2 = 25 ops Theorem (Karatsuba ∼1960) For n = 2^k we have mult(poly) ∈ O(n^1.59) (1.59 ≈ log(3); always: log = log_2) There is also a version for general n but the analysis is somewhat more involved
  • 17. Fast multiplication Example f = g = x^3 + x^2 + x + 1 is equal to F_1x^2 + F_0 = (x + 1)x^2 + (x + 1) F_0^2 = F_1^2 = (x + 1)^2 and (2x + 2)(2x + 2) need 7 ops each = 21 ops To get fg we then need two more ops = 23 ops Classically we need 4^2 + (4 − 1)^2 = 25 ops This is much faster than before: (log plot) Theorem (Karatsuba ∼1960) For n = 2^k we have mult(poly) ∈ O(n^1.59) (1.59 ≈ log(3))
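The recursive application can be sketched on coefficient lists of length n = 2^k (a minimal sketch; names and conventions, lowest degree first, are my own):

```python
def karatsuba(f, g):
    """Multiply coefficient lists of equal power-of-two length n."""
    n = len(f)
    if n == 1:
        return [f[0] * g[0]]
    m = n // 2
    f0, f1 = f[:m], f[m:]   # f = F_1 x^m + F_0
    g0, g1 = g[:m], g[m:]
    low = karatsuba(f0, g0)                                  # F_0 * G_0
    high = karatsuba(f1, g1)                                 # F_1 * G_1
    mid = karatsuba([a + b for a, b in zip(f0, f1)],
                    [a + b for a, b in zip(g0, g1)])         # (F_0+F_1)(G_0+G_1)
    h = [0] * (2 * n - 1)
    for i, c in enumerate(low):
        h[i] += c
    for i, c in enumerate(high):
        h[i + n] += c
    for i, c in enumerate(mid):
        h[i + m] += c - low[i] - high[i]   # middle term via u - v
    return h

# f = g = x^3 + x^2 + x + 1, as in the example above
print(karatsuba([1, 1, 1, 1], [1, 1, 1, 1]))  # [1, 2, 3, 4, 3, 2, 1]
```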
  • 18. Fast multiplication k = 2: Replace x^k by e.g. 2^k and do the same as before ▶ Karatsuba ∼1960 Using k-adic expansion, this works for numbers as well ▶ Theorem (Karatsuba ∼1960) For n = 2^k (n = #digits) we have mult ∈ O(n^1.59) ▶ Multiplication is everywhere so this is fabulous
  • 19. Fast multiplication k = 2: Replace x^k by e.g. 2^k and do the same as before ▶ Karatsuba ∼1960 Using k-adic expansion, this works for numbers as well ▶ Theorem (Karatsuba ∼1960) For n = 2^k (n = #digits) we have mult ∈ O(n^1.59) ▶ Multiplication is everywhere so this is fabulous My silly 5 minute Python code:
  • 20. Fast multiplication k = 2: Replace x^k by e.g. 2^k and do the same as before ▶ Karatsuba ∼1960 Using k-adic expansion, this works for numbers as well ▶ Theorem (Karatsuba ∼1960) For n = 2^k (n = #digits) we have mult ∈ O(n^1.59) ▶ Multiplication is everywhere so this is fabulous My silly 5 minute code is still much slower than SageMath’s: Why is that? Well: (1) I am stupid ⇒ too much overhead (2) Nowadays computer algebra systems have beefed-up versions of Karatsuba’s algorithm built in
  • 21. Fast multiplication k = 2: Replace x^k by e.g. 2^k and do the same as before ▶ Karatsuba ∼1960 Using k-adic expansion, this works for numbers as well ▶ Theorem (Karatsuba ∼1960) For n = 2^k (n = #digits) we have mult ∈ O(n^1.59) ▶ Multiplication is everywhere so this is fabulous Actually in use today: Toom–Cook algorithm ∼1963 with O(n^1.46) (1.46 ≈ log(5)/log(3)) Schönhage–Strassen algorithm ∼1971 with O(n log n log log n) Toom–Cook generalizes Karatsuba; Schönhage–Strassen is based on FFT Maybe in use soon (?): Harvey–van der Hoeven algorithm ∼2019 with O(n log n) Conjecture (Schönhage–Strassen ∼1971) O(n log n) is the best possible So maybe that’s it!
  • 22. Fast multiplication k = 2: Replace x^k by e.g. 2^k and do the same as before ▶ Karatsuba ∼1960 Using k-adic expansion, this works for numbers as well ▶ Theorem (Karatsuba ∼1960) For n = 2^k (n = #digits) we have mult ∈ O(n^1.59) ▶ Multiplication is everywhere so this is fabulous This is fantastic for large numbers: (log plot) Do not try for small numbers due to overhead
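The integer version reads as follows, in a base-10 sketch (the slide suggests splitting via k-adic expansion, e.g. base 2^k; base 10 keeps the toy example readable, and the function name is mine):

```python
def int_karatsuba(x, y):
    """Karatsuba multiplication for nonnegative integers, splitting in base 10."""
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    p = 10 ** m
    a, b = divmod(x, p)   # x = a * 10^m + b
    c, d = divmod(y, p)   # y = c * 10^m + d
    ac = int_karatsuba(a, c)
    bd = int_karatsuba(b, d)
    mid = int_karatsuba(a + b, c + d) - ac - bd   # = ad + bc, 3rd multiplication
    return ac * p * p + mid * p + bd

print(int_karatsuba(1234, 5678))  # 7006652
```

Real systems switch back to schoolbook multiplication below a threshold, precisely because of the overhead the slide warns about.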
  • 23. Fast multiplication k = 2: Replace x^k by e.g. 2^k and do the same as before ▶ Karatsuba ∼1960 Using k-adic expansion, this works for numbers as well ▶ Theorem (Karatsuba ∼1960) For n = 2^k (n = #digits) we have mult ∈ O(n^1.59) ▶ Multiplication is everywhere so this is fabulous Honorable mentions These also run on divide-and-conquer Binary search (Babylonians ∼200BCE, Mauchly ∼1946, many others as well) ∈ O(log n) Euclidean algorithm (Euclid ∼300BCE) ∈ O(log min(a, b)) Fast Fourier transform (Gauss ∼1805, Cooley–Tukey ∼1965) ∈ O(n log n) Merge sort (von Neumann ∼1945) ∈ O(n log n)
  • 24. Fast multiplication k = 2: Replace x^k by e.g. 2^k and do the same as before ▶ Karatsuba ∼1960 Using k-adic expansion, this works for numbers as well ▶ Theorem (Karatsuba ∼1960) For n = 2^k (n = #digits) we have mult ∈ O(n^1.59) ▶ Multiplication is everywhere so this is fabulous For example, binary search does much better than linear search
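A quick illustration of the binary vs. linear search comparison, using Python's standard library (the data set is made up for the example):

```python
from bisect import bisect_left

def binary_search(sorted_list, target):
    """O(log n) search in a sorted list; returns the index or -1."""
    i = bisect_left(sorted_list, target)
    return i if i < len(sorted_list) and sorted_list[i] == target else -1

data = list(range(0, 1_000_000, 2))   # half a million even numbers
print(binary_search(data, 123456))    # ~20 comparisons instead of ~60000
print(binary_search(data, 123457))    # -1: odd, not present
```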
  • 25. Discrete and fast Fourier transform ▶ Assume that there is an operation DFT_ω such that: fg = DFT_ω^{−1}(DFT_ω(f) · DFT_ω(g)) with DFT_ω and DFT_ω^{−1} and DFT_ω(f) · DFT_ω(g) being cheap ▶ Then computing fg for polynomials f and g is “cheap”
  • 26. Discrete and fast Fourier transform ▶ In the following we need primitive roots of unity ω in some field R ▶ You can always assume R = C and ω = exp(2πi/n)
  • 27. Discrete and fast Fourier transform The R-linear map DFT_ω(f) = (f(1), f(ω), f(ω^2), ..., f(ω^{n−1})) that evaluates a polynomial at the powers ω^i is called the Discrete Fourier transform (DFT)
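Taken literally, the definition gives an O(n^2) evaluation (a naive sketch over C with ω = exp(2πi/n); the point of the FFT below is to compute the same values faster):

```python
import cmath

def dft(f, n):
    """Evaluate the polynomial with coefficients f (lowest first) at ω^0,...,ω^(n-1)."""
    omega = cmath.exp(2j * cmath.pi / n)   # primitive n-th root of unity in C
    return [sum(c * omega ** (i * k) for k, c in enumerate(f)) for i in range(n)]

# f = x^3 + x^2 + x + 1: f(1) = 4, and f vanishes at the other 4th roots of unity
values = dft([1, 1, 1, 1], 4)
print([round(abs(v), 6) for v in values])  # [4.0, 0.0, 0.0, 0.0]
```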
  • 28. Discrete and fast Fourier transform The R-linear map DFT_ω(f) = (f(1), f(ω), f(ω^2), ..., f(ω^{n−1})) that evaluates a polynomial at the powers ω^i is called the Discrete Fourier transform (DFT) Theorem (Fast Fourier transform (FFT) Cooley–Tukey ∼1965) DFT_ω can be computed in O(n log n) Theorem (FFT and Vandermonde ∼1770?) DFT_ω^{−1} can be computed in O(n log n)
  • 29. Discrete and fast Fourier transform The R-linear map DFT_ω(f) = (f(1), f(ω), f(ω^2), ..., f(ω^{n−1})) that evaluates a polynomial at the powers ω^i is called the Discrete Fourier transform (DFT) Theorem (Fast Fourier transform (FFT) Cooley–Tukey ∼1965) DFT_ω can be computed in O(n log n) Theorem (FFT and Vandermonde ∼1770?) DFT_ω^{−1} can be computed in O(n log n) The Vandermonde matrix, the matrix of the multipoint evaluation map DFT_ω, is easy to invert: V_ω V_{ω^{−1}} = n·Id
  • 30. Discrete and fast Fourier transform Cyclic convolution of f = f_{n−1}x^{n−1} + ... and g = g_{n−1}x^{n−1} + ... is h = f ∗_n g = Σ_{0≤l<n} h_l x^l, h_l = Σ_{j+k≡l mod n} f_j g_k We see in a second why this is cyclic
  • 31. Discrete and fast Fourier transform Cyclic convolution of f = f_{n−1}x^{n−1} + ... and g = g_{n−1}x^{n−1} + ... is h = f ∗_n g = Σ_{0≤l<n} h_l x^l, h_l = Σ_{j+k≡l mod n} f_j g_k We see in a second why this is cyclic Slogan Convolution = area obtained by sliding f through g We have a cyclic version of this
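The defining formula for h_l translates directly into code (a sketch; inputs are coefficient lists, lowest degree first, and the example is f = x^3 + 1, g = 2x^3 + 3x^2 + x + 1 from the next slide):

```python
def cyclic_convolution(f, g, n):
    """h_l = sum of f_j * g_k over all j + k ≡ l (mod n)."""
    h = [0] * n
    for j, fj in enumerate(f):
        for k, gk in enumerate(g):
            h[(j + k) % n] += fj * gk   # indices wrap around: the cyclic part
    return h

# f = x^3 + 1, g = 2x^3 + 3x^2 + x + 1, n = 4
print(cyclic_convolution([1, 0, 0, 1], [1, 1, 3, 2], 4))  # [2, 4, 5, 3]
```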
  • 32. Discrete and fast Fourier transform Example Take f = x^3 + 1 and g = 2x^3 + 3x^2 + x + 1 fg = 2x^6 + 3x^5 + x^4 + 3x^3 + 3x^2 + x + 1 = (2x^2 + 3x + 1)(x^4 − 1) + 3x^3 + 5x^2 + 4x + 2 ≡ f ∗_4 g mod (x^4 − 1)
  • 33. Discrete and fast Fourier transform Example Take f = x^3 + 1 and g = 2x^3 + 3x^2 + x + 1 fg = 2x^6 + 3x^5 + x^4 + 3x^3 + 3x^2 + x + 1 = (2x^2 + 3x + 1)(x^4 − 1) + 3x^3 + 5x^2 + 4x + 2 ≡ f ∗_4 g mod (x^4 − 1) In general, fg ≡ f ∗_n g mod (x^n − 1) Thus, fg = f ∗_n g if deg(fg) < n Computation of fg is computation of f ∗_n g
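The congruence fg ≡ f ∗_n g mod (x^n − 1) can be checked on the example above by wrapping coefficients, since x^(n+i) ≡ x^i (a sketch; helper names are mine):

```python
def mult(f, g):
    """Plain polynomial multiplication on coefficient lists (lowest first)."""
    h = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] += fi * gj
    return h

def reduce_mod_xn_minus_1(h, n):
    """Reduce mod x^n - 1 by folding: the coefficient of x^(n+i) moves to x^i."""
    r = [0] * n
    for i, c in enumerate(h):
        r[i % n] += c
    return r

f = [1, 0, 0, 1]   # x^3 + 1
g = [1, 1, 3, 2]   # 2x^3 + 3x^2 + x + 1
print(mult(f, g))                             # [1, 1, 3, 3, 1, 3, 2]
print(reduce_mod_xn_minus_1(mult(f, g), 4))   # [2, 4, 5, 3] = 3x^3 + 5x^2 + 4x + 2
```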
  • 34. Discrete and fast Fourier transform Final lemma we need DFT_ω(f ∗_n g) = DFT_ω(f) ·pointwise DFT_ω(g)
  • 35. Discrete and fast Fourier transform Final lemma we need DFT_ω(f ∗_n g) = DFT_ω(f) ·pointwise DFT_ω(g) Theorem (Cooley–Tukey ∼1965) Computing fg is in O(n log n) for deg(fg) < n “Proof” Take n so that deg(fg) < n Then fg = f ∗_n g, so it remains to show that computing f ∗_n g is in O(n log n) But f ∗_n g = DFT_ω^{−1}(DFT_ω(f) ·pointwise DFT_ω(g)) Computing DFT_ω and DFT_ω^{−1} is in O(n log n)
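The proof's pipeline fg = DFT_ω^{−1}(DFT_ω(f) ·pointwise DFT_ω(g)) can be sketched with a naive DFT standing in for the FFT (so this toy version is O(n^2); swapping in an FFT for `dft` gives the O(n log n) algorithm, and the names are mine):

```python
import cmath

def dft(f, n, inverse=False):
    """Naive DFT at powers of ω = exp(±2πi/n); inverse divides by n."""
    sign = -1 if inverse else 1
    omega = cmath.exp(sign * 2j * cmath.pi / n)
    vals = [sum(c * omega ** (i * k) for k, c in enumerate(f)) for i in range(n)]
    return [v / n for v in vals] if inverse else vals

def fft_mult(f, g):
    """fg via evaluate -> multiply pointwise -> interpolate."""
    n = len(f) + len(g) - 1               # deg(fg) < n, so fg = f *_n g
    F = dft(f + [0] * (n - len(f)), n)
    G = dft(g + [0] * (n - len(g)), n)
    H = [a * b for a, b in zip(F, G)]     # the cheap pointwise product
    return [round(c.real) for c in dft(H, n, inverse=True)]

# f = x^3 + 1, g = 2x^3 + 3x^2 + x + 1, as on the earlier slide
print(fft_mult([1, 0, 0, 1], [1, 1, 3, 2]))  # [1, 1, 3, 3, 1, 3, 2]
```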
  • 36. Discrete and fast Fourier transform Final lemma we need DFT_ω(f ∗_n g) = DFT_ω(f) ·pointwise DFT_ω(g) This is much faster than before: The overhead is however pretty large
  • 37. Discrete and fast Fourier transform What is FFT in this context? ▶ Assume n = 2^k and note that, using Euclid’s algorithm, writing f = q_0(x^{n/2} − 1) + r_0 = q_1(x^{n/2} + 1) + r_1 gives f(ω^even) = r_0(ω^even), f(ω^odd) = r_1(ω^odd) ▶ Writing r_1^∗(x) = r_1(ωx) we can use divide-and-conquer since ω^2 is a primitive (n/2)th root of unity: r_0(ω^even) and r_1^∗(ω^even) are DFTs of order n/2 ⇒ make recursive call
  • 38. Discrete and fast Fourier transform What is FFT in this context? ▶ Assume n = 2^k and note that, using Euclid’s algorithm, writing f = q_0(x^{n/2} − 1) + r_0 = q_1(x^{n/2} + 1) + r_1 gives f(ω^even) = r_0(ω^even), f(ω^odd) = r_1(ω^odd) ▶ Writing r_1^∗(x) = r_1(ωx) we can use divide-and-conquer since ω^2 is a primitive (n/2)th root of unity: r_0(ω^even) and r_1^∗(ω^even) are DFTs of order n/2 ⇒ make recursive call Theorem (Cooley–Tukey ∼1965) FFT runs in O(n log n)
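A compact radix-2 FFT as a sketch; note it uses the classic even/odd coefficient split (decimation in time), which is the mirror image of the remainder formulation above (decimation in frequency) but computes the same O(n log n) recursion:

```python
import cmath

def fft(f):
    """Radix-2 FFT; len(f) must be a power of 2. Returns (f(ω^0), ..., f(ω^(n-1)))."""
    n = len(f)
    if n == 1:
        return f[:]
    even = fft(f[0::2])   # DFT of order n/2 w.r.t. ω^2 ⇒ recursive call
    odd = fft(f[1::2])
    omega = cmath.exp(2j * cmath.pi / n)
    out = [0] * n
    for i in range(n // 2):
        t = omega ** i * odd[i]          # f(x) = E(x^2) + x * O(x^2)
        out[i] = even[i] + t
        out[i + n // 2] = even[i] - t    # ω^(i + n/2) = -ω^i
    return out

# f = x^3 + x^2 + x + 1 again: f(1) = 4, zero at the other 4th roots of unity
print([round(abs(v), 6) for v in fft([1, 1, 1, 1])])  # [4.0, 0.0, 0.0, 0.0]
```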
  • 39. There is still much to do...
  • 40. Thanks for your attention!