Abstract: Computer algebra is about developing tools for the exact solution of equations. Here "equation" can mean differential equations, linear or polynomial equations or inequalities, recurrences, equations in groups, algebras or categories, tensor equations, etc. Computer algebra is widely used for experimentation in mathematics, science and engineering; in fact, it is used everywhere in mathematics, even when working on "very abstract" questions.
This talk is a short and friendly introduction to some of the main ideas of exact yet fast computation, with a focus on multiplication of numbers and polynomials.
A primer on computer algebra
2. Computer algebra
▶ Equations are everywhere: differential equations, linear or polynomial equations or inequalities, recurrences, equations in groups, algebras or categories, tensor equations, etc.
▶ There are two ways of solving such equations: approximately or exactly
▶ Oversimplified: numerical analysis studies efficient ways to get approximate solutions; computer algebra wants exact solutions
A primer on computer algebra Or: Faster than expected April 2023 2 / 5
3. Computer algebra
To get started, an example from chemistry. Watch out for three (very typical but usually highly nontrivial) steps:
▶ create a mathematical model
▶ “solve” the model (this is where e.g. computer algebra enters)
▶ interpret the solution
We will see C6H12 next, so here it is:
Based on a conjecture of Sachse ∼1890 and a solution by Levelt ∼1997
4. Computer algebra
(Pictures in the slides: the chair conformation and one of the boats.)
▶ C6H12 occurs in incongruent conformations: chair (one) and boats (many), mod mirrors
▶ The chair occurs far more frequently than the boats
▶ The chair is stiff, while the boats can twist into one another
Mathematical modeling is difficult, so it often happens that one needs to go back and modify the approach. This happened here, as the first solution attempt was not helpful.
6. Computer algebra
▶ They then modeled the bonds as vectors ai, with ai ⋆ aj = inner product
▶ Model Sij = ai ⋆ aj as variables
▶ One gets polynomial equations in these variables (the relations above) ⇒ get the solution via Gröbner bases
One finds that the inflexible chair solution is an isolated point, while the boats lie on a curve: the chair won’t move, and the boats can be twisted when built from tubes.
Gröbner bases are an essential part of computer algebra, so this is a fabulous example of their usage.
What we are using throughout is worst-case analysis: f ∈ O(g) means there are constants C > 0 and n0 such that |f(n)| ≤ C·|g(n)| for all n ≥ n0.
Careful: this is different from
▶ average-case analysis
▶ computational implications due to overhead (≈ the part before n0)
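The big-O definition can be spot-checked numerically for concrete functions. A minimal sketch; the functions f and g and the witnesses C = 6, n0 = 10 are made-up illustrations, not from the slides:

```python
# Illustration of f ∈ O(g): exhibit constants C and n0 with
# |f(n)| <= C * |g(n)| for all n >= n0.

def f(n):
    return 5 * n**2 + 3 * n + 7  # example function, chosen for illustration

def g(n):
    return n**2

def witness_holds(f, g, C, n0, upto=10_000):
    """Spot-check |f(n)| <= C * |g(n)| for n0 <= n <= upto (evidence, not a proof)."""
    return all(abs(f(n)) <= C * abs(g(n)) for n in range(n0, upto + 1))

# Claim: f ∈ O(g) with witnesses C = 6 and n0 = 10.
print(witness_holds(f, g, 6, 10))  # True: 5n^2 + 3n + 7 <= 6n^2 once n >= 10
```

Note the role of n0: with n0 = 4 the check fails (f(4) = 99 > 96), which is exactly the "overhead before n0" the slide warns about.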
11. Fast multiplication
▶ Given two polynomials f and g of degree < n; we want fg
▶ Classical polynomial multiplication needs n^2 multiplications and (n − 1)^2 additions; thus mult(poly) ∈ O(n^2)
▶ It doesn’t appear that we can do faster
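The classical O(n^2) method can be sketched as follows, with polynomials stored as coefficient lists (f[i] is the coefficient of x^i); the helper name is ours, not from the slides:

```python
def schoolbook_mult(f, g):
    """Classical polynomial multiplication of coefficient lists.

    For len(f) = len(g) = n this performs n^2 coefficient
    multiplications, matching the O(n^2) bound above."""
    result = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            result[i + j] += fi * gj  # contribute to coefficient of x^(i+j)
    return result

# (1 + x)(-1 + x) = x^2 - 1
print(schoolbook_mult([1, 1], [-1, 1]))  # [-1, 0, 1]
```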
12. Fast multiplication
▶ Karatsuba ∼1960: it gets faster!
▶ First, reduce multiplication cost, even when potentially increasing addition cost
▶ Second, apply divide-and-conquer
Writing the factors as ax + b and cx + d, we compute ac, bd, u = (a + b)(c + d), v = ac + bd, and u − v with 3 multiplications and 4 additions = 7 operations.
The total has increased, but a recursive application will drastically reduce the overall cost.
Upshot: we only have 3 multiplications, not 4.
14. Fast multiplication
Example
f = g = x³ + x² + x + 1 is equal to F1·x² + F0 = (x + 1)x² + (x + 1)
F0² = F1² = (x + 1)² and (2x + 2)(2x + 2) need 7 ops each = 21 ops
To get fg we then need two more ops = 23 ops
Classically we need 4² + (4 − 1)² = 25 ops
This applies recursively, so we actually save a lot:
This is much faster than before:
(log plot)
Theorem (Karatsuba ∼1960)
For n = 2^k we have mult(poly) ∈ O(n^1.59) (1.59 ≈ log(3); always: log = log₂)
There is also a version for general n, but the analysis is somewhat more involved
18. Fast multiplication
k = 2: replace x^k by e.g. 2^k and do the same as before
▶ Karatsuba ∼1960 Using k-adic expansion, this works for numbers as well
▶ Theorem (Karatsuba ∼1960) For n = 2^k (n = #digits) we have mult ∈ O(n^1.59)
▶ Multiplication is everywhere, so this is fabulous
My silly 5 minute Python code:
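(The screenshot of that code did not survive the export; the following is my reconstruction of what such a quick-and-dirty Karatsuba routine could look like, not the slide's original.)

```python
def karatsuba(x, y):
    """Multiply nonnegative integers with 3 recursive multiplications instead of 4."""
    if x < 10 or y < 10:              # base case: single digits
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    B = 10 ** m                       # split at m digits: x = a*B + b, y = c*B + d
    a, b = divmod(x, B)
    c, d = divmod(y, B)
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    u = karatsuba(a + b, c + d)       # (a+b)(c+d) = ac + ad + bc + bd
    return ac * B * B + (u - ac - bd) * B + bd

print(karatsuba(1234, 5678))  # 7006652
```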
My silly 5 minute code is still much slower than SageMath’s:
Why is that? Well: (1) I am stupid ⇒ too much overhead
(2) Nowadays computer algebra systems have beefed-up versions of Karatsuba’s algorithm built in
Actually in use today:
Toom–Cook algorithm ∼1963 with O(n^1.46) (1.46 ≈ log(5)/log(3))
Schönhage–Strassen algorithm ∼1971 with O(n·log n·log log n)
Toom–Cook generalizes Karatsuba; Schönhage–Strassen is based on FFT
Maybe in use soon (?):
Harvey–van der Hoeven algorithm ∼2019 with O(n log n)
Conjecture (Schönhage–Strassen ∼1971) O(n log n) is the best possible
So maybe that’s it!
This is fantastic for large numbers:
(log plot)
Do not try it for small numbers due to overhead
Honorable mentions These also run on divide-and-conquer:
Binary search (Babylonians ∼200BCE, Mauchly ∼1946, many others as well) ∈ O(log n)
Euclidean algorithm (Euclid ∼300BCE) ∈ O(log min(a, b))
Fast Fourier transform (Gauss ∼1805, Cooley–Tukey ∼1965) ∈ O(n log n)
Merge sort (von Neumann ∼1945) ∈ O(n log n)
For example, binary search does much better than linear search
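To illustrate that last point, a minimal comparison (my sketch): binary search halves the interval each step, so it touches O(log n) entries, while a linear scan may touch all n.

```python
import bisect

def linear_search(xs, target):
    """O(n): check every entry in turn."""
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def binary_search(xs, target):
    """O(log n): repeatedly halve the sorted range (same idea as the stdlib bisect)."""
    lo, hi = 0, len(xs)
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo if lo < len(xs) and xs[lo] == target else -1

xs = list(range(0, 1000, 2))          # sorted even numbers
assert binary_search(xs, 626) == linear_search(xs, 626) == 313
assert binary_search(xs, 626) == bisect.bisect_left(xs, 626)
```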
25. Discrete and fast Fourier transform
▶ Assume that there is an operation DFT_ω such that:
fg = DFT_ω⁻¹(DFT_ω(f)·DFT_ω(g))
with DFT_ω, DFT_ω⁻¹ and the product DFT_ω(f)·DFT_ω(g) all being cheap
▶ Then computing fg for polynomials f and g is “cheap”
26. Discrete and fast Fourier transform
▶ In the following I need primitive roots of unity ω in some field R
▶ You can always assume R = C and ω = exp(2πi/n)
29. Discrete and fast Fourier transform
The R-linear map
DFT_ω(f) = (f(1), f(ω), f(ω²), ..., f(ω^(n−1)))
that evaluates a polynomial at the ω^i is called the discrete Fourier transform (DFT)
Theorem (Fast Fourier transform (FFT), Cooley–Tukey ∼1965)
DFT_ω can be computed in O(n log n)
Theorem (FFT and Vandermonde ∼1770?)
DFT_ω⁻¹ can be computed in O(n log n)
The Vandermonde matrix, the matrix of the multipoint evaluation map DFT_ω,
is easy to invert: V_ω·V_(ω⁻¹) = n·Id
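The identity V_ω·V_(ω⁻¹) = n·Id can be checked numerically over C with ω = exp(2πi/n) (my sketch, stdlib only):

```python
import cmath

n = 4
w = cmath.exp(2j * cmath.pi / n)            # primitive n-th root of unity in C

# Vandermonde matrix of DFT_w: row i evaluates a polynomial at w^i
V  = [[w ** (i * j) for j in range(n)] for i in range(n)]
Vi = [[w ** (-i * j) for j in range(n)] for i in range(n)]   # same for w^{-1}

# (V @ Vi)[i][k] = sum_j w^{j(i-k)}, which is n on the diagonal and 0 off it
prod = [[sum(V[i][j] * Vi[j][k] for j in range(n)) for k in range(n)]
        for i in range(n)]
for i in range(n):
    for k in range(n):
        expected = n if i == k else 0
        assert abs(prod[i][k] - expected) < 1e-9
```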
31. Discrete and fast Fourier transform
Cyclic convolution of f = f_(n−1)x^(n−1) + ... and g = g_(n−1)x^(n−1) + ... is
h = f ∗_n g = Σ_(0≤l<n) h_l·x^l, where h_l = Σ_(j+k≡l mod n) f_j·g_k
We see in a second why this is cyclic
Slogan Convolution = area obtained by sliding f through g
We have a cyclic version of this
33. Discrete and fast Fourier transform
Example Take f = x³ + 1 and g = 2x³ + 3x² + x + 1
fg = 2x⁶ + 3x⁵ + x⁴ + 3x³ + 3x² + x + 1
= (2x² + 3x + 1)(x⁴ − 1) + 3x³ + 5x² + 4x + 2 ≡ f ∗₄ g mod (x⁴ − 1)
In general, fg ≡ f ∗_n g mod (x^n − 1)
Thus, fg = f ∗_n g if deg(fg) < n
Computation of fg is computation of f ∗_n g
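The example can be checked mechanically (my sketch; coefficient lists hold the constant term first):

```python
def cyclic_convolution(f, g, n):
    """h_l = sum over j+k ≡ l (mod n) of f_j * g_k."""
    f = (f + [0] * n)[:n]                     # pad/truncate to length n
    g = (g + [0] * n)[:n]
    return [sum(f[j] * g[(l - j) % n] for j in range(n)) for l in range(n)]

f = [1, 0, 0, 1]        # x^3 + 1
g = [1, 1, 3, 2]        # 2x^3 + 3x^2 + x + 1
print(cyclic_convolution(f, g, 4))  # [2, 4, 5, 3], i.e. 3x^3 + 5x^2 + 4x + 2
```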
35. Discrete and fast Fourier transform
Final lemma we need:
DFT_ω(f ∗_n g) = DFT_ω(f) ·pointwise DFT_ω(g)
Theorem (Cooley–Tukey ∼1965)
Computing fg is in O(n log n) for deg(fg) < n
“Proof”
Take n so that deg(fg) < n
Then fg = f ∗_n g, so it remains to show that computing f ∗_n g is in O(n log n)
But f ∗_n g = DFT_ω⁻¹(DFT_ω(f) ·pointwise DFT_ω(g)),
and computing DFT_ω and DFT_ω⁻¹ is in O(n log n) ∎
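The proof translates directly into code. A sketch of mine: fg via DFT, pointwise product, inverse DFT. For brevity it uses the quadratic-time DFT; replacing both `dft` calls with the FFT of the next slide gives O(n log n).

```python
import cmath

def dft(f, inverse=False):
    """Evaluate f at the n-th roots of unity (O(n^2); the FFT does this in O(n log n))."""
    n = len(f)
    sign = 1 if inverse else -1       # any consistent sign convention works
    w = cmath.exp(sign * 2j * cmath.pi / n)
    out = [sum(f[j] * w ** (i * j) for j in range(n)) for i in range(n)]
    return [x / n for x in out] if inverse else out

def poly_mul_dft(f, g):
    n = 1
    while n < len(f) + len(g) - 1:    # pick n = 2^k with deg(fg) < n
        n *= 2
    F = dft(f + [0] * (n - len(f)))
    G = dft(g + [0] * (n - len(g)))
    H = [a * b for a, b in zip(F, G)]                 # pointwise product
    return [round(c.real) for c in dft(H, inverse=True)][: len(f) + len(g) - 1]

f = [1, 0, 0, 1]        # x^3 + 1
g = [1, 1, 3, 2]        # 2x^3 + 3x^2 + x + 1
print(poly_mul_dft(f, g))  # [1, 1, 3, 3, 1, 3, 2], matching fg from the example
```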
36. Discrete and fast Fourier transform
This is much faster than before:
(log plot)
The overhead is, however, pretty large
38. Discrete and fast Fourier transform
What is FFT in this context?
▶ Assume n = 2^k and note that, using Euclid’s algorithm (division with remainder), writing
f = q0·(x^(n/2) − 1) + r0 = q1·(x^(n/2) + 1) + r1 gives
f(ω^even) = r0(ω^even), f(ω^odd) = r1(ω^odd)
▶ Writing r1*(x) = r1(ωx) we can use divide-and-conquer since ω² is a primitive (n/2)th root of unity:
r0(ω^even) and r1*(ω^even) are DFTs of order n/2 ⇒ make a recursive call
Theorem (Cooley–Tukey ∼1965) FFT runs in O(n log n)
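The recursion above can be sketched as follows (my illustration over C with ω = exp(−2πi/n); r0 and r1 are the two remainders from the slide, computed coefficientwise):

```python
import cmath

def fft(f):
    """DFT of f (length a power of 2) via the r0/r1 splitting: O(n log n)."""
    n = len(f)
    if n == 1:
        return f[:]
    m = n // 2
    w = cmath.exp(-2j * cmath.pi / n)
    r0 = [f[j] + f[j + m] for j in range(m)]             # f mod (x^m - 1)
    r1 = [(f[j] - f[j + m]) * w ** j for j in range(m)]  # r1*(x) = r1(w·x)
    even, odd = fft(r0), fft(r1)                         # two DFTs of order n/2
    out = [0] * n
    out[0::2] = even     # f(w^even)
    out[1::2] = odd      # f(w^odd)
    return out

# DFT of [1, 0, 0, 1] (= x^3 + 1); |values| agree with direct evaluation
print([round(abs(z), 6) for z in fft([1, 0, 0, 1])])  # [2.0, 1.414214, 0.0, 1.414214]
```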
39. Computer algebra
▶ Equations are everywhere: differential equations, linear or polynomial equations or inequalities, recurrences, equations in groups, algebras or categories, tensor equations etc.
▶ There are two ways of solving such equations: approximately or exactly
▶ Oversimplified: numerical analysis studies efficient ways to get approximate solutions; computer algebra wants exact solutions
Computer algebra
(figures: the chair and one boat conformation)
▶ C6H12 occurs in incongruent conformations: chair (one) and boats (many), mod mirrors
▶ The chair occurs far more frequently than the boats
▶ The chair is stiff, while the boats can twist into one another
There is still much to do...
Thanks for your attention!