Manindra Agrawal and two of his students, Neeraj Kayal and Nitin Saxena, proved that PRIMES is in P by developing a new deterministic polynomial-time primality-testing algorithm. Their proof was a breakthrough because it provided a simple, elegant solution to a long-standing open problem. It was also accessible to undergraduate students, unlike many other advanced mathematical proofs. Within a few days of the preprint's release, experts verified the proof and it received widespread attention, with over two million views of their website in the first ten days.
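The identity at the heart of the AKS test is that n is prime exactly when (x + a)^n ≡ x^n + a (mod n). A minimal sketch of that characterization (the full AKS algorithm avoids the exponential cost shown here by working modulo (x^r − 1, n) for a small r; this is only the underlying idea, not the published algorithm):

```python
from math import comb

def is_prime_binomial(n: int) -> bool:
    """n >= 2 is prime iff (x + 1)^n == x^n + 1 (mod n), i.e. every
    middle binomial coefficient C(n, k), 0 < k < n, is divisible by n."""
    return n >= 2 and all(comb(n, k) % n == 0 for k in range(1, n))

def is_prime_trial(n: int) -> bool:
    # Ordinary trial division, for cross-checking on small inputs.
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# The binomial characterization agrees with trial division.
assert all(is_prime_binomial(n) == is_prime_trial(n) for n in range(2, 50))
```

Note the sketch is exponential in n (it expands the whole binomial row); AKS's contribution was showing the congruence can be checked in polynomial time.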
A three-day national seminar on advances in mathematics was organized by MBICT in January 2012. It received support from various organizations, and over 100 mathematicians and engineers participated, presenting on topics related to the history of mathematics and engineering applications. Key topics included the origins of right angles in ancient Indian mathematics, theories of equations from Newton to modern times, and using mathematics to understand nature.
The document discusses addition and subtraction theorems for the sine and cosine functions in medieval Indian mathematics. It provides statements of the theorems as found in several important Indian works from the 12th to 17th centuries. These theorems are equivalent to the modern formulas for trigonometric addition and subtraction. The document also outlines various proofs and derivations of the theorems found in Indian works, which indicate how Indian mathematicians understood the rationales behind them.
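In modern notation the theorems in question are sin(A ± B) = sin A cos B ± cos A sin B and cos(A ± B) = cos A cos B ∓ sin A sin B. A quick numerical check of the addition forms (the angles are illustrative, not taken from the medieval texts):

```python
import math

def sin_add(a: float, b: float) -> float:
    # Sine addition theorem, stated in modern form.
    return math.sin(a) * math.cos(b) + math.cos(a) * math.sin(b)

def cos_add(a: float, b: float) -> float:
    # Cosine addition theorem, stated in modern form.
    return math.cos(a) * math.cos(b) - math.sin(a) * math.sin(b)

a, b = math.radians(36), math.radians(24)
assert math.isclose(sin_add(a, b), math.sin(a + b))
assert math.isclose(cos_add(a, b), math.cos(a + b))
```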
1) The document discusses an ancient Indian mathematics problem involving using gnomons (vertical rods) to measure the height of a lamp post and the distance between shadow tips.
2) It presents the original rule from the Aryabhatiya, and provides examples and explanations of how to use the rule to calculate heights and distances.
3) The rule involves using ratios between shadow lengths, distance between shadow tips, and gnomon length to determine the "upright" distance and lamp post height.
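The rule reduces to similar triangles: with a gnomon of length g casting shadows s1 and s2 at two positions, and d the distance between the two shadow tips, the "upright" (distance from the lamp base to a shadow tip) is s_i·d/(s2 − s1), and the lamp height is upright × g / s_i. A sketch with illustrative numbers (not taken from the text):

```python
def lamp_height(g: float, s1: float, s2: float, d: float) -> float:
    """Shadow rule via similar triangles.
    g: gnomon length; s1, s2: the two shadow lengths (s2 > s1);
    d: distance between the two shadow tips."""
    upright1 = s1 * d / (s2 - s1)   # lamp base to the first shadow tip
    return upright1 * g / s1        # height of the lamp post

# Gnomon 12, shadows 10 and 16, tips 30 apart -> upright 50, height 60.
assert lamp_height(12, 10, 16, 30) == 60
```

The same height results from either shadow (16·30/6 = 80, and 80·12/16 = 60), which is exactly the consistency the similar-triangle argument guarantees.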
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms.
The First International Conference on the History of Mathematical Sciences was held in New Delhi from December 20-23, 2001. It was organized by the Indian Society for History of Mathematics and Ramjas College in collaboration with other institutions. The conference covered all aspects of the history of mathematical sciences, especially ancient Indian history, and had participants from 11 countries besides India. More than 20 sessions over the four days covered topics from ancient to modern mathematics through invited talks and paper presentations.
This document summarizes a rule from the ancient Indian mathematics text Lilavati for computing the sides of regular polygons inscribed in a circle. The rule provides coefficients that when multiplied by the circle's diameter and divided by 12,000 yield the side lengths of triangles, squares, pentagons, etc. up to nonagons. While the rationales for some coefficients are clear, others are less accurate, possibly due to limitations in available sine tables or knowledge of exact solutions. The document discusses methods that may have been used to derive the coefficients and compares the original values to more accurate modern calculations.
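Each coefficient corresponds (up to rounding) to 12000·sin(π/n), since a side of a regular n-gon inscribed in a circle of diameter d is d·sin(π/n). A sketch recomputing the coefficients with modern values (these recomputed numbers are for comparison; they are not Bhaskara's own figures):

```python
import math

def coefficient(n: int) -> int:
    # Side of a regular n-gon in a circle of diameter d is d * sin(pi/n),
    # so the Lilavati-style coefficient is 12000 * sin(pi/n), rounded.
    return round(12000 * math.sin(math.pi / n))

# Modern recomputation for n = 3..9 (triangle up to nonagon).
modern = {n: coefficient(n) for n in range(3, 10)}
assert modern[6] == 6000    # hexagon: the side equals the radius exactly
assert modern[4] == 8485    # square: 12000 / sqrt(2), rounded
```

Comparing these against the text's coefficients shows directly which entries match an accurate sine table and which are rougher.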
1. Aryabhata, an Indian astronomer and mathematician of the 5th century AD, approximated pi (π) as 3.1416 in his famous work, the Aryabhatiya. This approximation, correct to four decimal places, was one of the most accurate values of pi used anywhere in the ancient world.
2. Aryabhata expressed pi as the fraction 62832/20000, whose continued-fraction expansion is [3; 7, 16, 11]. This value of pi was later used and referenced by many Indian and foreign mathematicians and astronomers over subsequent centuries.
3. Scholars debate whether Aryabhata's value of pi was influenced by the Greeks.
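A quick recomputation of Aryabhata's fraction and its continued fraction (the expansion is computed here by the Euclidean algorithm, not quoted from the text):

```python
from fractions import Fraction

def continued_fraction(num: int, den: int) -> list[int]:
    """Continued-fraction expansion of num/den via the Euclidean algorithm."""
    terms = []
    while den:
        q, r = divmod(num, den)
        terms.append(q)
        num, den = den, r
    return terms

# Aryabhata's value: 62832 / 20000 = 3.1416 exactly.
assert Fraction(62832, 20000) == Fraction(31416, 10000)
assert continued_fraction(62832, 20000) == [3, 7, 16, 11]
```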
News: international conference at Almora (Sohil Gupta)
The document provides information about an upcoming colloquium on the history of mathematical sciences and a symposium on nonlinear analysis to be held from May 16-19, 2011 in Almora, India. The colloquium will feature talks by national and international speakers on various topics related to the history of mathematical sciences. A registration fee of $150 or 1500 rupees is required to attend. Accommodation and local transportation will be provided for registered participants. Papers on relevant topics are invited for both events.
Narayana Pandita, an important Indian mathematician who lived in the 14th century, developed a method for approximating quadratic surds (square roots of non-square integers). His method involved finding integer solutions to the indeterminate equation x^2 - Ny^2 = 1 and taking the ratio x/y as the approximation to √N. The article describes Narayana's method using the example of approximating √10 and relates it to other approaches such as the binomial approximation and Newton's method. It highlights the significance of Narayana's works in the development of Indian mathematics.
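One standard way to generate a chain of such solutions is the composition rule (x, y) → (x0·x + N·y0·y, x0·y + y0·x), which Indian mathematicians knew as bhavana; the fundamental solution (19, 6) for N = 10 is from standard Pell-equation theory, used here as an illustration:

```python
from math import isclose, sqrt

def pell_approximations(N: int, x0: int, y0: int, count: int) -> list:
    """Approximations to sqrt(N) from solutions of x^2 - N*y^2 = 1.
    Starting from one solution (x0, y0), larger solutions come from the
    composition (x, y) -> (x0*x + N*y0*y, x0*y + y0*x), and x/y -> sqrt(N)."""
    x, y = x0, y0
    out = []
    for _ in range(count):
        assert x * x - N * y * y == 1   # each pair really solves the equation
        out.append(x / y)
        x, y = x0 * x + N * y0 * y, x0 * y + y0 * x
    return out

# For N = 10 the fundamental solution is (19, 6): 19^2 - 10*6^2 = 1,
# giving 19/6, then 721/228, then 27379/8658 ~ 3.16227766...
approx = pell_approximations(10, 19, 6, 3)
assert isclose(approx[-1], sqrt(10), rel_tol=1e-8)
```

Each successive ratio roughly squares the accuracy, since the error of x/y is about 1/(2·√N·y²).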
The document summarizes rules for calculating the perimeter and area of an ellipse from an ancient Indian mathematics text from the 9th century AD. It provides the ancient text's approximate formulas for perimeter and area, and explains how they were likely empirically derived by comparing an ellipse to simpler shapes like a semicircle or circular segment. It also gives the correct elliptic integral formulas, and shows how the ancient approximations compare to the precise solutions for a numerical example.
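The exact perimeter mentioned above is an elliptic integral, P = ∫₀^{2π} √(a² sin² t + b² cos² t) dt, which has no closed form but is easy to evaluate numerically. A sketch with illustrative axis lengths, usable as a reference against which any approximate rule (ancient or modern) can be checked:

```python
import math

def ellipse_perimeter(a: float, b: float, steps: int = 100_000) -> float:
    """Perimeter of an ellipse with semi-axes a and b, by evaluating
    the elliptic integral with the trapezoidal rule (the integrand is
    smooth and periodic, so equal-step trapezoids converge very fast)."""
    h = 2 * math.pi / steps
    total = 0.0
    for i in range(steps):
        t = i * h
        total += math.hypot(a * math.sin(t), b * math.cos(t))
    return total * h

# Sanity check: for a circle (a == b == r) this must give 2*pi*r.
assert math.isclose(ellipse_perimeter(3, 3), 2 * math.pi * 3, rel_tol=1e-9)
```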
Brahmagupta was an ancient Indian mathematician from the 7th century AD who made important contributions to mathematics. This document discusses one of Brahmagupta's formulas for calculating the volume of frustum-like solids (solids with parallel ends of different shapes and sizes). The formula provides a method to calculate both the practical volume and gross volume, and uses the difference between the two to find the accurate volume. The formula is shown to apply to specific cases like truncated wedges and cones. The formula demonstrates Brahmagupta's sophisticated mathematical abilities and had influence in both India and other parts of the ancient world.
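As the rule is usually reconstructed for a frustum with rectangular faces a1 × b1 on top and a2 × b2 at the bottom: the "practical" volume uses the mean sides, the "gross" volume uses the mean of the face areas, and the accurate volume is practical + (gross − practical)/3. That correction makes it agree exactly with the modern prismatoid formula. A sketch (the face dimensions are illustrative):

```python
def brahmagupta_volume(a1, b1, a2, b2, h):
    """Frustum rule as usually reconstructed:
    practical = h * mean(a) * mean(b); gross = h * mean of face areas;
    accurate  = practical + (gross - practical) / 3."""
    practical = h * (a1 + a2) / 2 * (b1 + b2) / 2
    gross = h * (a1 * b1 + a2 * b2) / 2
    return practical + (gross - practical) / 3

def prismatoid_volume(a1, b1, a2, b2, h):
    # Modern prismatoid formula: V = h/6 * (A_top + A_bottom + 4*A_middle).
    a_mid, b_mid = (a1 + a2) / 2, (b1 + b2) / 2
    return h / 6 * (a1 * b1 + a2 * b2 + 4 * a_mid * b_mid)

# The two agree exactly, e.g. truncated square pyramid 4x4 -> 10x10, h = 9.
assert brahmagupta_volume(4, 4, 10, 10, 9) == prismatoid_volume(4, 4, 10, 10, 9)
```

Expanding both expressions gives h/6·[(a1+a2)(b1+b2) + a1·b1 + a2·b2] in each case, which is why the ancient correction lands on the exact value for these solids.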
The document summarizes a two-day international seminar on the history of mathematics held in New Delhi in 2012. Over 150 mathematicians from 16 countries participated in the event, which celebrated the 125th birth anniversary of Srinivasa Ramanujan and covered various aspects of mathematics history, especially ancient Indian history. There were 11 sessions over the two days featuring talks and papers from scholars. The seminar concluded with positive feedback and was considered a great success in arranging such a large international event.
This document discusses the unreasonable utility of recreational mathematics. It begins by defining recreational mathematics as mathematics that is fun and popular, understood by laypeople, or used for pedagogical purposes. The document then argues that recreational mathematics has been useful in several ways: it has stimulated serious mathematics; provided problems for students; communicated history and culture; and aided historians. Several examples are provided to illustrate how recreational problems led to probability, graph theory, geometry concepts used in science, and more.
COMMUNICATIONS ON PURE AND APPLIED MATHEMATICS, VOL. XIII, 001-14 (1960)
The Unreasonable Effectiveness of Mathematics in the Natural Sciences

Richard Courant Lecture in Mathematical Sciences delivered at New York University, May 11, 1959

EUGENE P. WIGNER
Princeton University

“and it is probable that there is some secret here which remains to be discovered.” (C. S. Peirce)
There is a story about two friends, who were classmates in high school, talking about their jobs. One of them became a statistician and was working on population trends. He showed a reprint to his former classmate. The reprint started, as usual, with the Gaussian distribution and the statistician explained to his former classmate the meaning of the symbols for the actual population, for the average population, and so on. His classmate was a bit incredulous and was not quite sure whether the statistician was pulling his leg. “How can you know that?” was his query. “And what is this symbol here?” “Oh,” said the statistician, “this is π.” “What is that?” “The ratio of the circumference of the circle to its diameter.” “Well, now you are pushing your joke too far,” said the classmate, “surely the population has nothing to do with the circumference of the circle.”

Naturally, we are inclined to smile about the simplicity of the classmate’s approach. Nevertheless, when I heard this story, I had to admit to an eerie feeling because, surely, the reaction of the classmate betrayed only plain common sense. I was even more confused when, not many days later, someone came to me and expressed his bewilderment¹ with the fact that we make a rather narrow selection when choosing the data on which we test our theories. “How do we know that, if we made a theory which focusses its attention on phenomena we disregard and disregards some of the phenomena now commanding our attention, that we could not build another theory which has little in common with the present one but which, nevertheless, explains just as many phenomena as the present theory?” It has to be admitted that we have no definite evidence that there is no such theory.

The preceding two stories illustrate the two main points which are the subjects of the present discourse. The first point is that mathematical concepts turn up in entirely unexpected connections. Moreover, they often permit an unexpectedly close and accurate description of the phenomena in these connections. Secondly, just because of this circumstance, and because we do not understand the reasons of their usefulness, we cannot know whether a theory formulated in terms of mathematical concepts is uniquely appropriate. We are in a position similar to that of a man who was provided with a bunch of keys and who, having to open several …

¹ The remark to be quoted was made by F. Werner when he was a student in Princeton.
The document discusses Hilbert's view that unsolved mathematical problems are essential for advancing the field, as they test new methods and ideas and can lead to new discoveries and applications. He cites examples throughout history where difficult problems inspired major developments, such as the problem of the shortest line influencing many areas of math and Klein's work on the icosahedron connecting geometry, group theory, and more. Hilbert argues that unsolved problems keep mathematics alive and ensure its continued independent development.
Lecture 1 Slides - Introduction to algorithms.pdf (RanvinuHewage)
- The document discusses reasons for studying algorithms and their broad impacts.
- Key reasons include solving hard problems, intellectual stimulation, becoming a proficient programmer, unlocking secrets of life and the universe, and fun.
- Algorithms have roots in ancient times but new opportunities in the modern era with computers and large data. They allow addressing problems that could not otherwise be solved.
The document summarizes a presentation on the development of modern engineering and argues that software engineering needs to adopt a more theory-driven approach. It traces the history from symbolic algebra in the 16th century through the Enlightenment period. Key points included the introduction of symbolic algebra allowing general solutions, the relationship between models and the subjects they represent, and the development of intuitionistic logic and programming languages based on constructive logic. The presentation argues for an approach like proof-carrying code to address software security and bring more discipline to the field through a foundation in mathematical theory and logic.
The Riemann Hypothesis, proposed by Bernhard Riemann in 1859, is an unproven conjecture regarding the location of zeros of the Riemann zeta function. It states that all non-trivial zeros of the zeta function have a real part of 1/2. Proving or disproving the Riemann Hypothesis would have significant implications for mathematics and fields that rely on prime numbers like cryptography. It remains one of the most important unsolved problems in mathematics with a $1 million prize for a successful proof or disproof.
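The zeta function's link to the primes comes from Euler's product, ζ(s) = Π_p 1/(1 − p^(−s)) over all primes p. A small numerical illustration at s = 2, where both sides approach ζ(2) = π²/6:

```python
import math

def zeta_partial(s: float, n_max: int) -> float:
    # Dirichlet series: sum of 1/n^s up to n_max.
    return sum(1.0 / n ** s for n in range(1, n_max + 1))

def euler_product(s: float, p_max: int) -> float:
    # Product over primes p <= p_max of 1 / (1 - p^-s).
    primes = [p for p in range(2, p_max + 1)
              if all(p % d for d in range(2, int(p ** 0.5) + 1))]
    result = 1.0
    for p in primes:
        result *= 1.0 / (1.0 - p ** -s)
    return result

# Both truncations approach zeta(2) = pi^2 / 6 = 1.6449...
assert abs(zeta_partial(2, 10_000) - math.pi ** 2 / 6) < 1e-3
assert abs(euler_product(2, 10_000) - math.pi ** 2 / 6) < 1e-3
```

The product form is why statements about the zeros of ζ translate into statements about the distribution of primes.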
This document discusses David Hilbert's 1900 lecture on 23 important mathematical problems for the future. It provides context for the importance of problems in driving the development of mathematics. Hilbert believed problems attract mathematicians to devise new methods and gain deeper understanding. Good problems should be clear yet challenging, guiding researchers along the path to hidden truths. Historically, difficult specific problems like "the problem of the line of quickest descent" proposed by John Bernoulli spurred advances in mathematics. One such problem was Fermat's assertion that the diophantine equation x^n + y^n = z^n has no integer solutions for n greater than 2.
The document discusses form-content confusion in computer science. It argues that an excessive focus on formalism is hindering development in three areas: theory of computation, programming languages, and education. In theory of computation, our understanding of fundamental computational tradeoffs like time-memory and serial-parallel exchanges is limited. In programming languages, the focus is too much on syntax and not enough on semantics. In education, the curriculum emphasizes formal classifications over practical knowledge.
The document summarizes 18 important mathematical problems for the next century as identified by Steve Smale. Some of the key problems discussed include:
1) The Riemann Hypothesis concerning the distribution of primes.
2) The Poincaré Conjecture regarding classifying 3-dimensional spaces.
3) The famous P vs. NP problem about the difference between solving and verifying solutions to problems.
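The verify-versus-solve gap behind P vs. NP can be seen in subset sum: checking a proposed certificate takes linear time, while the obvious solver searches exponentially many subsets. A small illustrative sketch (the instance is made up):

```python
from itertools import combinations

def verify(nums, target, certificate) -> bool:
    # Verification is fast: sum the claimed subset and check membership.
    return sum(certificate) == target and all(x in nums for x in certificate)

def solve(nums, target):
    # Brute-force search: up to 2^len(nums) subsets in the worst case.
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = solve(nums, 9)          # e.g. [4, 5]
assert cert is not None and verify(nums, 9, cert)
```

P vs. NP asks, in essence, whether every problem whose solutions can be verified this quickly can also be solved comparably quickly.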
This document provides an introduction to grammars and languages from a computer science perspective. It defines language as an infinite set of sentences, where each sentence is a finite sequence of symbols from a finite alphabet. Grammars are described as a means to generate and describe the structure of the sentences in a language. The document outlines different views of language from communication, linguistics, and computer science to establish the terminology and scope used.
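As a concrete instance of those definitions, the grammar S → aSb | ε generates the language {aⁿbⁿ : n ≥ 0} over the alphabet {a, b}. A minimal sketch of generation and recognition (the grammar is a textbook example, not one from this document):

```python
def generate(n: int) -> str:
    """Derive a sentence by expanding S -> a S b exactly n times,
    then S -> epsilon: the result is n a's followed by n b's."""
    return "a" * n + "b" * n

def in_language(s: str) -> bool:
    """Recognize a^n b^n: even length, first half a's, second half b's."""
    n = len(s) // 2
    return len(s) % 2 == 0 and s == "a" * n + "b" * n

assert in_language(generate(3))    # "aaabbb" is in the language
assert not in_language("aabbb")    # unbalanced string is rejected
```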
This document provides a summary of Valeria de Paiva's background and research interests related to proofs, logic, and programming languages. It discusses her education in mathematics in Brazil and the UK. It then outlines some of her research exploring connections between category theory, proof theory, type theory, and programming through the Curry-Howard correspondence. This includes work on dialectica categories as a model of intuitionistic linear logic. The document promotes an interdisciplinary approach and highlights opportunities at the intersection of logic, computing, linguistics and categories.
This document provides information about various topics in mathematics including the invention of mathematics, algorithms, logarithms, and arithmetic progressions. It discusses how mathematics developed from counting and measurement in early civilizations. Algorithms are defined as well-defined procedures for solving problems, and logarithms are introduced as a way to simplify calculations by relating exponential functions. Arithmetic progressions follow a constant difference between consecutive terms of a sequence. The document also provides details on the development of algorithms, logarithms, and arithmetic as branches of mathematics.
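Two of those ideas in miniature: logarithms turn multiplication into addition (log(ab) = log a + log b), and an arithmetic progression with first term a and common difference d has nth term a + (n − 1)d and sum n(2a + (n − 1)d)/2:

```python
import math

# Logarithms reduce multiplication to addition of logarithms.
x, y = 123.0, 456.0
assert math.isclose(math.log(x * y), math.log(x) + math.log(y))

def nth_term(a, d, n):
    # nth term of the arithmetic progression a, a+d, a+2d, ...
    return a + (n - 1) * d

def ap_sum(a, d, n):
    # Sum of the first n terms.
    return n * (2 * a + (n - 1) * d) / 2

assert nth_term(2, 3, 5) == 14                      # 2, 5, 8, 11, 14
assert ap_sum(2, 3, 5) == sum(2 + 3 * k for k in range(5))
```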
Teaching and learning mathematics at university levelharisv9
This document provides a summary and table of contents for a book that presents dialogues between two characters - M, a mathematician, and RME, a researcher in mathematics education. The dialogues discuss issues in learning and teaching mathematics at the university level, grounded in data from previous studies on this topic. Chapters 3-8 contain the dialogues, focusing on students' mathematical reasoning, expression, key concepts, and pedagogy. Chapter 8 discusses the relationship between mathematicians and mathematics education researchers. Introductory and concluding chapters provide background on the studies, methodology, and production of the book.
T
E
D
|
W
il
e
y
T
h
e
E
d
g
e
o
f
K
n
o
w
le
d
g
e
In
st
ru
ct
o
r
M
a
te
ri
a
ls
1
Physics: The Edge of Knowledge
Putting It Together: Summary Comments and Activities
In this series of TEDTalks, we’ve heard how the pursuit of fundamental physics has led us to a
beautiful and concise set of laws. As it stands today, experimental observations, both at the
smallest scales of the elementary particles and the largest distances of the cosmos, agree well with
the predictions of these laws.
Nevertheless, many questions remain unanswered: What is dark matter? What is the correct
quantum theory of gravity? Does supersymmetry exist? Are there extra dimensions of space? These
are just some of the big questions for which we don’t yet have answers.
New discoveries may be just out of reach, ripe for discovery with the next round of experiments, or
harder to find. In the latter case, we’ll need all our ingenuity to design and build the most effective
experiments or to discover the key theoretical idea(s) that will push us forward.
But even if we one day succeed in finding a fundamental “theory of everything,” there will still be
plenty to challenge our understanding. As Nobel-prize winning physicist Steven Weinberg points
out, “In the Middle Ages Europeans drew maps of the world in which there were all kinds of exciting
things like dragons in unknown territories.” Yet even without "here be dragons," our modern-day
world is far from boring.
The successes of fundamental physics attest to humanity’s insatiable desire for exploration and our
thirst to understand the world around us and our place within it. Whatever the future holds, we can
be confident of one thing: humanity will keep finding big questions to ask.
Summary Activities
1. Starting with Sir Isaac Newton’s laws of mechanics and gravitation, explore the key ideas in
fundamental physics, many of which have been highlighted in these TEDTalks. Present
what you’ve learned in the form of a script for a radio or television program, a syllabus
(complete with suggested readings) for a course you might design, or an interactive
timeline (how far into the past can you extend your timeline?) Here are some names to get
you started:
James Joule, John Dalton & Nicolas Léonard Sadi Carnot
Ludwig Boltzmann & Josiah Willard Gibbs
T
E
D
|
W
il
e
y
T
h
e
E
d
g
e
o
f
K
n
o
w
le
d
g
e
In
st
ru
ct
o
r
M
a
te
ri
a
ls
2
Michael Faraday, André-Marie Ampère & James Clerk Maxwell
Max Planck
Ernst Rutherford
Albert Einstein
Erwin Schrödinger & Werner Heisenberg
Paul Dirac & Wolfgang Pauli
Edwin Hubble, Arno Penzias & Robert Wilson
Sheldon Glashow, Steven Weinberg & Adbus Salam
Peter Higgs
2. Write a short science fiction story incorporating any of the following ideas from
fundamental science:
Quantum mechanics allows objects to be in t ...
This document summarizes interactions between computational complexity theory and several fields of mathematics. It discusses the computational complexity of primality testing in number theory, point-line incidences in combinatorial geometry, the Kadison-Singer problem in operator theory, and generation problems in group theory. For each area, it provides background, describes important problems and results, and notes connections to algorithms and complexity theory.
A short history of computational complexitysugeladi
This document provides a 3 paragraph summary of the history of computational complexity:
1) Computational complexity began in the 1960s with work defining time and space complexity of Turing machines. This led to the definition of efficient algorithms running in polynomial time and NP-completeness, showing many problems are as hard as any in NP.
2) In the 1970s, the theory of NP-completeness was developed, with Cook and Karp proving SAT and other problems are NP-complete. This established the central role of NP-completeness in complexity theory and computer science.
3) In the 1970s, complexity classes beyond NP were defined, such as the polynomial hierarchy, capturing different models of computation.
Artificial immune systems and the grand challenge for non classical computationUltraUploader
The document discusses the "Grand Challenge of Non-Classical Computation", which examines breaking free from classical assumptions about computation. It proposes this as a "journey" of exploration rather than having a set goal. Artificial Immune Systems are presented as an example of non-classical computation that could help address this challenge by breaking classical assumptions like the Turing paradigm and having emergent properties rather than predefined specifications. The document encourages the AIS community to position their research within this broader area of non-classical computation.
The design and implementation of systems that possess, reason with, and acquire knowledge is arguably the ultimate intellectual challenge. So why then, when we open almost any
book on Artificial Intelligence, does it open with a painstaking, almost defensive, definition
of what AI is and what AI is not?
This document is a preface to a book titled "An Episodic History of Mathematics" by Steven G. Krantz. It introduces the purpose and scope of the book, which is to acquaint students with mathematical ideas and culture through short historical episodes involving problem solving. The preface explains that the book will expose students to important mathematical concepts through working on the same problems addressed by famous mathematicians. It is intended to be accessible to a wide audience and teach mathematical thinking and discourse through interactive examples and exercises.
Similar to Articles breakthrough in prime numbers (20)
The document presents a new interpretation of a rule in the Ganita-siira-sa4graha (GSS), an ancient Indian mathematics text, for calculating the surface area of a spherical segment.
Previous interpretations by Rangacharya and Jain assumed the rule treated the spherical segment as a flat circular base. The document suggests an alternate interpretation where the Sanskrit term "viskambha" refers to the curvilinear breadth of the segment, not its diameter. This new interpretation matches the true mathematical formula and has less error than the previous views.
This document discusses ancient Indian values of Pi that were used in mathematics. It provides 5 ancient approximations of Pi that were used in India and other parts of the world:
1. The simplest approximation of Pi = 3, which was used by the ancient Babylonians, in the Bible, and in ancient Chinese works. Similar approximations are found in ancient Indian texts.
2. The "Jaina value" of Pi = 22/7, which was frequently used in Jain texts and first appeared in India around the 1st century AD.
3. The Archimedean value of Pi = 22/7, which was recognized as a satisfactory approximation after Archimedes showed Pi is between 3 1/
1. According to Jaina cosmography, the Jambu Island is circular with a diameter of 100,000 yojanas.
2. Ancient Jaina texts provided approximations for the circumference of the Jambu Island using simple formulas like C=3D. More accurate values were also calculated using the formula C=√(10D2) which is better than C=3D.
3. The article examines various circumference values found in ancient Jaina texts like the Tiloya-Pannatti and expresses them using the detailed unit system from the text to compare to the precise modern calculation of the circumference.
1) The ancient Indian mathematician Bhaskara II first described the addition and subtraction theorems for sines in his work Jyotpatti from 1150 AD. These theorems state that the sine of the sum of two arcs is equal to the sum of the sines of individual arcs multiplied by their cosines, and the sine of the difference of two arcs is equal to the difference of the sines.
2) Bhaskara II provided two formulas based on these theorems to tabulate sines at intervals of 225 minutes and 60 minutes. These formulas allow constructing tables of 24 sines and 90 sines respectively using recurrence relations.
3) The article discusses Bhaskara II's contribution to trigon
1) Brahmagupta, an ancient Indian mathematician from the 7th century AD, developed formulas for calculating the area and diagonals of a cyclic quadrilateral using only the lengths of its four sides.
2) His formula for accurate area involved taking the square root of the product of terms involving the differences between half sums of opposite sides.
3) His expressions for the diagonals involved dividing and multiplying terms involving products of the sides, and took the square root of the results. These formulas are considered some of the most remarkable in Hindu geometry.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help boost feelings of calmness, happiness and focus.
1. Madhava of Sangamagrama formulated a rule equivalent to the Gregory series for calculating the inverse tangent function.
2. The rule expresses the inverse tangent as a sum of terms involving sines and cosines, equivalent to the Gregory series.
3. Ancient Indian mathematicians provided proofs of the rule, equivalent to modern proofs through term-by-term integration, showing they understood the concept of infinite series well before Western mathematicians formulated them explicitly.
Bhaskara II, an Indian mathematician from the 12th century, derived the formula for the surface area of a sphere. He used a crude form of integration by dividing the surface into thin circular sections and approximating their areas. This led to the formula of circumference x diameter. Earlier, Aryabhata II had provided the same formula, though Bhaskara criticized another mathematician, Lalla, for providing an incorrect formula. Bhaskara's derivation showed originality compared to Archimedes' method, though it lacked the same level of ingenuity.
1) The ancient Indian text Baudhayana's Sulba Sutra from around 800-400 BC contains a rule for approximating the square root of two. The rule increases the side of a square by one-third and one-fourth parts to get an approximation of 1.41421.
2) This approximation method of linear interpolation was popular in ancient India. The rule can be explained through a two-term interpolation formula.
3) Repeated use of this interpolation process yields the four-term approximation found in the Sulba Sutra, equivalent to methods used by later mathematicians like Neugebauer. While an improvement, the Indian approximation was still less accurate than the sexagesimal
1) Ancient Indian mathematicians developed rational approximations to trigonometric functions like sine, cosine, and versine that were found in Sanskrit works from the 7th to 17th centuries AD.
2) These approximations took the form of algebraic formulas relating the functions to angles in degrees. For example, one approximation for sine was sin(A) ≈ 16A(180-A)/(32400-A^2).
3) These formulas, which were popular in India, can be found in many original Indian mathematical works and show that such approximations were well-established by the 7th century.
1) Neelakantha Somaydji was an important mathematician from medieval India who lived from 1443-1543 AD and wrote several astronomical works.
2) In one of his works called Golasara, Neelakantha provides a formula for computing the length of a small circular arc given the Indian Sine and Versed Sine.
3) The formula, which is equivalent to the modern formula, enables the approximate computation of an angle when its sine or cosine is given.
1. The property that the second order differences of sines are proportional to the sines themselves was known in early Indian mathematics, dating back to Aryabhata I in the 5th century AD.
2. The paper describes various formulations of this proportionality found in important Indian astronomical works from the 5th to 15th centuries. It also outlines Nilakantha Somayaji's ingenious geometric proof of the property from his commentary on Aryabhatiya in the early 16th century.
3. The Indian mathematical method of computing sine tables using this difference property, which involves an implied differential process, was considered novel by Western scholars despite being used in India since ancient times
This document discusses fractional parts of Aryabhata's sine differences that are found in Govindasvami's commentary on Mahabhashya. Specifically:
1. Govindasvami's commentary provides the sexagesimal fractional parts (seconds and thirds) of Aryabhata's 24 tabular sine differences, allowing for a more accurate sine table with values given to the second order of sexagesimal fractions.
2. These fractional parts found in Govindasvami's commentary are subtracted from or added to Aryabhata's sine differences to improve the accuracy of the table.
3. In addition to improving Aryabhata's sine table, the
PRIMES Is in P: A Breakthrough for "Everyman"

Folkmar Bornemann

Folkmar Bornemann is a professor at the Zentrum Mathematik, Technische Universität München, and editor of the Mitteilungen der Deutschen Mathematiker-Vereinigung. His email address is bornemann@ma.tum.de. This article is a translation by the editor of the Notices of an article by the author that appeared in German in the Mitteilungen der Deutschen Mathematiker-Vereinigung 4-2002, 14–21.

MAY 2003 NOTICES OF THE AMS 545

"New Method Said to Solve Key Problem in Math" was the headline of a story in the New York Times on August 8, 2002, meaning the proof of the statement primes ∈ P, hitherto a big open problem in algorithmic number theory and theoretical computer science. Manindra Agrawal, Neeraj Kayal, and Nitin Saxena of the Indian Institute of Technology accomplished the proof through a surprisingly elegant and brilliantly simple algorithm.

Convinced of its validity after only a few days, the experts raved about it: "This algorithm is beautiful" (Carl Pomerance); "It's the best result I've heard in over ten years" (Shafi Goldwasser).

Four days before the headline in the New York Times, on a Sunday, the three authors had sent a nine-page preprint titled "PRIMES is in P" to fifteen experts. The same evening Jaikumar Radhakrishnan and Vikraman Arvind sent congratulations. Early on Monday one of the deans of the subject, Carl Pomerance, verified the result, and in his enthusiasm he organized an impromptu seminar for that afternoon and informed Sara Robinson of the New York Times. On Tuesday the preprint became freely available on the Internet. On Thursday a further authority, Hendrik Lenstra Jr., put an end to some brief carping in the NMBRTHRY email list with the pronouncement:

The remarks … are unfounded and/or inconsequential. … The proofs in the paper do NOT have too many additional problems to mention. The only true mistake is …, but that is quite easy to fix. Other mistakes … are too minor to mention. The paper is in substance completely correct.

And already on Friday, Dan Bernstein posted on the Web an improved proof of the main result, shortened to one page.

This unusually brief (for mathematics) period of checking reflects both the brevity and elegance of the argument and its technical simplicity, "suited for undergraduates". Two of the authors, Kayal and Saxena, had themselves just earned their bachelor's degrees in computer science in the spring. Is it then an exception for a breakthrough to be accessible to "Everyman"?

In his speech at the 1998 Berlin International Congress of Mathematicians, Hans Magnus Enzensberger took the position that mathematics is both "a cultural anathema" and at the same time in the midst of a golden age due to successes of a quality that he saw neither in theater nor in sports. To be sure, some of those successes have many mathematicians themselves pondering the gulf between the priesthood and the laity within mathematics. A nonspecialist (cross your heart: how many of us are not such "Everymen"?) can neither truly comprehend nor fully appreciate the proof of Fermat's Last Theorem by Andrew Wiles, although popularization efforts like the book of Simon Singh help one get an inkling of the connections. Probably no author could be found to help "Everyman" comprehend all the ramifications and the significance of the successes of last year's recipients of the Fields Medals.

So it is that each one adds bricks to his parapet in the Tower of Babel named Mathematics and deems his constructions there to be fundamental. Rarely is there such a success as at the beginning of August: a foundation stone for the tower that "Everyman" can understand.

Paul Leyland expressed a view that has been in many minds: "Everyone is now wondering what else has been similarly overlooked." Can this explain Agrawal's great astonishment ("I never imagined that our result will be of much interest to traditional mathematicians"): namely, why within the first ten days the dedicated website had over two million hits and three hundred thousand downloads of the preprint?

When a long outstanding problem is finally solved, every mathematician would like to share in the pleasure of discovery by following for himself what has been done. But too often he is stymied by the abstruseness of so much of contemporary mathematics. The recent negative solution to … is a happy counterexample. In this article, a complete account of this solution is given; the only knowledge a reader needs to follow the argument is a little number theory: specifically basic information about divisibility of positive integers and linear congruences.

(Martin Davis, "Hilbert's tenth problem is unsolvable", American Mathematical Monthly 80 (1973), 233–69, first paragraph of the introduction)

As a specialist in numerical analysis and not in algorithmic number theory, I wanted to test my mettle as "Everyman", outside of my parapet.

The Problem

Happily the three motivated their work not by the significance of prime numbers for cryptography and e-commerce, but instead at the outset followed the historically aware Don Knuth in reproducing a quotation from the great Carl Friedrich Gauss from article 329 of the Disquisitiones Arithmeticae (1801), given here in the 1966 translation by Arthur A. Clarke:

The problem of distinguishing prime numbers from composite numbers and of resolving the latter into their prime factors is known to be one of the most important and useful in arithmetic. It has engaged the industry and wisdom of ancient and modern geometers to such an extent that it would be superfluous to discuss the problem at length. … Further, the dignity of the science itself seems to require that every possible means be explored for the solution of a problem so elegant and so celebrated.

In school one becomes familiar with the sieve of Eratosthenes; unfortunately using it to prove that n is prime requires computation time essentially proportional to n itself. The input length¹ of a number, on the other hand, is proportional to the number of binary digits, thus about log₂ n, so we have before us an algorithm with exponential running time O(2^{log₂ n}). To quote Gauss again from article 329 of his Disquisitiones:

Nevertheless we must confess that all methods that have been proposed thus far are either restricted to very special cases or are so laborious and prolix that … these methods do not apply at all to larger numbers.

¹The difference between the size of a number and its length is seen most clearly for such unmistakable giants as the number of atoms in the universe (about 10⁷⁹) or the totality of all arithmetical operations ever carried out by man and machine (about 10²⁴): 80 (respectively 25) decimal digits can be written out relatively quickly.

Can the primality of very large numbers be decided efficiently in principle? This question is rendered mathematical in the framework of modern complexity theory by demanding a polynomial running time. Is there a deterministic² algorithm that, with a fixed exponent κ, decides for every natural number n in O(log^κ n) steps whether this number is prime or not; in short, the hitherto open question: is primes ∈ P?

²That is, an algorithm that does not require random numbers, as opposed to a probabilistic algorithm, which does require such numbers.

The State of Things before August 2002

Ever since the time of Gauss, deciding the primality of a number has been divorced from finding a (partial) factorization in the composite case. In Article 334 of the Disquisitiones he wrote:

The second [observation] is superior in that it permits faster calculation, but … it does not produce the factors of composite numbers. It does however distinguish them from prime numbers.

The starting point for many such methods is Fermat's Little Theorem. It says that for every prime number n and every number a coprime to n one has the relation

a^n ≡ a mod n.

Unfortunately the converse is false: the prime numbers cannot be characterized this way. On the other hand, "using the Fermat congruence is so simple that it seems a shame to give up on it just because there are a few counterexamples" (Carl Pomerance). It is no wonder, then, that refinements of this criterion are the basis of important algorithms.

An elementary probabilistic algorithm of Miller and Rabin from 1976 makes use of a random number generator and shows after k runs either that the number is certainly composite or that the number is prime with high probability, where the probability of error is less than 4⁻ᵏ. The time complexity is order O(k log² n), where the big-O involves a relatively small constant. In practice the algorithm is very fast, and it finds application in cryptography and e-commerce for the production of "industrial-grade primes" (Henri Cohen). In the language of complexity theory, one says for short primes ∈ co-RP.

A deterministic algorithm of Adleman, Pomerance, and Rumely from 1983, which uses much more theory and a generalization of Fermat's Little Theorem to integers in cyclotomic fields, completely characterizes the prime numbers. The best deterministic algorithm prior to August 2002, it has running time of superpolynomial order (log n)^{O(log log log n)}. The triple logarithm in the exponent grows so slowly, however, that concrete versions of the algorithm have had excellent success in the pursuit of record-breaking primality proofs for numbers with more than a thousand decimal digits.³

³The hero of another story, Preda Mihăilescu, developed essential refinements of this algorithm in his dissertation at ETH Zurich, and with his implementation he was for a long time a player in the prime-number-records game. Recently he proved the Catalan Conjecture.

Another class of modern algorithms uses elliptic curves or abelian varieties of high genus. Thus Adleman and Huang, in a very difficult and technical 1992 monograph, were able to give a probabilistic algorithm with polynomial running time that after k iterations either gives a definitive answer (with no possibility of error) or gives no answer, the latter case, however, having probability less than 2⁻ᵏ. In the language of complexity theory, one says for short primes ∈ ZPP.

With this background, and in view of the level of difficulty that had been reached and the absence of further successes in over ten years, it was hardly to be expected that there could be a short, elegant resolution of the question that would be understandable by "Everyman".

Enter Manindra Agrawal

[Photo caption: Manindra Agrawal]

The computer scientist and complexity theorist Manindra Agrawal received his doctorate in 1991 from the Department of Computer Science and Engineering of the Indian Institute of Technology in Kanpur (IITK). After a stay as a Humboldt fellow at the University of Ulm in 1995–96 ("I really enjoyed the stay in Ulm. It helped me in my research and career in many ways"), he returned to Kanpur as a professor. Two years ago he gained recognition when he proved a weak form of the isomorphism conjecture in complexity theory.⁴

⁴The isomorphism conjecture of Berman and Hartmanis implies that P ≠ NP. A proof would therefore solve the first of the seven Millennium Prize Problems of the Clay Mathematics Institute and bring a return of one million dollars.

Around 1999 he worked with his doctoral supervisor, Somenath Biswas, on the question of deciding the identity of polynomials with a probabilistic algorithm. A new probabilistic primality test appears as a simple application in the publication "Primality and identity testing via Chinese remaindering" [1].

The starting point was a generalization of Fermat's Little Theorem to polynomials, an easy exercise for an introductory course on number theory or algebra. Namely, if the natural numbers a and n are relatively prime, then n is prime if and only if

(x − a)^n ≡ x^n − a mod n

in the ring of polynomials Z[x]. Although this is a very elegant characterization of prime numbers, it is hardly useful. The calculation of (x − a)^n alone requires more computation time than does the sieve of Eratosthenes. But it was precisely for polynomials of this size that Agrawal and Biswas had developed a probabilistic identity test, with bounded error probability, that completely avoided the expansion of the polynomial. Unfortunately the resulting test with polynomial running time was far from competitive with that of Miller and Rabin. A new idea was born, but initially it was interesting only as a footnote in the history of primality testing.

Two years later, with his students at IITK, Agrawal began to examine in detail the potential of the new characterization of prime numbers, in which he had great faith.
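The polynomial characterization above can be checked directly for small n. The sketch below is my own illustration, not code from any of the papers discussed: it expands (x − a)^n coefficient by coefficient, which is exactly the exponential-time naivety the article notes, but it makes the "if and only if" concrete. Since gcd(a, n) = 1, the congruence holds precisely when every middle binomial coefficient C(n, k) vanishes mod n and the constant terms agree, which happens precisely when n is prime.

```python
from math import comb, gcd

def poly_fermat_is_prime(n: int, a: int = 1) -> bool:
    """n is prime  iff  (x - a)^n == x^n - a  in Z_n[x], for gcd(a, n) = 1.

    Naive check of all coefficients of (x - a)^n -- exponential in the
    input length log(n), so purely illustrative."""
    assert n >= 2 and gcd(a, n) == 1
    # Coefficient of x^k in (x - a)^n is C(n, k) * (-a)^(n - k).
    # On the right-hand side only x^n (coefficient 1) and the constant
    # term -a survive, so every middle coefficient must vanish mod n ...
    for k in range(1, n):
        if comb(n, k) * pow(-a, n - k, n) % n != 0:
            return False
    # ... and the constant terms must agree: (-a)^n == -a (mod n).
    return (pow(-a, n, n) + a) % n == 0
```

Unlike the plain Fermat congruence a^n ≡ a mod n, which the Carmichael number 561 = 3 · 11 · 17 satisfies for every base a, the polynomial version exposes 561 quickly: already C(561, 3) is not divisible by 561.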
Two Bachelor's Projects

The admissions procedure for the Indian Institute of Technology (IIT) is rigorous and selective. There is a two-stage common procedure called the Joint Entrance Examination (JEE) for admission to one of the seven branches of the IIT and two other institutions. Last year 150,000 Indians applied for admission, and after an initial three-hour examination in mathematics, physics, and chemistry, 15,000 were invited to a second test consisting of a two-hour examination in each of the three subjects. Finally 2,900 students were awarded places, of which 45 were for computer science at the very renowned IIT in Kanpur. It is no wonder that good money is earned in India for preparing candidates for the dreaded JEE, and graduates of the IIT are eagerly hired worldwide.

[Photo captions: Neeraj Kayal; Nitin Saxena]

It was with such highly motivated students that Agrawal now worked further on the primality test. With Rajat Bhattacharjee and Prashant Pandey, the idea arose of looking not at the excessively large polynomial power (x − a)^n but instead at its remainder after division by x^r − 1. If r stays logarithmic in n, then this very much smaller remainder can be directly calculated in polynomial time with suitable algorithms.

If n is prime, then certainly⁵

(T_{r,a})   (x − a)^n ≡ x^n − a mod (x^r − 1, n)

for all r and n coprime to a. Which a and r permit the converse conclusion that n is prime?

⁵I follow the notation of Agrawal et al. and denote by p(x) ≡ q(x) mod (x^r − 1, n) the equality of the remainders of the polynomials p(x) and q(x) after division by x^r − 1 and division of the coefficients by n.

In their joint bachelor's project [5], the two students fixed a = 1 and examined the requirements on r. Through analyzing experiments with r ≤ 100 and n ≤ 10¹⁰, they arrived at the following conjecture. If r is coprime to n and

(T_{r,1})   (x − 1)^n ≡ x^n − 1 mod (x^r − 1, n),

then either n is prime or n² ≡ 1 mod r. For one of the first log₂ n prime numbers r, the latter is not the case, so one would have a proof of the primality of n in polynomial running time O(log^{3+ε} n).

Here enter the heroes of our story, off stage until now, the students Neeraj Kayal and Nitin Saxena. Both were members of the Indian team in the 1997 International Mathematical Olympiad. Studying computer science instead of mathematics because of better employment prospects, they found in complexity theory a way to continue working with mathematics on a high level.

In their joint bachelor's project, they examined the relation of the test (T_{r,1}) to known primality tests that, like (T_{r,1}), in the negative case give a proof that a number is composite and in the positive case give no definitive answer. There was a rich payoff. They were able to show that under the assumption that the Riemann Hypothesis is true, the test (T_{r,1}) could be restricted to r = 2, …, 4 log₂² n for a primality proof. In this way one would obtain a deterministic algorithm of time complexity O(log^{6+ε} n). Furthermore, they were able to show that the conjecture formulated by Bhattacharjee and Pandey would follow from a long-standing conjecture of Carl Pomerance. And in connection with one of their investigations of the class of "introspective numbers", they were led to a proof idea that later would turn out to be essential.

The work of the two, submitted in April 2002, bears the title "Towards a deterministic polynomial-time primality test" [9]. A vision, the goal is already clearly in view.

Changing the Viewpoint

That summer they did not go home first but instead directly began doctoral studies. Saxena actually had wanted to go abroad, but, irony of fates, he did not get a scholarship at his university of choice.

Only a small change of viewpoint is still needed. Both bachelor's projects studied the test (T_{r,a}) for fixed a = 1 and variable r. What happens if one instead fixes r and lets a vary? The breakthrough came on the morning of July 10: through a suitable choice of parameter they obtained nothing less than a characterization of prime powers.

The result, as streamlined by Dan Bernstein, is the following.

Theorem [Agrawal-Kayal-Saxena]. Suppose n ∈ N and s ≤ n. Suppose primes q and r are chosen such that q | (r − 1), n^{(r−1)/q} ≢ 0, 1 mod r, and

$\binom{q+s-1}{s} \ge n^{2\lfloor\sqrt{r}\rfloor}.$

If for all 1 ≤ a < s we have that

(i) a is relatively prime to n, and
(ii) (x − a)^n ≡ x^n − a mod (x^r − 1, n) in the ring of polynomials Z[x],

then n is a prime power.
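Condition (ii) of the theorem is where all the computational work happens: raising x − a to the n-th power modulo (x^r − 1, n). Reducing modulo x^r − 1 simply makes coefficient indices wrap around mod r, so the whole computation lives on length-r coefficient vectors, and binary exponentiation needs only about log₂ n such multiplications. A minimal sketch, with my own function names and a schoolbook O(r²) cyclic multiplication (the fast FFT-based arithmetic mentioned later in the article would replace the inner loops):

```python
def mul_mod(p, q, r, n):
    """Product of two length-r coefficient vectors in Z_n[x]/(x^r - 1):
    a cyclic convolution, since x^r is identified with 1."""
    out = [0] * r
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                out[(i + j) % r] = (out[(i + j) % r] + pi * qj) % n
    return out

def aks_congruence_holds(n, r, a):
    """Does (x - a)^n == x^n - a hold mod (x^r - 1, n)?  Assumes r >= 2."""
    base = [0] * r
    base[0], base[1] = (-a) % n, 1      # the polynomial x - a
    result = [0] * r
    result[0] = 1                       # the constant polynomial 1
    e = n
    while e:                            # binary exponentiation
        if e & 1:
            result = mul_mod(result, base, r, n)
        base = mul_mod(base, base, r, n)
        e >>= 1
    rhs = [0] * r                       # x^n - a reduces to x^(n mod r) - a
    rhs[n % r] = (rhs[n % r] + 1) % n
    rhs[0] = (rhs[0] - a) % n
    return result == rhs
```

For prime n the congruence holds for every r and every a, so a single failing pair (r, a) certifies compositeness; that is exactly how the test is used in the algorithm below.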
The simple, short, and innovative proof of the theorem is so delightful that I could not resist sketching it in the appendix. The theorem now leads directly to the so-called AKS-algorithm.⁶

1. Decide if n is a power of a natural number. If so, go to step 5.
2. Choose (q, r, s) satisfying the hypotheses of the theorem.
3. For a = 1, …, s − 1 do the following:
   (i) If a is a divisor of n, go to step 5.
   (ii) If (x − a)^n ≢ x^n − a mod (x^r − 1, n), go to step 5.
4. n is prime. Done.
5. n is composite. Done.

Step 1 can be accomplished in polynomial time using a variant of Newton iteration. The running time of the main step 3 using rapid FFT-based arithmetic is Õ(s r log^2 n), where the tilde over the big-O incorporates further logarithmic factors in s, r, and log n.

Thus to achieve our goal we must allow s and r to grow at most polynomially in log n. This is the job of step 2. We first show what is possible in principle. Set s = θq with a fixed factor θ. Stirling's formula gives the asymptotic relation

log C(q+s−1, s) ∼ c_θ^{−1} q.

Accordingly, the conditions of the theorem require the asymptotic estimate

q ≳ 2 c_θ √r log n.

Essentially this can happen for large n only if there are infinitely many primes r such that r − 1 has a prime factor q ≥ r^{1/2+δ}. Now this is related to a much-studied problem of analytic number theory.

Sophie Germain and Fermat's Last Theorem

The optimal cost-benefit ratio q/r is obtained for the primes named after Sophie Germain: these are the odd primes q for which r = 2q + 1 is prime too. She had shown in 1823 that for such primes the so-called first case of Fermat's Last Theorem holds: x^q + y^q = z^q has no integer solutions when q ∤ xyz. Therefore it became a question of burning interest whether at least there exist infinitely many such friendly primes. Unfortunately one does not know the answer even today. Heuristic considerations, however, led Hardy and Littlewood in 1922 to the following very precise conjecture on the actual density of Germain primes:

#{q ≤ x : q and 2q + 1 are prime} ∼ 2 C_2 x / ln^2 x,

where C_2 = 0.6601618158… is the twin-primes constant.

If this conjecture were correct, then one could find prime numbers q and r = 2q + 1 of size O(log^2 n) satisfying the hypotheses of the theorem. The AKS-algorithm would then have polynomial running time Õ(log^6 n). Since the conjecture impressively has been confirmed up to x = 10^10, the AKS-algorithm behaves like one of complexity Õ(log^6 n) for numbers n up to 100,000 digits.

In 1985, nearly ten years before Andrew Wiles finally proved Fermat's Last Theorem, Adleman, Fouvry, and Heath-Brown proved what one had not been able to accomplish with the aid of the Germain primes: namely, that the first case of Fermat's Last Theorem holds for infinitely many primes [8]. In fact, Adleman and Heath-Brown studied, as a generalization of Germain primes, exactly those pairs (q, r) that also play a key role in the AKS-algorithm.

A Fields Medal

What they required precisely is that the estimate

#{r ≤ x : q, r prime; q | (r − 1); q ≥ x^{1/2+δ}} ≥ c_δ x / ln x

hold for a suitable exponent δ > 1/6. The hunt for the largest δ began in 1969 with Morris Goldfeld [7], who obtained δ ≈ 1/12, and concluded for the time being in 1985 with Étienne Fouvry [6], whose value was δ = 0.1687 > 1/6. All of these works use very deep methods from analytic number theory that expand on the large sieve of Enrico Bombieri. He published this sieve in 1965 at the age of twenty-five, and in 1974 he received the Fields Medal. Thus a heavy task falls on "Everyman" who wishes to understand the proof of this estimate in detail. In answer to my question about whether one of the three undertook this task, Manindra Agrawal wrote:

   We tried! But sieve theory was too dense for us—we have no background in analytical number theory. So after a while we just gave up.

Also they did not need to do it, for "the result was stated there in precisely the form we needed", and they could count on its validity by trusting in the referee and a certain interval of time—the more so since Fouvry's result related to the hot topic of Fermat's Last Theorem appeared in Inventiones.

Or maybe not? Fouvry forgot to take into account an additional condition in citing a lemma of Bombieri, Friedlander, and Iwaniec. This additional condition reduced the value of δ to δ = 0.1683 > 1/6. It also might have been below the critical threshold. Fouvry later told Roger Baker about this correction, and he and Glyn Harman published it in a survey article [3] in 1996.

⁶ At http://www.ma.tum.de/m3/ftp/Bornemann/PARI/aks.txt there is an executable implementation for the freely available number-theory software package PARI-GP (http://www.parigp-home.de/).
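The Hardy-Littlewood prediction for Germain primes is easy to probe numerically. Here is a small sketch under stated assumptions: the helper names are mine, the constant C_2 is the one quoted above, and the count follows the article's convention of odd primes q with 2q + 1 prime.

```python
import math

def sieve(limit):
    # Boolean table of primality via the sieve of Eratosthenes.
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
    return is_prime

def germain_count(x):
    # Number of odd primes q <= x such that 2q + 1 is prime too.
    is_prime = sieve(2 * x + 1)
    return sum(1 for q in range(3, x + 1, 2) if is_prime[q] and is_prime[2 * q + 1])

C2 = 0.6601618158          # twin-primes constant, as quoted in the text
x = 10_000
count = germain_count(x)   # sieve count of Germain primes up to x
estimate = 2 * C2 * x / math.log(x) ** 2   # Hardy-Littlewood leading term
```

At x = 10,000 the sieve count runs somewhat above the simple 2 C_2 x / ln^2 x term, which is normal for an asymptotic relation of this kind at small x.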
MAY 2003 NOTICES OF THE AMS 549
Incidentally, it was in an Internet search with Google that Agrawal, Kayal, and Saxena ran across Fouvry's article in the bibliography of an article by Pomerance and Shparlinski. When they inquired about the best-known value for δ, Pomerance referred them to the article of Baker and Harman.

Regardless of the optimal value, δ > 0 suffices to guarantee an allowable triple (q, r, s) for the AKS-algorithm of the necessary polynomial size,

r = O(log^{1/δ} n),   q, s = O(log^{1+1/(2δ)} n).

Thus the AKS-algorithm has, all told, a guaranteed running time of Õ(log^{3+3/(2δ)} n). Hence the statement primes ∈ P is proved; the breakthrough is achieved. Kudos! Fouvry's corrected value for δ gives Õ(log^{11.913} n), or, simpler to remember and also without the tilde, O(log^12 n).⁷

The director of the IIT in Kanpur, Sanjay Dhande, was so enthusiastic about the headline in the New York Times that he declared Agrawal would be nominated for the highest honors in mathematics.⁸ In 2006 Agrawal will be forty years old.

How Practical!?

In Internet newsgroups and in newspapers the question quickly arose of practical applications, since large prime numbers are these days an important component of cryptography and e-commerce. We firmly believe that first of all an important theoretical problem was solved that for several decades had eluded the experts. Agrawal himself emphasizes that the problem interested him as an intellectual challenge and that presently the AKS-algorithm is much slower than those algorithms that have raised the record in primality proofs to 5,020 decimal digits.⁹ Finally, one should not forget that the definition of complexity classes like P is a purely theoretical question of an asymptotic statement as n → ∞. In a particular case, therefore, the advantage in running time of a polynomial algorithm as opposed to a superpolynomial algorithm very possibly could become manifest only for n so large that neither of the two algorithms would produce an answer within our lifetime on current hardware. In practice the constants in the big-O in the complexity estimate also come into play.

Lower-quality "industrial-grade primes" with 512 binary digits can be produced in a fraction of a second using the Miller-Rabin test on an off-the-shelf 2GHz PC. If required, their primality can actually be proved in a couple of seconds with the ECPP-method of Atkin-Morain based on elliptic curves.¹⁰ The running-time complexity of this probabilistic algorithm is, to be sure, a "cloudy issue" (Carl Pomerance), but heuristic considerations suggest that the likely value lies right around Õ(log^6 n).

On the other hand, because of the high cost of the polynomial congruence in the third step of the AKS-algorithm, the constant in the conjectured Õ(log^6 n) running-time bound is so large that the algorithm is estimated to take a couple of days on a 512-bit prime number, although Dan Bernstein, Hendrik Lenstra, Felipe Voloch, Bjorn Poonen, and Jeff Vaaler have already improved this constant by a factor of at least 2·10^6 relative to the original formulation of the algorithm—the status as of January 25, 2003; cf. [4].

Thus a factor of about 10^5 is missing to reach a competitive level. The ECPP-method too started with a completely impractical but groundbreaking new idea of Goldwasser and Kilian. Since the method that Agrawal, Kayal, and Saxena have now produced is so unexpectedly new and brilliant, we may confidently anticipate improved capabilities after further maturation of the algorithm.

The Media Pipeline

Except for an excellently researched, technically correct, very readable, and detailed report in the Indian weekly Frontline of August 17, the reporting in the general media was deplorable. Agrawal passed over my inquiry about his impression with a polite, "Leave aside the general public coverage." To be sure, the previously cited New York Times article celebrated the result as a triumph, but opaquely by choosing to simplify to a ridiculous extent: polynomial running time became "quickly"; deterministic became "definitively".

⁷ On January 22, 2003, Dan Bernstein posted on the Web a new version of his draft paper [4]. There, a small variation of the Agrawal-Kayal-Saxena theorem, which he had learned from Lenstra, allows one to complete the proof of primes ∈ P without referring to any deep analytic number theory. A well-known theorem of Chebyshev, asserting that the primes ≤ 2k have product at least 2^k, is enough to guarantee the existence of suitable numbers r, s = O(log^5 n) for which the algorithm works. This removes the last bulwark of difficult mathematics that might have prevented "Everyman" from completely understanding the result. Probably Paulo Ribenboim is right in writing me: "Our specialists should reflect about their convoluted reasoning."

⁸ Already on October 30, 2002, he received the Clay Research Award. Previous winners were Andrew Wiles, the probabilists Smirnov and Schramm, and Fields Medalists Connes, Lafforgue, and Witten.

⁹ Please do not confuse this with the record for the largest known prime number, which is at this time 2^13,466,917 − 1, a Mersenne prime with 4,053,946 decimal places. These numbers have a lot of structure that allows a customized algorithm to be used.

¹⁰ See http://www.ellipsa.net/pages/primo.html for the freely available program PRIMO by Marcel Martin, which for the time being holds the record.
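The Miller-Rabin test behind such "industrial-grade primes" fits in a few lines. A minimal sketch follows; the function names and the choice of 40 rounds are my own, and since a composite survives each round with probability below 1/4, the error chance after 40 rounds is under 4^−40.

```python
import random

def miller_rabin(n, rounds=40):
    # Probabilistic primality test; always correct on primes,
    # wrong on a composite with probability < (1/4)^rounds.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True  # prime with overwhelming probability

def random_probable_prime(bits=512):
    # Draw odd candidates with the top bit set until one passes the test.
    while True:
        cand = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if miller_rabin(cand):
            return cand
```

This is exactly the kind of routine that mass-produces 512-bit probable primes in a fraction of a second; proving their primality is then a separate task, e.g. for ECPP.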
The article thus reads as follows: three Indians obtained a breakthrough because the computer could now say "quickly and definitively" if a number is prime. On the other hand, the new algorithm has no immediate application, because the already existing methods are faster and do not err in practice. "Some breakthrough," readers would say to themselves.

The Associated Press (AP) made the New York Times article into a wire report in which "definitively" became "accurately" and the aspect of the running time disappeared into the background. The sad end of this pipeline was the website of the Tagesschau. On August 12, under the heading "At last: prime numbers can be exactly calculated!" appeared such rubbish as "The joy at German schools is boundless: finally one can calculate prime numbers without tears!" The report was removed after protests from participants in the newsgroup de.sci.mathematik.

Aside from the article in the New York Times, the story went virtually unnoticed in the American press. In the UK a story in the New Scientist of August 17 at least used the words "polynomial time", but it went on to speak of "an algorithm that gives a definite answer to the problem in a reasonable time." A retrospective piece on November 4 in the Wall Street Journal bore the misleading title "One beautiful mind from India is putting the Internet on alert". A year-end column by Clive Thompson in the Sunday New York Times of December 15 asserted, "Ever since the time of the ancient Greeks, finding a simple way to prove a number is prime has been the holy grail of mathematics. … This year, it finally arrived. … This new algorithm could guarantee primes so massive they would afford almost perfect online security."

And the large German-language daily newspapers? The Neue Zürcher Zeitung had its first report on August 30. The article falsely suggested that until now no absolutely certain certificate of primality could be calculated "within reasonable time" for prime numbers used in cryptography and that the three Indians had now achieved precisely this; the result was, however, not so greatly lauded by the news agencies and the media because it could not handle the largest known prime number.

In the August 9 arts section, under the heading "Polynomial gods: Resourceful Indians and their prime numbers", the Frankfurter Allgemeine Zeitung had a cryptic text that first made a connection between Indian mathematics and the Indian pantheon and then let four such deities hold a short discussion of the new result:

   "What is it good for?" expostulated Agni, and Lakshmi retorted: "For hacking! One needs prime numbers for encoding data for electronic transmission—there are various so-called cryptographic algorithms like RSA and the Data Encryption Standard DES; the keys are numbers with prime factorizations, and if that can now be easily done in a time that is polynomial in the input data …" "But it is already well known, for example by the Miller-Rabin test, that if one iterates enough times, one can find a primality test with as large a probability as desired of being correct even for the biggest numbers," contradicted Rudra. "And the encoding prime factorization has nothing to do with the test of whether a number is prime, which is a completely different problem; for security people what the guys have done is worthless." At dawn, the hostess Ushas finally found the magic words of reconciliation: "Let us simply take pleasure in an elegant result that the West also admires and in the continuing inspiration of our great mathematical tradition!"

What reader would get from this the reason for all the fuss?

Future Plans

The three plan to submit their work to Annals of Mathematics and have been in contact with Peter Sarnak about this. They want to rewrite the article "in a more 'mathematical' way as opposed to 'computer science' way, as that would be more suitable in Annals."

As to the emotional state and the future of the two doctoral students Kayal and Saxena, Agrawal says:

   They are happy, but at the same time quite cool about it. I would say they are very level-headed boys. As for their Ph.D., yes, I am sure that this work will qualify for their Ph.D. But I have advised them to stay back for a couple of years, since this is the best time they have for learning. They still need to pick up so many things. But they are free to make the decision—they already have an offer from TIFR [Tata Institute of Fundamental Research].

Appendix

The following is the promised sketch of the proof of the Agrawal-Kayal-Saxena theorem. I follow the streamlined presentation of Dan Bernstein [4].

Sketch of proof. We take a prime factor p of n for which already p^{(r−1)/q} ≢ 0, 1 mod r, and we
show that if (i) and (ii) hold for all 1 ≤ a < s, then the number n is a power of p.

To do this we consider—as did Agrawal on that morning of July 10 when the theorem was found—products of the form t = n^i p^j with 0 ≤ i, j ≤ ⌊√r⌋. The pigeonhole principle gives two distinct pairs (i_1, j_1) and (i_2, j_2) of such exponents for which t_1 = n^{i_1} p^{j_1} ≡ n^{i_2} p^{j_2} = t_2 mod r. The goal is now to prove that actually t_1 = t_2, whence n = p^k for some k.

Via Fermat's Little Theorem, it follows from (ii) that

(*)   (x − a)^{t_µ} ≡ x^{t_µ} − a mod (x^r − 1, p)

for all 1 ≤ a < s and µ = 1, 2. In their bachelor's project, Kayal and Saxena called such exponents "introspective", and for these they showed that the congruence t_1 ≡ t_2 mod r lifts to a congruence t_1 ≡ t_2 mod #G with #G ≫ r. For a suitable choice of parameters, #G becomes so large that t_1 = t_2 follows. According to Agrawal this lifting is "the nicest part of the paper."

How does one do the lifting? Since t_1 ≡ t_2 mod r, we have that x^r − 1 divides the difference x^{t_1} − x^{t_2}, so from (*) it follows finally that

(x − a)^{t_1} ≡ (x − a)^{t_2} mod (x^r − 1, p).

Therefore g^{t_1} = g^{t_2} for all g ∈ G; here G denotes the multiplicative subgroup generated by the linear factors (ζ_r − a) inside the cyclotomic field over Z/pZ generated by adjunction of the r-th roots of unity ζ_r. Taking a primitive element g, that is, one of order #G, shows that #G | (t_1 − t_2).

On the other hand, in view of (i) and because p^{(r−1)/q} ≢ 0, 1 mod r, the group G has—by some combinatorics and elementary theory of cyclotomic polynomials—at least C(q+s−1, s) elements. Therefore by the hypothesis on the binomial coefficient

|t_1 − t_2| < n^{⌊√r⌋} p^{⌊√r⌋} ≤ n^{2⌊√r⌋} ≤ C(q+s−1, s) ≤ #G,

whence follows the desired equality t_1 = t_2.

Note Added in Proof

Early in March 2003, Agrawal, Kayal, and Saxena posted on the Web a revision of their preprint: http://www.cse.iitk.ac.in/news/primality_v3.pdf. It contains the improvements by Lenstra and culminates in the new time-complexity bound O(log^{7.5} n); cf. Theorem 5.3.

Acknowledgment

My sincere thanks to Manindra Agrawal for his willingness, despite thousands of congratulatory emails, to answer my inquiries about background information graciously and thoroughly.

References

[1] Manindra Agrawal and Somenath Biswas, Primality and identity testing via Chinese remaindering, in 40th Annual Symposium on Foundations of Computer Science, IEEE Computer Soc., Los Alamitos, CA, 1999, pp. 202–208.
[2] Manindra Agrawal, Neeraj Kayal, and Nitin Saxena, PRIMES is in P, IIT Kanpur, preprint of August 8, 2002, http://www.cse.iitk.ac.in/news/primality.html.
[3] Roger C. Baker and Glyn Harman, The Brun-Titchmarsh Theorem on average, in Proceedings of a Conference in Honor of Heini Halberstam, Vol. 1, Birkhäuser Boston, Boston, MA, 1996, pp. 39–103.
[4] Daniel Bernstein, Proving Primality after Agrawal-Kayal-Saxena, version of January 25, 2003, http://cr.yp.to/papers.html#aks.
[5] Rajat Bhattacharjee and Prashant Pandey, Primality Testing, Bachelor of Technology Project Report, IIT Kanpur, April 2001, http://www.cse.iitk.ac.in/research/btp2001/primality.html.
[6] Étienne Fouvry, Théorème de Brun-Titchmarsh; application au théorème de Fermat, Invent. Math. 79 (1985), 383–407.
[7] Morris Goldfeld, On the number of primes p for which p + a has a large prime factor, Mathematika 16 (1969), 23–27.
[8] D. Roger Heath-Brown, The first case of Fermat's Last Theorem, Math. Intelligencer 7, no. 4 (1985), 40–47, 55.
[9] Neeraj Kayal and Nitin Saxena, Towards a Deterministic Polynomial-Time Primality Test, Bachelor of Technology Project Report, IIT Kanpur, April 2002, http://www.cse.iitk.ac.in/research/btp2002/primality.html.
[10] R. Ramachandran, A prime solution, Frontline, India's National Magazine 19 (August 17, 2002), http://www.flonnet.com/fl1917/19171290.htm.