A proof of Fermat’s last theorem is presented. It is brief, simple, elementary, and purely arithmetical. The only premises required are: the three defining properties of the equality relation (reflexivity, symmetry, and transitivity), modus tollens, the axiom of induction, the proof of Fermat’s last theorem in the case n=3, and the premises needed to state the theorem itself. The proof modifies Fermat’s method of infinite descent: the descent is linked to induction starting from n=3 by modus tollens, and an inductive series of modus tollens steps is constructed; proving this series by induction is equivalent to Fermat’s last theorem. Since Fermat himself proved the theorem for n=4, one may conjecture that a proof for n≥4 was accessible to him.
An idea for an elementary arithmetical proof of Fermat’s last theorem (FLT) by induction is suggested. Unlike Wiles’s proof (1995), it would have been accessible to Fermat and would justify his claim (1637) to have a proof. The inspiration for a simple proof would contradict Descartes’s dualism, since it appeals to merging “mind” and “body”, “words” and “things”, “terms” and “propositions”, and all orders of logic. A counterfactual course of the history of mathematics and philosophy may be admitted, with the bifurcation occurring in the age of Descartes and Fermat. FLT is exceptionally difficult to prove in our real branch, rather than in the counterfactual one.
This document discusses using the homotopy perturbation method to solve integral equations. It begins by introducing the homotopy perturbation method and how it can be used to find approximate solutions to problems by considering the solution as a sum of an infinite series. It then shows how to apply the method to solve Fredholm and Volterra integral equations of the second kind by constructing a homotopy and obtaining iteration formulas. The document concludes that the homotopy perturbation method provides sufficient conditions for the convergence of solutions to integral equations and integro-differential equations.
A geometric sequence is a sequence where each term after the first is found by multiplying the previous term by a fixed, non-zero number called the common ratio. Unlike an arithmetic sequence, the difference between consecutive terms in a geometric sequence varies. Key formulas for geometric sequences include the recursive formula, explicit formula, and finding the geometric mean. Examples are provided to illustrate identifying common ratios and writing explicit formulas to find specific terms in geometric sequences.
The document defines and provides examples of geometric sequences. A geometric sequence is a sequence where each term is found by multiplying the previous term by a common ratio. The general formula for a geometric sequence is a_n = a_1 r^(n-1), where a_1 is the first term and r is the common ratio between terms. Examples show how to identify the common ratio r and use the general formula to determine the specific formula for different geometric sequences.
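The explicit formula a_n = a_1 r^(n-1) can be illustrated with a short Python sketch; the sequence 3, 6, 12, 24, ... (first term 3, ratio 2) is a hypothetical example, not one taken from the document.

```python
# Sketch of the explicit geometric-sequence formula a_n = a_1 * r**(n - 1).
# The sequence 3, 6, 12, 24, ... (a_1 = 3, r = 2) is a hypothetical example.

def geometric_term(a1, r, n):
    """Return the n-th term (1-indexed) of a geometric sequence."""
    return a1 * r ** (n - 1)

terms = [geometric_term(3, 2, n) for n in range(1, 6)]
print(terms)  # [3, 6, 12, 24, 48]
```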
(1) The document discusses hyponormal matrices in inner product spaces where the inner product may be degenerate.
(2) It reviews definitions of H-selfadjoint, H-skewadjoint, H-unitary, and H-normal matrices for both the nondegenerate and degenerate cases.
(3) A key challenge in the degenerate case is defining the H-adjoint, as it may not exist as a matrix; two approaches are to use a Moore-Penrose inverse or define the H-adjoint as a linear relation.
The document defines integration as the inverse operation of differentiation or the antiderivative. Integration finds the function given its derivative, while differentiation finds the derivative of a function. The key points are:
1) Integration is denoted by the integral sign ∫ and finds the antiderivative F(x) of a function f(x) plus a constant c.
2) Some basic integration rules and theorems are presented, including formulas for integrating polynomials and trigonometric functions.
3) The substitution rule is described for performing integral substitutions to solve integrals that can't be solved with basic formulas. Examples of integrating trigonometric functions and expressions involving square roots are provided.
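The substitution rule can be checked numerically. A minimal sketch, assuming the illustrative substitution u = x^2 for the integral of 2x·cos(x^2); the integrand and bounds are hypothetical, not examples from the document.

```python
import math

# Numerical check of the substitution rule: with u = x**2, du = 2x dx,
# the integral of 2x*cos(x**2) from 0 to 1 equals sin(1).
# The integrand and bounds are illustrative, not taken from the document.

def midpoint_integral(f, a, b, n=100000):
    """Approximate the integral of f on [a, b] with the midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

numeric = midpoint_integral(lambda x: 2 * x * math.cos(x ** 2), 0.0, 1.0)
exact = math.sin(1.0)
print(abs(numeric - exact) < 1e-6)  # True
```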
This document discusses several numerical analysis methods for finding roots of equations or solving systems of equations. It describes the bisection method for finding roots of continuous functions, the method of false position for approximating a root bracketed between two values at which the function has opposite signs, Gauss elimination for transforming a system of equations into triangular form, the Gauss-Jordan method, which carries the elimination further to a reduced diagonal form, and iterative methods, which find solutions through successive approximations rather than direct computation.
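The bisection method described above can be sketched in a few lines; the bracketing interval and the function f(x) = x^3 - x - 2 are hypothetical choices for illustration.

```python
# A minimal sketch of the bisection method: repeatedly halve a bracketing
# interval on which a continuous f changes sign. The function
# f(x) = x**3 - x - 2 (root near 1.5214) is a hypothetical example.

def bisect(f, lo, hi, tol=1e-10):
    """Find a root of a continuous f on [lo, hi] with f(lo)*f(hi) < 0."""
    assert f(lo) * f(hi) < 0, "endpoints must bracket a sign change"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid  # root lies in the left half
        else:
            lo = mid  # root lies in the right half
    return (lo + hi) / 2

root = bisect(lambda x: x ** 3 - x - 2, 1.0, 2.0)
print(round(root, 5))  # 1.52138
```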
1. The document discusses methods for finding integrals of rational functions.
2. It states that the integrals of many rational functions can be found by decomposing them into partial fractions.
3. An example is provided to demonstrate decomposing a rational function into partial fractions.
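A partial-fraction decomposition can be verified numerically; the rational function below is a hypothetical example, not the one worked in the document.

```python
# Numerical check of a partial-fraction decomposition; the rational
# function 1/(x**2 - 1) = (1/2)/(x - 1) - (1/2)/(x + 1) is a
# hypothetical example, not the one worked in the document.

def original(x):
    return 1.0 / (x ** 2 - 1.0)

def decomposed(x):
    return 0.5 / (x - 1.0) - 0.5 / (x + 1.0)

# Agreement at sample points away from the poles x = +-1.
samples = [2.0, 3.5, -4.0, 10.0]
print(all(abs(original(x) - decomposed(x)) < 1e-12 for x in samples))  # True
```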
We continue the study of the concepts of minimality and homogeneity in the fuzzy context. Concretely, we introduce two new notions of minimality in fuzzy bitopological spaces, called minimal fuzzy open set and pairwise minimal fuzzy open set, and give several relationships between these notions and a known one. We also provide results about the transformation of minimal and pairwise minimal fuzzy open sets of a fuzzy bitopological space via fuzzy continuous and fuzzy open mappings, and pairwise continuous and pairwise open mappings, respectively. Moreover, we present two new notions of homogeneity in the fuzzy framework: homogeneous and pairwise homogeneous fuzzy bitopological spaces. Several relationships between these notions and a known one are given, along with some connections between minimality and homogeneity. Finally, we show that the cut bitopological spaces of a homogeneous (resp. pairwise homogeneous) fuzzy bitopological space are homogeneous (resp. pairwise homogeneous), but not conversely.
Recurrence relation of Bessel's and Legendre's function (Partho Ghosh)
This presentation describes the use of recurrence relations in finding solutions of ordinary differential equations, with special emphasis on Bessel's and Legendre's functions.
This document discusses constraints on parity violating conformal field theories in 3 dimensions from the positivity of energy flux. It considers local perturbations created by the stress tensor or conserved current with specific polarizations. The energy measured in different directions is defined and required to be positive. This leads to constraints on the parameter space of 3-point functions involving the stress tensor and current. Explicit calculations are shown for the energy matrix corresponding to charge excitations, with off-diagonal elements arising from parity violating terms in 3-point functions. Positivity of energy flux implies an inequality constraint on the parameters.
This document summarizes a paper that deals with matching doctors to hospitals, taking into account individual doctor preferences as well as couple preferences for doctors that are married. It considers a market with two hospitals, two single doctors, and one doctor couple. Some preference profiles for doctors and hospitals result in no stable matchings, since there are always blocking pairs where a hospital and doctor both prefer each other to their current matching. However, the concept of credible blocking and credible stability is introduced, where every preference profile has at least one credibly stable matching.
1. The work done by a force F acting along a material path L can be calculated using the integral A = ∫_0^S F(s) ds, where s is the parameter of the path.
2. The mass of an object with non-uniform density ρ(x) along the x-axis between a and b can be found using the integral m = ∫_a^b ρ(x) dx.
3. The coordinates (x_c, y_c) of the centroid of a curve given in parametric form x = φ(t), y = ψ(t) between α and β can be determined using arc-length integrals of the form x_c = ∫_α^β φ(t) ds / ∫_α^β ds, where ds = √(φ′(t)² + ψ′(t)²) dt, and similarly for y_c.
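The mass integral in point 2 can be evaluated numerically; the density function ρ(x) = 1 + x^2 on [0, 2] is a hypothetical example with exact mass 14/3.

```python
# Numerical evaluation of the mass integral m = integral of rho(x) dx
# over [a, b]. The density rho(x) = 1 + x**2 on [0, 2] is a hypothetical
# example; the exact value is 2 + 8/3 = 14/3.

def trapezoid(f, a, b, n=100000):
    """Composite trapezoid rule for the integral of f on [a, b]."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

mass = trapezoid(lambda x: 1.0 + x ** 2, 0.0, 2.0)
print(abs(mass - 14.0 / 3.0) < 1e-6)  # True
```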
1. The document discusses differential equations of higher order. It defines a general differential equation of order n with variable coefficients and explains when it can be considered a constant coefficient differential equation.
2. Methods for solving constant coefficient higher order differential equations are presented, including finding the characteristic equation and general solution involving exponential functions of roots of the characteristic equation.
3. Examples of solving second and third order differential equations with the given methods are provided. The complete solutions are written in terms of exponential functions and arbitrary constants.
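The characteristic-equation step can be sketched for the second-order case; the equation y'' - 3y' + 2y = 0, with roots 1 and 2 and general solution C1·e^x + C2·e^(2x), is a hypothetical example.

```python
import cmath

# Sketch of the characteristic-equation step for a constant-coefficient
# equation a*y'' + b*y' + c*y = 0: solve a*m**2 + b*m + c = 0, then the
# general solution is built from exp(m1*x) and exp(m2*x).
# The example y'' - 3y' + 2y = 0 (roots 1 and 2) is hypothetical.

def char_roots_2nd(a, b, c):
    """Roots of the characteristic polynomial a*m**2 + b*m + c."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

r1, r2 = char_roots_2nd(1, -3, 2)
print(sorted([r1.real, r2.real]))  # [1.0, 2.0]
```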
It is known as two-valued logic because it has only two values (Ramjeet Singh Yadav)
This document discusses fuzzy logic and its applications. It begins by explaining classical logic and crisp sets, which have binary membership. It then introduces fuzzy logic, which was developed by Lotfi Zadeh in 1965 and allows partial set membership between 0 and 1. This allows fuzzy logic to handle concepts involving degrees of truth. The key concepts of fuzzy logic discussed include fuzzy sets and membership functions, fuzzy operations like union and intersection, fuzzy relations, fuzzy rules, and fuzzy inference systems. Real-world applications of fuzzy inference systems are also mentioned, such as automatic control, expert systems, and medical engineering.
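The standard min/max definitions of fuzzy union and intersection mentioned above can be sketched briefly; the linguistic labels and membership values below are hypothetical.

```python
# Sketch of basic fuzzy-set operations using the standard min/max
# definitions; the labels and membership values are hypothetical.

A = {"cold": 0.8, "warm": 0.4, "hot": 0.1}
B = {"cold": 0.5, "warm": 0.9, "hot": 0.3}

union = {x: max(A[x], B[x]) for x in A}          # membership: max
intersection = {x: min(A[x], B[x]) for x in A}   # membership: min
complement_A = {x: 1 - A[x] for x in A}          # membership: 1 - mu

print(union)         # {'cold': 0.8, 'warm': 0.9, 'hot': 0.3}
print(intersection)  # {'cold': 0.5, 'warm': 0.4, 'hot': 0.1}
```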
This document discusses conformal field theories and three point functions. It begins by introducing conformal field theories and noting that three point functions of stress tensors are constrained by conformal invariance. For parity even CFTs in d=4 dimensions, the three point functions of the stress tensor T can be written in terms of coefficients that correspond to effective numbers of scalars, fermions, and vectors. Positivity of energy flux from collider-type thought experiments imposes constraints on these coefficients. The document then discusses extending such analyses to parity violating CFTs in d=3 dimensions, where the three point functions of the stress tensor T and conserved current j contain both parity even and odd terms. It proposes studying modifications to existing constraints from applying collider-type energy positivity arguments in three dimensions.
The document discusses polynomials. It defines polynomials as expressions constructed from variables and constants using addition, subtraction, multiplication, and non-negative integer exponents. It provides examples of polynomials and non-polynomial expressions. It also discusses the degrees of terms and polynomials, and how polynomials can be added or multiplied by distributing terms. The document also covers monomials, binomials, trinomials, and the factor theorem.
The document discusses geometric sequences, which are sequences defined by an exponential formula where the nth term is equal to the first term multiplied by the common ratio raised to the power of n minus 1. It provides examples of geometric sequences, such as the sequence of powers of 2, and discusses properties of geometric sequences including that the ratio between any two consecutive terms is equal to the common ratio. Formulas for determining the specific formula for a geometric sequence given the first term and ratio are presented.
This document discusses geometric sequences, which are sequences where each term is found by multiplying the preceding term by a constant ratio. It provides the recursive and explicit forms for writing geometric sequences, and gives examples of finding specific terms and writing the explicit formula given the first term and ratio. Key details include that the recursive form is a_(n+1) = a_n · r, and the explicit form is a_n = a · r^(n-1), where a is the first term and r is the common ratio.
Reading "Testing a point-null hypothesis", by Jiahuan Li, Feb. 25, 2013 (Christian Robert)
The paper studies the relationship between p-values and Bayesian measures of evidence for testing point null hypotheses. It finds that p-values can be highly misleading about the evidence provided by data against the null hypothesis. Through examples, it shows that a small p-value does not necessarily mean high posterior probability for rejecting the null or a large Bayes factor against the null. The paper derives lower bounds on these Bayesian measures of evidence for several classes of distributions to demonstrate the conflicts between p-values and Bayesian analysis.
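One well-known lower bound of this kind is the -e·p·ln(p) calibration of Sellke, Bayarri, and Berger, valid for p < 1/e: it bounds from below the Bayes factor in favor of the point null. A minimal sketch:

```python
import math

# The Sellke-Bayarri-Berger calibration bounds the Bayes factor in favor
# of a point null from below: B(p) >= -e * p * ln(p) for p < 1/e.
# It illustrates the conflict described above: p = 0.05 looks like strong
# evidence, yet the bound allows the odds for the null to drop by at most
# a factor of about 1/0.407, roughly 2.5.

def bayes_factor_lower_bound(p):
    assert 0 < p < 1 / math.e, "calibration valid only for p < 1/e"
    return -math.e * p * math.log(p)

print(round(bayes_factor_lower_bound(0.05), 3))  # 0.407
```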
The document discusses differential equations, which are equations involving derivatives of functions with respect to independent variables. It defines ordinary and partial differential equations, provides examples, and discusses methods for solving differential equations, including finding general and particular solutions. It also describes the relationship between solutions of differential equations and their integrals.
This document contains sections from a textbook on abstract algebra covering relations, functions, and inverse functions. Some key points:
- It defines relations as subsets of Cartesian products and distinguishes them from Cartesian products.
- It provides examples of relations and determines their domains and ranges.
- It defines functions as special types of relations where each element in the domain maps to only one element in the range.
- It discusses properties of functions like being one-to-one, onto, and invertible. A function must be one-to-one and onto to be invertible.
- It provides examples of functions and determines if they are one-to-one, onto, and invertible, and calculates their inverses where they exist.
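The one-to-one and onto tests above can be sketched for functions on finite sets; the sets and the mapping below are hypothetical.

```python
# Sketch of the one-to-one / onto tests for a function on finite sets,
# represented as a dict from domain to codomain; the mapping is hypothetical.

def is_one_to_one(f):
    """f is injective if no two domain elements share an image."""
    return len(set(f.values())) == len(f)

def is_onto(f, codomain):
    """f is surjective if every codomain element is hit."""
    return set(f.values()) == set(codomain)

f = {1: "a", 2: "b", 3: "c"}
print(is_one_to_one(f), is_onto(f, {"a", "b", "c"}))  # True True
```

A function that passes both tests is invertible, and its inverse is simply the reversed dict.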
1. The document discusses complete differential equations and their integrals. It provides an example of solving the complete differential equation (2 − 9xy²) dx + (4y² − 6x³) dy = 0 for the integral u(x, y).
2. It explains that if a differential equation can be written as the complete differential of some function u, then the integral is a function of x and y alone.
3. Methods for finding the integrating factor μ(x,y) are presented to transform a given differential equation into a complete differential equation, including the possibility that μ depends only on x or y.
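The exactness condition behind complete differentials, ∂M/∂y = ∂N/∂x, can be checked numerically. Since the document's own equation is not reproduced here, the pair M = 2xy, N = x² (with u = x²y) is a hypothetical example.

```python
# Numerical exactness check for M dx + N dy = 0: the equation is a
# complete (exact) differential when dM/dy == dN/dx. The example
# M = 2*x*y, N = x**2 (coming from u = x**2 * y) is hypothetical.

def partial(f, x, y, var, h=1e-6):
    """Central-difference partial derivative of f(x, y) in x or y."""
    if var == "y":
        return (f(x, y + h) - f(x, y - h)) / (2 * h)
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

M = lambda x, y: 2 * x * y
N = lambda x, y: x ** 2

pt = (1.3, -0.7)
print(abs(partial(M, *pt, "y") - partial(N, *pt, "x")) < 1e-6)  # True
```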
This document provides an introduction to polynomial functions, including definitions of key terms like monomial, polynomial, standard form, and the degree of terms and polynomials; classifying polynomials by number of terms and degree; examples of graphs of low-degree polynomials; and how to combine like terms. It defines a monomial as an expression built from variables and numbers, and a polynomial as a sum of terms with whole-number exponents. Standard form writes polynomials in descending order of exponents. Degree is determined by the highest exponent of a term or polynomial. Polynomials are classified by number of terms (monomial, binomial, trinomial, etc.) or degree (linear, quadratic, cubic, etc.). Examples show graphs changing shape with increasing degree. Combining like terms adds the coefficients of terms with the same variables and exponents.
The document discusses methods for solving higher order linear differential equations with constant coefficients. It provides:
1) An overview of higher order linear differential equations of the form y^(n) + a_1 y^(n-1) + ... + a_n y = f(x).
2) Methods for finding the general solution, which involves finding the complementary function y_C(x) and a particular integral y_P(x).
3) Ways to find the particular integral y_P(x) by assuming it has the same form of terms as the forcing function f(x).
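The particular-integral idea in point 3 can be sketched concretely. For the hypothetical equation y'' - 3y' + 2y = e^(3x), trying y_P = A·e^(3x) gives (9 - 9 + 2)A = 1, so A = 1/2; the code verifies this by substituting y_P back into the equation.

```python
import math

# Sketch of the particular-integral step: for y'' - 3y' + 2y = exp(3x)
# (a hypothetical forcing term), the trial y_P = A*exp(3x) yields
# A = 1 / p(3), where p(m) = m**2 - 3m + 2 is the characteristic polynomial.

A = 1 / (3 ** 2 - 3 * 3 + 2)  # characteristic polynomial evaluated at m = 3

def residual(x):
    """Left side minus right side with y = y_P substituted in."""
    yP = A * math.exp(3 * x)
    yP1 = 3 * A * math.exp(3 * x)   # first derivative
    yP2 = 9 * A * math.exp(3 * x)   # second derivative
    return yP2 - 3 * yP1 + 2 * yP - math.exp(3 * x)

print(all(abs(residual(x)) < 1e-9 for x in [0.0, 0.5, 1.0]))  # True
```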
This document discusses arithmetic and geometric sequences. It defines arithmetic sequences as having a constant difference between consecutive terms, called the common difference. Geometric sequences have a constant ratio between consecutive terms, called the common ratio. Formulas are provided for finding the nth term of an arithmetic sequence and a geometric sequence based on the initial term and common difference or ratio. Examples are given of identifying the type of sequence and calculating terms. The document also discusses notation, formal definitions, and graphing sequences.
1. The document discusses differential equations of higher order, which are equations where the derivatives are functions of higher algebraic order.
2. It provides examples of functions that are of higher order, and explains that a differential equation is of higher order if the functions P(x,y) and Q(x,y) are of higher algebraic order.
3. Methods for solving higher order differential equations are presented, including reducing them to an equivalent ordinary differential equation using appropriate substitutions, and then solving using standard integration techniques.
IOSR Journal of Mathematics (IOSR-JM) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of mathematics and its applications. The journal welcomes publication of high-quality papers on theoretical developments and practical applications in mathematics. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
A Probabilistic Algorithm for Computation of Polynomial Greatest Common with ... (mathsjournal)
- The document presents a probabilistic algorithm for computing the polynomial greatest common divisor (PGCD) with smaller factors.
- It summarizes previous work on the subresultant algorithm for computing PGCD and discusses its limitations, such as not always correctly determining the variant τ.
- The new algorithm aims to determine τ correctly in most cases when given two polynomials f(x) and g(x). It does so by adding a few steps instead of directly computing the polynomial t(x) in the relation s(x)f(x) + t(x)g(x) = r(x).
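For contrast with the probabilistic approach summarized above, the classical deterministic route is the Euclidean algorithm over the rationals. A minimal sketch, with hypothetical input polynomials:

```python
from fractions import Fraction

# Deterministic Euclidean-algorithm sketch of polynomial GCD over the
# rationals, for contrast with the probabilistic algorithm summarized
# above. Polynomials are coefficient lists, highest degree first; the
# inputs f = (x-1)(x-2) and g = (x-1)(x-3) are hypothetical.

def poly_mod(a, b):
    """Remainder of polynomial a divided by b."""
    a = a[:]
    while len(a) >= len(b) and any(a):
        factor = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= factor * b[i]
        a.pop(0)  # leading coefficient is now zero
    return a or [Fraction(0)]

def poly_gcd(a, b):
    while any(b):
        a, b = b, poly_mod(a, b)
    return [c / a[0] for c in a]  # normalize to a monic polynomial

f = [Fraction(1), Fraction(-3), Fraction(2)]   # (x-1)(x-2)
g = [Fraction(1), Fraction(-4), Fraction(3)]   # (x-1)(x-3)
print(poly_gcd(f, g))  # [Fraction(1, 1), Fraction(-1, 1)], i.e. x - 1
```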
Recurrence relation of Bessel's and Legendre's functionPartho Ghosh
This presentation tells about use recurrence relation in finding the solution of ordinary differential equations, with special emphasis on Bessel's and Legendre's Function.
This document discusses constraints on parity violating conformal field theories in 3 dimensions from the positivity of energy flux. It considers local perturbations created by the stress tensor or conserved current with specific polarizations. The energy measured in different directions is defined and required to be positive. This leads to constraints on the parameter space of 3-point functions involving the stress tensor and current. Explicit calculations are shown for the energy matrix corresponding to charge excitations, with off-diagonal elements arising from parity violating terms in 3-point functions. Positivity of energy flux implies an inequality constraint on the parameters.
This document summarizes a paper that deals with matching doctors to hospitals, taking into account individual doctor preferences as well as couple preferences for doctors that are married. It considers a market with two hospitals, two single doctors, and one doctor couple. Some preference profiles for doctors and hospitals result in no stable matchings, since there are always blocking pairs where a hospital and doctor both prefer each other to their current matching. However, the concept of credible blocking and credible stability is introduced, where every preference profile has at least one credibly stable matching.
1. The work done by a force F acting along a material path L can be calculated using the integral A = ∫_0^S F(s) ds, where s is the parameter of the path.
2. The mass of an object with non-uniform density ρ(x) along the x-axis between a and b can be found using the integral m = ∫_a^b ρ(x) dx.
3. The coordinates (xc, yc) of the centroid of a curve given in parametric form x=φ(t), y=ψ(t) between α and β can be determined using the integrals xc = ∫_α^
1. The document discusses differential equations of higher order. It defines a general differential equation of order n with variable coefficients and explains when it can be considered a constant coefficient differential equation.
2. Methods for solving constant coefficient higher order differential equations are presented, including finding the characteristic equation and general solution involving exponential functions of roots of the characteristic equation.
3. Examples of solving second and third order differential equations with the given methods are provided. The complete solutions are written in terms of exponential functions and arbitrary constants.
It is known as two-valued logic because it have only two values Ramjeet Singh Yadav
This document discusses fuzzy logic and its applications. It begins by explaining classical logic and crisp sets, which have binary membership. It then introduces fuzzy logic, which was developed by Lotfi Zadeh in 1965 and allows partial set membership between 0 and 1. This allows fuzzy logic to handle concepts involving degrees of truth. The key concepts of fuzzy logic discussed include fuzzy sets and membership functions, fuzzy operations like union and intersection, fuzzy relations, fuzzy rules, and fuzzy inference systems. Real-world applications of fuzzy inference systems are also mentioned, such as automatic control, expert systems, and medical engineering.
This document discusses conformal field theories and three point functions. It begins by introducing conformal field theories and noting that three point functions of stress tensors are constrained by conformal invariance. For parity even CFTs in d=4 dimensions, the three point functions of the stress tensor T can be written in terms of coefficients that correspond to effective numbers of scalars, fermions, and vectors. Positivity of energy flux from collider experiments imposes constraints on these coefficients. The document then discusses extending such analyses to parity violating CFTs in d=3 dimensions, where the three point functions of the stress tensor T and conserved current j contain both parity even and odd terms. It proposes studying modifications to existing constraints from applying coll
The document discusses polynomials. It defines polynomials as expressions constructed from variables and constants using addition, subtraction, multiplication, and non-negative integer exponents. It provides examples of polynomials and non-polynomial expressions. It also discusses the degrees of terms and polynomials, and how polynomials can be added or multiplied by distributing terms. The document also covers monomials, binomials, trinomials, and the factor theorem.
The document discusses geometric sequences, which are sequences defined by an exponential formula where the nth term is equal to the first term multiplied by the common ratio raised to the power of n minus 1. It provides examples of geometric sequences, such as the sequence of powers of 2, and discusses properties of geometric sequences including that the ratio between any two consecutive terms is equal to the common ratio. Formulas for determining the specific formula for a geometric sequence given the first term and ratio are presented.
This document discusses geometric sequences, which are sequences where each term is found by multiplying the preceding term by a constant ratio. It provides the recursive and explicit forms for writing geometric sequences, and gives examples of finding specific terms and writing the explicit formula given the first term and ratio. Key details include that the recursive form is an+1 = ar, and the explicit form is an = arn-1, where a is the first term and r is the common ratio.
Reading Testing a point-null hypothesis, by Jiahuan Li, Feb. 25, 2013Christian Robert
The paper studies the relationship between p-values and Bayesian measures of evidence for testing point null hypotheses. It finds that p-values can be highly misleading about the evidence provided by data against the null hypothesis. Through examples, it shows that a small p-value does not necessarily mean high posterior probability for rejecting the null or a large Bayes factor against the null. The paper derives lower bounds on these Bayesian measures of evidence for several classes of distributions to demonstrate the conflicts between p-values and Bayesian analysis.
The document discusses differential equations, which are equations involving derivatives of functions with respect to independent variables. It defines ordinary and partial differential equations, provides examples, and discusses methods for solving differential equations, including finding general and particular solutions. It also describes the relationship between solutions of differential equations and their integrals.
This document contains sections from a textbook on abstract algebra covering relations, functions, and inverse functions. Some key points:
- It defines relations as subsets of Cartesian products and distinguishes them from Cartesian products.
- It provides examples of relations and determines their domains and ranges.
- It defines functions as special types of relations where each element in the domain maps to only one element in the range.
- It discusses properties of functions like being one-to-one, onto, and invertible. A function must be one-to-one and onto to be invertible.
- It provides examples of functions and determines if they are one-to-one, onto, invertible, and calculates their
1. The document discusses complete differential equations and their integrals. It provides an example of solving the complete differential equation 2 − 9xy2dx + (4y2 − 6x3)dy = 0 for the integral u(x,y).
2. It explains that if a differential equation can be written as the complete differential of some function u, then the integral is a function of x and y alone.
3. Methods for finding the integrating factor μ(x,y) are presented to transform a given differential equation into a complete differential equation, including the possibility that μ depends only on x or y.
This document provides an introduction to polynomial functions including definitions of key terms like monomial, polynomial, standard form, degree of terms and polynomials, classifying polynomials by number of terms and degree, examples of graphs of low-degree polynomials, and how to combine like terms. It defines a monomial as an expression with variables and numbers, a polynomial as a sum of terms with whole number exponents. Standard form writes polynomials in descending order of exponents. Degree is determined by highest exponent of terms or polynomial. Polynomials are classified by number of terms (monomial, binomial, trinomial, etc.) or degree (linear, quadratic, cubic, etc.). Examples show graphs changing shape with increasing degree. Combining like terms adds coefficients of
The document discusses methods for solving higher order linear differential equations with constant coefficients. It provides:
1) An overview of higher order linear differential equations of the form y(n) + a1y(n-1) + ... + any = f(x).
2) Methods for finding the general solution which involves finding the complementary function yC(x) and particular integral yP(x).
3) Ways to find the particular integral yP(x) by assuming it has the same terms as the forcing function f(x).
This document discusses arithmetic and geometric sequences. It defines arithmetic sequences as having a constant difference between consecutive terms, called the common difference. Geometric sequences have a constant ratio between consecutive terms, called the common ratio. Formulas are provided for finding the nth term of an arithmetic sequence and a geometric sequence based on the initial term and common difference or ratio. Examples are given of identifying the type of sequence and calculating terms. The document also discusses notation, formal definitions, and graphing sequences.
1. The document discusses differential equations of higher order, which are equations where the derivatives are functions of higher algebraic order.
2. It provides examples of functions that are of higher order, and explains that a differential equation is of higher order if the functions P(x,y) and Q(x,y) are of higher algebraic order.
3. Methods for solving higher order differential equations are presented, including reducing them to an equivalent ordinary differential equation using appropriate substitutions, and then solving using standard integration techniques.
IOSR Journal of Mathematics (IOSR-JM) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of mathematics and its applications. The journal welcomes publication of high-quality papers on theoretical developments and practical applications in mathematics. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
A Probabilistic Algorithm for Computation of Polynomial Greatest Common with ...mathsjournal
- The document presents a probabilistic algorithm for computing the polynomial greatest common divisor (PGCD) with smaller factors.
- It summarizes previous work on the subresultant algorithm for computing PGCD and discusses its limitations, such as not always correctly determining the variant τ.
- The new algorithm aims to determine τ correctly in most cases when given two polynomials f(x) and g(x). It does so by adding a few steps instead of directly computing the polynomial t(x) in the relation s(x)f(x) + t(x)g(x) = r(x).
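The Bézout-style relation s(x)f(x) + t(x)g(x) = r(x) mentioned above can be illustrated with one step of ordinary polynomial long division (a minimal sketch in exact rational arithmetic, not the paper's probabilistic algorithm): dividing f by g gives f = q·g + r, i.e. the relation holds with s = 1 and t = -q.

```python
from fractions import Fraction

def trim(p):
    """Drop trailing zero coefficients (ascending lists: p[i] is the x^i coefficient)."""
    while p and p[-1] == 0:
        p.pop()
    return p

def add(p, q):
    n = max(len(p), len(q))
    return trim([(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
                 for i in range(n)])

def mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1) if p and q else []
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return trim(out)

def polydivmod(p, q):
    """Polynomial long division: return (quot, rem) with p = quot*q + rem."""
    p = [Fraction(c) for c in p]
    q = [Fraction(c) for c in q]
    quot = [Fraction(0)] * max(len(p) - len(q) + 1, 0)
    while p and len(p) >= len(q):
        c = p[-1] / q[-1]           # leading-coefficient ratio
        d = len(p) - len(q)         # degree gap
        quot[d] = c
        for i, qc in enumerate(q):  # subtract c * x^d * q(x)
            p[i + d] -= c * qc
        trim(p)
    return trim(quot), p

# f(x) = x^2 + 1, g(x) = x - 1  ->  f = (x + 1) * g + 2
f, g = [1, 0, 1], [-1, 1]
quot, rem = polydivmod(f, g)
# With s(x) = 1 and t(x) = -quot(x), we get s*f + t*g = rem.
lhs = add(mul([Fraction(1)], f), mul([-c for c in quot], g))
print(lhs == rem)  # the relation s*f + t*g = r holds
```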
Matrix Transformations on Paranormed Sequence Spaces Related To De La Vallée-...inventionjournals
In this paper, we determine the necessary and sufficient conditions to characterize the matrices which transform paranormed sequence spaces into the spaces V_σ(λ) and V_σ^∞(λ), where V_σ(λ) denotes the space of all (σ, λ)-convergent sequences and V_σ^∞(λ) denotes the space of all (σ, λ)-bounded sequences defined using the concept of the de la Vallée-Poussin mean.
A Hilbert space is a complete inner-product vector space; a classic infinite-dimensional example is the space of sequences of real numbers satisfying a convergence (square-summability) condition. It allows vector addition and scalar multiplication. In quantum mechanics, state vectors span a Hilbert space. For identical particles, boson state vectors are symmetric and fermion state vectors are antisymmetric. Linear algebra concepts like operators, eigenvectors, and superposition are used in Dirac's formulation of the postulates of quantum mechanics. Observables are represented by operators, and eigenvectors correspond to eigenvalues. Any state can be written as a superposition of eigenvectors.
Section 9: Equivalence Relations & CosetsKevin Johnson
This document discusses equivalence relations and cosets from abstract algebra. It contains the following key points:
1) It defines equivalence relations as relations that satisfy reflexivity, symmetry, and transitivity. Modular arithmetic and group conjugacy are given as examples of equivalence relations.
2) It introduces the concept of equivalence classes, which are the subsets of elements related by an equivalence relation. It proves that the equivalence classes partition the set.
3) It defines right cosets as translations of a subgroup by group elements. Examples are given of finding the right cosets of subgroups of Z6 and S3.
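The Z6 example in point 3 can be reproduced with a short sketch (illustrative code, not from the document): the right cosets of a subgroup H partition the group.

```python
def right_cosets(group, subgroup, op):
    """Partition `group` into right cosets H*g of the subgroup H."""
    cosets = set()
    for g in group:
        coset = frozenset(op(h, g) for h in subgroup)
        cosets.add(coset)
    return cosets

z6 = range(6)
h = {0, 2, 4}  # the subgroup generated by 2 in Z6
cosets = right_cosets(z6, h, lambda a, b: (a + b) % 6)
print(sorted(sorted(c) for c in cosets))  # [[0, 2, 4], [1, 3, 5]]
```

The two cosets are disjoint and cover all of Z6, illustrating the partition property proved in point 2.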
Chapter 2 Mathematical Language and Symbols.pdfRaRaRamirez
This document discusses mathematical language and symbols. It defines key concepts such as sets, relations, functions, and binary operations. Sets are collections of distinct objects that can be defined using a roster or rule. Relations pair elements between two sets. A function is a special type of relation where each input is paired with exactly one output. Binary operations take two inputs from a set and return an output in that same set. Common properties of binary operations include commutativity and associativity.
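The closure, commutativity, and associativity properties of a binary operation can be checked exhaustively on a small finite set (a minimal sketch, using addition and subtraction modulo 4 as hypothetical examples):

```python
def is_commutative(op, elems):
    """Check op(a, b) == op(b, a) for every pair in a finite set."""
    return all(op(a, b) == op(b, a) for a in elems for b in elems)

def is_associative(op, elems):
    """Check op(op(a, b), c) == op(a, op(b, c)) for every triple."""
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a in elems for b in elems for c in elems)

elems = range(4)
add_mod4 = lambda a, b: (a + b) % 4  # closed: the output stays in {0, 1, 2, 3}
sub_mod4 = lambda a, b: (a - b) % 4

print(is_commutative(add_mod4, elems), is_associative(add_mod4, elems))  # True True
print(is_commutative(sub_mod4, elems))  # False: 1 - 2 != 2 - 1 (mod 4)
```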
Fermat’s theorem
Corollary on Fermat’s theorem
Set of residues modulo m
Reduced set of residues modulo m
Theorems based on residues modulo m
Euler's theorem or Euler's generalization of Fermat's theorem
Fermat's theorem from Euler's theorem
Examples
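The topics listed above can be verified numerically for small cases (an illustrative sketch, not taken from the document): Euler's theorem states a^φ(m) ≡ 1 (mod m) whenever gcd(a, m) = 1, and Fermat's little theorem is the special case of a prime modulus.

```python
from math import gcd

def phi(m):
    """Euler's totient: count of 1 <= k <= m with gcd(k, m) == 1."""
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

# Euler's theorem: a**phi(m) is congruent to 1 (mod m) when gcd(a, m) == 1.
m, a = 10, 3
assert gcd(a, m) == 1
print(phi(m), pow(a, phi(m), m))  # phi(10) = 4, and 3**4 = 81 leaves remainder 1 mod 10

# Fermat's little theorem is the special case of a prime modulus p: phi(p) = p - 1.
p = 7
print(pow(a, p - 1, p))  # 3**6 = 729 leaves remainder 1 mod 7
```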
The document discusses the differences and relationships between quadratic functions and quadratic equations. It notes that quadratic functions accept any real number as an input, while quadratic equations have at most two solutions. The roots of a quadratic equation are the x-intercepts of the graph of the corresponding quadratic function. The remainder theorem states that the value of a polynomial when a number is substituted for the variable equals the remainder when the polynomial is divided by the linear factor corresponding to that number. This connects the roots of quadratic equations to factors of quadratic functions. A quadratic can have at most two distinct roots: a quadratic with three distinct roots would have to vanish identically, i.e. have infinitely many roots.
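The remainder theorem described above can be checked directly with synthetic division (a minimal sketch; the polynomial x^2 - 5x + 6 is a hypothetical example, not from the document):

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial given coefficients in descending order (Horner's method)."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

def synthetic_division(coeffs, a):
    """Divide by (x - a); return (quotient coefficients, remainder)."""
    row = [coeffs[0]]
    for c in coeffs[1:]:
        row.append(row[-1] * a + c)
    return row[:-1], row[-1]

# p(x) = x^2 - 5x + 6 has roots 2 and 3, so the remainder on division
# by (x - 2) equals p(2) = 0 (remainder theorem / factor theorem).
p = [1, -5, 6]
q, r = synthetic_division(p, 2)
print(r, poly_eval(p, 2))  # both 0
print(q)                   # [1, -3], i.e. the other factor (x - 3)
```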
We disclose a simple and straightforward method of solving ordinary or linear partial differential equations of any order and apply it to solve the generalized Euler-Tricomi equation. The method is easier than classical methods and also didactic.
Date: Jan, 10, 202
https://utilitasmathematica.com/index.php/Index
Our journal, with a prominent publication record in the mathematical sciences, has recognized the profound importance of embracing justice, equity, diversity, and inclusion (JEDI) in its operations. This commitment reflects a broader recognition that the mission to "Promote the Practice and Profession of Statistics" can only be realized when it is firmly grounded in JEDI principles.
The Utilitas Mathematica Journal (UMJ) publishes original research papers across the mathematical sciences; its scope includes Operations Research, Mathematical Economics, and Computer Science.
https://utilitasmathematica.com/
Utilitas Mathematica Journal (e-ISSN: 0315-3681) is a broad scope journal that publishes original research and review articles on all aspects of both pure and applied mathematics.
The EM algorithm is used to find maximum likelihood estimates for problems with latent variables. It works by alternating between an E-step (computing expected values of the latent variables) and an M-step (maximizing the likelihood with respect to the parameters). For mixture of Gaussians, the E-step computes the posterior probabilities that each data point belongs to each component. The M-step then updates the mixture weights, means, and covariances by taking weighted averages/sums of the data using these posteriors.
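The E-step/M-step alternation described above can be sketched for a two-component 1-D Gaussian mixture (toy code, not the document's own; it uses a deterministic initialization, and the constant 1/sqrt(2*pi) is dropped from the densities since it cancels in the E-step normalization):

```python
import math

def em_gmm_1d(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture (minimal sketch)."""
    # Deterministic initialization: spread the means across the data range.
    mus = [min(data), max(data)]
    sigmas = [1.0, 1.0]
    weights = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        # (density up to the common 1/sqrt(2*pi) factor, which cancels).
        resp = []
        for x in data:
            ps = [w * math.exp(-(x - m) ** 2 / (2 * s ** 2)) / s
                  for w, m, s in zip(weights, mus, sigmas)]
            total = sum(ps)
            resp.append([p / total for p in ps])
        # M-step: weighted updates of weights, means, and variances.
        for j in range(2):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / len(data)
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, data)) / nj
            sigmas[j] = math.sqrt(max(var, 1e-6))
    return weights, mus, sigmas

# Two well-separated clusters around 0 and 5.
data = [-0.1, 0.0, 0.1, 4.9, 5.0, 5.1]
w, mu, sd = em_gmm_1d(data)
print([round(m, 2) for m in mu])  # means near 0 and 5
```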
This document discusses continued β-fractions in the field of Laurent series over a finite field Fq. Specifically:
1) It introduces the concept of β-expansions and continued β-fractions where the base β is a unit Pisot series in Fq((x-1)).
2) It summarizes previous work characterizing elements of Fq((x-1)) having finite β-fractions when β is a quadratic Pisot unit.
3) The main result of the paper is to improve upon this by studying the case when β is a Pisot unit (not necessarily quadratic) in Fq((x-1)).
This document is the preface to a textbook on number theory. It discusses the goals of the textbook, which are to encourage independent thinking and problem solving rather than rote memorization. Number theory is well-suited for this purpose as patterns in the natural numbers can be discerned through observation and experimentation, but proving theorems requires rigorous demonstration. The textbook was originally written for a course at Brown University designed to attract non-science majors to mathematics. The prerequisites are few, requiring only high school algebra and a willingness to experiment, make mistakes, learn from them, and persevere.
Optimal Estimating Sequence for a Hilbert Space Valued ParameterIOSR Journals
Some optimality criteria used in the estimation of parameters in a finite-dimensional space have been extended to a separable Hilbert space. Different optimality criteria and their equivalence are established for an estimating sequence rather than an estimator. An illustrative example is provided with the estimation of the mean of a Gaussian process.
The EM algorithm is explained well and step by step, starting from Jensen's inequality.
After reading this, you should largely understand how LDA is trained with variational methods.
This is excerpted from Andrew Ng's old lecture notes; the fact that I still go back and consult
something I first saw five years ago reminds me once again what an excellent lecture it was.
The document defines key concepts related to linear equations and systems of linear equations, including:
- A linear equation is an expression of the form a_1x_1 + a_2x_2 + ... + a_nx_n = b, where the a's and b are real numbers and the x's are variables.
- The solution to a linear equation is any set of real numbers that makes the equation true when substituted for the variables.
- A system of linear equations consists of two or more linear equations with the same variables. The solution to a system is any set of values for the variables that satisfies all the equations simultaneously.
- Systems can be represented in matrix form as AX = B, where A is the coefficient matrix, X is the column vector of variables, and B is the column vector of constants.
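The matrix form AX = B described above can be solved by Gaussian elimination; a small self-contained sketch (illustrative code, not from the document):

```python
def solve_linear_system(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    # Build the augmented matrix [A | b].
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        # Pivot: swap in the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate the column below the pivot.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back-substitution.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# x + 2y = 5 and 3x - y = 1 have the unique solution x = 1, y = 2.
sol = solve_linear_system([[1, 2], [3, -1]], [5, 1])
print([round(v, 6) for v in sol])  # [1.0, 2.0]
```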
Similar to FERMAT’S LAST THEOREM PROVED BY INDUCTION (accompanied by a philosophical comment) (20)
The generalization of the Periodic table. The "Periodic table" of "dark matter"Vasil Penchev
The thesis is: the “periodic table” of “dark matter” is equivalent to the standard periodic table of the visible matter being entangled. Thus, it is to consist of all possible entangled states of the atoms of chemical elements as quantum systems. In other words, an atom of any chemical element and as a quantum system, i.e. as a wave function, should be represented as a non-orthogonal in general (i.e. entangled) subspace of the separable complex Hilbert space relevant to the system to which the atom at issue is related as a true part of it. The paper follows previous publications of mine stating that “dark matter” and “dark energy” are projections of arbitrarily entangled states on the cognitive “screen” of Einstein’s “Mach’s principle” in general relativity postulating that gravitational field can be generated only by mass or energy.
Modal History versus Counterfactual History: History as IntentionVasil Penchev
The distinction of whether real or counterfactual history makes sense only post factum. However, modal history is to be defined only as one’s intention and thus ex-ante. Modal history is probable history, and its probability is subjective. One needs a phenomenological “epoché” in relation to its reality (respectively, counterfactuality). Thus, modal history describes historical “phenomena” in Husserl’s sense and would need a specific application of phenomenological reduction, which can be called historical reduction. Modal history doubles history just as the recorded history of historiography does. That doubling is a necessary condition of historical objectivity including one’s subjectivity: whether actors’, ex-ante, or historians’, post factum. The objectivity doubled by one’s subjectivity constitutes a “hermeneutical circle”.
Both classical and quantum information [autosaved]Vasil Penchev
Information can be considered the most fundamental philosophical, physical, and mathematical concept, originating from the totality by means of physical and mathematical transcendentalism (the counterpart of philosophical transcendentalism). Classical and quantum information, particularly through their units, bit and qubit, correspond to and unify the finite and the infinite:
As classical information is relevant to finite series and sets, so quantum information is relevant to infinite ones. The separable complex Hilbert space of quantum mechanics can be represented equivalently as “qubit space” (i.e. as quantum information) and doubled dually or “complementarily” by Hilbert arithmetic (classical information).
A CLASS OF EXAMPLES DEMONSTRATING THAT “P ≠ NP” IN THE “P VS NP” PROBLEMVasil Penchev
The CMI Millennium “P vs NP” problem can be resolved, e.g., if one shows at least one counterexample to the “P = NP” conjecture. A certain class of problems being such counterexamples will be formulated. This implies the rejection of the hypothesis “P = NP” for any conditions satisfying the formulation of the problem. Thus, the solution “P ≠ NP” of the problem in general is proved. The class of counterexamples can be interpreted as any quantum superposition of any finite set of quantum states. The Kochen-Specker theorem is involved. Any fundamentally random choice among a finite set of alternatives belongs to “NP” but not to “P”. The conjecture that the set complement of “P” to “NP” can be described by that kind of choice exhaustively is formulated.
The space-time interpretation of Poincare’s conjecture proved by G. Perelman Vasil Penchev
This document discusses the generalization of Poincaré's conjecture to higher dimensions and its interpretation in terms of special relativity. It proposes that Poincaré's conjecture can be generalized to state that any 4-dimensional ball is topologically equivalent to 3D Euclidean space. This generalization has a physical interpretation in which our 3D space can be viewed as a "4-ball" closed in a fourth dimension. The document also outlines ideas for how one might prove this generalization by "unfolding" the problem into topological equivalences between Euclidean spaces.
FROM THE PRINCIPLE OF LEAST ACTION TO THE CONSERVATION OF QUANTUM INFORMATION...Vasil Penchev
In fact, the first law of conservation (that of mass) was found in chemistry and generalized to the conservation of energy in physics by means of Einstein’s famous “E=mc2”. Energy conservation is implied by the principle of least action from a variational viewpoint as in Emmy Noether’s theorems (1918): any chemical change in a conservative (i.e. “closed”) system can be accomplished only in the way conserving its total energy. Bohr’s innovation to found Mendeleev’s periodic table by quantum mechanics implies a certain generalization referring to
the quantum leaps as if accomplished in all possible trajectories (according to Feynman’s interpretation) and therefore generalizing the principle of least action and needing a certain generalization of energy conservation as to any quantum change. The transition from the first to the second theorem of Emmy Noether represents well the necessary generalization: its chemical meaning is the generalization of any chemical reaction to be accomplished as if in any possible course of time rather than in the standard evenly running time (and equivalent to energy conservation according to the first theorem). The problem: if any quantum change is accomplished in all possible “variations” (i.e. “violations”) of energy conservation (by different probabilities), what (if any) is conserved? An answer: quantum information is what is conserved. Indeed, it can be particularly defined as the counterpart (e.g. in the sense of Emmy Noether’s theorems) to the physical quantity of action (e.g. as energy is the counterpart of time in them). It is valid in any course of time rather than in the evenly running one. That generalization implies a generalization of the periodic table including any continuous and smooth transformation between two chemical elements.
From the principle of least action to the conservation of quantum information...Vasil Penchev
In fact, the first law of conservation (that of mass) was found in chemistry and generalized to the conservation of energy in physics by means of Einstein’s famous “E=mc2”. Energy conservation is implied by the principle of least action from a variational viewpoint as in Emmy Noether’s theorems (1918): any chemical change in a conservative (i.e. “closed”) system can be accomplished only in the way conserving its total energy. Bohr’s innovation to found Mendeleev’s periodic table by quantum mechanics implies a certain generalization referring to the quantum leaps as if accomplished in all possible trajectories (e.g. according to Feynman’s viewpoint) and therefore generalizing the principle of least action and needing a certain generalization of energy conservation as to any quantum change.
The transition from the first to the second theorem of Emmy Noether represents well the necessary generalization: its chemical meaning is the generalization of any chemical reaction to be accomplished as if any possible course of time rather than in the standard evenly running time (and equivalent to energy conservation according to the first theorem).
The problem: if any quantum change is accomplished in all possible “variations” (i.e. “violations”) of energy conservation (by different probabilities), what (if any) is conserved?
An answer: quantum information is what is conserved. Indeed, it can be particularly defined as the counterpart (e.g. in the sense of Emmy Noether’s theorems) to the physical quantity of action (e.g. as energy is the counterpart of time in them). It is valid in any course of time rather than in the evenly running one. (An illustration: if observers in arbitrarily accelerated reference frames exchange light signals about the course of a single chemical reaction observed by all of them, the universal viewpoint shareable by all is that of quantum information.)
That generalization implies a generalization of the periodic table including any continuous and smooth transformation between two chemical elements, necessarily conserving quantum information rather than energy: thus it can be called an “alchemical periodic table”.
Poincaré’s conjecture proved by G. Perelman by the isomorphism of Minkowski s...Vasil Penchev
- The document discusses the relationship between the separable complex Hilbert space (H) and a set of ordinals (Η, Greek Eta) and why they should not be equated if the natural numbers are identified as finite.
- It presents two interpretations of H: as vectors in n-dimensional complex space or as square-integrable functions, and discusses how the latter adds unitarity from energy conservation.
- It argues that Η rather than H should be used when energy conservation is not involved, and discusses how the relation between H and Η generates spheres representing areas and can be interpreted physically in terms of energy and force.
Why anything rather than nothing? The answer of quantum mechnaicsVasil Penchev
Many researchers determine the question “Why anything rather than nothing?” to be the most ancient and fundamental philosophical problem. It is closely related to the idea of Creation shared by religion, science, and philosophy, for example in the shape of the “Big Bang”, the doctrine of first cause or causa sui, the Creation in six days in the Bible, etc. Thus, the solution of quantum mechanics, being scientific in essence, can also be interpreted philosophically, and even religiously. This paper will only discuss the philosophical interpretation. The essence of the answer of quantum mechanics is: 1.) Creation is necessary in a rigorously mathematical sense. Thus, it does not need any choice, free will, subject, God, etc. to appear. The world exists by virtue of mathematical necessity, e.g. as any mathematical truth such as 2+2=4; and 2.) Being is less than nothing rather than more than nothing. Thus creation is not an increase of nothing, but the decrease of nothing: it is a deficiency in relation to nothing. Time and its “arrow” form the road from that diminishment or incompleteness to nothing.
The Square of Opposition & The Concept of Infinity: The shared information s...Vasil Penchev
The power of the square of opposition has been proved over millennia. It supplies logic with the ontological language of infinity for describing anything...
6th WORLD CONGRESS ON THE SQUARE OF OPPOSITION
http://www.square-of-opposition.org/square2018.html
Mamardashvili, an Observer of the Totality. About “Symbol and Consciousness”,...Vasil Penchev
The paper discusses a few tensions “crucifying” the works and even the personality of the great Georgian philosopher Merab Mamardashvili: East and West; human being and thought; symbol and consciousness; infinity and finiteness; similarity and differences. The observer can be involved as the correlative counterpart of the totality: an observer opposed to the totality externalizes an internal part outside. Thus the phenomena of an observer and the totality turn out to converge to each other or to be one and the same. In other words, the phenomenon of an observer includes the singularity of the solipsistic Self, which (or “who”) is the same as that of the totality. Furthermore, observation can be thought of as the primary and initial action underlain by the phenomenon of an observer. That action of observation consists in the externalization of the solipsistic Self outside as some external reality. It is both a zero action and the singularity of the phenomenon of action. The main conclusions are: Mamardashvili’s philosophy can be thought of both as the suffering effort to be a human being again and again and as the philosophical reflection on the genesis of thought from itself by the same effort. Thus it can be recognized as a powerful tension between sign and symbol, between conscious structures and consciousness, between the syncretism of the East and the discursiveness of the West, spiritually crucifying Georgia.
Completeness: From Henkin's Proposition to Quantum ComputerVasil Penchev
This document discusses how Leon Henkin's proposition relates to concepts in logic, set theory, information theory, and quantum mechanics. It argues that Henkin's proposition, which states the provability of a statement within a formal system, is equivalent to an internal and consistent position regarding infinity. The document then explores how this connects to Martin Löb's theorem, the Einstein-Podolsky-Rosen paradox in quantum mechanics, theorems about the absence of hidden variables, entanglement, quantum information, and ultimately quantum computers.
Why anything rather than nothing? The answer of quantum mechanicsVasil Penchev
This document discusses the philosophical question of why there is something rather than nothing from the perspective of quantum mechanics. It argues that quantum mechanics provides a solution where creation is permanent and due to the irreversibility of time. The creation in quantum mechanics represents a necessary loss of information as alternatives are rejected in the course of time, rather than being due to some external cause like God's will. This permanent creation process makes the universe mathematically necessary rather than requiring an initial singular event like the Big Bang.
The outlined approach allows a common philosophical viewpoint to the physical world, language and some mathematical structures therefore calling for the universe to be understood as a joint physical, linguistic and mathematical universum, in which physical motion and metaphor are one and the same rather than only similar in a sense.
Hilbert Space and pseudo-Riemannian Space: The Common Base of Quantum Informa...Vasil Penchev
Hilbert space, underlying quantum mechanics, and pseudo-Riemannian space, underlying general relativity, share a common base of quantum information. Hilbert space can be interpreted as the free variable of quantum information, and any point in it, being equivalent to a wave function (and thus to a state of a quantum system), as a value of that variable of quantum information. In turn, pseudo-Riemannian space can be interpreted as the interaction of two or more quantities of quantum information and thus as two or more entangled quantum systems. Consequently, one can distinguish local physical interactions, describable by a single Hilbert space (or by any factorizable tensor product of such spaces), from non-local physical interactions, describable only by means of a Hilbert space that cannot be factorized as any tensor product of the Hilbert spaces by means of which one can describe the interacting quantum subsystems separately. Any interaction which can be exhaustively described in a single Hilbert space, such as the weak, strong, and electromagnetic interactions, is local in terms of quantum information. Any interaction which cannot be described thus is non-local in terms of quantum information. Any interaction which is exhaustively describable by pseudo-Riemannian space, such as gravity, is non-local in this sense. Consequently, all known physical interactions can be described on a single geometrical base, interpreting them in terms of quantum information.
This document discusses using Richard Feynman's interpretation of quantum mechanics as a way to formally summarize different explanations of quantum mechanics given to hypothetical children. It proposes that each child's understanding could be seen as one "pathway" or explanation, with the total set of explanations forming a distribution. The document then suggests that quantum mechanics itself could provide a meta-explanation that encompasses all the children's perspectives by describing phenomena probabilistically rather than deterministically. Finally, it gives some examples of how this approach could allow defining and experimentally studying the concept of God through quantum mechanics.
This document discusses whether artificial intelligence can have a soul from both scientific and religious perspectives. It begins by acknowledging that "soul" is a religious concept while AI is a scientific one. The document then examines how Christianity views creativity as a criterion for having a soul. It proposes formal scientific definitions of creativity involving learning rates and probabilities. An example is given comparing a master's creativity to an apprentice's. The document argues science can describe God's infinite creativity and human's finite creativity uniformly. It analyzes whether criteria for creativity can apply to AI like a Turing machine. Hypothetical examples involving infinite algorithms and self-learning machines are discussed.
Analogia entis as analogy universalized and formalized rigorously and mathema...Vasil Penchev
THE SECOND WORLD CONGRESS ON ANALOGY, POZNAŃ, MAY 24-26, 2017
(The Venue: Sala Lubrańskiego (Lubrański’s Hall at the Collegium Minus), Adam Mickiewicz University, Address: ul. Wieniawskiego 1) The presentation: 24 May, 15:30
Ontology as a formal one. The language of ontology as the ontology itself: th...Vasil Penchev
“Formal ontology” is first introduced in programming languages in different ways. The most relevant one for philosophy is as a generalization of “nth-order logic” and “nth-level language” for n=0. Then, the “zero-level language” is a theoretical reflection on the naïve attitude to the world: “things and words” coincide by themselves. That approach corresponds directly to the philosophical phenomenology of Husserl or the fundamental ontology of Heidegger. Ontology as the 0-level language may be researched as a formal ontology.
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...Travis Hills MN
Travis Hills of Minnesota developed a method to convert waste into high-value dry fertilizer, significantly enriching soil quality. By providing farmers with a valuable resource derived from waste, Travis Hills helps enhance farm profitability while promoting environmental stewardship. Travis Hills' sustainable practices lead to cost savings and increased revenue for farmers by improving resource efficiency and reducing waste.
BREEDING METHODS FOR DISEASE RESISTANCE.pptxRASHMI M G
Plant breeding for disease resistance is a strategy to reduce crop losses caused by disease. Plants have an innate immune system that allows them to recognize pathogens and provide resistance. However, breeding for long-lasting resistance often involves combining multiple resistance genes.
This MS Word-generated PowerPoint presentation covers the major details of the micronucleus test: its significance and the assays used to conduct it. The test is used to detect micronucleus formation inside the cells of nearly every multicellular organism; micronucleus formation takes place during chromosome separation at metaphase.
Phenomics assisted breeding in crop improvementIshaGoswami9
As the population is increasing and will reach about 9 billion by 2050, and due to climate change, it is difficult to meet the food requirements of such a large population. Facing the challenges presented by resource shortages, climate change, and an increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding the complex characteristics of multiple genes, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data linkable to genomics information for crop improvement at all growth stages have become as important as genotyping. Thus, high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology, and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...Sérgio Sacani
Context. With a mass exceeding several 10⁴ M⊙ and a rich and dense population of massive stars, supermassive young star clusters represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low- and high-mass stars. The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically, the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec. Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a photon flux threshold of approximately 2 × 10⁻⁸ photons cm⁻² s⁻¹. The X-ray sources exhibit a highly concentrated spatial distribution, with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emission from 126 out of the 166 known massive stars of the cluster, and we have collected over 71,000 photons from the magnetar CXO J164710.20-455217.
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
ESR spectroscopy in liquid food and beverages.pptxPRIYANKA PATEL
With increasing population, people need to rely on packaged food stuffs. Packaging of food materials requires the preservation of food. There are various methods for the treatment of food to preserve them and irradiation treatment of food is one of them. It is the most common and the most harmless method for the food preservation as it does not alter the necessary micronutrients of food materials. Although irradiated food doesn’t cause any harm to the human health but still the quality assessment of food is required to provide consumers with necessary information about the food. ESR spectroscopy is the most sophisticated way to investigate the quality of the food and the free radicals induced during the processing of the food. ESR spin trapping technique is useful for the detection of highly unstable radicals in the food. The antioxidant capability of liquid food and beverages in mainly performed by spin trapping technique.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
This presentation explores a brief idea about the structural and functional attributes of nucleotides, the structure and function of genetic materials along with the impact of UV rays and pH upon them.
hematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills
What is greenhouse gasses and how many gasses are there to affect the Earth.moosaasad1975
What are greenhouse gasses how they affect the earth and its environment what is the future of the environment and earth how the weather and the climate effects.
Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
Equivariant neural networks and representation theory
FERMAT’S LAST THEOREM PROVED BY INDUCTION (accompanied by a philosophical comment)
1. 1
FERMAT’S LAST THEOREM PROVED BY INDUCTION
(accompanied by a philosophical comment)
Vasil Dinev Penchev,
Bulgarian Academy of Sciences: Institute for the Study of Societies and Knowledge:
Dept. of Logical Systems and Models
vasildinev@gmail.com
Abstract. A proof of Fermat’s last theorem is demonstrated. It is very brief, simple, elementary,
and absolutely arithmetical. The only premises necessary for the proof are: the three definitive
properties of the relation of equality (identity, symmetry, and transitivity), modus tollens, the axiom
of induction, the proof of Fermat’s last theorem in the case of n = 3, as well as the premises
necessary for the formulation of the theorem itself. It involves a modification of Fermat’s
approach of infinite descent. The infinite descent is linked to induction starting from n = 3 by
modus tollens. An inductive series of modus tollens is constructed. The proof of the series by
induction is equivalent to Fermat’s last theorem. Since Fermat had proved the theorem
for n = 4, one can suggest that the proof for n ≥ 4 was accessible to him.
An idea for an elementary arithmetical proof of Fermat’s last theorem (FLT) by induction is
suggested. Unlike Wiles’s proof (1995), it would be accessible to Fermat, and it would justify
Fermat’s claim (1637) to its proof. The inspiration for a simple proof would contradict
Descartes’s dualism by appealing to merge “mind” and “body”, “words” and “things”, “terms” and
“propositions”, all orders of logic. A counterfactual course of the history of mathematics and
philosophy may be admitted. The bifurcation happened in Descartes and Fermat’s age. FLT is
exceptionally difficult to prove in our real branch rather than in the counterfactual one.
The theorem known as “Fermat’s last theorem” (FLT) was formulated by the French
mathematician in 1637 and proved by Andrew Wiles (1995). Fermat left both its
statement and his claim to a proof “too long for the margin”. So, the challenge of a simple
proof accessible to Fermat has remained alive for centuries.
Andrew Wiles’s proof is too complicated. It is not only beyond arithmetic; even the question
whether it is within set theory can be asked.
What follows is a simple and elementary proof by the axiom of induction applied to an
enumerated series of uniform recurrent arithmetical statements sharing the logical form of
modus tollens.
The necessary premises are only: the definition of equality in mathematics, including its three
properties: identity, symmetry, and transitivity; modus tollens; the axiom of induction; and the proof of
FLT for n = 3. All premises necessary for the theorem itself to be formulated should be added.
Thus, all Peano axioms of arithmetic are included. The set of all natural numbers, designated as
“N”, is the only set meant anywhere below. All variables (x, y, z, w, n, a, b) and the constant “c”
are defined only on it: their values are its elements. However, the set “N” itself is not used. It
serves only to simplify the notation.
The idea of the proof is a modification of Fermat’s infinite descent, consisting in the following:
The modification is not directed at constructing a false statement, as included in any proof by reductio
ad absurdum. Furthermore, it starts as if from infinity rather than from any finite natural number.
Nonetheless, the modification can be restricted to arithmetic and the axiom of induction
(i.e. without the set-theoretic “actual infinity”) by means of an enumerated series of modus tollens.
Thus, Fermat’s infinite descent is seen and utilized as “reversed”: as an ascent by induction.
If one decomposes FLT into an enumerated series of statements, namely FLT(3), FLT(4), FLT(5),
…, FLT(n), FLT(n+1), …, the idea of the proof is:
∀(x, y, z, n) ∈ N: [(a = x^(n+1)) → (b = x^n)] ↔ [FLT(n) → FLT(n+1)].
According to FLT, all FLT(n) are negative statements. If one considers the corresponding
positive statements, FLT*(n) = ¬FLT(n), the link to the series of modus tollens is obvious:
∀(x, y, z, n) ∈ N: [(a = x^(n+1)) → (b = x^n)] ↔ [¬FLT*(n) → ¬FLT*(n+1)].
This is the core of the proof. It calls for reflection, even a philosophical one.
Two triple equalities (“a = x^(n+1) = y^(n+1) + z^(n+1)” and “b = x^n = y^n + z^n”) are linked to each
other by modus tollens. What is valid for the left parts, (a = x^(n+1)) → (b = x^n), is transferred to
the right parts, ¬(x^n = y^n + z^n) → ¬(x^(n+1) = y^(n+1) + z^(n+1)), as an equivalence. The mediation of
each middle member in both triple equalities is crucial: it allows for the transition. An extended
description of “∀(x, y, z, n) ∈ N: [(a = x^(n+1)) → (b = x^n)] ↔ [¬FLT*(n) → ¬FLT*(n+1)]” is:
∀(x, y, z, n) ∈ N: [(a = x^(n+1) = y^(n+1) + z^(n+1)) → (b = x^n = y^n + z^n)] ↔
↔ [¬(b = x^n = y^n + z^n) → ¬(a = x^(n+1) = y^(n+1) + z^(n+1))].
In fact, the equality (“=”) and the logical equivalence (“↔”) are divided disjunctively. Their equivalence is
neither necessary nor used.
Nevertheless, their equivalence is valid as a mathematical isomorphism. Moreover, the law of
identity in logic, “∀a: a ↔ a”, referring to propositional logic, and the axiom of identity,
“∀a: a = a”, referring to any set of objects in a (first-order, second-order, …,
n-order, …) logic, are mathematically isomorphic. Identity is a special kind of relation, in
which all orders are merged, and thus indistinguishable from each other within it. Nonetheless,
no exemplification of that indistinguishability of identity due to mathematical isomorphism is
used in the proof.
The proof in detail:
FLT: ∀(x, y, z, n ≥ 3) ∈ N: ¬“x^n = y^n + z^n”
“FLT(c)” means: ¬“x^c = y^c + z^c”, where “c” is a constant: c ≥ 3, c ∈ N. FLT will be proved by
proving FLT(c) for each “c” (∀c) by induction. The equivalence of “FLT” and “∀c: FLT(c)”
is granted as obvious. The set of all “FLT(c)” is neither used nor involved in any way.
The relation of equality can be defined by its three properties: identity, symmetry, and
transitivity. They will be used to prove a corollary of modus tollens, which is necessary for
linking Fermat’s infinite descent to an inductive ascent.
Law (axiom) of identity [LI]: ∀A: A = A
It will be modified equivalently to “∀x: x ↔ (x = x)” for the present utilization. Indeed:
∀x: x = x; ∀x: x → x; ∀x: x → (x = x); ∀x: (x = x) → x; ∀x: x ↔ (x = x)
Symmetry of equality [SE]: A = B ↔ B = A.
It will be interpreted literally for the variables x, y, whose values are elements of N.
∀x, y: x = y ↔ y = x
Transitivity of equality [TE]: A = B ∧ B = C ↔ A = B = C.
It will be interpreted literally for the variables x, y, z, whose values are elements of N.
∀x, y, z: [(x = y) ∧ (y = z)] ↔ (x = y = z)
The definitive properties of the relation of equality imply: ∀x, y: x = (x = y). Indeed:
Let z = x; then:
∀x, y: [(x = y) ∧ (y = x)] ↔ (x = y = x) ↔ [x = (y = x)] ↔ (by SE) [x = (x = y)]
Modus tollens [MT]: (A → B) ↔ (¬B → ¬A)
If one interprets MT with respect to the variables x, y, z, w defined on N and uses the statement
proved above (“∀x, y: x = (x = y)”) in order to substitute “y” by “y = z” and “x” by “x = w”, the result is:
∀x, y, z, w: [(a = x) → (b = y)] ↔ [¬(y = z) → ¬(x = w)]. Both “a” and “b” serve only to
transform the terms “x” and “y” into propositions and to allow for the implication. They mediate,
and therefore divide disjunctively, the equality of terms (“=”) and the logical equivalence of
propositions in modus tollens (“↔”).
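The propositional core used throughout, (A → B) ↔ (¬B → ¬A), can be confirmed by an exhaustive truth table. A small Python check (the helper `implies` is our illustrative name, not the author’s notation):

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material implication: p -> q is false only when p holds and q fails.
    return (not p) or q

def modus_tollens_is_tautology() -> bool:
    # Check (A -> B) <-> (not B -> not A) for all four truth assignments.
    return all(
        implies(a, b) == implies(not b, not a)
        for a, b in product([False, True], repeat=2)
    )

print(modus_tollens_is_tautology())  # → True
```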
Axiom of induction [AI]: ∀p, n: p(1) ∧ [p(n) → p(n+1)] → p, where “p(n)” is an arithmetical
proposition referring to the natural number “n”, and “p” is the same proposition referring to all
natural numbers. “Arithmetical proposition” means a proposition in a first-order logic applied to
arithmetic. The axiom of induction is modified to start from n = 3 rather than from n = 1:
{∀p, n: p(1) ∧ [p(n) → p(n+1)] → p} → {∀p, n ≥ 3: p(3) ∧ [p(n) → p(n+1)] → p}
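The shifted scheme can be illustrated by unrolling the finite chain p(3), p(4), …, p(n). A Python sketch (the toy property and the names `check_base`/`check_step` are our illustrative assumptions):

```python
# Induction starting from n = 3: p(n) holds for a given n >= 3 provided the
# base case p(3) and every step p(k) -> p(k+1), 3 <= k < n, are certified.
def holds_by_induction_from_3(check_base, check_step, n: int) -> bool:
    if n < 3 or not check_base():
        return False
    return all(check_step(k) for k in range(3, n))

# Toy property: p(n) = "2**n > 2*n", which indeed holds for every n >= 3.
base = lambda: 2 ** 3 > 2 * 3
step = lambda k: 2 ** (k + 1) > 2 * (k + 1)  # certifies the step to p(k+1)

print(holds_by_induction_from_3(base, step, 10))  # → True
```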
A modification of Fermat’s infinite descent [MFID]: MT, modified as above, is applied starting
from n = 3 as follows:
…, n, n−1, …, 5, 4, 3
The same descent is interpreted as a series of enumerated propositions:
…, (n), (n−1), …, (5), (4), (3)
A reverse chain of negations is implied:
¬(3), ¬(4), ¬(5), …, ¬(n−1), ¬(n), …
Both the ascent of “negations” and the infinite descent are constructed step by step following the
increasing number of the negative propositions (rather than the decreasing number of the
positive propositions):
[(4) → (3)] ↔ [¬(3) → ¬(4)], [(5) → (4)] ↔ [¬(4) → ¬(5)], [(6) → (5)] ↔ [¬(5) → ¬(6)], …,
[(n+1) → (n)] ↔ [¬(n) → ¬(n+1)], [(n+2) → (n+1)] ↔ [¬(n+1) → ¬(n+2)], …
So, one builds a series of modus tollens starting from n = 3.
FLT(3): x, y, z are natural numbers. There do not exist any x, y, z such that:
x^3 + y^3 = z^3
Many mathematicians, beginning with Euler, claimed its proof. Ernst Kummer’s proof (1847) will
be cited here for its absolute rigor. It covers all cases of “regular prime numbers” as defined by
Kummer, among which is the case n = 3.
Furthermore, the “n”-th member of the series of modus tollens, namely
“[(n+1) → (n)] ↔ [¬(n) → ¬(n+1)]”, is valid as far as “(n+1) → (n)” is valid.
One interprets “(n+1) → (n)” in the case of FLT:
∀x, n: (a = x^(n+1)) → (b = x^n). This is true because x^(n+1) = x^n · x. Thus, b = x^n is a necessary
condition for a = x^(n+1), and the former is implied by the latter. The expression “x^(n+1) → x^n” will
be used further as a brief notation for “∀x, n: (a = x^(n+1)) → (b = x^n)”, just as FLT(n) is a
corresponding brief notation.
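The implication rests on the elementary factorization x^(n+1) = x^n · x. A one-line numeric sanity check over an illustrative range of values:

```python
# Verify x**(n+1) == x**n * x for a small illustrative range of x and n.
ok = all(x ** (n + 1) == x ** n * x
         for x in range(1, 50)
         for n in range(3, 12))
print(ok)  # → True
```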
The “a” and “b” transform the logical terms “x^(n+1)” and “x^n” into propositions correspondingly.
They serve to mediate between the relation of equality “=”, referring to terms, and the logical
equivalence “↔”, referring to propositions, thereby dividing them disjunctively.
The inspiration for the proof is the isomorphism of the law of identity in logic and the axiom of
identity in the definition of equality: “a” and “b” are necessary in order to remove that
isomorphism as a premise of the proof.
The series of modus tollens is interpreted in terms of FLT as the following series of implications:
[“x^4 → x^3” ∧ FLT(3)] → FLT(4); [“x^5 → x^4” ∧ FLT(4)] → FLT(5); [“x^6 → x^5” ∧ FLT(5)] →
FLT(6); …; [“x^(n+1) → x^n” ∧ FLT(n)] → FLT(n+1); [“x^(n+2) → x^(n+1)” ∧ FLT(n+1)] → FLT(n+2); …
The first member of the series of implications is true for n = 3, and the validity for “n” implies the
validity for “n+1”. Thus, every member enumerated by a natural number greater than two is valid in
virtue of the axiom of induction.
FLT is proved.
If one accepts that Fermat (1670: 338-339) had proved FLT(4), and since the above proof
seems to have been accessible to him, he might have proved FLT for n ≥ 4.
Literature:
Wiles, A. (1995) Modular Elliptic Curves and Fermat’s Last Theorem. Annals of Mathematics,
Second Series, Vol. 141, No. 3 (May, 1995), pp. 443-551.
Kummer, E. (1847) Beweis des Fermat’schen Satzes der Unmöglichkeit von x^λ + y^λ = z^λ für
eine unendliche Anzahl Primzahlen λ. In: Kummer, Ernst Eduard, Collected Papers (ed. André
Weil), Volume 1: Contributions to Number Theory. Berlin, Heidelberg, New York: Springer, 1975,
pp. 274-297.
Fermat, P. (1670) Diophanti Alexandrini Arithmeticorum libri sex, et De numeris multangulis liber
unus. Cum commentariis C.G. Bacheti v.c. & obseruationibus D.P. de Fermat senatoris
Tolosani. Accessit Doctrinae analyticae inuentum nouum, collectum ex varijs eiusdem D. de
Fermat epistolis. Tolosae: Excudebat Bernardus Bosc, è regione Collegii Societatis Iesu,
M. DC. LXX. (Source: http://books.google.com/books?id=yx9VIgeaCEYC&hl=&source=gbs_api)
A comment. The original proof of Fermat’s last theorem: a philosophical reflection
Prehistory and background:
Fermat left a statement (supposedly in 1637) in the margin of his copy of Diophantus’s works,
as well as the claim that he had proved it, but the proof was too long to be recorded in the margin. The
statement is known as Fermat’s last theorem (FLT): the equation x^n + y^n = z^n, where x, y, z, n are
natural numbers, does not have any solution for n ≥ 3.
Fermat himself left a proof for the case n = 4. Euler proved it for n = 3 in two different ways, but
both proofs met criticism. Gauss’s proof of the same particular case met criticism as well. Ernst
Kummer’s proof (1847) covers all “regular prime numbers” as defined by him, among which is the case
n = 3. His proof is mathematically rigorous enough. The theorem has been proved for many particular
cases since then.
The statement in general was proved only in 1995 by Andrew Wiles as a corollary of the Taniyama-
Shimura-Weil conjecture, known as the modularity theorem after his proof. It links the continuous (even
smooth) elliptic curves with the discrete modular forms. The proof is neither arithmetical nor accessible
to Fermat. It is even so complicated that the question can be asked whether it is within set theory
rather than only within arithmetic.
Anyway, Fermat’s challenge for an elementary arithmetical proof remains.
An idea for the inductive proof of FLT:
An idea for proving FLT might be the following. The statement is proved for n = 3. If one proves that its
validity for n implies its validity for n + 1, the statement will be proved by virtue of the axiom of
induction in arithmetic, and thus absolutely arithmetically.
Fermat’s “infinite descent” involving modus tollens might be modified for the purpose. The modification
would consist in the following. The infinite descent would not be included in a reductio ad absurdum
proof. It would start as if “from infinity” rather than from any finite natural number. The descent might
be transformed into a corresponding ascent, to which the axiom of induction would be applicable, by
means of modus tollens.
The inspiration for the inductive proof might be the mathematical isomorphism of the law of identity in
logic and the axiom of identity in the axiomatic definition of the relation of equality. It implies the
isomorphism of logical equivalence, referring to propositions, and the relation of equality, referring to
objects or terms. Thus, identity, in virtue of its definition, is a special “singularity” where propositional
logic and any logic of an arbitrary order merge into each other.
The inspiration for merging can originate from a few philosophical doctrines such as Pythagoreanism,
Hegel’s panlogism, Husserl’s phenomenology, Heidegger’s doctrine(s). Furthermore, it can originate
from an experimental science such as quantum mechanics and its mathematical formalism grounded on
the separable complex Hilbert space.
However, it can also originate from the absence or instability of Descartes’s dualism during Fermat’s life
(as he was almost Descartes’s contemporary). This, together with some kind of naïve Pythagoreanism,
admissible or even probable for a mathematician in number theory, might have inspired Fermat toward
a simple arithmetical proof of FLT.
The accessibility of that kind of inductive proof to Fermat:
The idea of the inductive proof is related to Fermat’s infinite descent. He himself had proved FLT
for n = 4. So, the conjecture that Fermat meant the proof for n ≥ 4 is probable.
Furthermore, one of Euler’s proofs (the more correct one, though containing gaps) was admissible to
Fermat at least in principle. So, the option that Fermat meant the case n ≥ 3 might not be excluded
absolutely.
The meaning of identity in an eventual inductive proof:
Identity is able to inspire an elementary, simple, very brief, and absolutely arithmetical proof of FLT
by the method of mathematical induction. The philosophical essence of that identity is the identity of
anything (i.e. an object in any n-order logic) and the “word” designating the same thing (i.e. the
corresponding term in the (n−1)-order logic). In other words, this means the identity of any
mathematical structure and its interpretation of any kind. For example, meaning that identity, one
should erase the distinction between mathematical models and their material interpretation in physics.
The identity refers only to its scope: it does not imply that mathematical models and their
interpretations are to be identified at all. It states only the existence of a mathematical model identical
to a given physical system, or vice versa: a given physical system identical to a given mathematical model.
This seems to be a philosophical postulate. On the contrary, Western philosophy (and particularly,
philosophy of science), insofar as it originates from Descartes’s dualism, postulates the opposite
statement: no mathematical model can be identical to the physical system interpreting it. Only a few,
though the most important, properties and relations of the system can be captured by the model, but
there always exist others which are not, regardless of whether there exists some property or relation of
the system which cannot ever be reflected in the model at all, in principle.
Thus, a difference, whether absolute or historically relative, justifies the distinction of “things” and
“words” in Western philosophy and philosophy of science. Its “blind spot” is all knowledge within the
opposite postulate, allowing the identity of a mathematical model and a physical system. Thus, the
necessary inspiration for an elementary proof of FLT by induction turns out to be within that same
“blind spot”. A proof outside of it, e.g. that of Andrew Wiles, is incredibly complicated, indirect, and
mediated by many, many chain links: a way to see the invisible situated in the area corresponding to the
“blind spot”. If one manages to reflect on the existence of the “blind spot”, the change of position is
much simpler as a way of observing what is located in the “blind spot” area. Abandoning the metaphor,
this means accepting the opposite postulate: a mathematical model and reality can coincide and thus be
identical. Once that postulate is the case, the identity necessary for an elementary inductive proof of
FLT is admissible, and the pathway to it is open. However, one can suggest that the actual pathway, as a
“Wittgenstein ladder”, might be removed in the final analysis. That is: the identity is necessary for the
inspiration of an elementary proof, but it is not actually within it, not within its text literally.
In fact, quantum mechanics was forced, almost violently, by the experimental results to see the area
corresponding to the “blind spot” of Western philosophy and science. The shocking theorems of the
“absence of hidden variables” in quantum mechanics were proved. They implied the absolute
coincidence of model and reality, at least as to quantum mechanics, and thus their identity.
Consequently, quantum mechanics might serve as an “inspiration” for an elementary, inductive proof of
FLT.
Identity in terms of Descartes's dualism:
However, Fermat’s contemporary Descartes, his “older brother” by age, born, having lived, and died
about ten years before Fermat, postulated the non-identity at issue by his doctrine of dualism.
Perhaps only God and belief might be the "key" to identity, otherwise inaccessible to human beings.
Thus, the direct pathway to an elementary inductive proof of FLT would turn out to be closed for
centuries, and a very, very roundabout and indirect road would be walked, exceptionally hard, only at
the end of the millennium by Andrew Wiles.
One can easily admit that the direct pathway had not been closed ultimately in Fermat’s age. So,
Fermat might really have proved his last theorem, relying on that identity, still possible and admissible
in science and mathematics.
Descartes left his “analytic geometry” together with the “dualism”: what is gapped by the latter is
linked by the former. Measuring the earth (i.e. “geometry”) is identical to algebra, which generates
mathematical models only. Thus, a Foucauldian “episteme” of identities and distinctions has been
determined for centuries:
An identity (that of analytic geometry) has linked geometry to algebra and arithmetic, therefore
constituting mathematics. A distinction (that of dualism) has gapped mathematics from physics,
therefore constituting experimental science.
So, a plot about the “bystander-victim” Fermat, a “younger brother” of the “villain” Descartes, can be
outlined. The original proof of FLT would be the symbol and title of that scenario.
Identity in terms of modern Western philosophy:
Descartes’s dualism underlay all Western philosophy after him. One of its main problems was how the
dualism might be overcome without God’s help. Kant’s transcendentalism and Hegel’s dialectical
panlogism can be considered among the most successful solutions.
Fundamentally different are only Husserl’s phenomenology and Heidegger’s doctrine(s),
crucially influenced by it. They considered the dualism as an axiom and postulated its negation in their
doctrines.
Conclusion:
A counterfactual course of time in the history of philosophy and in the history of mathematics can be
allowed. The real branch, with Descartes’s dualism and a lost original proof of FLT, is ours. The
counterfactual one would suggest an elementary arithmetical proof and perhaps the absence of
Descartes’s dualism and of the Western philosophy following from it. The bifurcation happened in
Descartes and Fermat’s age.
One might attempt to write down those counterfactual histories of mathematics and philosophy.