
Elements of Mathematics: some embarrassingly simple (but practical) tools for calculus

Jordi Villà i Freixa (jordi.villa@upf.edu)

November 23, 2011

Contents

1 To start with
2 Tools from calculus
  2.1 Real numbers
  2.2 Real functions
  2.3 Function limit
  2.4 Continuity
3 Derivatives
  3.1 Differential
  3.2 Newton's method
  3.3 Chain rule
  3.4 Extreme values
  3.5 Mean value theorem
4 Taylor's approximation
  4.1 Taylor series for N-dimensional functions
5 Optimization
MAT: 2011-31035-T1, MSc Bioinformatics for Health Sciences

  5.1 One-dimensional optimization
    5.1.1 Golden Search
  5.2 Secant method
  5.3 Unconstrained optimization
    5.3.1 Gradient Search
    5.3.2 The Newton-Raphson method
    5.3.3 Conjugated gradient
    5.3.4 Quasi-Newton methods
  5.4 Constrained optimization
    5.4.1 Lagrange multipliers
  5.5 Global Optimization
    5.5.1 Stochastic methods
    5.5.2 Simulated annealing
    5.5.3 Simplex method
    5.5.4 Genetic Algorithms
6 Integral calculus
  6.1 Summations
  6.2 Definite integral
  6.3 Numerical integration
    6.3.1 Trapezium formula
    6.3.2 Simpson's formula
7 Sources of information
Figure 1: A very simple function and its two first derivatives

See also [1, 2].

1 To start with

Draw a function with these characteristics:

- Its limit when x → −∞ is 5. The function approaches −∞ when x is arbitrarily big.
- f(x) decreases for x ≤ −2, and increases in [1, 5].
- It has a local maximum at x = 5.
- f(−2) = 0 = f(6).
- When x left-approaches 1 the function approaches +∞, and when it right-approaches that value the function approaches −1.

Now have a look at the plots in Figure 1 and try to determine which functions they correspond to (hint: they correspond to successive derivatives of an initial function).¹

Exercise 1. Draw a sketch of the function f(x) = 6x/(1 + x²) on the interval [−3, 3].

Exercise 2. Draw a sketch of the function f(x) = x/(1 − x²) on the interval [−3, 3].

This course is focused on optimization, although a final section is devoted to integration. The aim is that you understand the meaning of continuity, differentiability and the discrete character of functions. Let us start by stating the optimization problem.

¹ Here are the solutions: f(x) = 3x³ − 10x² − 56x + 5; f'(x) = 9x² − 20x − 56; f''(x) = 18x − 20.
Figure 2: Examples of complex functions to optimize: a rugged surface and the travelling salesman problem

Figure 3: The structure of a given (bio)chemical system can be associated with an energy function composed of a sum of all bonded (bonds, bends and torsions) and non-bonded interactions (typically van der Waals and Coulomb interactions). Such a function yields a potential energy surface (PES) describing the energy of any given configuration of the system (the figure is an oversimplification of the problem, as it considers a PES depending on just 2 variables).

An optimization problem consists of:

- An objective function which we want to minimize or maximize.
- A set of unknowns or variables which affect the value of the objective function.
- A set of constraints that allow the unknowns to take on certain values but exclude others.

Functions can be as simple as those in Figure 1 or as complex as those in Figure 2. Sometimes, for example, we may be interested in optimizing molecular structures, and for this we need good descriptions of the functions controlling the energetics of such molecules (see Figure 3).
2 Tools from calculus

2.1 Real numbers

Some useful notions:

- R and the real line.
- Absolute value: |a| = a if a ≥ 0; |a| = −a if a < 0.
- Triangle inequality: |a + b| ≤ |a| + |b|.
- Variable: a letter we assign to any member of the set.
- Domain: the set of real numbers the variable represents.
- Intervals; examples:

      (a, b) = {x : a < x < b}
      [a, b) = {x : a ≤ x < b}
      [a, b] = {x : a ≤ x ≤ b}
      (a, ∞) = {x : a < x}

2.2 Real functions

Definition 1. A function f from D to E is a correspondence that assigns, to each element x ∈ D, a unique element y ∈ E that we call f(x). The function is represented by

    f: D → E

D is the domain of the function f, and the antidomain (range) of f is the subset of E of all possible values f(x) for x ∈ D.

Definition 2. Let f be a function such that when x is in D, −x is also in D.

- f is even if f(−x) = f(x), ∀x ∈ D.
- f is odd if f(−x) = −f(x), ∀x ∈ D.

Definition 3. A function f with domain D and antidomain E is a one-to-one (biunique) function when, whenever a ≠ b in D, then f(a) ≠ f(b) in E.
Exercise 3. Are these functions one-to-one?

    f(x) = 3x + 2
    g(x) = x⁴ + 2x²

Definition 4. A function f is a polynomial if

    f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ... + a₁x + a₀

where the coefficients a₀, a₁, ..., aₙ are real numbers and the powers are non-negative integers.

Definition 5. Let f be a function from D to E and g a function from E to K. The composite function g ∘ f is a function from D to K defined by (g ∘ f)(x) = g(f(x)), ∀x ∈ D.

2.3 Function limit

Let a lie within an open interval, let f be a function defined on the whole interval except, perhaps, at a, and let L be a real number. Then consider

    lim_{x→a} f(x) = L

Definition 6 (informal). The limit means that f(x) can arbitrarily approach L if x is chosen close enough to a (but x ≠ a).

Definition 7 (formal). The limit means that ∀ε > 0, ∃δ > 0 such that if 0 < |x − a| < δ, then |f(x) − L| < ε.

Exercise 4. Check that lim_{x→4} (1/2)(3x − 1) = 11/2.

Theorem 1. Let a be a point contained in an open interval and f a function defined on the whole interval except, perhaps, at a. Then lim_{x→a} f(x) = L if and only if lim_{x→a−} f(x) = L and lim_{x→a+} f(x) = L.

Exercise 5. Find lim_{x→1−} f(x), lim_{x→1+} f(x) and lim_{x→1} f(x) for

    f(x) = 2 − x      for x < 1
    f(x) = x² + 1     for x > 1

Some properties of limits:

- lim_{x→a} c = c and lim_{x→a} x = a.
- If lim_{x→a} f(x) = L and lim_{x→a} g(x) = M:

      lim_{x→a} [f(x) + g(x)] = L + M
      lim_{x→a} [f(x) − g(x)] = L − M
      lim_{x→a} [f(x) · g(x)] = L · M
      lim_{x→a} [f(x)/g(x)] = L/M,  if M ≠ 0
      lim_{x→a} [c f(x)] = cL,  ∀c ∈ R

Theorem 2. Assume that for all x in an open interval containing a, except perhaps for x = a, f(x) ≤ h(x) ≤ g(x). If lim_{x→a} f(x) = L = lim_{x→a} g(x), then lim_{x→a} h(x) = L.

Exercise 6. Show, using the above theorem, that lim_{x→0} x² sin(1/x) = 0. It is simple to solve:

    −1 ≤ sin(1/x) ≤ 1
    −x² ≤ x² sin(1/x) ≤ x²
    lim_{x→0} (−x²) ≤ lim_{x→0} x² sin(1/x) ≤ lim_{x→0} x²
    lim_{x→0} x² sin(1/x) = 0

2.4 Continuity

Theorem 3. We call a function f continuous at a if:

- f is defined in an open interval containing a;
- lim_{x→a} f(x) exists; and
- lim_{x→a} f(x) = f(a).

Theorem 4 (intermediate value theorem). If f is a continuous function on [a, b] and w is any value between f(a) and f(b), then there exists at least one value c ∈ [a, b] such that f(c) = w.

Theorem 5. If a function f is continuous on a given interval and has no zeroes in it, then f(x) > 0 or f(x) < 0 for all x in that interval.
Exercise 7. Find where P(x) = (1/2)(5x³ − 3x) is positive or negative.

3 Derivatives

Theorem 6. Let f be defined in an open interval containing a. The derivative of f at a, represented by f'(a), is given by

    f'(a) = lim_{h→0} [f(a + h) − f(a)] / h

if this limit exists; equivalently, f'(a) = lim_{x→a} [f(x) − f(a)] / (x − a).

It is easy to see that the derivative of a function is also a function. Notation:

    f'(x) = Dx[f(x)] = Dx y = y' = dy/dx = d/dx [f(x)]

Some rules worth keeping in mind:

    Dx(c) = 0
    Dx(x) = 1
    Dx(xⁿ) = n xⁿ⁻¹,  n ∈ Z
    Dx[c f(x)] = c Dx[f(x)]
    Dx[f(x) ± g(x)] = Dx[f(x)] ± Dx[g(x)]
    Dx[f(x) g(x)] = g(x) Dx[f(x)] + f(x) Dx[g(x)]
    Dx[f(x)/g(x)] = ( g(x) Dx[f(x)] − f(x) Dx[g(x)] ) / [g(x)]²,  g(x) ≠ 0

Exercise 8. Use the linearization of f(x) = x² at x = 5 to approximate (5.1)².

3.1 Differential

Theorem 7. Let y = f(x), where f is a differentiable function, and let ∆x be an increment of x.

- The differential dx of x is dx = ∆x.
- The differential dy of the dependent variable y is dy = f'(x)∆x = f'(x)dx.

One can then write:

    dy/dx = f'(x) = lim_{∆x→0} ∆y/∆x
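As a quick numeric check of Exercise 8, the differential gives f(5.1) ≈ f(5) + f'(5)·∆x (a minimal sketch; the function and point come from the exercise):

```python
# Linear (differential) approximation f(x + dx) ≈ f(x) + f'(x) dx,
# applied to Exercise 8: f(x) = x^2 at x = 5 with dx = 0.1.
def f(x):
    return x * x

def fprime(x):
    return 2 * x  # derivative of x^2

x0, dx = 5.0, 0.1
approx = f(x0) + fprime(x0) * dx   # 25 + 10 * 0.1 = 26.0
exact = f(x0 + dx)                 # 5.1^2 = 26.01
```

The linearization gives 26.0 against the exact 26.01, so the differential is off by f''·(∆x)²/2 = 0.01.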
3.2 Newton's method

Theorem 8. Let f be a differentiable function and r a value where f is zero. If xₙ is an approximation to r and f'(xₙ) ≠ 0, then the next approximation xₙ₊₁ is given by:

    xₙ₊₁ = xₙ − f(xₙ) / f'(xₙ)

Exercise 9. Find the zeroes of f(x) = x³ + 3x² − 9x − 29 and g(x) = x³ − 4x.

Exercise 10. Find where the graphs of the functions f(x) = 2x/(1 + x²) and g(x) = arctan x cross.

The tables below apply the method to f'(x) = 9x² − 20x − 56, locating the two critical points of f(x) = 3x³ − 10x² − 56x + 5 from Section 1, with step dx = −f'(xₙ)/f''(xₙ):

    iteration   xₙ        f(x)       f'(x)      f''(x)     dx
    0           0.0       +5.0       -56.0      -20.0      -2.8
    1           -2.8      +17.5440   +70.560    -70.40     +1.0023
    2           -1.7977   +55.9249   +9.0395    -52.3586   +0.1726
    3           -1.6251   +56.7207   +0.2683    -49.2510   -0.0054
    4           -1.6197   +56.7214   +0.0049    -49.1546   -0.0001

    iteration   xₙ        f(x)        f'(x)      f''(x)     dx
    0           +2.0      -123.0      -60.0      +16.0      +3.75
    1           +5.750    -77.2969    +126.5625  +83.50     -1.5157
    2           +4.2343   -183.6597   +20.6777   +56.2174   -0.3678
    3           +3.8665   -187.6118   +1.2184    +49.5967   -0.0246
    4           +3.8419   -187.6268   +0.0038    +49.1551   -0.0001

3.3 Chain rule

Definition 8. If y = f(u), u = g(x), and the derivatives dy/du and du/dx exist, then the composite function y = f(g(x)) has the derivative

    dy/dx = (dy/du)(du/dx) = f'(u) g'(x) = f'(g(x)) g'(x)

Exercise 11. Find the derivative of cos²(5x) + sin²(5x).

3.4 Extreme values

Theorem 9. If a function f is continuous on a closed interval [a, b], then f attains a minimum and a maximum at least once in [a, b].
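A minimal sketch of the Newton iteration from Section 3.2, reproducing the first table above by searching for a critical point of f(x) = 3x³ − 10x² − 56x + 5 (i.e. a root of f'):

```python
# Newton's method: x_{n+1} = x_n - g(x_n)/g'(x_n).
# Here g = f' for f(x) = 3x^3 - 10x^2 - 56x + 5, so the root found
# is a critical point of f (cf. the first iteration table above).
def newton(g, gprime, x0, tol=1e-6, max_iter=50):
    x = x0
    for _ in range(max_iter):
        dx = -g(x) / gprime(x)
        x += dx
        if abs(dx) < tol:
            break
    return x

fp = lambda x: 9 * x**2 - 20 * x - 56     # f'(x)
fpp = lambda x: 18 * x - 20               # f''(x)

root = newton(fp, fpp, 0.0)   # starts at x0 = 0, as in the first table
```

Starting from x₀ = 0 the iterates follow the first table (−2.8, −1.7977, ...) and converge to ≈ −1.6196; starting from x₀ = 2 reproduces the second table.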
Theorem 10. Let c be a number within the domain of f. f(c) is a local maximum if there exists an interval (a, b) containing c such that f(x) ≤ f(c) for all x ∈ (a, b). A local minimum is defined in an analogous way.

Theorem 11. A number c in the domain of a function f is called a critical point of f if f'(c) = 0 or f'(c) does not exist.

Theorem 12 (Rolle). If a function f is continuous on the closed interval [a, b], differentiable on the open interval (a, b), and f(a) = f(b), then there exists at least one number c in (a, b) such that f'(c) = 0.

Exercise 12. Verify Rolle's theorem for the function f(x) = 4x² − 20x + 29 on [1, 4].

3.5 Mean value theorem

Theorem 13 (mean value). If a function f is continuous on a closed interval [a, b] and differentiable on the open interval (a, b), then there exists a number c in (a, b) such that

    f(b) − f(a) = f'(c)(b − a)

Exercise 13. Verify the mean value theorem for f(x) = x³ − 8x − 5 on [1, 4].

Exercise 14. Let f(x) = 9/(x + 1). Find the point (c, d) on the graph of f, with c in the open interval (10, 12), such that

1. the tangent line to y = f(x) at (c, d) is parallel to the chord joining (10, f(10)) and (12, f(12)), and
2. c is the largest value in (10, 12) that satisfies condition 1.

4 Taylor's approximation

Theorem 14. Let f be a function with n + 1 derivatives on a given interval containing c. If x is a number in the interval, then there exists a value z between c and x such that

    f(x) = f(c) + Σ_{i=1}^{n} [f⁽ⁱ⁾(c)/i!] (x − c)ⁱ + [f⁽ⁿ⁺¹⁾(z)/(n + 1)!] (x − c)ⁿ⁺¹

where the first terms form Taylor's polynomial and the last term is the residual.
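Exercise 13 above can be checked directly: for f(x) = x³ − 8x − 5 on [1, 4] the mean value theorem asks for c with 3c² − 8 = (f(4) − f(1))/3, i.e. c = √7 (a small numeric sketch):

```python
# Mean value theorem check for f(x) = x^3 - 8x - 5 on [a, b] = [1, 4]:
# find c in (a, b) with f'(c) = (f(b) - f(a)) / (b - a).
import math

f = lambda x: x**3 - 8 * x - 5
a, b = 1.0, 4.0
slope = (f(b) - f(a)) / (b - a)   # (27 - (-12)) / 3 = 13
c = math.sqrt((slope + 8) / 3)    # solve f'(c) = 3c^2 - 8 = slope
```

Here c = √7 ≈ 2.6458, which indeed lies inside (1, 4).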
Figure 4: Taylor approximations for f(x) = ln(x + 1)

Exercise 15. Estimate the precision of approximating f(x) = ln x with a Taylor polynomial for n = 3 and c = 1.

Exercise 16. Get the Taylor polynomial of degree n around x = 0 for f(x) = 1/(1 − x) with x = 1.

Figure 4 shows the different Taylor approximations for f(x) = ln(x + 1). Taylor polynomials around c = 0 are called Maclaurin polynomials.

Exercise 17. What is the bound on the error when approximating cos x by the Maclaurin polynomial p₄ evaluated at π?

4.1 Taylor series for N-dimensional functions

In a general multidimensional (N) case:

    f(x) ≈ f(x_k) + (x − x_k)ᵀ · g_k + (1/2)(x − x_k)ᵀ · H_k · (x − x_k)

where the second term is the linear term, the third is the quadratic term, g_k is the N-dimensional gradient vector and H_k is the N×N Hessian matrix at point x_k:

    g_k = [ ∂f/∂x₁, ∂f/∂x₂, ..., ∂f/∂x_N ]ᵀ evaluated at x_k

    (H_k)_{ij} = ∂²f/∂x_i∂x_j evaluated at x_k
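The Maclaurin expansion of ln(1 + x) shown in Figure 4, ln(1 + x) = x − x²/2 + x³/3 − ..., can be evaluated with a short sketch (the degree and evaluation point are chosen for illustration):

```python
# Maclaurin polynomial of degree n for f(x) = ln(1 + x):
# p_n(x) = sum_{i=1}^{n} (-1)^(i+1) x^i / i
import math

def maclaurin_log1p(x, n):
    return sum((-1) ** (i + 1) * x**i / i for i in range(1, n + 1))

x = 0.5
p3 = maclaurin_log1p(x, 3)        # 0.5 - 0.125 + 0.041666... ≈ 0.41667
err = abs(p3 - math.log(1 + x))   # residual term of Theorem 14
```

Raising the degree shrinks the residual, which is the behaviour Figure 4 illustrates.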
Figure 5: Schema for Golden Search

5 Optimization

5.1 One-dimensional optimization

5.1.1 Golden Search

See Figure 5.

- When bracketing a zero of a function (bisection), each new interval is 1/2 the size of the previous one.
- From Rolle's theorem we know we can bracket a minimum of a continuous function.
- It can be shown that the best way to bracket a minimum is to use the golden mean value. Thus: suppose we search for the minimum of a given function f(x). The initial interval [x1, x4] (the middle one in the figure) is symmetrically divided into three subintervals so that (x2 − x1) = (x4 − x3) = g(x4 − x1) and (x3 − x1) = (1 − g)(x4 − x1). Suppose we know that the minimum is somewhere in the interval. If f(x2) < f(x3), then we can bracket the minimum by the interval [x1, x3] (see Figure 5), so that x1_new = x1_old, x3_new = x2_old, x4_new = x3_old; the function at x2_new should then be calculated. If we require the new subintervals to have the same relative lengths, we arrive at the equation g(1 − g) = (1 − 2g), which solves to g = (3 − √5)/2 ≈ 0.38197.²

See also http://www.shokhirev.com/nikolai/abc/optim/optim/optim.html and http://en.wikipedia.org/wiki/Golden_section_search.

5.2 Secant method

- Assumes the function to be approximately linear in the region of interest.
- Each improvement is taken as the point where the approximating line crosses the axis.
- Retains only the most recent estimate!

² In mathematics and the arts, two quantities are in the golden ratio if the ratio of the sum of the quantities to the larger quantity is equal to the ratio of the larger quantity to the smaller one: (a + b)/a = a/b = φ, with 1 − g = 1/φ ≈ 0.618.
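A minimal golden-section search following the update rule above (the test function and interval are illustrative assumptions; for simplicity it re-evaluates f at both interior points each pass):

```python
import math

# Golden-section search for a minimum, keeping the interval fractions
# g = (3 - sqrt(5))/2 ≈ 0.38197 derived above.
def golden_search(f, a, b, tol=1e-6):
    g = (3 - math.sqrt(5)) / 2
    x2 = a + g * (b - a)          # interior points, x2 < x3
    x3 = b - g * (b - a)
    while b - a > tol:
        if f(x2) < f(x3):         # minimum bracketed in [a, x3]
            b = x3
            x2 = a + g * (b - a)
            x3 = b - g * (b - a)
        else:                     # minimum bracketed in [x2, b]
            a = x2
            x2 = a + g * (b - a)
            x3 = b - g * (b - a)
    return (a + b) / 2

xmin = golden_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

Because (1 − g)² = g, one of the new interior points always coincides with an old one, which is what makes this the optimal bracketing scheme.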
Figure 6: Secant method (left) and Gradient Search method (right)

5.3 Unconstrained optimization

5.3.1 Gradient Search

See Figure 6 (right). Start at a point x₀. As many times as needed, move from point xᵢ to xᵢ₊₁ by minimizing along the line from xᵢ in the direction of the local downhill gradient −∇f(xᵢ).

Exercise 18. Get the absolute maximum of f(x, y) = 2e^{−(x−1)²−(y−1)²} cos(x² + y²) using a gradient search:

- select a point (x₀, y₀);
- compute f(x₀, y₀) and ∇f(x₀, y₀);
- get the maximum of the Davidon function ψ(t) = f(x₀ + f_x(x₀, y₀)t, y₀ + f_y(x₀, y₀)t); the maximizing t₀ gives you the new iteration point.

Class note (translated from Catalan): see http://linneus20.ethz.ch:8080/1_5_3.html#SECTION00253100000000000000 for details on steepest descent, which skips the maximization of the Davidon function and takes the step directly.

5.3.2 The Newton-Raphson method

From the Taylor expansion of the function,

    f(x) ≈ f(x_k) + (x − x_k)ᵀ · g_k + (1/2)(x − x_k)ᵀ · H_k · (x − x_k)

(linear term plus quadratic term), we can take derivatives:

    ∇f(x) = g_k + H_k · (x − x_k)
If we assume that f(x) takes its minimum at x = x*, the gradient is zero:

    H_k · (x* − x_k) + g_k = 0

which is a simple linear system. Newton-Raphson takes x* to be the next point in the iterative formula:

    x_{k+1} = x_k − H_k⁻¹ · g_k

Exercise 19. Get the absolute maximum of f(x, y) = 2e^{−(x−1)²−(y−1)²} cos(x² + y²) using:

1. a gradient search;
2. a Newton-Raphson approach.

You may use a table generated with a spreadsheet, or you may write a short program in your preferred language.

Class note (translated from Catalan): the proposed exercise of maximizing a function of two variables is hard and needs to be worked through. Study carefully the steepest descent and Newton-Raphson methods (the two proposed in the exercise), and also make sure to understand correctly the method of optimizing the Davidon function; a good explanation is at http://linneus20.ethz.ch:8080/1_5_3.html#SECTION00253100000000000000. Make a drawing showing the concept of "constant norm", understood as a radius around x given by the step size; that makes it clear enough.

5.3.3 Conjugated gradient

- Let us come back to the gradient search.
- Let us minimize f(x) over the hyperplane that contains all previous search directions: x₀ + ⟨p₀, p₁, p₂, ..., pᵢ⟩.
- If the vectors pᵢ are chosen to be linearly independent, we should ideally perform only N searches.

      f(x) ≈ c − g · x + (1/2) x · H · x

- Start with an initial gradient g₀ and an initial h₀ = g₀.
- The CG method then constructs gᵢ₊₁ = gᵢ − λᵢ H · hᵢ and hᵢ₊₁ = gᵢ₊₁ + γᵢ hᵢ.
- These vectors satisfy the orthogonality and conjugacy conditions:

      gᵢ · gⱼ = 0
      hᵢ · H · hⱼ = 0
      gᵢ · hⱼ = 0

  and the scalars are given by:

      λᵢ = (gᵢ · gᵢ) / (hᵢ · H · hᵢ)
      γᵢ = (gᵢ₊₁ · gᵢ₊₁) / (gᵢ · gᵢ)

5.3.4 Quasi-Newton methods

    x_{k+1} = x_k − H_k⁻¹ · g_k

- Davidon-Fletcher-Powell (DFP)
- Broyden-Fletcher-Goldfarb-Shanno (BFGS)
- Build an iterative approximation of the inverse of the Hessian matrix:

      lim_{i→∞} Aᵢ = H⁻¹

5.4 Constrained optimization

5.4.1 Lagrange multipliers

Theorem 15. Let f(x, y) and g(x, y) be two functions with continuous partial derivatives such that f has a maximum or minimum f(x₀, y₀) when (x, y) is restricted by g(x, y) = 0. If ∇g(x₀, y₀) ≠ 0, then a value λ exists such that:

    ∇f(x₀, y₀) = λ ∇g(x₀, y₀)

Typically we first build the Lagrangian function L = f − λg and calculate its critical points by setting ∇L = 0. We find the values of x, y, ... as a function of λ and then substitute them into the constraint equation. After that, we can come back to the critical points and find the point(s) we were interested in.

Exercise 20. Find the extreme values of f(x, y) = xy for (x, y) restricted to the ellipse 4x² + y² = 4.

5.5 Global Optimization

We need this approach when we have complex functions to optimize. Complex means here that, for example, they contain a huge number of minima or that they are discrete functions.
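Exercise 20 above can be cross-checked numerically: parametrizing the ellipse 4x² + y² = 4 as (x, y) = (cos t, 2 sin t) turns the constrained problem into a one-dimensional one, f = 2 sin t cos t = sin 2t, with extreme values ±1 (a sketch for verification, not the Lagrange-multiplier derivation itself):

```python
import math

# Exercise 20: extremes of f(x, y) = x*y on the ellipse 4x^2 + y^2 = 4.
# On the ellipse, x = cos(t), y = 2 sin(t), so f = sin(2t), in [-1, 1].
def f(x, y):
    return x * y

ts = [2 * math.pi * k / 10000 for k in range(10000)]
values = [f(math.cos(t), 2 * math.sin(t)) for t in ts]
fmax, fmin = max(values), min(values)
```

The Lagrange conditions y = 8λx, x = 2λy give λ = ±1/4 and the same extremes, f = ±1, at (±1/√2, ±√2).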
Figure 7: Functions to be optimized can have complex shapes

Figure 8: Simulated annealing: (a) f(x) = Ax² + cos(x/n) with different ruggedness; (b) distribution of 10000 SA processes started at random initial positions for the PES with A = 1 (left) and A = 0.1 (right) at the given T

5.5.1 Stochastic methods

- Fundamental challenge in stochastic optimization: to balance the number of downhill moves of the dynamical process against the number of uphill moves.
- The number of metastable states grows exponentially with the degrees of freedom.

5.5.2 Simulated annealing

- SA simulates the finite-temperature dynamics of the system.
- Starting from r with energy E(r), one generates a new r' with energy E(r'), which replaces the original configuration with probability:

      P = exp(−β[E(r') − E(r)])   if E(r') > E(r)
      P = 1                       otherwise

- At a given β, SA samples the configurations r of the PES according to their thermodynamic probability.
- Basic hopping technique.
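A minimal simulated-annealing sketch using the Metropolis acceptance rule above (the test energy, cooling schedule and move size are illustrative assumptions, with the energy chosen in the spirit of Figure 8):

```python
import math
import random

# Simulated annealing with the Metropolis criterion: always accept
# downhill moves, accept uphill moves with probability exp(-beta * dE).
def anneal(E, x0, beta0=0.1, cooling=1.001, steps=20000, seed=0):
    rng = random.Random(seed)
    x, beta = x0, beta0
    best_x, best_E = x, E(x)
    for _ in range(steps):
        x_new = x + rng.gauss(0.0, 0.5)          # trial move
        dE = E(x_new) - E(x)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            x = x_new
        if E(x) < best_E:
            best_x, best_E = x, E(x)
        beta *= cooling                          # cool down: beta = 1/T grows
    return best_x, best_E

# Rugged test energy: E(x) = 0.1 x^2 + cos(x), global minimum ≈ -0.181
best_x, best_E = anneal(lambda x: 0.1 * x * x + math.cos(x), x0=10.0)
```

Starting hot (small β) lets the walk cross barriers; the geometric cooling slowly freezes it into a low-lying minimum.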
Figure 9: The simplex method

5.5.3 Simplex method

The method uses the concept of a simplex, which is a polytope of N + 1 vertices in N dimensions: a line segment in one dimension, a triangle in two dimensions, a tetrahedron in three-dimensional space, and so forth.³

5.5.4 Genetic Algorithms

The results of a genetic algorithm, because of the implicit discreteness of the approximation and its stochasticity, look like the graph in Figure 10.

Figure 10: Genetic algorithms are good solutions for optimizing complex functions. Lower right corner: typical results from a GA optimization. The fitness function improves in a discrete manner.

6 Integral calculus

6.1 Summations

    Σ_{k=1}^{n} a_k = a₁ + a₂ + a₃ + ... + a_n

Theorem 16. Let n be a positive integer and let {a₁, a₂, ..., a_n} and {b₁, b₂, ..., b_n} be two sets of real numbers. Then

- Σ_{k=1}^{n} (a_k + b_k) = Σ_{k=1}^{n} a_k + Σ_{k=1}^{n} b_k
- Σ_{k=1}^{n} c·a_k = c Σ_{k=1}^{n} a_k, ∀c ∈ R
- Σ_{k=1}^{n} (a_k − b_k) = Σ_{k=1}^{n} a_k − Σ_{k=1}^{n} b_k

³ http://mathworld.wolfram.com/SimplexMethod.html

Figure 11: A graphical representation of a Riemann sum

A Riemann sum (Figure 11) is defined as follows.

Definition 9. Let f be a function defined on a closed interval [a, b] and let P be a partition of [a, b]. A Riemann sum of f (or f(x)) for P is an expression R_P of the form

    R_P = Σ_{k=1}^{n} f(w_k) ∆x_k

where w_k is a value in [x_{k−1}, x_k] and k = 1, 2, ..., n.

6.2 Definite integral

Definition 10. Let f be a function defined on a closed interval [a, b]. The definite integral of f between a and b is given by:

    ∫_a^b f(x) dx = lim_{||P||→0} Σ_k f(w_k) ∆x_k

- If c > d, then ∫_c^d f(x) dx = −∫_d^c f(x) dx.
- If f(a) exists, then ∫_a^a f(x) dx = 0.
If f is an integrable function and f(x) ≥ 0 for all x ∈ [a, b], then Area = ∫_a^b f(x) dx.

Theorem 17. If a function f is continuous on [a, b], then f is integrable on [a, b].

Theorem 18. If f and g are integrable functions on [a, b] and c ∈ R:

- ∫_a^b c dx = c(b − a)
- ∫_a^b c f(x) dx = c ∫_a^b f(x) dx
- ∫_a^b [f(x) ± g(x)] dx = ∫_a^b f(x) dx ± ∫_a^b g(x) dx

Exercise 21. Find ∫_0^{π/2} (sin θ + cos θ) dθ.

Theorem 19 (mean value of the integral). If f is a continuous function on [a, b], then there exists a number z in (a, b) such that

    ∫_a^b f(x) dx = f(z)(b − a)

and f(z) is called the mean value of f on [a, b].

Exercise 22. If ∫_0^3 x² dx = 9, find the value z that satisfies the above theorem.

Theorem 20 (fundamental theorem of calculus). Let f be a continuous function on [a, b].

1. If the function G is defined by

       G(x) = ∫_a^x f(t) dt

   for all x ∈ [a, b], then G is an antiderivative of f on [a, b].

2. If F is any antiderivative of f on [a, b], then

       ∫_a^b f(x) dx = F(x)]_a^b = F(b) − F(a)
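Definite integrals like the one in Exercise 22 can be approximated directly from the Riemann sum of Definition 9, here taking midpoints as the w_k (a sketch; the function and interval come from the exercise):

```python
# Riemann sum with a uniform partition and midpoints as the w_k,
# approximating the integral of x^2 from 0 to 3, which equals 9 (Exercise 22).
def riemann_midpoint(f, a, b, n):
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * h for k in range(n))

approx = riemann_midpoint(lambda x: x * x, 0.0, 3.0, 1000)
```

As ||P|| → 0 (n grows), the sum converges to the definite integral; with n = 1000 the error here is already below 10⁻⁴.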
Exercise 23. Check the above theorem for ∫_1^x 3 dt. We can get

    G(x) = ∫_1^x 3 dt = 3(x − 1) = 3x − 3

and we can see that, for any z ∈ [a, b],

    G'(z) = lim_{h→0} [G(z + h) − G(z)]/h = lim_{h→0} [3(z + h) − 3 − (3z − 3)]/h = 3

Note that it is not necessary to evaluate the exact value of the integral, as any function of the form G(x) = 3x + K fulfills the theorem. In addition, the second part of the theorem gives:

    ∫_1^2 3 dt = (3x + K)]_1^2 = (3·2 + K) − (3·1 + K) = 3

6.3 Numerical integration

Definition 11. If f is continuous on [a, b] and a = x₀, x₁, ..., x_n = b determines a uniform partition of [a, b], then

    ∫_a^b f(x) dx ≈ [(b − a)/n] Σ_{i=1}^{n} f(x̄ᵢ)

where x̄_k = (x_{k−1} + x_k)/2 is the midpoint of [x_{k−1}, x_k].

6.3.1 Trapezium formula

Definition 12. If f is a continuous function on [a, b] and a = x₀, x₁, ..., x_n = b determines a uniform partition of [a, b], then

    ∫_a^b f(x) dx ≈ [(b − a)/(2n)] [f(x₀) + 2f(x₁) + 2f(x₂) + ... + 2f(x_{n−1}) + f(x_n)]

If M > 0 is such that |f''(x)| ≤ M, ∀x ∈ [a, b], then the error of the formula is less than or equal to M(b − a)³/(12n²).

Exercise 24. Find the error in approximating the integral ∫_1^2 (1/x) dx with the trapezium formula with n = 10.

    k    x_k   f(x_k)   m   m·f(x_k)
    0    1.0   1.0000   1   1.0000
    1    1.1   0.9091   2   1.8182
    2    1.2   0.8333   2   1.6667
    3    1.3   0.7692   2   1.5385
    4    1.4   0.7143   2   1.4286
    5    1.5   0.6667   2   1.3333
    6    1.6   0.6250   2   1.2500
    7    1.7   0.5882   2   1.1765
    8    1.8   0.5556   2   1.1111
    9    1.9   0.5263   2   1.0526
    10   2.0   0.5000   1   0.5000

The sum of the last column is 13.8754, which multiplied by (b − a)/(2n) = (2 − 1)/20 = 1/20 gives an approximation to the integral:

    ∫_1^2 (1/x) dx ≈ (1/20)(13.8754) ≈ 0.69377

To bound the error we need a number M > 0 larger than the absolute value of the second derivative of the function on the whole interval considered. With f(x) = 1/x, f''(x) = 2/x³, which on x ∈ [1, 2] reaches its maximum at x = 1, so:

    |f''(x)| ≤ 2/1³ = 2

Taking M = 2 and substituting into the error formula we get:

    error ≤ 2(2 − 1)³/(12 · 10²) < 0.002

That is, this evaluation of the integral, whose exact value is ln 2, carries a significant error. To reduce it we can apply Simpson's formula, which approximates the shape of the integrand to second order.

6.3.2 Simpson's formula

Definition 13. Let f be a continuous function on [a, b] and n an even integer. If a = x₀, x₁, ..., x_n = b determines a uniform partition of [a, b], then

    ∫_a^b f(x) dx ≈ [(b − a)/(3n)] [f(x₀) + 4f(x₁) + 2f(x₂) + 4f(x₃) + ... + 2f(x_{n−2}) + 4f(x_{n−1}) + f(x_n)]

If M > 0 is such that |f⁽⁴⁾(x)| ≤ M, ∀x ∈ [a, b], then the error is less than or equal to M(b − a)⁵/(180n⁴).

Exercise 25. Find the error when evaluating ∫_1^2 (1/x) dx with n = 10 using Simpson's formula.
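Exercises 24 and 25 can be reproduced with short implementations of both composite rules (a sketch; the integrand, interval and n = 10 come from the exercises):

```python
import math

# Composite trapezium and Simpson rules on a uniform partition of [a, b].
def trapezium(f, a, b, n):
    h = (b - a) / n
    s = f(a) + f(b) + 2 * sum(f(a + k * h) for k in range(1, n))
    return s * h / 2              # equals (b - a)/(2n) times the weighted sum

def simpson(f, a, b, n):          # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + k * h) for k in range(1, n, 2))
    s += 2 * sum(f(a + k * h) for k in range(2, n, 2))
    return s * h / 3              # equals (b - a)/(3n) times the weighted sum

f = lambda x: 1.0 / x
trap = trapezium(f, 1.0, 2.0, 10)   # ≈ 0.69377, cf. the table above
simp = simpson(f, 1.0, 2.0, 10)     # much closer to ln 2
```

The trapezium result overestimates ln 2 ≈ 0.69315 (the integrand is convex) within the 0.002 bound derived above, while Simpson's rule reduces the error by roughly two orders of magnitude.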
7 Sources of information

- http://www.nr.com
- http://www.math.dartmouth.edu/~klbooksite/
- http://www.math.temple.edu/~cow/
- http://www.math.ucdavis.edu/~calculus/
- http://www.math.scar.utoronto.ca/calculus/Redbook/
- http://www.math.utep.edu/Faculty/mabry/web/1411.htm
- http://www.math.dartmouth.edu/~klbooksite/all_exercises.htm
- Software:
  - http://www.r-project.org/
  - http://www.octave.org/
  - http://www.gnuplot.info/
  - http://demonstrations.wolfram.com/download-cdf-player.html

References

[1] A. Isaev. Introduction to Mathematical Methods in Bioinformatics. Springer Verlag, 2004.
[2] C. Neuhauser. Calculus for Biology and Medicine. Prentice Hall, Upper Saddle River (New Jersey), 2000.
