
# Midterm II Review Session Slides

These are the slides from the review session. THE FILE IS BIG AND MAY HAVE BEEN CORRUPTED. IF YOU CAN'T SEE IT THROUGH THE FLASH INTERFACE, JUST CLICK THE "DOWNLOAD" LINK and view it on your own computer.

### Midterm II Review Session Slides

1. 1. Review for Midterm II. Math 20, December 4, 2007. Announcements: Midterm II is 12/6 in Hall A, 7–8:30pm. ML office hours Wednesday 1–3 (SC 323). Old exams and solutions are on the website.
2. 2. Outline. Rank and other Linear Algebra: Linear dependence; Rank. Eigenbusiness: Eigenvector and Eigenvalue; Diagonalization; The Spectral Theorem. Functions of several variables: Graphing/Contour Plots; Partial Derivatives. Differentiation: The Chain Rule; Implicit Differentiation. Optimization: Unconstrained Optimization; Constrained Optimization.
3. 3. Rank and other Linear Algebra. Learning Objectives: Determine whether a set of vectors is linearly independent. Find the rank of a matrix.
4. 4. Linear Independence. Definition. Let {a1, a2, . . . , an} be a set of vectors in Rᵐ. We say they are linearly dependent if there exist constants c1, c2, . . . , cn ∈ R, not all zero, such that c1 a1 + c2 a2 + · · · + cn an = 0. If the equation only holds when c1 = c2 = · · · = cn = 0, then the vectors are said to be linearly independent.
5. 5. Deciding linear dependence. We showed a1, . . . , an are LD ⇐⇒ c1 a1 + · · · + cn an = 0 has a nonzero solution ⇐⇒ [a1 · · · an](c1, . . . , cn) = 0 has a nonzero solution (write A for the matrix and c for the vector) ⇐⇒ the system has some free variables ⇐⇒ rref(A) has a column with no leading entry.
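The rref criterion above can be checked mechanically. Here is a short sketch using sympy (the library choice and variable names are my own, not part of the slides), applied to the three vectors of the example that follows.

```python
# Checking linear dependence via row reduction: the vectors are dependent
# iff rref(A) has a pivot-free column (a column with no leading entry).
from sympy import Matrix

# Columns are the three vectors from the example.
A = Matrix([[1, 3, 0],
            [0, -2, 2],
            [1, 2, 1]])

_, pivot_cols = A.rref()          # pivot_cols lists the columns with a leading entry
free_cols = set(range(A.cols)) - set(pivot_cols)
dependent = len(free_cols) > 0    # a free column means a nonzero solution of Ac = 0
print(dependent)                  # True: the three vectors are linearly dependent
```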
6. 6. Example. Determine if the vectors (1, 0, 1), (3, −2, 2), (0, 2, 1) are linearly dependent.
7. 7. Example. Determine if the vectors (1, 0, 1), (3, −2, 2), (0, 2, 1) are linearly dependent. Solution. Row-reduce the matrix with these columns: [1 3 0; 0 −2 2; 1 2 1] row-reduces to [1 0 3; 0 1 −1; 0 0 0]. The third column of the rref has no leading entry, so the vectors are linearly dependent.
9. 9. Deciding linear independence. So a1, . . . , an are LI ⇐⇒ every column of rref(A) has a leading entry ⇐⇒ A ∼ [In; O] (the identity In atop zero rows).
10. 10. Example. Determine if the vectors (1, 0, 1, −1), (3, −2, 2, 1), (0, 2, 1, 0) are linearly dependent.
11. 11. Solution. The matrix [1 3 0; 0 −2 2; 1 2 1; −1 1 0] row-reduces to [1 0 0; 0 1 0; 0 0 1; 0 0 0]. Every column has a leading entry, so the vectors are linearly independent.
12. 12. Rank. Definition. The rank of a matrix A, written r(A), is the maximum number of linearly independent column vectors in A.
13. 13. Rank. Definition. The rank of a matrix A, written r(A), is the maximum number of linearly independent column vectors in A. If A is a zero matrix, we say r(A) = 0.
14. 14. Example. Since rref [1 3 0; 0 −2 2; 1 2 1] = [1 0 3; 0 1 −1; 0 0 0], this matrix has rank 2.
15. 15. Example. Since rref [1 3 0; 0 −2 2; 1 2 1; −1 1 0] = [1 0 0; 0 1 0; 0 0 1; 0 0 0], this matrix has rank 3.
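Both rank computations can be confirmed numerically; a small numpy check (an illustrative aside, not part of the original deck):

```python
# Numerically confirming the two rank computations above.
import numpy as np

A = np.array([[1, 3, 0],
              [0, -2, 2],
              [1, 2, 1]])          # 3x3 example: its rref had 2 leading entries

B = np.array([[1, 3, 0],
              [0, -2, 2],
              [1, 2, 1],
              [-1, 1, 0]])         # 4x3 example: its rref was the identity atop zeros

print(np.linalg.matrix_rank(A))   # 2
print(np.linalg.matrix_rank(B))   # 3
```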
16. 16. Another way to compute rank. Theorem (Book Theorem 14.1). The rank of A is the size of the largest nonvanishing minor of A.
17. 17. Rank and consistency. Fact. Let A be an m × n matrix, b an m × 1 vector, and Ab the matrix A augmented by b. Then the system of linear equations Ax = b has a solution (is consistent) if and only if r(A) = r(Ab).
18. 18. Rank and redundancy. Fact. Let A be an m × n matrix, b an m × 1 vector, and Ab the matrix A augmented by b. Suppose that r(A) = r(Ab) = k < m (m is the number of equations in the system Ax = b). Then m − k of the equations are redundant; they can be removed and the system has the same solutions.
19. 19. Rank and freedom. Fact. Let A be an m × n matrix, b an m × 1 vector, and Ab the matrix A augmented by b. Suppose that r(A) = r(Ab) = k < n (n is the number of variables in the system Ax = b). Then n − k of the variables are free; they can be chosen at will and the rest of the variables depend on them, giving infinitely many solutions.
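The rank facts above can be illustrated on a tiny system; the matrix and right-hand sides below are my own hypothetical example, not from the slides.

```python
# Sketch of the consistency test r(A) = r(A|b) on a rank-1 system.
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])           # rank 1: the second equation is a multiple of the first
b_good = np.array([[3.], [6.]])    # consistent: augmenting does not raise the rank
b_bad  = np.array([[3.], [7.]])    # inconsistent: augmenting raises the rank to 2

def consistent(A, b):
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.hstack([A, b]))

print(consistent(A, b_good))  # True; here k = 1 < n = 2, so n - k = 1 variable is free
print(consistent(A, b_bad))   # False
```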
20. 20. Outline. Rank and other Linear Algebra: Linear dependence; Rank. Eigenbusiness: Eigenvector and Eigenvalue; Diagonalization; The Spectral Theorem. Functions of several variables: Graphing/Contour Plots; Partial Derivatives. Differentiation: The Chain Rule; Implicit Differentiation. Optimization: Unconstrained Optimization; Constrained Optimization.
21. 21. Eigenbusiness. Learning Objectives: Determine if a vector is an eigenvector of a matrix. Determine if a scalar is an eigenvalue of a matrix. Find all the eigenvalues of a matrix. Find all the eigenvectors of a matrix for a given eigenvalue. Diagonalize a matrix. Know when a matrix is diagonalizable.
22. 22. Eigenbusiness. Definition. Let A be an n × n matrix. The number λ is called an eigenvalue of A if there exists a nonzero vector x ∈ Rⁿ such that Ax = λx. (1) Every nonzero vector satisfying (1) is called an eigenvector of A associated with the eigenvalue λ.
23. 23. Example (Midterm II, Fall 2006, Problem 4). Let A = [4 −2; 1 1]. Problem: Is (2, 1) an eigenvector for A?
24. 24. Example (Midterm II, Fall 2006, Problem 4). Let A = [4 −2; 1 1]. Problem: Is (2, 1) an eigenvector for A? Solution. Use the definition of eigenvector: [4 −2; 1 1](2, 1) = (6, 3) = 3(2, 1). So the vector is an eigenvector corresponding to the eigenvalue 3.
25. 25. Let A = [4 −2; 1 1]. Problem: Is 0 an eigenvalue for A?
26. 26. Let A = [4 −2; 1 1]. Problem: Is 0 an eigenvalue for A? Solution. The number 0 is an eigenvalue for A if and only if the determinant of A − 0I = A is zero. But det A = 4 · 1 − 1 · (−2) = 6. So it’s not.
27. 27. Methods. To find the eigenvalues of a matrix A, find the determinant of A − λI. This will be a polynomial in λ (called the characteristic polynomial of A), and its roots are the eigenvalues. To find the eigenvector(s) of a matrix corresponding to an eigenvalue λ, do Gaussian elimination on A − λI.
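As a sanity check of this recipe, numpy's eigensolver can be applied to the matrix from the preceding example (an illustrative aside, not part of the deck):

```python
# det(A - λI) = (4 - λ)(1 - λ) + 2 = λ² - 5λ + 6 = (λ - 2)(λ - 3),
# so the eigenvalues should come out as 2 and 3.
import numpy as np

A = np.array([[4., -2.],
              [1.,  1.]])

eigenvalues = np.sort(np.linalg.eigvals(A).real)
print(eigenvalues)        # eigenvalues 2 and 3, matching the roots above
```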
28. 28. Diagonalization Procedure. Find the eigenvalues and eigenvectors. Arrange the eigenvectors in a matrix P and the corresponding eigenvalues in a diagonal matrix D. If you have “enough” eigenvectors so that the matrix P is square and invertible, the original matrix is diagonalizable and equal to PDP⁻¹.
29. 29. Example. Problem: Let A = [2 3; 2 1]. Diagonalize.
30. 30. Example. Problem: Let A = [2 3; 2 1]. Diagonalize. Solution. To find the eigenvalues, find the characteristic polynomial and its roots: |A − λI| = det [2−λ 3; 2 1−λ] = (2 − λ)(1 − λ) − 6 = λ² − 3λ − 4 = (λ + 1)(λ − 4). So the eigenvalues are −1 and 4.
31. 31. To find an eigenvector corresponding to the eigenvalue −1: A + I = [3 3; 2 2] ∼ [1 1; 0 0]. So (1, −1) is an eigenvector.
32. 32. To find an eigenvector corresponding to the eigenvalue 4: A − 4I = [−2 3; 2 −3] ∼ [1 −3/2; 0 0]. So (3, 2) is an eigenvector.
33. 33. Let P = [1 3; −1 2], so P⁻¹ = (1/5)[2 −3; 1 1]. Then A = PDP⁻¹ = [1 3; −1 2] [−1 0; 0 4] (1/5)[2 −3; 1 1].
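The factorization just found can be verified numerically; a numpy sketch (not in the original slides):

```python
# Check that P D P⁻¹ reconstructs A for the worked example.
import numpy as np

A = np.array([[2., 3.],
              [2., 1.]])
P = np.array([[1., 3.],
              [-1., 2.]])                 # columns: eigenvectors for -1 and 4
D = np.diag([-1., 4.])

reconstructed = P @ D @ np.linalg.inv(P)
print(np.allclose(reconstructed, A))      # True
```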
34. 34. The Spectral Theorem. Theorem (Baby Spectral Theorem). Suppose the n × n matrix A has n distinct real eigenvalues. Then A is diagonalizable.
35. 35. The Spectral Theorem. Theorem (Baby Spectral Theorem). Suppose the n × n matrix A has n distinct real eigenvalues. Then A is diagonalizable. Theorem (Spectral Theorem for Symmetric Matrices). Suppose the n × n matrix A is symmetric, that is, A = Aᵀ. Then A is diagonalizable. In fact, the eigenvectors can be chosen to be pairwise orthogonal with length one, which means that P⁻¹ = Pᵀ. Thus a symmetric matrix can be diagonalized as A = PDPᵀ.
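For symmetric matrices, numpy's `eigh` routine produces exactly the orthonormal eigenvectors the Spectral Theorem promises; the example matrix below is my own illustration, not from the deck.

```python
# For symmetric A, eigh returns orthonormal eigenvectors, so P⁻¹ = Pᵀ.
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])                      # symmetric: A = Aᵀ
eigenvalues, P = np.linalg.eigh(A)
D = np.diag(eigenvalues)

print(np.allclose(P.T @ P, np.eye(2)))       # True: P is orthogonal
print(np.allclose(P @ D @ P.T, A))           # True: A = P D Pᵀ
```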
36. 36. Outline. Rank and other Linear Algebra: Linear dependence; Rank. Eigenbusiness: Eigenvector and Eigenvalue; Diagonalization; The Spectral Theorem. Functions of several variables: Graphing/Contour Plots; Partial Derivatives. Differentiation: The Chain Rule; Implicit Differentiation. Optimization: Unconstrained Optimization; Constrained Optimization.
37. 37. Functions of several variables. Learning Objectives: identify functions, graphs, and contour plots; find partial derivatives of functions of several variables.
38. 38. Types of functions: linear, polynomial, rational, Cobb-Douglas, etc.
39. 39. Examples. Problem. In each of the following, find the domain and range of the function. Is it linear? polynomial? rational? algebraic? Cobb-Douglas? (a) f(x, y) = y − x; (b) f(x, y) = √(y − x); (c) f(x, y) = 4x² + 9y²; (d) f(x, y) = x² − y²; (e) f(x, y) = xy; (f) f(x, y) = y/x²; (g) f(x, y) = 1/(16 − x² − y²); (h) f(x, y) = √(9 − x² − y²); (i) f(x, y) = ln(x² + y²); (j) f(x, y) = e^−(x²+y²); (k) f(x, y) = arcsin(y − x); (l) f(x, y) = arctan(y/x).
40. 40. Graphing/Contour Plots. A function of two variables can be visualized by its graph, the surface (x, y, f(x, y)) in R³, or by a contour plot, a collection of level curves.
41. 41. Example. Graph and contour plot of f(x, y) = y − x.
42. 42. Example. Graph and contour plot of f(x, y) = y − x. [figure]
43. 43. Example. Graph and contour plot of f(x, y) = √(y − x).
44. 44. Example. Graph and contour plot of f(x, y) = √(y − x). [figure]
45. 45. Example. Graph and contour plot of f(x, y) = 4x² + 9y².
46. 46. Example. Graph and contour plot of f(x, y) = 4x² + 9y². [figure]
47. 47. Example. Graph and contour plot of f(x, y) = x² − y².
48. 48. Example. Graph and contour plot of f(x, y) = x² − y². [figure]
49. 49. Example. Graph and contour plot of f(x, y) = xy.
50. 50. Example. Graph and contour plot of f(x, y) = xy. [figure]
51. 51. Example. Graph and contour plot of f(x, y) = y/x².
52. 52. Example. Graph and contour plot of f(x, y) = y/x². [figure]
53. 53. Example. Graph and contour plot of f(x, y) = 1/(16 − x² − y²).
54. 54. Example. Graph and contour plot of f(x, y) = 1/(16 − x² − y²). [figure]
55. 55. Example. Graph and contour plot of f(x, y) = √(9 − x² − y²).
56. 56. Example. Graph and contour plot of f(x, y) = √(9 − x² − y²). [figure]
57. 57. Example. Graph and contour plot of f(x, y) = ln(x² + y²).
58. 58. Example. Graph and contour plot of f(x, y) = ln(x² + y²). [figure]
59. 59. Example. Graph and contour plot of f(x, y) = e^−(x²+y²).
60. 60. Example. Graph and contour plot of f(x, y) = e^−(x²+y²). [figure]
61. 61. Example. Graph and contour plot of f(x, y) = arcsin(y − x).
62. 62. Example. Graph and contour plot of f(x, y) = arcsin(y − x). [figure]
63. 63. Example. Graph and contour plot of f(x, y) = arctan(y/x).
64. 64. Example. Graph and contour plot of f(x, y) = arctan(y/x). [figure]
65. 65. The process To diﬀerentiate a function of several variables with respect to one of the variables, pretend that the others are constant.
66. 66. Examples. Example. Let f(x, y) = 3x + 2xy² − 2y⁴. Find both partial derivatives of f.
67. 67. Examples. Example. Let f(x, y) = 3x + 2xy² − 2y⁴. Find both partial derivatives of f. Solution. We have ∂f/∂x = 3 + 2y² and ∂f/∂y = 4xy − 8y³.
68. 68. Example. Let w = sin α cos β. Find both partial derivatives of w.
69. 69. Example. Let w = sin α cos β. Find both partial derivatives of w. Solution. We have ∂w/∂α = cos α cos β and ∂w/∂β = − sin α sin β.
70. 70. Example. Let f(u, v) = arctan(u/v). Find both partial derivatives of f.
71. 71. Example. Let f(u, v) = arctan(u/v). Find both partial derivatives of f. Solution. For this it’s important to remember the chain rule! ∂f/∂u = (1/(1 + (u/v)²)) · (1/v) and ∂f/∂v = (1/(1 + (u/v)²)) · (−u/v²). Another way to write this is ∂f/∂u = v/(u² + v²) and ∂f/∂v = −u/(u² + v²).
72. 72. Example. Let u = √(x1² + x2² + · · · + xn²). Find all the partial derivatives of u.
73. 73. Example. Let u = √(x1² + x2² + · · · + xn²). Find all the partial derivatives of u. Solution. We have a partial derivative for each index i, but luckily they’re symmetric, so each derivative is represented by ∂u/∂xi = (1/(2√(x1² + x2² + · · · + xn²))) · ∂(x1² + x2² + · · · + xn²)/∂xi = xi/√(x1² + x2² + · · · + xn²).
74. 74. Example. Let f(x, y) = 3x + 2xy² − 2y⁴. Find all the second derivatives.
75. 75. Example. Let f(x, y) = 3x + 2xy² − 2y⁴. Find all the second derivatives. Solution. ∂²f/∂x² = 0, ∂²f/∂x∂y = 4y, ∂²f/∂y∂x = 4y, and ∂²f/∂y² = 4x − 24y².
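The partials in these examples can be re-derived symbolically; a sympy sketch (an aside, not part of the slides):

```python
# Re-deriving the partials of f(x, y) = 3x + 2xy² - 2y⁴, including the
# equality of the two mixed partials.
from sympy import symbols, diff

x, y = symbols('x y')
f = 3*x + 2*x*y**2 - 2*y**4

fx = diff(f, x)            # 3 + 2y²
fy = diff(f, y)            # 4xy - 8y³
fxy = diff(fx, y)          # 4y
fyx = diff(fy, x)          # 4y: the mixed partials agree
print(fx, fy, fxy == fyx)
```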
76. 76. Tangent Planes Fact The tangent plane to z = f (x, y ) through (x0 , y0 , z0 = f (x0 , y0 )) has normal vector (f1 (x0 , y0 ), f2 (x0 , y0 ), −1) and equation f1 (x0 , y0 )(x − x0 ) + f2 (x0 , y0 )(y − y0 ) − (z − z0 ) = 0
77. 77. Tangent Planes Fact The tangent plane to z = f (x, y ) through (x0 , y0 , z0 = f (x0 , y0 )) has normal vector (f1 (x0 , y0 ), f2 (x0 , y0 ), −1) and equation f1 (x0 , y0 )(x − x0 ) + f2 (x0 , y0 )(y − y0 ) − (z − z0 ) = 0 or z = f (x0 , y0 ) + f1 (x0 , y0 )(x − x0 ) + f2 (x0 , y0 )(y − y0 )
78. 78. Tangent Planes. Fact. The tangent plane to z = f(x, y) through (x0, y0, z0 = f(x0, y0)) has normal vector (f1(x0, y0), f2(x0, y0), −1) and equation f1(x0, y0)(x − x0) + f2(x0, y0)(y − y0) − (z − z0) = 0, or z = f(x0, y0) + f1(x0, y0)(x − x0) + f2(x0, y0)(y − y0). This is the best linear approximation to f near (x0, y0). This is the first-degree Taylor polynomial (in two variables) for f.
79. 79. Outline. Rank and other Linear Algebra: Linear dependence; Rank. Eigenbusiness: Eigenvector and Eigenvalue; Diagonalization; The Spectral Theorem. Functions of several variables: Graphing/Contour Plots; Partial Derivatives. Differentiation: The Chain Rule; Implicit Differentiation. Optimization: Unconstrained Optimization; Constrained Optimization.
80. 80. Fact (The Chain Rule, Version I). When z = F(x, y) with x = f(t) and y = g(t), then z′(t) = F1(f(t), g(t))f′(t) + F2(f(t), g(t))g′(t), or dz/dt = (∂F/∂x)(dx/dt) + (∂F/∂y)(dy/dt).
81. 81. Fact (The Chain Rule, Version I). When z = F(x, y) with x = f(t) and y = g(t), then z′(t) = F1(f(t), g(t))f′(t) + F2(f(t), g(t))g′(t), or dz/dt = (∂F/∂x)(dx/dt) + (∂F/∂y)(dy/dt). We can generalize to more variables, too. If F is a function of x1, x2, . . . , xn, and each xi is a function of t, then dz/dt = (∂F/∂x1)(dx1/dt) + (∂F/∂x2)(dx2/dt) + · · · + (∂F/∂xn)(dxn/dt).
82. 82. Tree Diagrams for the Chain Rule. [Diagram: F at the top branches to x and y, with edges labeled ∂F/∂x and ∂F/∂y; each of x and y branches to t, with edges labeled dx/dt and dy/dt.] To differentiate with respect to t, find all “leaves” marked t. Going down each branch, chain (multiply) all the derivatives together. Then add up the result from each branch: dz/dt = dF/dt = (∂F/∂x)(dx/dt) + (∂F/∂y)(dy/dt).
83. 83. Fact (The Chain Rule, Version II). When z = F(x, y) with x = f(t, s) and y = g(t, s), then ∂z/∂t = (∂F/∂x)(∂x/∂t) + (∂F/∂y)(∂y/∂t) and ∂z/∂s = (∂F/∂x)(∂x/∂s) + (∂F/∂y)(∂y/∂s). [Diagram: F branches to x and y; each branches to t and s.]
84. 84. Example. Suppose z = xy², x = t + s, and y = t − s. Find ∂z/∂t and ∂z/∂s at (t, s) = (1/2, 1) in two ways: (i) by expressing z directly in terms of t and s before differentiating; (ii) by using the chain rule.
85. 85. Example. Suppose z = xy², x = t + s, and y = t − s. Find ∂z/∂t and ∂z/∂s at (t, s) = (1/2, 1) in two ways. Solution (i). We have z = (t + s)(t − s)² = s³ − ts² − t²s + t³. So ∂z/∂t = −s² − 2ts + 3t² and ∂z/∂s = 3s² − 2ts − t².
86. 86. Solution (ii). We have z = xy², x = t + s, y = t − s. So ∂z/∂t = (∂z/∂x)(∂x/∂t) + (∂z/∂y)(∂y/∂t) = y² · 1 + 2xy · 1 = (t − s)² + 2(t + s)(t − s), and ∂z/∂s = (∂z/∂x)(∂x/∂s) + (∂z/∂y)(∂y/∂s) = y² · 1 + 2xy · (−1) = (t − s)² − 2(t + s)(t − s). These should be the same as in the previous calculation.
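That the two methods agree can be confirmed symbolically; a sympy sketch (an aside, not from the slides):

```python
# Check that the chain-rule answer matches direct differentiation for
# z = xy², with x = t + s and y = t - s.
from sympy import symbols, diff, simplify

t, s = symbols('t s')
x, y = t + s, t - s

z_direct = x * y**2                        # substitute first, then differentiate
dz_dt_direct = diff(z_direct, t)

# Chain rule: ∂z/∂t = y²·(∂x/∂t) + 2xy·(∂y/∂t)
dz_dt_chain = y**2 * diff(x, t) + 2*x*y * diff(y, t)

print(simplify(dz_dt_direct - dz_dt_chain) == 0)   # True
```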
87. 87. Theorem (The Chain Rule, General Version). Suppose that u is a differentiable function of the n variables x1, x2, . . . , xn, and each xi is a differentiable function of the m variables t1, t2, . . . , tm. Then u is a function of t1, t2, . . . , tm and ∂u/∂ti = (∂u/∂x1)(∂x1/∂ti) + (∂u/∂x2)(∂x2/∂ti) + · · · + (∂u/∂xn)(∂xn/∂ti). In summation notation, ∂u/∂ti = Σⱼ₌₁ⁿ (∂u/∂xj)(∂xj/∂ti).
88. 88. Implicit Differentiation: The Big Idea. Fact. Along the level curve F(x, y) = c, the slope of the tangent line is given by dy/dx = −(∂F/∂x)/(∂F/∂y) = −F1(x, y)/F2(x, y).
89. 89. Tree diagram. [Diagram: F branches to x and to y, and y depends on x.] Differentiating F(x, y) = c along the curve gives ∂F/∂x + (∂F/∂y)(dy/dx) = 0.
90. 90. More than two variables. The basic idea is to close your eyes and use the chain rule. Example. Suppose a surface is given by F(x, y, z) = c. If this defines z as a function of x and y, find z_x and z_y.
91. 91. More than two variables. Example. Suppose a surface is given by F(x, y, z) = c. If this defines z as a function of x and y, find z_x and z_y. Solution. Setting F(x, y, z) = c and remembering z is implicitly a function of x and y, we get ∂F/∂x + (∂F/∂z)(∂z/∂x) = 0 ⟹ ∂z/∂x = −F_x/F_z, and ∂F/∂y + (∂F/∂z)(∂z/∂y) = 0 ⟹ ∂z/∂y = −F_y/F_z.
92. 92. Tree diagram. [Diagram: F branches to x, y, and z, and z depends on x and y.] Then ∂F/∂x + (∂F/∂z)(∂z/∂x) = 0 ⟹ ∂z/∂x = −F_x/F_z.
93. 93. Example (Problem 16.8.4). Problem. Let D = f(r, P) denote the demand for an agricultural commodity when the price is P and r is the producers’ total advertising expenditure. Let supply be given by S = g(w, P), where w is an index for how favorable the weather has been. Assume g_w(w, P) > 0. Equilibrium now requires f(r, P) = g(w, P). Assume that this equation defines P implicitly as a differentiable function of r and w. Compute ∂P/∂w and comment on its sign.
94. 94. Solution. We have f(r, P) − g(w, P) ≡ 0. Differentiating with respect to w (remembering P is a function of r and w) gives (∂f/∂P)(∂P/∂w) − ∂g/∂w − (∂g/∂P)(∂P/∂w) = 0.
95. 95. Answer. So ∂P/∂w = (∂g/∂w) / (∂f/∂P − ∂g/∂P). In this case, ∂f/∂P < 0 and ∂g/∂P > 0, and we assumed that ∂g/∂w > 0. So ∂P/∂w < 0, meaning the price decreases with improving weather.
96. 96. Outline. Rank and other Linear Algebra: Linear dependence; Rank. Eigenbusiness: Eigenvector and Eigenvalue; Diagonalization; The Spectral Theorem. Functions of several variables: Graphing/Contour Plots; Partial Derivatives. Differentiation: The Chain Rule; Implicit Differentiation. Optimization: Unconstrained Optimization; Constrained Optimization.
97. 97. Optimization. Learning Objectives: Find the critical points of a function defined on an open set (so unconstrained). Classify the critical points of a function. Find the critical points of a function restricted to a surface (constrained).
98. 98. Theorem (Fermat’s Theorem). Let f(x, y) be a function of two variables. If f has a local maximum or minimum at (a, b), and is differentiable at (a, b), then ∂f/∂x(a, b) = 0 and ∂f/∂y(a, b) = 0. As in one variable, we’ll call these points critical points.
99. 99. Theorem (The Second Derivative Test). Let f(x, y) be a function of two variables, and let (a, b) be a critical point of f. If (∂²f/∂x²)(∂²f/∂y²) − (∂²f/∂x∂y)² > 0 and ∂²f/∂x² > 0, the critical point is a local minimum. If (∂²f/∂x²)(∂²f/∂y²) − (∂²f/∂x∂y)² > 0 and ∂²f/∂x² < 0, the critical point is a local maximum. If (∂²f/∂x²)(∂²f/∂y²) − (∂²f/∂x∂y)² < 0, the critical point is a saddle point. All derivatives are evaluated at the critical point (a, b).
100. 100. Example. Problem. Find and classify the critical points of f(x, y) = 4xy − x⁴ − y⁴.
101. 101. Example. Problem. Find and classify the critical points of f(x, y) = 4xy − x⁴ − y⁴. Solution. We have ∂f/∂x = 4y − 4x³ and ∂f/∂y = 4x − 4y³. Both of these are zero when y = x³ and x = y³, so x⁹ = x. Since x⁹ − x = x(x⁸ − 1) = x(x⁴ + 1)(x² + 1)(x + 1)(x − 1), the real solutions are x = 0, x = 1, and x = −1. The corresponding y values are 0, 1, and −1. So the critical points are (0, 0), (1, 1), (−1, −1).
102. 102. The second derivatives are ∂²f/∂x² = −12x², ∂²f/∂x∂y = ∂²f/∂y∂x = 4, ∂²f/∂y² = −12y². So the Hessian is H(x, y) = 4[−3x² 1; 1 −3y²]. At (0, 0), the matrix is 4[0 1; 1 0], which has determinant < 0, so it’s a saddle point. At the other two points, the matrix is 4[−3 1; 1 −3], which has positive determinant and negative upper-left entry, so those points are local maxima.
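The Hessian computations above can be confirmed numerically; a numpy sketch (not part of the slides):

```python
# Classify the critical points of f(x, y) = 4xy - x⁴ - y⁴ via the Hessian.
import numpy as np

def hessian(x, y):
    # fxx = -12x², fxy = fyx = 4, fyy = -12y²
    return np.array([[-12*x**2, 4.0],
                     [4.0, -12*y**2]])

# (0, 0): det H = -16 < 0, a saddle point
assert np.linalg.det(hessian(0, 0)) < 0

# (1, 1) and (-1, -1): det H = 128 > 0 with fxx = -12 < 0, so local maxima
for (a, b) in [(1, 1), (-1, -1)]:
    H = hessian(a, b)
    assert np.linalg.det(H) > 0 and H[0, 0] < 0

print("classification confirmed")
```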
103. 103. Graph and contour plot of f(x, y) = 4xy − x⁴ − y⁴. [figure]
104. 104. Theorem (The Method of Lagrange Multipliers). Let f(x1, x2, . . . , xn) and g(x1, x2, . . . , xn) be functions of several variables. The critical points of the function f restricted to the set g = 0 are solutions to the equations ∂f/∂xi(x1, x2, . . . , xn) = λ ∂g/∂xi(x1, x2, . . . , xn) for each i = 1, . . . , n, together with g(x1, x2, . . . , xn) = 0. Note that this is n + 1 equations in the n + 1 variables x1, . . . , xn, λ.
105. 105. Problem. Find the critical points and values of f(x, y) = ax² + 2bxy + cy² subject to the constraint that x² + y² = 1.
106. 106. Problem. Find the critical points and values of f(x, y) = ax² + 2bxy + cy² subject to the constraint that x² + y² = 1. Solution. We have f_x = λg_x ⟹ 2ax + 2by = λ(2x) and f_y = λg_y ⟹ 2bx + 2cy = λ(2y). So the critical points happen when [a b; b c](x, y) = λ(x, y).
107. 107. The critical values are f(x, y) = (x y)[a b; b c](x, y) = (x y) · λ(x, y) = λ(x² + y²) = λ.
108. 108. The critical values are f(x, y) = (x y)[a b; b c](x, y) = λ(x² + y²) = λ. So: the critical points are eigenvectors! The critical values are eigenvalues!
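This conclusion can be illustrated numerically: for a symmetric matrix M, the constrained critical values of the quadratic form on the unit circle are exactly the eigenvalues of M, attained at its eigenvectors. The particular coefficients a = 2, b = 1, c = 3 below are my own example, not from the deck.

```python
# On the unit circle, the extreme values of xᵀMx are the extreme eigenvalues of M.
import numpy as np

M = np.array([[2., 1.],
              [1., 3.]])                        # symmetric matrix [a b; b c]

eigenvalues, eigenvectors = np.linalg.eigh(M)   # ascending eigenvalues, unit eigenvectors
for lam, v in zip(eigenvalues, eigenvectors.T):
    value = v @ M @ v                           # the constrained value f(v) at a critical point
    print(np.isclose(value, lam))               # True: critical values are eigenvalues

# Compare with a brute-force scan of the circle x² + y² = 1.
theta = np.linspace(0, 2*np.pi, 100001)
vals = 2*np.cos(theta)**2 + 2*np.cos(theta)*np.sin(theta) + 3*np.sin(theta)**2
print(np.isclose(vals.max(), eigenvalues[-1], atol=1e-6))   # True
```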