Introduction to Algorithms, Sections 5.1-5.2

Study-session notes for Chapter 5 of Introduction to Algorithms.

PPT and Keynote versions are also available at:
http://tniky1.com/study/


  1. Title: Introduction to Algorithms, Sections 5.1-5.2. 2010 CS study session, by tniky1. Copyright © 2010 tniky1. All rights reserved.
  2. (No slide text recovered.)
  3. Outline of Chapter 5 (pp. 87-113):
     • 5.1 The hiring problem
     • 5.2 Indicator random variables
     • 5.3 Randomized algorithms
     • 5.4 Probabilistic analysis and further uses of indicator random variables
     This talk covers Sections 5.1 and 5.2.
  4. The hiring problem (p87):
     • You need a new office assistant, and an employment agency sends you one candidate per day; there are n candidates in total.
     • You interview each candidate and must decide immediately whether to hire.
     • You always want the best assistant so far, so whenever a candidate is better than the current assistant, you fire the current assistant and hire the new candidate.
     • Interviewing is cheap; hiring (firing and replacing) is expensive.
     • Question: what does this strategy cost?
  5. The HIRE-ASSISTANT algorithm (p88). Candidates 1, 2, 3, ..., n are interviewed in order; best is the index of the best candidate seen so far:
     HIRE-ASSISTANT(n)
     1. best ← 0                      ▹ candidate 0 is a least-qualified dummy
     2. for i ← 1 to n                ▹ interview cost Ci, paid n times
     3.   do interview candidate i
     4.      if candidate i is better than candidate best
     5.        then best ← i
     6.             hire candidate i  ▹ hiring cost Ch, paid m times
     Total cost: O(nCi + mCh), where m is the number of candidates hired.
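The slide's pseudocode translates directly to running code. Below is a minimal Python sketch of the same procedure; the numeric quality scores, function name, and cost constants are illustrative assumptions, not from the slides:

```python
def hire_assistant(candidates, ci=1, ch=100):
    """CLRS HIRE-ASSISTANT: interview candidates in the given order,
    hiring whenever the current candidate beats the best seen so far.
    candidates: quality scores, higher is better (assumed distinct).
    Returns (number of hires, total cost)."""
    best = float("-inf")   # plays the role of the dummy candidate 0
    hires = 0
    cost = 0
    for quality in candidates:
        cost += ci                 # every candidate is interviewed
        if quality > best:         # strictly better than the best so far
            best = quality
            hires += 1
            cost += ch             # hiring is the expensive step
    return hires, cost
```

On the worst-case input [1, 2, 3, 4, 5] every candidate is hired (5 hires); on [5, 4, 3, 2, 1] only the first is.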
  6. Worst case: the candidates arrive in increasing order of quality, so every one of the n candidates is hired and the cost is O(nCh).
     • To say anything about the average case we need an assumption about the input: assume the candidates arrive in a uniformly random order, i.e., each of the n! permutations of their ranks 1..n is equally likely. (pp. 88-89)
  7. Indicator random variables (p90): a tool for converting probabilities into expectations, used here to analyze the expected number of hires m.
     • Given a sample space and an event A, the indicator random variable associated with A is X_A = I{A}, which is 1 if A occurs and 0 if A does not occur.
     • For the hiring problem we will use X_i = I{candidate i is hired}.
  8. Example: the expected number of heads in n flips of a fair coin is n/2, since each flip comes up heads with probability 1/2. Why?
     • E[X_A] = Pr{A}: the expected value of the indicator random variable of an event A equals the probability that A occurs (Lemma 5.1, below).
     • Let X_i = I{the ith flip comes up heads}. Then E[X_i] = Pr{ith flip is heads} = 1/2. (p91)
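The identity E[X_A] = Pr{A} is easy to sanity-check empirically. A small Monte Carlo sketch (the function name and trial count are arbitrary) estimating E[X_A] for the event A = "a fair coin comes up heads":

```python
import random

def estimate_indicator_expectation(trials=100_000):
    """Empirically check Lemma 5.1 for a fair coin flip:
    the indicator X_A = I{heads} should average to Pr{heads} = 1/2."""
    total = 0
    for _ in range(trials):
        heads = random.random() < 0.5   # event A: the flip comes up heads
        total += 1 if heads else 0      # add the indicator X_A
    return total / trials               # empirical E[X_A], close to 0.5

print(estimate_indicator_expectation())
```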
  9. Computing the expected number of heads (pp. 91-92). Let X be the number of heads in n flips, so X = Σ_{i=1}^{n} X_i, where X_i = I{the ith flip is heads} (each X_i is 0 or 1) and E[X_i] = Pr{ith flip is heads} = 1/2. We wish to compute the expected number of heads, so we take the expectation of both sides:
     E[X] = E[Σ_{i=1}^{n} X_i]
          = Σ_{i=1}^{n} E[X_i]    (by linearity of expectation, E[X+Y] = E[X] + E[Y], equation (C.20))
          = Σ_{i=1}^{n} 1/2
          = n/2 .
     Linearity of expectation holds even when there is dependence among the random variables, which makes indicator random variables a powerful analytical technique. Compared with the method used in equation (C.36), they greatly simplify the calculation.
  10. Lemma 5.1 (p91). Given a sample space S and an event A in the sample space S, let X_A = I{A}. Then E[X_A] = Pr{A}.
     Proof: By the definition of an indicator random variable and the definition of expected value,
     E[X_A] = E[I{A}] = 1 · Pr{A} + 0 · Pr{Ā} = Pr{A},
     where Ā denotes S − A, the complement of A.
     For a single flip of a fair coin: E[X_H] = 1 · (1/2) + 0 · (1/2) = 1/2, so the expected number of heads obtained by one flip is 1/2.
  11. Back to hiring: the expected number of hires (pp. 92-93, p. 324).
     • X_i = I{candidate i is hired}. Candidate i is hired exactly when i is better than each of candidates 1 through i−1; since the first i candidates appear in random order, each is equally likely to be the best so far, so E[X_i] = Pr{candidate i is hired} = 1/i.
     • With X = Σ_{i=1}^{n} X_i (each X_i is 0 or 1):
       E[X] = E[Σ_{i=1}^{n} X_i]    (by equation (5.3))
            = Σ_{i=1}^{n} E[X_i]    (by linearity of expectation)
            = Σ_{i=1}^{n} 1/i       (by equation (5.4))
            = ln n + O(1)           (by equation (A.7)) .   (5.6)
     • Approximation by integrals (A.12): for a monotonically decreasing f, ∫_m^{n+1} f(x) dx ≤ Σ_{k=m}^{n} f(k) ≤ ∫_{m−1}^{n} f(x) dx. Lower bound: Σ_{k=1}^{n} 1/k ≥ ∫_1^{n+1} dx/x = ln(n+1). Upper bound: Σ_{k=2}^{n} 1/k ≤ ∫_1^{n} dx/x = ln n, which yields Σ_{k=1}^{n} 1/k ≤ ln n + 1.
     • Even though we interview n people, on average we actually hire only approximately ln n of them.
     Lemma 5.2: Assuming that the candidates are presented in a random order, algorithm HIRE-ASSISTANT has an average-case total hiring cost of O(Ch ln n).
     Proof: The bound follows immediately from the definition of the hiring cost and equation (5.6). This is a significant improvement over the worst-case hiring cost of O(nCh).
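The ln n + O(1) bound can likewise be checked by simulation. The sketch below runs HIRE-ASSISTANT's hiring rule on random permutations and compares the average number of hires with the harmonic number H_n = Σ 1/i (names and trial counts are illustrative):

```python
import math
import random

def average_hires(n, trials=10_000):
    """Average number of hires made by HIRE-ASSISTANT over random
    permutations of n distinct candidates."""
    total = 0
    for _ in range(trials):
        ranks = list(range(n))
        random.shuffle(ranks)      # each of the n! orders equally likely
        best, hires = -1, 0
        for r in ranks:
            if r > best:           # candidate beats everyone seen so far
                best, hires = r, hires + 1
        total += hires
    return total / trials

n = 100
h_n = sum(1 / i for i in range(1, n + 1))
print(average_hires(n), h_n)               # the two agree closely
print(math.log(n + 1), math.log(n) + 1)    # ln(n+1) <= H_n <= ln n + 1
```

For n = 100, H_n ≈ 5.19, comfortably between ln 101 ≈ 4.62 and ln 100 + 1 ≈ 5.61.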
  12. Putting it together (p93):
     HIRE-ASSISTANT(n)
     1. best ← 0
     2. for i ← 1 to n                ▹ interview cost Ci, paid n times
     3.   do interview candidate i
     4.      if candidate i is better than candidate best
     5.        then best ← i
     6.             hire candidate i  ▹ hiring cost Ch, paid m times
     Total cost O(nCi + mCh) = O(mCh) when Ci << Ch; under the random-order assumption E[m] = ln n + O(1), so the expected hiring cost is O(Ch ln n).
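Slide 12's bound relies on the candidates arriving in random order. CLRS Section 5.3 (previewed in the outline but not covered by this deck) enforces that assumption by permuting the input before running the algorithm; a minimal sketch, with illustrative cost constants, follows:

```python
import random

def randomized_hire_assistant(candidates, ci=1, ch=100):
    """Shuffle the candidates first (random.shuffle performs a uniform
    Fisher-Yates shuffle in CPython), so the expected number of hires
    is ln n + O(1) for every input order, not just random ones."""
    order = list(candidates)
    random.shuffle(order)          # each permutation equally likely
    best, cost = float("-inf"), 0
    for quality in order:
        cost += ci
        if quality > best:
            best = quality
            cost += ch             # expected number of hires: ln n + O(1)
    return cost
```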
  13. (No slide text recovered.)
  14. (No slide text recovered.)
  15. Recap: the Hire-Assistant problem and its probabilistic analysis.