1. Database Systems (資料庫系統), November 26/28, 2007, Lecture #9
2. Announcement
  - Hand in your assignment #4 on Wed.
3. African Proverb
  - Every morning, a zebra wakes up;
  - it knows that it must run faster than the fastest lion, or it will be eaten.
  - Every morning, a lion wakes up;
  - it knows that it must run faster than the slowest zebra, or it will starve to death.
  - It does not matter whether you are a zebra or a lion:
  - when the sun comes up, you had better run fast.
4. Hash-Based Indexing (Chapter 11)
5. Introduction
  - Hash-based indexes are best for equality selections; they cannot support range searches.
    - Equality selections are useful for natural join operations.
6. Review: Tree Index
  - ISAM vs. B+ trees.
    - What is the main difference between them?
      - Static vs. dynamic structure.
    - What is the tradeoff between them?
      - Graceful table growth/shrink vs. concurrency performance.
7. Similar Tradeoff for Hashing
  - Is it possible to exploit the same static-vs-dynamic tradeoff for better performance?
  - One static hashing technique.
  - Two dynamic hashing techniques:
    - Extendible Hashing
    - Linear Hashing
8. Basic Static Hashing
  - The number of primary pages is fixed; they are allocated sequentially and never de-allocated; overflow pages are used if needed.
  - h(k) mod N = the bucket to which the data entry with key k belongs (N = number of buckets).
    - N = 2^i (i.e., the bucket number is given by the i least significant bits of the hash value).
  [Diagram: a key is hashed by h(key) mod N to one of N primary bucket pages (0 .. N-1), with overflow pages chained to full buckets.]
9. Static Hashing (Contd.)
  - Buckets contain data entries.
  - The hash function takes the search key field of record r and distributes values over the range 0 .. N-1.
    - h(key) = (a * key + b) usually works well.
    - a and b are constants; a lot is known about how to tune h.
  - Cost for insertion/deletion/search: 2/2/1 disk page I/Os, assuming no overflow chains (a small sketch of the scheme follows this slide).
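Roughly, the static scheme looks like the following Python sketch. The constants a and b, the page capacity of 4 entries, and the sample keys are illustrative assumptions, not values from the lecture:

```python
# Static hashing sketch: a fixed number of primary buckets, overflow via chaining.
# The constants a, b and the 4-entry page capacity are assumptions for illustration.

N = 4                      # number of primary buckets, fixed for the life of the file
PAGE_CAPACITY = 4          # data entries per primary page (assumed)
a, b = 31, 7               # tuning constants for h(key) = a*key + b

buckets = [[] for _ in range(N)]      # primary pages
overflow = [[] for _ in range(N)]     # one overflow chain per bucket (simplified)

def h(key):
    return a * key + b

def insert(key):
    i = h(key) % N                    # bucket number = hash value mod N
    if len(buckets[i]) < PAGE_CAPACITY:
        buckets[i].append(key)        # one read + one write of the primary page
    else:
        overflow[i].append(key)       # long chains here are the performance problem

def search(key):
    i = h(key) % N
    return key in buckets[i] or key in overflow[i]   # scans the chain linearly

for k in [3, 5, 97, 4, 6, 0, 11]:
    insert(k)
print(search(97), search(42))         # True False
```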
10. What Is the Performance Problem for Static Hashing?
  - Long overflow chains can develop and degrade performance.
    - Why poor performance? A lookup must scan through the overflow chain linearly.
  - How to fix this problem?
    - Extendible and Linear Hashing: dynamic techniques that fix this problem.
11. Extendible Hashing
  - Simple solution: remove the overflow chain.
  - When a bucket (primary page) becomes full:
    - Re-organize the file by adding (doubling) buckets.
  - What is the cost concern in doubling buckets?
    - High cost: rehashing all entries means reading and writing every page, which is expensive!
  [Diagram: doubling the primary bucket pages from N buckets (0 .. N-1) to 2N buckets (0 .. 2N-1) and rehashing all entries.]
12. How to Minimize the Rehashing Cost?
  - Use another level of indirection:
    - A directory of pointers to buckets.
  - Insert 0 (00):
    - Doubling the buckets now only doubles the directory (no rehashing of all entries).
    - Split just the bucket that overflowed!
  - What is the cost here?
  [Diagram: directory entries 00, 01, 10, 11 pointing to Buckets A-D; the overflowing bucket is split and only the directory is doubled.]
13. Extendible Hashing
  - The directory is much smaller than the file, so doubling it is much cheaper. Only one page of data entries is split.
  - How to adjust the hash function?
    - Before doubling the directory: h(r) maps to buckets 0 .. N-1.
    - After doubling the directory: h(r) maps to 0 .. 2N-1 (the range is doubled).
14. Example
  - The directory is an array of size 4.
  - What information does this hash data structure maintain?
    - The current number of bits used by the hash function:
      - Global depth.
    - The number of bits used to hash entries into a specific bucket:
      - Local depth.
    - Can global depth < local depth? (A lookup sketch follows this slide.)
  [Diagram: directory entries 00, 01, 10, 11 (global depth 2) pointing to data pages Bucket A (4*, 12*, 32*, 16*), Bucket B (1*, 5*, 21*, 13*), Bucket C (10*), and Bucket D (15*, 7*, 19*), each with local depth 2.]
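As a rough illustration of how the global depth drives the directory lookup, here is a small Python sketch. The identity hash and the hard-coded bucket contents mirror the example above, but they are assumptions rather than the lecture's code:

```python
# Extendible hashing lookup sketch: use the last `global_depth` bits of h(key)
# to index the directory, then follow the pointer to the bucket (one more I/O).

global_depth = 2
bucket_A = [4, 12, 32, 16]   # local depth 2
bucket_B = [1, 5, 21, 13]    # local depth 2
bucket_C = [10]              # local depth 2
bucket_D = [15, 7, 19]       # local depth 2
directory = [bucket_A, bucket_B, bucket_C, bucket_D]   # indexed by 00, 01, 10, 11

def h(key):
    return key                  # identity hash, as in the lecture's example

def lookup(key):
    idx = h(key) & ((1 << global_depth) - 1)   # least significant global_depth bits
    return directory[idx]

print(lookup(13))   # last 2 bits of 13 (1101) are 01 -> Bucket B
```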
15. Insert 20 (10100): Double Directory
  - Know the difference between doubling the directory and splitting a bucket.
  - Double directory:
    - Increment the global depth.
    - Rehash bucket A.
    - Increment the local depth. (Why track the local depth?)
  [Diagram: before, directory 00-11 with global depth 2; after inserting 20, directory 000-111 with global depth 3, and Bucket A split into Bucket A (32*, 16*) and Bucket A2, its split image (4*, 12*, 20*), both with local depth 3.]
16. Insert 9 (1001): Only Split Bucket
  - Only split the bucket:
    - Rehash bucket B.
    - Increment its local depth.
  [Diagram: Bucket B (1*, 5*, 21*, 13*) is split into Bucket B (1*, 9*) and Bucket B2, its split image (5*, 21*, 13*), both with local depth 3; the directory (global depth 3) is not doubled.]
17. Points to Note
  - What is the global depth of the directory?
    - The maximum number of bits needed to tell which bucket an entry belongs to.
  - What is the local depth of a bucket?
    - The number of bits used to determine whether an entry belongs to this bucket.
  - When does a bucket split cause directory doubling?
    - When, before the insert, the bucket is full and its local depth equals the global depth.
  - How to efficiently double the directory?
    - The directory is doubled by copying it over and "fixing" the pointer to the split image page.
    - Note that this works only if the directory is indexed by the least significant bits.
18. Directory Doubling
  - Why use the least significant bits in the directory?
    - It allows doubling the directory by simply copying it! (An insertion sketch follows this slide.)
  [Diagram: doubling a directory from global depth 2 to 3, contrasting least-significant-bit indexing (00-11 copied to 000-111) with most-significant-bit indexing, and showing the split buckets.]
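Putting slides 17 and 18 together, a minimal Python sketch of extendible-hashing insertion under simplifying assumptions (identity hash, four entries per bucket, least-significant-bit directory, and no retry when a split still overflows) might look like this:

```python
# Extendible hashing insert sketch.  The directory is doubled (by copying) only
# when the overflowing bucket's local depth equals the global depth; otherwise
# only the bucket is split and the split-image pointers are fixed.

CAPACITY = 4                       # entries per bucket (assumed)

class Bucket:
    def __init__(self, depth):
        self.depth = depth         # local depth
        self.keys = []

global_depth = 1
directory = [Bucket(1), Bucket(1)]

def insert(key):
    global global_depth, directory
    idx = key & ((1 << global_depth) - 1)
    bucket = directory[idx]
    if len(bucket.keys) < CAPACITY:
        bucket.keys.append(key)
        return
    if bucket.depth == global_depth:      # doubling needed: copy the directory over
        directory = directory * 2         # entry i and i + 2^old_depth share buckets
        global_depth += 1
    # split the overflowing bucket using one more bit (creating its split image)
    bucket.depth += 1
    image = Bucket(bucket.depth)
    old = bucket.keys + [key]
    bucket.keys = []
    high_bit = 1 << (bucket.depth - 1)
    for k in old:
        (image if k & high_bit else bucket).keys.append(k)
    # fix the directory pointers that should now reference the split image
    for i in range(len(directory)):
        if directory[i] is bucket and i & high_bit:
            directory[i] = image
    # simplification: a skewed split could still overflow and need another split

for k in [4, 12, 32, 16, 1, 5, 21, 13, 10, 15, 7, 19, 20]:
    insert(k)
print(global_depth)              # 3, as on the "Insert 20" slide
print(directory[0b100].keys)     # [4, 12, 20], the split image Bucket A2
```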
19. Directory Copying via Insertion of 20 (10100)
  [Diagram: the same insertion of 20, drawn with the doubled directory (000-111) obtained by copying the original directory (00-11) and then fixing the pointer to the split image Bucket A2.]
20. Comments on Extendible Hashing
  - If the directory fits in memory, an equality search is answered with one disk access; otherwise two.
  - What is the problem with extendible hashing?
    - Consider what happens if the distribution of hash values is skewed (concentrated on a few buckets).
    - The directory can grow large.
  - Can you come up with one insertion that leads to multiple splits?
21. Skewed Data Distribution (Multiple Splits)
  - Assume each bucket holds two data entries.
  - Insert 2 (binary 10): how many directory doublings?
  - Insert 16 (binary 10000): how many directory doublings?
  - How to solve this data skew problem?
  [Diagram: initial state with global depth 1 and a bucket of local depth 1 containing 0* and 8*.]
22. Insert 2 (10)
  [Diagram: inserting 2 doubles the directory once, from global depth 1 to 2; bucket 00 holds 0* and 8*, bucket 10 holds 2*.]
23. Insert 16 (10000)
  [Diagram: inserting 16 forces repeated directory doublings, a single insertion causing multiple splits, because 0, 8, and 16 agree on their low-order bits.]
24. Comments on Extendible Hashing
  - Delete: if the removal of a data entry makes a bucket empty, the bucket can be merged with its "split image". If each directory element points to the same bucket as its split image, the directory can be halved (see the sketch below).
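The halving condition can be tested directly on the directory. A small sketch, with placeholder bucket objects standing in for real pages (the function names are mine, not the lecture's):

```python
# Delete-side check: the directory can be halved when every entry points to the
# same bucket as its split image (entry i and entry i + half, LSB indexing).

def can_halve(directory):
    half = len(directory) // 2
    return half >= 1 and all(directory[i] is directory[i + half] for i in range(half))

def shrink(directory):
    while can_halve(directory):
        directory = directory[:len(directory) // 2]
    return directory

A, B = ["bucket A"], ["bucket B"]          # toy stand-ins for bucket pages
print(len(shrink([A, B, A, B])))           # 2: the 4-entry directory halves once
print(can_halve([A, B, A, A]))             # False: entry 1 and its split image differ
```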
25. Delete 10*
  [Diagram: deleting 10* empties Bucket C, which is merged with its split image (local depth drops back to 1); the directory keeps global depth 2, with two entries now pointing to the merged bucket.]
26. Delete 15*, 7*, 19*
  [Diagram: deleting 15*, 7*, and 19* empties the remaining split image, which is merged away; every directory element now points to the same bucket as its split image, so the directory is halved (global depth 2 to 1).]
27. Linear Hashing (LH)
  - Another dynamic hashing scheme, an alternative to Extendible Hashing.
  - What are the problems in static/extendible hashing?
    - Static hashing: long overflow chains.
    - Extendible hashing: data skew causing a large directory.
  - Is it possible to come up with a more balanced solution?
    - Shorter overflow chains than static hashing.
    - No directory, unlike extendible hashing.
      - How do you get rid of the directory (indirection)?
28. Linear Hashing (LH)
  - Basic idea:
    - Pages are split when overflows occur, but not necessarily the page that overflowed.
    - Splitting occurs in turn, in a round-robin fashion:
      - one by one, from the first bucket to the last bucket.
  - Use a family of hash functions h0, h1, h2, ... (a sketch of one such family follows this slide).
    - Each function's range is twice that of its predecessor.
    - When all the pages at one level (the current hash function) have been split, the next level's function is applied.
    - Splitting occurs gradually.
    - Primary pages are allocated in order and consecutively.
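One common way to realize such a family (assumed here; the slide does not spell it out) is h_i(key) = key mod (2^i * N0), so each level's range doubles its predecessor's:

```python
# Family of hash functions for linear hashing: h_i(key) = key mod (2**i * N0).
# h_{i+1} refines h_i by one more low-order bit.

N0 = 4                      # initial number of buckets

def h(key, level):
    return key % (N0 * (2 ** level))

print(h(9, 0), h(9, 1))     # 1 at level 0, 1 at level 1
print(h(36, 0), h(36, 1))   # 0 at level 0, 4 at level 1 (the split image bucket)
```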
29. Linear Hashing Verbal Algorithm
  - Initial stage:
    - The initial level distributes entries into N0 buckets.
    - Call the hash function that performs this h0.
30. Linear Hashing Verbal Algorithm
  - Splitting buckets (the resulting address rule is sketched after this slide):
    - If a bucket overflows, its primary page is chained to an overflow page (same as in static hashing).
    - Also, when a bucket overflows, some bucket is split:
      - The first bucket to be split is the first bucket in the file (not necessarily the bucket that overflows).
      - The next bucket to be split is the second bucket in the file, and so on, until the Nth has been split.
      - When buckets are split, their entries (including those on overflow pages) are redistributed using h1.
    - To access split buckets, the next-level hash function (h1) is applied.
    - h1 maps entries to 2*N0 (= N1) buckets.
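The resulting address rule, sketched in Python; N0, level, and next_ stand for the file's current state, and the modulo hash family is the one sketched above (both are assumptions):

```python
# Which bucket does a key live in?  Buckets before `next_` have already been
# split this round, so they must be addressed with the next-level hash function.

N0 = 4          # buckets at level 0
level = 0       # current level
next_ = 1       # first unsplit bucket (bucket 0 has already been split)

def h(key, lvl):
    return key % (N0 * (2 ** lvl))

def bucket_of(key):
    b = h(key, level)
    if b < next_:                 # this bucket was already split in this round...
        b = h(key, level + 1)     # ...so use the finer hash function
    return b

print(bucket_of(64))   # h0 gives 0, which is < next, so h1 is used -> bucket 0
print(bucket_of(36))   # h0 gives 0 -> h1 gives 4, the split image bucket
print(bucket_of(6))    # h0 gives 2, not yet split -> bucket 2
```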
31. Linear Hashing Example
  - Initially, the level equals 0 and N0 equals 4 (three entries fit on a page).
  - The range of h0 is 4 buckets.
  - Note that next indicates which bucket is to be split next (round robin).
  - Now consider what happens when 9 (1001) is inserted (it will not fit in the second bucket).
  [Diagram: four buckets addressed by h0: bucket 00 (64, 36), bucket 01 (1, 17, 5), bucket 10 (6), bucket 11 (31, 15); next points to the first bucket.]
32. Insert 9 (1001)
  [Diagram: 9 is placed on an overflow page of bucket 01; the bucket pointed to by next (bucket 00, holding 64 and 36) is split using h1 into buckets 000 (64) and 100 (36), and next advances to bucket 01.]
33. Linear Hashing Example 2
  - An overflow page is chained to the primary page to contain the inserted value.
  - Note that the split page is not necessarily the overflowing page (round robin).
  - If h0 maps a value to a bucket between zero and next - 1 (just the first page in this case), h1 must be used to insert the new entry.
  - Note how the new page falls naturally into the sequence as the fifth page.
  - The page indicated by next is split (the first one).
  - next is incremented.
  [Diagram: buckets 000 (64, h1), 01 (1, 17, 5 plus overflow 9, h0), 10 (6, h0), 11 (31, 15, h0), and 100 (36, h1); next points to bucket 01.]
34. Insert 8 (1000), 7 (111), 18 (10010), 14 (1110)
  [Diagram: state of the file after these insertions; no bucket overflows, so there is no split and next still points to bucket 01.]
35. Before Insert 11 (1011)
  [Diagram: the same state as the previous slide, just before inserting 11.]
36. After Insert 11 (1011)
  - Which hash function (h0 or h1) is used when searching for 9? How about when searching for 18?
  [Diagram: 11 overflows bucket 11, so the bucket pointed to by next (bucket 01) is split using h1 into buckets 001 (1, 17, 9) and 101 (5); next advances to bucket 10.]
37. Linear Hashing
  - Insert 32, 16.
  - Insert 10, 13, 23.
  - After the 2nd split, the base level is 1 (N1 = 8); use h1.
  - Subsequent splits will use h2 for entries in buckets between the first bucket and next - 1.
  [Diagram: the resulting file, labeling each bucket with the hash function (h0 or h1) used to address it and marking the successive positions of next after each split.]
38. Notes for Linear Hashing
  - Why does linear hashing not need a directory, while extendible hashing does?
    - Consider finding the i-th bucket.
    - In linear hashing, new buckets are created in order (sequentially, i.e., round robin), so the i-th bucket's location can be computed directly.
  - Level progression (sketched below):
    - Once all Ni buckets of the current level i have been split, the hash function hi is replaced by hi+1.
    - The splitting process starts again at the first bucket, and hi+2 is applied to find entries in split buckets.
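A hedged sketch of the split step and the level progression described above; the in-memory bucket lists, the omission of overflow pages, and the split-once-per-call policy are simplifications of mine:

```python
# Linear hashing split step: always split the bucket pointed to by `next_`,
# redistribute its entries with the next-level hash, then advance `next_`.
# When every bucket of the current level has been split, bump the level.

N0 = 4
level = 0
next_ = 0
buckets = [[] for _ in range(N0)]      # primary pages only; overflow pages omitted

def h(key, lvl):
    return key % (N0 * (2 ** lvl))

def split_next():
    global level, next_
    buckets.append([])                         # the new bucket at the end of the file
    old = buckets[next_]
    buckets[next_] = []
    for k in old:                              # redistribute with the finer hash
        buckets[h(k, level + 1)].append(k)
    next_ += 1
    if next_ == N0 * (2 ** level):             # all buckets of this level are split:
        level += 1                             # start a new round at the first bucket
        next_ = 0

for k in [64, 36, 1, 17, 5, 6, 31, 15]:
    buckets[h(k, level)].append(k)
split_next()                                   # the split triggered by inserting 9
print(buckets, next_)   # bucket 0 keeps 64, new bucket 4 gets 36, next_ is now 1
```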
39. LH Described as a Variant of EH
  - Can you describe linear hashing using extendible hashing (EH)?
    - Begin with an EH index whose directory has N elements.
    - Use overflow pages and split buckets round-robin.
    - The first split is at bucket 0. (Imagine the directory being doubled at this point.) But directory elements <1, N+1>, <2, N+2>, ... still point to the same buckets, so we need only create directory element N, which now differs from element 0.
      - When bucket 1 splits, create directory element N+1, and so on.
  - So the directory can double gradually. Also, primary bucket pages are created in order. If they are allocated in sequence too (so that finding the i-th one is easy), we actually don't need a directory at all. Voila, LH.
41. The New World is Flat
  - With computers and networks, the playing field is being leveled across the world.
  - There is no end to what can be done by whom:
    - Before: manufacturing products (Made in Taiwan, Made in China).
    - Now: anything, including the service industry.
  - This is called outsourcing.
  - The rule of the capitalist economy:
    - Work goes to the place where it can be done most cheaply and efficiently.
42. Outsourcing Examples
  - Customer services moving to call centers in India:
    - Credit cards, banks, insurance, Microsoft, ...
  - Radiologists in India and Australia.
  - E-tutoring.
  - JetBlue and homesourcing.
  - Tax preparation outsourcing.
  - McDonald's flat fries and burgers.
  - Kenichi Ohmae (大前研一).
43. Outsourcing and ME in Taiwan?
  - So anyone on this planet may be replaced by anyone else on this planet who is cheaper and more efficient.
    - Yes, for any job: high or low tech, manufacturing or service.
    - Scary? Consider the massive number of smart and cheap workers in China and India.
  - Oops ... this is happening NOW, here in Taiwan!
  - So one can also compete or collaborate globally with anyone on this planet -> opportunity.