
# 2009 CSBB LAB New Student Orientation

• We usually use a matrix to hold the costs of the edges in the bipartite graph. First we need a matrix of the costs of the workers doing the jobs.
• Let us now talk about a more sophisticated data structure: range trees. The 1-D case is straightforward; even a sorted list of the points would suffice. But sorting would not generalize to higher dimensions, so we use binary trees instead. Build a perfectly balanced binary tree on the sorted list of points. Input points are stored in the leaves (all leaves are linked in a list), and each internal node stores the highest value in its left subtree. Comparing the query boundary with this value helps us reach the first point falling in the query. Consider the following example… Query time is O(log n + k) for the reporting case, and O(log n) for counting.

1. 1. CSBB LAB New Student Orientation: Fundamentals of Computer Science. Speaker: 黃智沂, 2nd-year Ph.D. student
2. 2. Acknowledgements <ul><li>My thanks to the many slide authors on the Internet; these slides were humbly compiled from their work. </li></ul>
3. 3. Why cover basic computer science? <ul><li>Not everyone here comes from a CS background </li></ul><ul><li>Even CS people can have fuzzy concepts </li></ul><ul><li>These topics come up frequently in our discussions </li></ul><ul><li>They may turn out to be useful </li></ul><ul><li>This is academia </li></ul><ul><li>And you will all be attending the </li></ul><ul><ul><li>LEAST Seminar . </li></ul></ul>
4. 4. Quotes from Professor 李家同 <ul><li>Fundamentals matter most; do not always chase after the hardest topics. </li></ul><ul><li>For a graduate student, the default is to be at school doing research 24 hours a day. </li></ul>
5. 5. Outline <ul><li>Basic concepts of algorithm analysis </li></ul><ul><li>Basic concepts of the theory of computation </li></ul><ul><li>Common classes of algorithms </li></ul><ul><li>Biological problems and applications: </li></ul><ul><ul><li>Sequence alignment </li></ul></ul><ul><ul><li>Biological networks and graph theory </li></ul></ul><ul><ul><li>Artificial intelligence and machine learning </li></ul></ul><ul><ul><li>Advanced data structures </li></ul></ul>
6. 6. What is an algorithm? <ul><li>Characteristics of an algorithm </li></ul><ul><ul><li>Input: an algorithm takes zero or more inputs. </li></ul></ul><ul><ul><li>Output: an algorithm produces one or more outputs, the results of its computation. </li></ul></ul><ul><ul><li>Definiteness: the description of an algorithm must be unambiguous, so that its actual execution precisely matches what is required or expected; usually the result is required to be deterministic. </li></ul></ul><ul><ul><li>Finiteness: an algorithm is a sequence of operations that any system can simulate, and it must finish its task within a finite number of steps. </li></ul></ul><ul><ul><li>Effectiveness (also called feasibility): every operation described must be realizable by a finite number of applications of already-implemented basic operations. </li></ul></ul>
7. 7. Introduction <ul><li>The goal of algorithm analysis: estimate an algorithm's performance </li></ul><ul><li>Algorithms and systems vary wildly, and we usually cannot predict an algorithm's behavior exactly </li></ul><ul><li>To make analysis tractable, we define a few key parameters and evaluation criteria </li></ul><ul><li>We settle for approximate analysis, not a perfect one </li></ul>
8. 8. Big-O notation <ul><li>When evaluating an algorithm's speed we usually ignore constants; Big-O notation expresses this </li></ul><ul><li>For example: </li></ul><ul><li>5n^2 + 15 = O(n^2) </li></ul><ul><li>Big-O denotes an upper bound, so </li></ul><ul><li>5n^2 + 15 = O(n^3) is also correct </li></ul>
9. 9. Big-O notation <ul><li>Big-O conveniently discards constants </li></ul><ul><li>O(n) = O(5n+4) </li></ul><ul><li>O(log n) needs no base </li></ul><ul><li>A constant upper bound is written O(1) </li></ul>
10. 10. Big-O notation <ul><li>O can also appear inside an equation to express a quantity </li></ul><ul><li>For example: </li></ul><ul><li>T(n) = 3n^2 + O(n) </li></ul><ul><li>S(n) = 2n log_2 n + 5n + O(1) </li></ul>
11. 11. Theorem <ul><li>Theorem </li></ul><ul><li>Exponential functions grow faster than polynomial functions </li></ul><ul><li>Polynomial functions grow faster than logarithmic functions </li></ul><ul><li>For example </li></ul><ul><li>n = O(2^n) </li></ul><ul><li>n^2 = O(2^n) </li></ul><ul><li>n^3 = O(2^n) </li></ul><ul><li>… </li></ul><ul><li>n^99 = O(2^n) </li></ul>
12. 12. Lemma <ul><li>Big-O respects addition and multiplication </li></ul><ul><li>For example </li></ul><ul><li>n^3 + n^2 = O(n^3 + n^2) = O(n^3) </li></ul><ul><li>n^3 * n^2 = O(n^3 * n^2) = O(n^5) </li></ul><ul><li>But the same does not hold for subtraction and division </li></ul>
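The addition rule can be sanity-checked numerically; a minimal sketch (the constant c = 2 and the sample inputs are arbitrary choices, not from the slides):

```python
# n^3 + n^2 = O(n^3): a single constant c = 2 bounds the sum for all n >= 1,
# because n^2 <= n^3 whenever n >= 1.
def f(n):
    return n**3 + n**2

def bound(n, c=2):
    return c * n**3

for n in (1, 10, 100, 1000):
    assert f(n) <= bound(n)
```

The same check fails for no c only if the left side grows strictly faster, which is exactly what Big-O rules out.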
13. 13. Table <ul><li>Running times of different computers & algorithms for n = 1000 (time in seconds) </li></ul>

| Time (s) | 8000 steps/s | 4000 steps/s | 2000 steps/s | 1000 steps/s |
|---|---|---|---|---|
| log_2 n | 0.001 | 0.003 | 0.005 | 0.010 |
| n | 0.125 | 0.25 | 0.5 | 1 |
| n log_2 n | 1.25 | 2.5 | 5 | 10 |
| n^1.5 | 4 | 8 | 16 | 32 |
| n^2 | 125 | 250 | 500 | 1,000 |
| n^3 | 125,000 | 250,000 | 500,000 | 1,000,000 |
| 1.1^n | 10^38 | 10^38 | 10^39 | 10^39 |
14. 14. Big-O notation <ul><li>Big-O states an upper bound on an algorithm </li></ul><ul><li>Every algorithm in the textbook runs in time bounded above by O(2^n) </li></ul><ul><li>That is, none of them needs more than exponential time </li></ul><ul><li>But O(2^n) is a very crude estimate; in practice these algorithms can usually be made much faster than O(2^n) </li></ul>
15. 15. Big-O notation <ul><li>We want more than an upper bound: an expression as close as possible to the actual running time </li></ul><ul><li>If an exact expression is hard to obtain, at least estimate upper and lower bounds </li></ul><ul><li>Lower bounds are much harder to establish than upper bounds </li></ul>
16. 16. Upper & lower bounds. [Figure: a speed axis from slow to fast; a particular algorithm marks the upper bound ("no solution to this problem needs to be slower than this"), while the lower bound marks the point no algorithm can beat ("no algorithm can possibly be faster"); the best possible running time lies between the two.]
17. 17. Ω notation <ul><li>Ω notation – the lower bound of an algorithm </li></ul><ul><li>Like O, Ω notation may also ignore constants </li></ul><ul><li>For example: </li></ul><ul><li>n^2 – 100 = Ω(n^2) </li></ul><ul><li>Because it denotes a lower bound, </li></ul><ul><li>n^2 = Ω(n) </li></ul><ul><li>Ω(n) corresponds to the relation "greater than or equal to" </li></ul>
18. 18. Θ notation <ul><li>When the upper and lower bounds coincide, we have pinned down the actual running time exactly </li></ul><ul><li>If: </li></ul><ul><li>f(n) = O(n), the upper bound (less than or equal to), and </li></ul><ul><li>f(n) = Ω(n), the lower bound (greater than or equal to), </li></ul><ul><li>then we use Θ notation: </li></ul><ul><li>f(n) = Θ(n), i.e., equal to </li></ul>
19. 19. Time and space complexity <ul><li>How can we know an algorithm's running time without executing it? </li></ul><ul><li>Method: count the instructions the algorithm executes </li></ul><ul><li>But an algorithm may contain several different kinds of instructions </li></ul><ul><li>and each kind of instruction takes a different amount of time </li></ul><ul><li>For example: division is slower than addition </li></ul>
20. 20. Space complexity <ul><li>Space complexity is the storage an algorithm needs while running </li></ul><ul><li>Like time complexity, space complexity considers the worst case </li></ul><ul><li>Space complexity O(n) means each input element gets a fixed amount of storage; space complexity O(1) means the algorithm needs a fixed amount of storage regardless of the input size </li></ul>
21. 21. Space complexity

| Input size n | 10 | 100 | 1,000 | 10,000 | 100,000 |
|---|---|---|---|---|---|
| O(n) space | 10 | 100 | 1K | 10K | 100K |
| O(1) space | c | c | c | c | c |
22. 22. Complexity trade-offs <ul><li>Trading space complexity for time complexity and vice versa </li></ul><ul><ul><li>Does an algorithm using O(n) TIME necessarily need O(n) SPACE? </li></ul></ul><ul><ul><li>Does an algorithm using O(n) SPACE necessarily need O(n) TIME? </li></ul></ul>
23. 23. Advanced complexity analysis <ul><li>Amortized Complexity </li></ul><ul><li>Average-Case Complexity </li></ul><ul><li>Combinatorial Complexity </li></ul><ul><li>Knowledge Complexity </li></ul><ul><li>Free-Bits Complexity </li></ul><ul><li>etc. </li></ul>
24. 24. References <ul><li>Introduction to Algorithms, T. Cormen, C. Leiserson & R. L. Rivest </li></ul>
25. 25. Outline <ul><li>Basic concepts of algorithm analysis </li></ul><ul><li>Basic concepts of the theory of computation </li></ul><ul><li>Common classes of algorithms </li></ul><ul><li>Biological problems and applications: </li></ul><ul><ul><li>Sequence alignment </li></ul></ul><ul><ul><li>Biological networks and graph theory </li></ul></ul><ul><ul><li>Artificial intelligence and machine learning </li></ul></ul><ul><ul><li>Advanced data structures </li></ul></ul>
26. 26. Analysis depends on the model of computation <ul><li>How do we analyze? </li></ul><ul><ul><li>Which model do you adopt? </li></ul></ul><ul><ul><ul><li>Circuit? </li></ul></ul></ul><ul><ul><ul><li>Turing Machine? </li></ul></ul></ul><ul><ul><ul><li>Counter Machine? </li></ul></ul></ul><ul><ul><ul><li>Pointer Machine? </li></ul></ul></ul><ul><ul><ul><li>Lambda Calculus? </li></ul></ul></ul>
27. 27. What is a Turing Machine? <ul><li>Control is similar to (but not the same as) a DFA </li></ul><ul><li>It has an infinite tape as memory </li></ul><ul><li>A tape head can read and write symbols and move around the tape </li></ul><ul><li>Initially, the tape contains the input string (at the leftmost end) and is blank everywhere else </li></ul>[Figure: a control unit attached to a tape holding b a b a followed by blank symbols.]
28. 28. What is a TM? (2) <ul><li>Finite number of states: one for immediate accept, one for immediate reject, and others for continue </li></ul><ul><li>Based on the current state and the tape symbol under the tape head, the TM decides the tape symbol to write, goes to the next state, and moves the tape head left or right </li></ul><ul><li>When the TM enters the accept state, it accepts the input immediately; when the TM enters the reject state, it rejects the input immediately </li></ul><ul><li>If it never enters the accept or reject states, the TM will run forever and never halt </li></ul>
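The read/write/move loop described above can be sketched as a tiny simulator; the toy machine below (states, transition table, and the "even number of 0s" language are invented for illustration, not from the slides) shows the immediate accept/reject behavior:

```python
# Minimal deterministic TM simulator: delta maps (state, symbol) to
# (next_state, symbol_to_write, head_move). The tape is a sparse dict.
def run_tm(delta, input_str, start, accept, reject, blank='_', max_steps=10_000):
    tape = dict(enumerate(input_str))
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            return True
        if state == reject:
            return False
        sym = tape.get(head, blank)
        state, write, move = delta[(state, sym)]
        tape[head] = write
        head += 1 if move == 'R' else -1
    raise RuntimeError("machine did not halt within max_steps")

# A toy machine accepting strings over {'0'} of even length.
delta = {
    ('even', '0'): ('odd', '0', 'R'),
    ('odd', '0'): ('even', '0', 'R'),
    ('even', '_'): ('accept', '_', 'R'),
    ('odd', '_'): ('reject', '_', 'R'),
}
```

Here acceptance and rejection are immediate states, and a machine that never reaches either would spin until the step limit, mirroring the "runs forever" case.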
29. 29. Extensions of the Turing Machine <ul><li>However, none of these extra facilities increases the power of the TM. </li></ul><ul><li>Church–Turing Thesis: </li></ul><ul><li>TMs are the ultimate computational devices. </li></ul><ul><li>(TM = algorithms) </li></ul>
30. 30. Variants of TM <ul><li>Multiple tapes (2-tape machines). </li></ul><ul><li>Multiple heads. </li></ul><ul><li>Two-way tape. </li></ul><ul><li>Random access memory. </li></ul><ul><li>Two-dimensional memory. </li></ul><ul><li>Oracle Turing Machine </li></ul><ul><ul><li>Randomized Turing Machine </li></ul></ul>
31. 31. Multi-tape Turing Machines: Informal Description — we add a finite number of tapes. [Figure: a control unit attached to Tape 1 and Tape 2, each holding a 1 a 2 … and each with its own head.]
32. 32. Multi-tape Turing Machines: Informal Description (II) <ul><li>Each tape is bounded to the left by a cell containing a left-end marker symbol </li></ul><ul><li>Each tape has its own head </li></ul><ul><li>Transitions have the form (for a 2-tape Turing machine): </li></ul>( (p, (x 1 , x 2 )), (q, (y 1 , y 2 )) ), where each x i is a tape symbol and each y i is either a tape symbol or a head move (left or right); and if x i is the left-end marker, then y i must keep the marker in place or move right
33. 33. Multi-tape Turing Machines vs Turing Machines <ul><li>Is a multi-tape TM stronger than a TM? </li></ul><ul><li>Consider the problem: </li></ul><ul><ul><li>Does string A equal string B? </li></ul></ul>
34. 34. Solving it with a 2-tape Turing machine M2. [Figure: two snapshots of M2's tapes — Tape 1 holding the input strings and Tape 2 holding the copied string a b a — first in state s, then in state s'.]
35. 35. Using States to “Remember” Information — the equivalent configuration in a single-tape Turing Machine M: [Figure: one tape holding both strings, separated by a # marker.]
36. 36. Theorem <ul><li>The expressive power of the TM equals that of the multi-tape TM </li></ul><ul><li>Q: Is a multi-tape TM faster than a one-tape TM? </li></ul>
37. 37. Oracle Turing Machine <ul><li>An oracle is a black box; you can think of it as a special device (machine). </li></ul><ul><li>An oracle for X is a black box that can answer any instance of the problem X in O(1) time. </li></ul><ul><li>An oracle machine is a Turing machine connected to an oracle; thus an oracle Turing machine is also a multi-tape TM. </li></ul>
38. 38. Oracle Turing Machine <ul><li>We can extend it in many ways, e.g., to devise a TM that can run Randomized Quick-Sort. </li></ul><ul><ul><li>Use the oracle to flip the coin. </li></ul></ul><ul><ul><li>Or use the oracle to generate a random bit sequence. </li></ul></ul>
39. 39. Definition: A Non-Deterministic TM is a 7-tuple T = (Q, Σ, Γ, δ, q_0, q_accept, q_reject), where: Q is a finite set of states; Γ is the tape alphabet, where ␣ ∈ Γ and Σ ⊆ Γ; q_0 ∈ Q is the start state; Σ is the input alphabet, where ␣ ∉ Σ; δ : Q × Γ → Pow(Q × Γ × {L,R}) is the transition function; q_accept ∈ Q is the accept state; q_reject ∈ Q is the reject state, and q_reject ≠ q_accept
40. 40. Acceptance for an NTM <ul><li>If w is in L: </li></ul><ul><li>some computation leads the machine into an accepting configuration. </li></ul><ul><li>If w is NOT in L: </li></ul><ul><li>the machine rejects the string on every computation. </li></ul>
41. 41. Non-Deterministic TM is a Parallel Universe
42. 42. Definition: NTIME(t(n)) is the set of languages decided by an O(t(n))-time non-deterministic Turing machine. TIME(t(n)) ⊆ NTIME(t(n))
43. 43. NTM vs. DTM <ul><li>Theorem: </li></ul><ul><li>Non-deterministic Turing machines can be converted into deterministic Turing Machines. </li></ul><ul><li>NTM = DTM </li></ul>
44. 44. Deterministic Polynomial Time: P = ⋃_{k ∈ ℕ} TIME(n^k)
45. 45. Non-deterministic Polynomial Time: NP = ⋃_{k ∈ ℕ} NTIME(n^k)
46. 46. Is the NTM too abstract? <ul><li>Another model for NP. </li></ul><ul><ul><li>Karp's proof system. </li></ul></ul>
47. 47. Theorem: L ∈ NP if and only if there exists a poly-time Turing machine V with L = { x | ∃y. |y| = poly(|x|) and V(x,y) accepts }. Proof: <ul><li>(1) If L = { x | ∃y. |y| = poly(|x|) and V(x,y) accepts }, </li></ul><ul><li>then L ∈ NP, </li></ul>because we can guess y and then run V. (2) If L ∈ NP, then L = { x | ∃y. |y| = poly(|x|) and V(x,y) accepts }: let N be a non-deterministic poly-time TM that decides L, and define V(x,y) to accept if y is an accepting computation history of N on x.
48. 48. A language is in NP if and only if there exist polynomial-length certificates for membership to the language. SAT is in NP because a satisfying assignment is a polynomial-length certificate that a formula is satisfiable.
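The certificate view can be made concrete with SAT itself; a minimal sketch of a poly-time verifier, where the DIMACS-style clause encoding (nonzero integers as literals) is a representation chosen for illustration:

```python
# A poly-time verifier for SAT. A formula is a list of clauses; each clause
# is a list of nonzero ints: k means variable x_k, -k means its negation.
# The certificate is a truth assignment, e.g. {1: True, 2: False}.
def verify_sat(clauses, assignment):
    # Every clause must contain at least one satisfied literal.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]
```

The verifier runs in time linear in the formula size, and the assignment has polynomial (here linear) length, which is exactly what membership in NP requires.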
49. 49. NP-Complete <ul><li>If L is NP-Hard, then every problem L' in NP can be reduced to L. </li></ul><ul><li>If L is NP-Complete, then L is NP-Hard and L is itself an NP problem. </li></ul><ul><li>Saying that a problem L' can be reduced to L means L is at least as hard as L'. </li></ul><ul><li>Many NP-Complete problems are known, such as SAT, TSP, Bin-Packing, Knapsack, etc. </li></ul>
50. 50. Karp’s Complete Problems
51. 51. The World by Karp. [Diagram: P contains 2-SAT, Shortest-Path, Minimum-Cut, Arc-Cover; the NP-Complete region contains SAT, Clique, Hamiltonian-Circuit, Chromatic Number, …; NP-Hard problems beyond NPC include Equivalence of Regular Expressions, Equivalence of ND Finite Automata, and Context-Sensitive Recognition; problems then of unknown status — Linear-Inequalities, Graph-Isomorphism, Non-Primes — sit between "in P?" and "in NPC?".]
52. 52. NP vs. P — what does this imply?
53. 53. How to use NPC? NP = the set of all problems for which you can verify an alleged solution in polynomial time.
54. 54. The best outcome, of course, is to prove that no good method exists. For example, the well-known lower bound for the Sorting Problem is Ω(n lg n).
55. 55. But this is usually even harder than finding an algorithm.
56. 58. Reference <ul><li>Computers and Intractability: </li></ul><ul><li>A Guide to the Theory of NP-Completeness </li></ul><ul><li>by Michael Garey and David Johnson </li></ul>
57. 59. Outline <ul><li>Basic concepts of algorithm analysis </li></ul><ul><li>Basic concepts of the theory of computation </li></ul><ul><li>Common classes of algorithms </li></ul><ul><li>Biological problems and applications: </li></ul><ul><ul><li>Sequence alignment </li></ul></ul><ul><ul><li>Biological networks and graph theory </li></ul></ul><ul><ul><li>Artificial intelligence and machine learning </li></ul></ul><ul><ul><li>Advanced data structures </li></ul></ul>
58. 60. Levels of problem solving (1) <ul><li>Heuristics </li></ul><ul><ul><li>When we can neither find the optimal solution nor prove the problem NP-Complete, we propose a workable method without being able to prove how far it is from optimal. Such methods are usually validated experimentally for feasibility and superiority. </li></ul></ul><ul><li>Approximation Algorithms </li></ul><ul><ul><li>If the problem is proven NP-Complete, an efficient exact solution is unlikely; if we can prove that our method's result stays within a specific factor of the optimum, we call it an approximation algorithm. For example, a 2-approximation algorithm stays within a factor of 2 of the optimum. </li></ul></ul>
59. 61. Levels of problem solving (2) <ul><li>On-Line Algorithms </li></ul><ul><ul><li>For some problems the input arrives dynamically, so we cannot see all of the input before computing; the problems themselves may also be hard or NP-Complete. Algorithms for such problems are called on-line algorithms, and like approximation algorithms they need metrics to distinguish good from bad. Competitive analysis formalizes this idea by comparing the relative performance of an online and an offline algorithm on the same problem instance. </li></ul></ul>
60. 62. Levels of problem solving (3) <ul><li>Randomized algorithm </li></ul><ul><ul><li>Any algorithm that uses random bits during its computation counts as a randomized algorithm. </li></ul></ul><ul><ul><li>They are generally divided into two classes </li></ul></ul><ul><ul><ul><li>Monte Carlo: the computed answer is correct with probability greater than one half. </li></ul></ul></ul><ul><ul><ul><li>Las Vegas: the computed answer is correct with probability greater than one half, but the algorithm may not produce an answer every time. </li></ul></ul></ul>
61. 63. Levels of problem solving (4) <ul><li>Randomized algorithm </li></ul><ul><ul><li>Any algorithm that uses random bits during its computation counts as a randomized algorithm. </li></ul></ul><ul><ul><li>They are generally divided into two classes </li></ul></ul><ul><ul><ul><li>Monte Carlo: the computed answer is correct with probability greater than one half. </li></ul></ul></ul><ul><ul><ul><li>Las Vegas: the computed answer is correct with probability greater than one half, but the algorithm may not produce an answer every time. </li></ul></ul></ul><ul><li>External Memory Algorithm </li></ul><ul><ul><li>An algorithm that is efficient even when accessing most of the data is very slow, such as on disk. </li></ul></ul>
62. 64. Levels of problem solving (5) <ul><li>Parallel Algorithm </li></ul><ul><ul><li>Algorithms that use many CPUs computing simultaneously. Different parallel computer architectures require different algorithm designs; the simplest form is multi-threading. </li></ul></ul>
63. 65. Sources <ul><li>Internet </li></ul><ul><li>Prof. 韓永楷, Randomized Algorithms </li></ul><ul><li>Prof. 王炳豐, Parallel Algorithms </li></ul><ul><li>Prof. 林俊淵, Parallel Computing (CUDA) </li></ul><ul><li>Prof. 鐘葉青, Parallel Programming </li></ul><ul><li>Our formidable senior labmate 劉至善 </li></ul>
64. 66. Outline <ul><li>Basic concepts of algorithm analysis </li></ul><ul><li>Basic concepts of the theory of computation </li></ul><ul><li>Common classes of algorithms </li></ul><ul><li>Biological problems and applications: </li></ul><ul><ul><li>Sequence alignment </li></ul></ul><ul><ul><li>Biological networks and graph theory </li></ul></ul><ul><ul><li>Advanced data structures </li></ul></ul>
65. 67. Biological problems and applications <ul><li>Comparative genomics </li></ul><ul><li>Systems biology </li></ul><ul><li>Translational medicine </li></ul>
66. 68. Outline <ul><li>Basic concepts of algorithm analysis </li></ul><ul><li>Basic concepts of the theory of computation </li></ul><ul><li>Common classes of algorithms </li></ul><ul><li>Biological problems and applications: </li></ul><ul><ul><li>Sequence alignment </li></ul></ul><ul><ul><li>Biological networks and graph theory </li></ul></ul><ul><ul><li>Advanced data structures </li></ul></ul>
67. 69. Sequence alignment <ul><li>Global and local alignments </li></ul><ul><li>Multiple sequence alignment </li></ul><ul><li>Basic Local Alignment Search Tool (BLAST) </li></ul>
68. 70. Global Alignment vs. Local Alignment <ul><li>global alignment : </li></ul><ul><li>local alignment : </li></ul>
69. 71. Pairwise sequence analysis <ul><li>In the 1970s, molecular biologists Needleman and Wunsch [15] used dynamic programming to analyze the similarity of amino-acid sequences. </li></ul><ul><li>Interestingly, in the same period computer scientists Wagner and Fischer [22] computed the edit distance between two sequences in a strikingly similar way; these two landmark works were completed independently, each unaware of the other. </li></ul><ul><li>Molecular biologists look at how similar two sequences are, while computer scientists look at how different they are; yet the two problems have been proven to be dual problems whose values are interconvertible by a formula. </li></ul>
70. 72. Homology Search Tools <ul><li>Smith-Waterman (Smith and Waterman, 1981; Waterman and Eggert, 1987) </li></ul><ul><li>FASTA (Wilbur and Lipman, 1983; Lipman and Pearson, 1985) </li></ul><ul><li>BLAST (Altschul et al., 1990; Altschul et al., 1997) </li></ul><ul><li>BLAT (Kent, 2002) </li></ul><ul><li>PatternHunter (Li et al., 2004) </li></ul>
71. 73. Three popular approaches to sequence analysis <ul><li>Sequence-analysis tools abound today; even so, three ideas have proven the most popular: </li></ul><ul><li>The first is the Smith-Waterman method, which meticulously computes the best k local alignments between two sequences. It is precise but time-consuming, so it is mostly applied to comparing shorter sequences; some researchers have also worked to reduce its computational complexity, giving it practical applications on long sequences as well. </li></ul>
72. 74. Three popular approaches to sequence analysis (cont.) <ul><li>The second is Pearson's FASTA method, which first finds interesting regions quickly and then applies the Smith-Waterman method within those regions. It therefore computes faster than Smith-Waterman, and in many cases its precision is not much worse. </li></ul><ul><li>The third is BLAST, built by Altschul et al. Its original version ignored gaps entirely, making it much faster than the other approaches. Although less precise, its speed gives it a great advantage for searching biological sequence databases, which makes it arguably the most popular sequence-analysis tool today. Moreover, Gapped BLAST, released in 1997, greatly improved precision while keeping a considerable speed advantage. </li></ul>
73. 75. Why alignment? <ul><li>Early sequence analysis was usually done with the dot-matrix method: plot the positions where the two sequences agree on a 2-D grid, and inspect the similar regions by eye. Its great advantages are clarity at a glance and simple computation. </li></ul><ul><li>However, when the sequences are long, analyzing them by visual inspection is very inefficient </li></ul><ul><li>and for some biological sequences (such as proteins) similarity is not restricted to identical characters, in which case a dot matrix cannot reveal the overall similarity. </li></ul><ul><li>Hence alignment was proposed to display the similarity of two sequences </li></ul>
74. 76. Alignment <ul><li>Given two sequences, a global alignment of them inserts dashes into the two sequences so that they become equally long, with no position holding a dash in both sequences. </li></ul><ul><li>For example, suppose the two sequences are CTTGACTAGA and CTACTGTGA; the figure below shows one of their alignments. </li></ul><ul><li>CTTGACT-AGA </li></ul><ul><li>CT--ACTGTGA </li></ul><ul><li>Figure: one possible alignment of CTTGACTAGA and CTACTGTGA. </li></ul>
75. 77. Scoring an alignment <ul><li>With so many possible alignments, which one should we pick? To choose a good alignment we usually need a scoring scheme to do the screening. </li></ul><ul><li>The simplest scheme gives every aligned pair a score and picks the alignment with the highest total. Let w(a,b) be the score of aligning a with b (usually w(*,-) and w(-,*) are negative; mismatches are also negative; only matches are positive; protein sequence analysis uses a PAM or BLOSUM matrix to set these values) </li></ul><ul><li>Under this simple scoring rule, the alignment in the previous figure scores w(C,C) + w(T,T) + w(T,-) + … + w(A,A) </li></ul>
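The scoring scheme above leads directly to a dynamic-programming computation of the optimal global-alignment score; a minimal sketch with a linear gap penalty, where the +1/-1/-1 values are illustrative choices rather than a PAM or BLOSUM matrix:

```python
# Global alignment score (Needleman-Wunsch style) under the simple scheme:
# S[i][j] = best score of aligning s[:i] with t[:j].
def global_score(s, t, match=1, mismatch=-1, gap=-1):
    m, n = len(s), len(t)
    S = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        S[i][0] = i * gap            # s aligned against leading dashes
    for j in range(1, n + 1):
        S[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            w = match if s[i-1] == t[j-1] else mismatch
            S[i][j] = max(S[i-1][j-1] + w,   # align s[i-1] with t[j-1]
                          S[i-1][j] + gap,   # dash in t
                          S[i][j-1] + gap)   # dash in s
    return S[m][n]
```

Recording which of the three cases achieves each maximum would let a traceback recover the alignment itself, as the following slides describe.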
76. 78. The optimal alignment algorithm
77. 79. Affine gap penalties <ul><li>Using dynamic programming we can compute S(i, j) in order from small to large, recording where each optimum came from; once the computation is done, we can then trace back the optimal alignment in one pass. </li></ul><ul><li>When comparing biological sequences, we usually charge an extra penalty (call it α) for each dash region, i.e., each "gap". If the dashes occur in the first sequence we call it an insertion gap; if they occur in the second sequence we call it a deletion gap. </li></ul><ul><li>For example, the alignment in the earlier figure has one deletion gap of length 2 and one insertion gap of length 1, so we further subtract the penalty for two gaps (2α) from the alignment score. This scoring scheme is commonly called affine gap penalties. </li></ul>
78. 80. The optimal alignment algorithm
79. 81. The optimal alignment algorithm (cont.)
80. 82. Local alignment <ul><li>In biological sequence alignment, it is sometimes more interesting to find the similarity of local regions. We then consider local alignment: instead of aligning the sequences end to end, we only need the best alignment between some segment of sequence one and some segment of sequence two. </li></ul><ul><li>Here we use the simplest scoring scheme (i.e., the alignment score is the sum of the aligned-pair scores) to explain how to compute the best local alignment. </li></ul>
81. 83. The optimal local-alignment algorithm
82. 84. Why add the 0? <ul><li>Compared with the recurrence for global alignment, you will notice that this recurrence only has the extra term 0. The reason is that a global alignment must start from the front of the sequences, whereas a local alignment can start anywhere; if connecting backward would score below 0, we should not extend backward but instead try this cell as a fresh starting point (S(i, j) = 0). </li></ul>
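That extra 0 is the only change needed to turn the global recurrence into the local one; a minimal sketch returning the best local-alignment score (scoring values again illustrative):

```python
# Local alignment score (Smith-Waterman style): the 0 term lets any cell
# start a fresh alignment, and the answer is the best cell anywhere.
def local_score(s, t, match=1, mismatch=-1, gap=-1):
    best = 0
    S = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            w = match if s[i-1] == t[j-1] else mismatch
            S[i][j] = max(0,                     # start over here
                          S[i-1][j-1] + w,
                          S[i-1][j] + gap,
                          S[i][j-1] + gap)
            best = max(best, S[i][j])
    return best
```

Note the answer is the maximum over all cells rather than the corner cell, since a local alignment may also end anywhere.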
83. 85. Multiple best local alignments <ul><li>Some people are interested in finding the k best local alignments, or all local alignments scoring at least some threshold; once you are familiar with dynamic programming, such computations should not stump you. </li></ul><ul><li>The method above is what is commonly called the Smith-Waterman method (in fact, the global alignment problem was posed by Needleman and Wunsch [15], while the local alignment problem was posed by Smith and Waterman [21]); it basically needs time and space proportional to the product of the two sequence lengths. </li></ul><ul><li>When the sequences are very long, such time and space costs are hard to accept! </li></ul>
84. 86. BLAST <ul><li>Basic Local Alignment Search Tool (by Altschul, Gish, Miller, Myers and Lipman) </li></ul><ul><li>The central idea of the BLAST algorithm is that a statistically significant alignment is likely to contain a high-scoring pair of aligned words. </li></ul>
85. 87. The maximal segment pair measure <ul><li>A maximal segment pair (MSP) is defined to be the highest scoring pair of identical length segments chosen from 2 sequences. (for DNA: Identities: +5; Mismatches: -4) </li></ul>the highest scoring pair <ul><li>The MSP score may be computed in time proportional to the product of their lengths. (How?) An exact procedure is too time consuming. </li></ul><ul><li>BLAST heuristically attempts to calculate the MSP score. </li></ul>
86. 88. A matrix of similarity scores PAM 120
87. 89. A maximum-scoring segment
88. 90. BLOSUM62 versus PAM250 (For Protein)
89. 91. BLAST <ul><li>Build the hash table for Sequence A. </li></ul><ul><li>Scan Sequence B for hits. </li></ul><ul><li>Extend hits. </li></ul>
90. 92. BLAST Step 1: Build the hash table for Sequence A. (3-tuple example) For DNA sequences: Seq. A = AGATCGAT 12345678 AAA AAC .. AGA 1 .. ATC 3 .. CGA 5 .. GAT 2 6 .. TCG 4 .. TTT For protein sequences: Seq. A = ELVIS Add xyz to the hash table if Score(xyz, ELV) ≧ T; Add xyz to the hash table if Score(xyz, LVI) ≧ T; Add xyz to the hash table if Score(xyz, VIS) ≧ T;
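Step 1 for the DNA case can be sketched in a few lines; this builds the 3-mer index for the slide's example sequence, using 1-based positions to match the slide:

```python
# BLAST step 1 (DNA case): index every k-mer of sequence A by position.
def build_hash(seq, k=3):
    table = {}
    for i in range(len(seq) - k + 1):
        table.setdefault(seq[i:i+k], []).append(i + 1)  # 1-based positions
    return table

table = build_hash("AGATCGAT")
```

Scanning sequence B then means looking up each of its 3-mers in this table; for proteins, neighboring words with Score(xyz, word) ≥ T would also be inserted, as the slide describes.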
91. 93. BLAST Step 2: Scan sequence B for hits.
92. 94. BLAST Step 2: Scan sequence B for hits. Step 3: Extend hits. hit Terminate if the score of the extension fades away. (That is, when we reach a segment pair whose score falls a certain distance below the best score found for shorter extensions.) BLAST 2.0 saves the time spent in extension, and considers gapped alignments.
93. 95. Gapped BLAST (I) The two-hit method
94. 96. Gapped BLAST (II) Confining the dynamic-programming
95. 97. BLAT
96. 98. Multiple sequence alignment <ul><li>The analysis of multiple sequences has long been an important topic in computational biology, but its problem complexity is disheartening. Roughly speaking, comparing two sequences of length n takes time (i.e., dynamic-programming matrix cells) proportional to n squared, while comparing k sequences of length n takes time proportional to n to the k-th power. </li></ul><ul><li>Imagine the time needed to compare 10 sequences of length just 200 simultaneously: it is basically proportional to 200 to the 10th power, which is an enormous number. </li></ul>
97. 99. Computing multiple sequence alignments <ul><li>Hence two different schools of computational methods arose. One, proposed by Lipman et al., compares many sequences simultaneously but tries to reduce the number of dynamic-programming matrix cells used in the computation; according to their paper, comparing 10 sequences of length 200 this way does not run into great difficulty. </li></ul><ul><li>The other, adopted by Feng and Doolittle, aligns sequences following a phylogenetic tree of sequence relatedness; once a gap appears in some comparison, it is kept until the end. The time this method needs to compare k sequences of length n is roughly proportional to …, so it is very widely popular. </li></ul>
98. 100. Scoring multiple sequence alignments <ul><li>The most widely accepted scheme is the SP (Sum-of-Pairs) score: project the multiple alignment onto every pair of sequences, and sum up the resulting pairwise alignment scores as the score of the multiple alignment. </li></ul><ul><li>Applying affine gap penalties directly in this scheme spawns very many dynamic-programming tables; relaxing it slightly to quasi-affine gap penalties is less precise but computes the score of a multiple alignment more efficiently, and it is the most commonly used variant scoring scheme. </li></ul><ul><li>Besides, some suggest that certain sequence combinations should be weighted; others compute scores according to a phylogenetic tree </li></ul>
99. 101. Sources <ul><li>Internet </li></ul><ul><li>Prof. 盧錦隆: Computational Biology </li></ul><ul><li>Mount, David W. Bioinformatics: Sequence and Genome Analysis. Cold Spring Harbor, N.Y.: Cold Spring Harbor Laboratory Press, 2001. </li></ul>
100. 102. Outline <ul><li>Basic concepts of algorithm analysis </li></ul><ul><li>Basic concepts of the theory of computation </li></ul><ul><li>Common classes of algorithms </li></ul><ul><li>Biological problems and applications: </li></ul><ul><ul><li>Sequence alignment </li></ul></ul><ul><ul><li>Biological networks and graph theory </li></ul></ul><ul><ul><li>Advanced data structures </li></ul></ul>
101. 103. Biological networks and graph theory <ul><li>Protein-protein interaction networks, gene regulation networks, etc. </li></ul><ul><li>Basic Graph Theory </li></ul><ul><li>Network Motifs, Community Detection </li></ul><ul><li>Some graph algorithms </li></ul>
102. 104. Bio-Map protein-gene interactions protein-protein interactions PROTEOME GENOME Citrate Cycle METABOLISM Bio-chemical reactions
103. 105. Boehringer Mannheim
104. 106. An Introduction to Graph Theory — Definitions and Examples: undirected graph; directed graph; isolated vertex; adjacent vertices; loop; multiple edges; simple graph: an undirected graph without loops or multiple edges; degree of a vertex: the number of edges incident to it (indegree and outdegree for directed graphs); G = (V, E)
105. 107. [Figure: a graph on vertices a, b, c, d, e with endpoints x and y.] path: no vertex can be repeated (a-b-c-d-e); trail: no edge can be repeated (a-b-c-d-e-b-d); walk: no restriction (a-b-d-a-b-c); closed if x = y; closed trail: circuit (a-b-c-d-b-e-d-a, one drawing without lifting the pen); closed path: cycle (a-b-c-d-a); length: the number of edges in the path/trail/walk
106. 108. Def 11.4: Let G = (V, E) be an undirected graph. We call G connected if there is a path between any two distinct vertices of G. [Figure: a walk from a to b can be shortened to a path by removing any cycle at a repeated vertex x; one example graph is connected, the other is disconnected with two components.]
107. 109. Bipartite graphs <ul><li>A graph that can be decomposed into two partite sets but not fewer is bipartite </li></ul><ul><li>It is complete bipartite if its vertices can be divided into two non-empty groups A and B such that each vertex in A is connected to every vertex in B, and vice versa </li></ul>[Figures: the complete bipartite graph K 2,3; another bipartite graph.]
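Bipartiteness is easy to test by 2-coloring with a breadth-first search; a sketch, where the adjacency-dict graph representation is an assumption of this example:

```python
from collections import deque

# A graph is bipartite iff its vertices can be 2-colored so that no edge
# joins two vertices of the same color.
def is_bipartite(graph):
    color = {}
    for start in graph:                 # handle disconnected graphs
        if start in color:
            continue
        color[start] = 0
        q = deque([start])
        while q:
            u = q.popleft()
            for v in graph[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    q.append(v)
                elif color[v] == color[u]:
                    return False        # odd cycle found
    return True

# K_{2,3} (bipartite) and a triangle (not bipartite).
k23 = {0: [2, 3, 4], 1: [2, 3, 4], 2: [0, 1], 3: [0, 1], 4: [0, 1]}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
```

Equivalently, a graph is bipartite exactly when it contains no odd cycle, which is what the same-color check detects.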
108. 110. Def. 11.6: multigraphs. [Figure: a multigraph of multiplicity 3.]
109. 111. Subgraphs, Complements, and Graph Isomorphism. [Figure: a graph on a, b, c, d, e with two of its subgraphs.] A spanning subgraph has V 1 = V; an induced subgraph includes all edges of E whose endpoints lie in V 1
110. 112. Subgraphs, Complements, and Graph Isomorphism. Def. 11.11: the complete graph K n (e.g. K 5 on a, b, c, d, e). Def. 11.12: the complement of a graph G. [Figure: a graph G and its complement on the same vertex set.]
111. 113. Subgraphs, Complements, and Graph Isomorphism. Graph Isomorphism. [Figure: isomorphic graphs drawn on the vertex sets {1, 2, 3, 4}, {a, b, c, d}, and {w, x, y, z}.]
112. 114. Subgraphs, Complements, and Graph Isomorphism. Ex. 11.8: [Figure: graphs on q r w z x y u t v and on a b c d e f g h i j.] The correspondence a-q c-u e-r g-x i-z b-v d-y f-w h-t j-s shows the graphs are isomorphic. Ex. 11.9: one graph has 2 vertices of degree 2 while the other has 3, so they are not isomorphic. Can you think of an algorithm for testing isomorphism?
113. 115. Module
114. 116. Module
115. 117. Network Motif
116. 118. Graph Alignment: NetworkBLAST/PathBLAST
117. 119. Centralities <ul><li>Degree centrality : the number of direct neighbors of node v </li></ul><ul><ul><li>where N(v) is the set of direct neighbors of node v . </li></ul></ul><ul><li>Stress centrality : the simple accumulation of the number of shortest paths between all node pairs </li></ul><ul><ul><li>where ρ st (v) is the number of shortest paths between s and t passing through node v. </li></ul></ul>
118. 120. Centralities <ul><li>Closeness centrality : reciprocal of the total distance from a node v to all the other nodes in a network </li></ul><ul><ul><li>δ (u,v) is the distance between node u and v . </li></ul></ul><ul><li>Eccentricity : the greatest distance between v and any other vertex </li></ul>
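Closeness centrality and eccentricity both reduce to a single-source BFS in an unweighted graph; a sketch assuming a connected graph given as an adjacency dict:

```python
from collections import deque

# Single-source shortest-path distances by BFS (unweighted graph).
def distances_from(graph, v):
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

# Closeness: reciprocal of the total distance from v to all other nodes.
def closeness(graph, v):
    return 1.0 / sum(distances_from(graph, v).values())

# Eccentricity: the greatest distance between v and any other vertex.
def eccentricity(graph, v):
    return max(distances_from(graph, v).values())

path = {1: [2], 2: [1, 3], 3: [2]}  # a 3-node path graph
```

On the path graph, the middle node has the highest closeness and the smallest eccentricity, matching the intuition that it is the most central.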
119. 121. Centralities <ul><li>Shortest-path based betweenness centrality : the ratio of the number of shortest paths passing through a node v out of all shortest paths between all node pairs in a network </li></ul><ul><ul><li>σ st is the number of shortest paths between nodes s and t, and σ st (v) is the number of those shortest paths that pass through node v </li></ul></ul><ul><li>Current-flow based betweenness centrality : the amount of current that flows through v in a network </li></ul><ul><ul><li>Random-walk based betweenness centrality </li></ul></ul>
120. 122. Centralities <ul><li>Subgraph centrality : accounts for the participation of a node in all subgraphs of the network. </li></ul><ul><li>The number of closed walks of length k starting and ending at node v in the network is given by the local spectral moments μ k (v). </li></ul>
121. 123. Weighted Bipartite Matching
122. 124. Weighted Bipartite Matching Given a weighted bipartite graph, find a matching with maximum total weight. Not necessarily a maximum size matching. A B
123. 125. History <ul><li>Example of the assignment problem </li></ul><ul><li>Say you have three workers: Jim , Steve & Allan . You need to have one of them clean the bathroom, another sweep the floors & the third wash the windows. What’s the best (minimum-cost) way to assign the jobs? </li></ul>
124. 126. Hungarian algorithm (Augmenting Path Algorithm) <ul><li>Orient the edges (edges in M go up, others go down) </li></ul><ul><li>Edges in M get positive weights, the others negative weights </li></ul>Find a shortest M-augmenting path at each step
125. 127. Example <ul><li>A company assigns 5 types of jobs to 5 persons (Alice, Bob, Chris, Dirk, Emma). Each person has a different ability for each job. The profit of assigning each person to each job is shown below (actually, this is the cost matrix). </li></ul>

| | Job 1 | Job 2 | Job 3 | Job 4 | Job 5 |
|---|---|---|---|---|---|
| Alice | $1 | $2 | $3 | $4 | $5 |
| Bob | $6 | $7 | $8 | $7 | $2 |
| Chris | $1 | $3 | $4 | $4 | $5 |
| Dirk | $3 | $6 | $2 | $8 | $7 |
| Emma | $4 | $1 | $3 | $5 | $4 |
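For an instance this small, the optimum the Hungarian algorithm should find can be checked by brute force over all 5! assignments; a sketch maximizing total profit as the table describes (the Hungarian algorithm achieves the same result in O(n^3)):

```python
from itertools import permutations

# The profit table from the example: rows are Alice, Bob, Chris, Dirk, Emma;
# columns are Jobs 1-5.
profit = [
    [1, 2, 3, 4, 5],   # Alice
    [6, 7, 8, 7, 2],   # Bob
    [1, 3, 4, 4, 5],   # Chris
    [3, 6, 2, 8, 7],   # Dirk
    [4, 1, 3, 5, 4],   # Emma
]

# Try every one-to-one assignment of persons to jobs; keep the best total.
def best_assignment(matrix):
    n = len(matrix)
    return max(sum(matrix[i][p[i]] for i in range(n))
               for p in permutations(range(n)))
```

Brute force is O(n!), so it only serves as a check here; for larger n, the augmenting-path machinery above is the practical route.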
126. 128. Example <ul><li>Step 0 : Initialization. Let </li></ul><ul><li>Form an excess matrix (using ) </li></ul>Cost matrix Excess matrix
127. 129. Example <ul><li>Step 1 : Construct equality subgraph </li></ul>Excess matrix
128. 130. Example <ul><li>Step 2 Maximum Matching in subgraph </li></ul><ul><li>Find a maximum matching in it. If it is a perfect matching, stop and report it as a maximum-weight matching and the labels as a minimum-cost cover. </li></ul>
129. 131. Example <ul><li>Step 2 (continue..) </li></ul><ul><li>Choose Job 3, Job 4 and Job 5 as vertex cover with size equal to </li></ul>
130. 132. Example Excess matrix <ul><li>Step 3 Dual Change </li></ul><ul><li>is not a cover of </li></ul><ul><li>Find , using </li></ul>is an edge of not covered by
131. 133. Example <ul><li>Step 3 (continue..) </li></ul><ul><li>Update , and excess matrix, using </li></ul>Cost matrix Excess matrix
132. 134. Example <ul><li>Step 1 : Construct equality subgraph </li></ul>Excess matrix
133. 135. Example <ul><li>Step 2 Maximum Matching in subgraph </li></ul>
134. 136. Example <ul><li>Step 2 (continue..) </li></ul><ul><li>Choose Bob, Job 1, Job 4 and Job 5 as vertex cover with size equal to </li></ul>
135. 137. Example Excess matrix <ul><li>Step 3 Dual Change </li></ul><ul><li>is not a cover of </li></ul><ul><li>Find , using </li></ul>is an edge of not covered by
136. 138. Set Cover <ul><li>Definition of the set cover problem </li></ul><ul><ul><li>Given a set of elements B and subsets S 1 , S 2 ,…, S n of it (i.e. S i ⊆ B) </li></ul></ul><ul><ul><li>Find a selection of subsets such that the union of the picked sets is exactly B </li></ul></ul><ul><ul><li>The cost of a selection is defined as the number of picked sets </li></ul></ul>
137. 139. Set Cover <ul><li>A greedy solution is extremely natural and intuitive for the set cover problem </li></ul><ul><ul><li>Pick the subset with the largest number of uncovered elements </li></ul></ul><ul><ul><li>until all elements of B are covered </li></ul></ul><ul><li>Can such a greedy strategy find an optimal solution (a selection of minimum cost)? </li></ul>
138. 140. Set Cover <ul><li>Example </li></ul><ul><ul><li>The dots in the figure represent towns in a country, and the edges are paths between towns </li></ul></ul><ul><ul><li>Now we are planning to build schools </li></ul></ul><ul><ul><li>Every student should be able to reach a school within one move along a path </li></ul></ul><ul><ul><li>What, then, is the minimum number of schools that need to be built in the towns? </li></ul></ul>
139. 141. Set Cover <ul><li>Example (cont.) </li></ul><ul><ul><li>Our greedy solution would select town a first (since it covers its six neighbors b, d, e, h, i, k ) </li></ul></ul><ul><ul><li>Then the uncovered towns f, c, j are chosen one by one </li></ul></ul><ul><ul><li>In total, four schools are built, in towns a, c, f, and j </li></ul></ul>Optimal?
140. 142. Set Cover <ul><li>Example </li></ul><ul><ul><li>There exists a solution with just three schools, at b, e, and i </li></ul></ul><ul><ul><li>The greedy solution is not optimal! </li></ul></ul>
141. 143. Set Cover <ul><li>Did greedy fail? </li></ul><ul><ul><li>In fact, our greedy algorithm has found an approximation </li></ul></ul><ul><ul><li>One can show that the greedy algorithm uses at most k *ln( n ) sets when an optimal solution picks k sets for an n -element set cover instance </li></ul></ul><ul><ul><li>The approximation factor of the greedy algorithm is therefore k *ln( n ) / k = ln( n ), meaning we are never too far from the optimum </li></ul></ul>
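The greedy strategy from the slides above can be sketched in a few lines (the sketch assumes the subsets do jointly cover the universe; otherwise the loop would never terminate):

```python
# Greedy set cover: repeatedly pick the set covering the most
# still-uncovered elements, until everything is covered.
def greedy_set_cover(universe, subsets):
    uncovered = set(universe)
    picked = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        picked.append(best)
        uncovered -= best
    return picked

# A small instance: greedy needs 2 sets here, which happens to be optimal.
universe = range(1, 7)
subsets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 4}]
```

As the school example shows, greedy is not always optimal, but it never uses more than ln(n) times the optimal number of sets.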
142. 144. Example: A C B D G E F Karger’s Min-Cut Algorithm
143. 145. Example: A C B D G E F contract
144. 146. Example: A C B D G E F contract A C B D E FG
145. 147. Example: A C B D G E F contract A C B D E FG contract
146. 148. Example: A C B D G E F contract A C B D E FG contract A C B E FGD
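The contraction process illustrated above can be sketched with a union-find structure standing in for merged super-vertices; repeating the random experiment many times makes finding a true min-cut overwhelmingly likely:

```python
import random

# One run of Karger's algorithm: contract random edges until only two
# super-vertices remain, then count the edges crossing between them.
def karger_cut(edges, n_vertices):
    parent = list(range(n_vertices))

    def find(x):                         # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    remaining = n_vertices
    while remaining > 2:
        u, v = random.choice(edges)
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                     # edge already inside a super-vertex
        parent[ru] = rv                  # contract the edge
        remaining -= 1
    return sum(1 for u, v in edges if find(u) != find(v))

# Repeat to drive down the chance of missing the minimum cut.
def min_cut(edges, n_vertices, trials=100):
    return min(karger_cut(edges, n_vertices) for _ in range(trials))
```

A single run returns some cut, but not necessarily a minimum one, which is exactly the point raised on the next slide.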
147. 150. Is the output a min-cut? <ul><li>Not necessarily. </li></ul><ul><li>Is it at least a cut? </li></ul>
148. 152. 參考資料來源 <ul><li>Internet </li></ul><ul><li>唐傳義教授：系統生物學導論 </li></ul><ul><li>蔡明哲教授：圖形理論 </li></ul><ul><li>強者正妹學姐 – 劉至善 </li></ul><ul><li>Graph Theory with Applications J.A. Bondy and U.S.R. Murty </li></ul><ul><li>Graph Theory, by Reinhard Diestel </li></ul>
149. 153. Outline <ul><li>基本的演算法分析概念 </li></ul><ul><li>基本的計算理論概念 </li></ul><ul><li>常見的演算法分類 </li></ul><ul><li>生物相關問題與應用： </li></ul><ul><ul><li>字串排比 </li></ul></ul><ul><ul><li>生物網路與圖形理論 </li></ul></ul><ul><ul><li>人工智慧與機械學習 </li></ul></ul><ul><ul><li>進階資料結構 </li></ul></ul>
150. 154. Artificial Intelligence and Machine Learning <ul><li>Algorithms vs. machine learning </li></ul><ul><li>= </li></ul><ul><li>hand-crafted intelligence vs. artificial intelligence </li></ul>Common tools: Decision Tree, SVM, Neural Networks, Random Forest
151. 155. <ul><li>Draws ideas from machine learning/AI, pattern recognition, statistics, and database systems </li></ul><ul><li>Traditional Techniques may be unsuitable due to </li></ul><ul><ul><li>Enormity of data </li></ul></ul><ul><ul><li>High dimensionality of data </li></ul></ul><ul><ul><li>Heterogeneous, distributed nature of data </li></ul></ul>Origins of Data Mining Machine Learning/ Pattern Recognition Statistics/ AI Data Mining Database systems
153. 157. Decision Tree
154. 158. Example of a Decision Tree Refund MarSt TaxInc YES NO NO NO Yes No Married Single, Divorced < 80K > 80K Splitting Attributes Training Data Model: Decision Tree categorical categorical continuous class
155. 159. Another Example of Decision Tree categorical categorical continuous class MarSt Refund TaxInc YES NO NO Yes No Married Single, Divorced < 80K > 80K There could be more than one tree that fits the same data! NO
156. 160. Decision Tree Classification Task Decision Tree
157. 161. Apply Model to Test Data Test Data Start from the root of tree. Refund MarSt TaxInc YES NO NO NO Yes No Married Single, Divorced < 80K > 80K
158. 162. Apply Model to Test Data Test Data Refund MarSt TaxInc YES NO NO NO Yes No Married Single, Divorced < 80K > 80K
159. 163. Apply Model to Test Data Refund MarSt TaxInc YES NO NO NO Yes No Married Single, Divorced < 80K > 80K Test Data
160. 164. Apply Model to Test Data Refund MarSt TaxInc YES NO NO NO Yes No Married Single, Divorced < 80K > 80K Test Data
161. 165. Apply Model to Test Data Refund MarSt TaxInc YES NO NO NO Yes No Married Single, Divorced < 80K > 80K Test Data
162. 166. Apply Model to Test Data Refund MarSt TaxInc YES NO NO NO Yes No Married Single, Divorced < 80K > 80K Test Data Assign Cheat to “No”
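Applying the model to a record, as the slides walk through, is just a chain of attribute tests from root to leaf. A sketch of the tree from the figures (the attribute names and the 80K threshold follow the slides; the function name is ours):

```python
def predict_cheat(refund, marital_status, taxable_income):
    """Walk the decision tree from the slides, root to leaf,
    and return the predicted Cheat label."""
    if refund == "Yes":
        return "No"                      # Refund = Yes -> NO
    if marital_status == "Married":
        return "No"                      # MarSt = Married -> NO
    # Single or Divorced: split on taxable income at 80K
    return "No" if taxable_income < 80_000 else "Yes"
```

The test record from the slides (Refund = No, Married) reaches the "No" leaf, matching the final slide's assignment.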
163. 167. Support Vector Machine
164. 168. Support Vector Machines Which hyperplane? y=1 y=-1
165. 169. Support Vector Machines Margin Margin = |d + |+|d - | y=1 y=-1 d + d - d -
166. 170. Support Vector Machines Maximum Margin d + d + d - d - y=1 y=-1 Support vectors
167. 171. Support Vector Machines Margin d b i1 b i2 y=1 y=-1 d x 1 x 2 x 1 -x 2
168. 172. Support Vector Machines Objective function y=1 y=-1 d <ul><li>The learning task in an SVM can be formalized as the following constrained optimization problem: minimize ||w||^2 / 2 subject to y i ( w · x i + b ) ≥ 1 for every training example ( x i , y i ) </li></ul>
169. 173. Artificial Neural Network
170. 174. Perceptron (1) <ul><li>Single neuron model (linear threshold unit) </li></ul><ul><ul><li>Input: a linear combination Σ w i x i </li></ul></ul><ul><ul><li>Output: threshold function </li></ul></ul>w 1 w 2 w n Σ x 1 x 2 x n x 0 = 1 w 0
171. 175. Perceptron (2) <ul><li>Multiple real-valued inputs: ( x 1 , x 2 , x 3 , ..., x n ) (the vector x ) </li></ul><ul><li>Single output (labeled +1/-1): o ( x 1 , x 2 , x 3 , ..., x n ) </li></ul><ul><li>Weights (real-valued constants): ( w 0 , w 1 , w 2 , w 3 , ..., w n ) (the vector w ) </li></ul><ul><ul><li>Real-valued constants to be determined in the learning problem ( i.e ., the space H of candidate hypotheses is the set of all possible real-valued weight vectors ) </li></ul></ul><ul><ul><li>In order for the perceptron to output +1, the weighted combination w 1 x 1 +…+ w n x n must surpass (- w 0 ) </li></ul></ul><ul><li>Input-output relationship: o ( x 1 , ..., x n ) = sgn ( w 0 + w 1 x 1 + … + w n x n ) </li></ul><ul><li>In vector form, o ( x ) = sgn ( w · x ), with x 0 = 1 </li></ul><ul><ul><ul><li>where sgn ( z ) is 1 if the argument is positive, -1 otherwise </li></ul></ul></ul>
172. 176. Decision Surface of a Perceptron (1) <ul><li>Represents some useful functions </li></ul><ul><ul><li>For example, Boolean functions </li></ul></ul><ul><ul><ul><li>Both inputs and output are Boolean values </li></ul></ul></ul><ul><ul><ul><li>Assume Boolean values of +1 (true) and –1 (false) </li></ul></ul></ul><ul><ul><li>What weights represent AND (x 1 , x 2 )? </li></ul></ul><ul><ul><ul><li>w 0 = -0.8, w 1 = w 2 = 0.5 </li></ul></ul></ul><ul><ul><ul><li>o ( x 1 ,x 2 ) = sgn ( -0.8 + 0.5x 1 + 0.5 x 2 ) </li></ul></ul></ul>
173. 177. Decision Surface of a Perceptron (2) <ul><li>Similarly </li></ul><ul><ul><li>OR (x 1 , x 2 ) </li></ul></ul><ul><ul><ul><li>w 0 = 0.3, w 1 = w 2 = 0.5 </li></ul></ul></ul><ul><ul><ul><li>o ( x 1 ,x 2 ) = sgn ( 0.3 + 0.5x 1 + 0.5 x 2 ) </li></ul></ul></ul><ul><ul><li>NOT (x 1 ) : </li></ul></ul><ul><ul><ul><li>w 0 =0.0, w 1 = -1.0 </li></ul></ul></ul><ul><ul><ul><li>o ( x 1 ) = sgn( 0.0 –1.0x 1 ) </li></ul></ul></ul>
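The AND/OR/NOT weight settings above can be checked directly with a small threshold-unit sketch (inputs and outputs use the ±1 encoding from the slides):

```python
def perceptron(weights, inputs):
    """Linear threshold unit: sgn(w0 + w1*x1 + ... + wn*xn),
    with the constant input x0 = 1 carrying the bias w0."""
    total = weights[0] + sum(w * x for w, x in zip(weights[1:], inputs))
    return 1 if total > 0 else -1

def AND(x1, x2):
    return perceptron([-0.8, 0.5, 0.5], [x1, x2])

def OR(x1, x2):
    return perceptron([0.3, 0.5, 0.5], [x1, x2])

def NOT(x1):
    return perceptron([0.0, -1.0], [x1])
```

Each Boolean function is just a choice of weights; XOR, famously, has no such choice, which is one motivation for the multilayer networks that follow.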
174. 178. Sigmoid Unit x 1 x 2 x n w 1 w 2 w n  x 0 = 1 w 0
175. 179. Multilayer Networks (1) <ul><li>Much greater representational power </li></ul><ul><li>Can find nonlinear decision surfaces </li></ul><ul><li>A multilayer network is made up of many simple interconnected units </li></ul><ul><ul><li>Feedforward networks are acyclic, directed graphs </li></ul></ul><ul><ul><li>the output of each unit is passed to the inputs of successive units </li></ul></ul>o 1 o 2 w 43 Output Layer x 1 x 2 x 3 Input Layer w 11 h 1 h 2 h 3 h 4 Hidden Layer
176. 180. References <ul><li>Internet </li></ul><ul><li>CS dept. courses: Statistical Learning Theory, Data Mining, Artificial Intelligence, Pattern Recognition </li></ul><ul><li>EE dept. course: Pattern Recognition </li></ul><ul><li>Senior labmates 佳揚 and 筌敬. </li></ul><ul><li>Machine Learning, Tom Mitchell, McGraw Hill, 1997. </li></ul><ul><li>R.O. Duda, P.E. Hart, and D.G. Stork, Pattern Classification, 2nd ed., John Wiley, 2001. </li></ul>
177. 181. Outline <ul><li>Basic concepts of algorithm analysis </li></ul><ul><li>Basic concepts of the theory of computation </li></ul><ul><li>Common classes of algorithms </li></ul><ul><li>Biology-related problems and applications: </li></ul><ul><ul><li>String alignment </li></ul></ul><ul><ul><li>Biological networks and graph theory </li></ul></ul><ul><ul><li>Artificial intelligence and machine learning </li></ul></ul><ul><ul><li>Advanced data structures </li></ul></ul>
178. 182. Advanced Data Structure <ul><li>Suffix Tree </li></ul><ul><li>Bloom filter </li></ul><ul><li>Randomized Search Trees </li></ul><ul><li>Priority Search Trees. </li></ul>
179. 183. Indexing <ul><li>Using a sparse representation , a database can be preprocessed in linear time to allow locating all instances of a short string. </li></ul><ul><li>Major limitation: search is restricted to fixed length strings . </li></ul>
180. 184. S = M A L A Y A L A M \$ 1 2 3 4 5 6 7 8 9 10 \$ YALAM\$ M \$ ALAYALAM\$ M\$ YALAM\$ M\$ YALAM\$ M\$ YALAM\$ A AL LA 6 2 8 4 7 3 1 9 5 10 Suffix Trees Paths from root to leaves represent all suffixes of S
182. 186. Suffix tree properties <ul><li>For a string S of length n , there are n+1 leaves and at most n internal nodes. </li></ul><ul><ul><li>therefore requires only linear space, </li></ul></ul><ul><ul><li>provided edge labels are O(1) space </li></ul></ul><ul><li>Each leaf represents a unique suffix. </li></ul><ul><li>Concatenation of edge labels from root to a leaf spells out the suffix. </li></ul><ul><li>Each internal node represents a distinct common prefix to at least two suffixes. </li></ul>
183. 187. Application: Finding a short Pattern in a long String <ul><li>Build a suffix tree of the string. </li></ul><ul><li>Starting from the root, traverse a path matching characters of the pattern. </li></ul><ul><li>If stuck, pattern not present in string. </li></ul><ul><li>Otherwise, each leaf below gives a position of the pattern in the string. </li></ul>
184. 188. Finding a Pattern in a String Find “ALA” \$ YALAM\$ M \$ ALAYALAM\$ M\$ YALAM\$ M\$ YALAM\$ M\$ YALAM\$ A AL LA 6 2 8 4 7 3 1 9 5 10 Two matches - at 6 and 2
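The root-down matching shown above can be tried out on an uncompressed suffix trie built from nested dictionaries. This is a simplification: a real suffix tree also compresses single-child chains into labeled edges (for O(n) space), which this sketch omits.

```python
def build_suffix_trie(s):
    """Trie of all suffixes of s + '$' as nested dicts.
    O(n^2) space; a compressed suffix tree would be O(n)."""
    s += "$"
    root = {}
    for i in range(len(s)):
        node = root
        for ch in s[i:]:
            node = node.setdefault(ch, {})
        node["pos"] = i + 1          # leaf records the 1-based suffix start
    return root

def find_pattern(trie, pattern):
    """1-based start positions of pattern: walk down from the root,
    then every leaf below the stopping node is an occurrence."""
    node = trie
    for ch in pattern:
        if ch not in node:
            return []                # stuck: pattern not in the string
        node = node[ch]

    def leaves(nd):
        for key, child in nd.items():
            if key == "pos":
                yield child
            else:
                yield from leaves(child)
    return sorted(leaves(node))
```

On S = MALAYALAM, searching for "ALA" reaches an internal node with two leaves below it, at positions 2 and 6 — the two matches from the figure.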
185. 189. (10, 10) (5, 10) (1, 1) (10, 10) (2, 10) (3, 4) (5, 10) (9, 10) (2, 2) (5, 10) (9, 10) (3, 4) (9, 10) (5, 10) 6 2 8 4 7 3 1 9 5 10 Edge Encoding S = M A L A Y A L A M \$ 1 2 3 4 5 6 7 8 9 10
186. 190. Naïve Suffix Tree Construction Before starting: Why exactly do we need this \$ , which is not part of the alphabet? \$ 10 M\$ 9 AM\$ 8 LAM\$ 7 ALAM\$ 6 YALAM\$ 5 AYALAM\$ 4 LAYALAM\$ 3 ALAYALAM\$ 2 MALAYALAM\$ 1
187. 191. Naïve Suffix Tree Construction \$MALAYALAM LAYALAM\$ 1 2 LAYALAM\$ 3 A 2 3 4 4 YALAM\$ etc. \$ 10 M\$ 9 AM\$ 8 LAM\$ 7 ALAM\$ 6 YALAM\$ 5 AYALAM\$ 4 LAYALAM\$ 3 ALAYALAM\$ 2 MALAYALAM\$ 1
188. 192. Is Suffix Tree good? <ul><li>Yes, because of its optimal search time </li></ul><ul><li>No, because of its space requirement… </li></ul><ul><ul><li>The space can be much larger than the text </li></ul></ul><ul><ul><li>E.g., Text = the human genome </li></ul></ul><ul><ul><li>To store the text, we need 0.8 Gbyte </li></ul></ul><ul><ul><li>To store the suffix tree, we need 64 Gbyte! </li></ul></ul>
189. 193. Something Wrong?? <ul><li>Both the suffix tree and the text have n elements, so they both need O(n) space… </li></ul><ul><li>How come there is such a big difference?? </li></ul><ul><ul><li>Let us have a better analysis </li></ul></ul><ul><li>Let A be the alphabet (i.e., the set of distinct characters) of a text T </li></ul><ul><ul><li>E.g., in DNA, A = {a,c,g,t} </li></ul></ul>
190. 194. Something Wrong?? (2) <ul><li>To store T, we need only n log |A| bits </li></ul><ul><li>But to store the suffix tree, we will need n log n bits </li></ul><ul><li>When n is very large compared to |A| , there is a huge difference </li></ul><ul><li>Question: Is there an index that supports fast searching, but occupies O( n log |A| ) bits only?? </li></ul>
191. 195. Suffix Array – Reducing Space M A L A Y A L A M \$ 1 2 3 4 5 6 7 8 9 10 Suffix Array : Lexicographic ordering of suffixes Derive Longest Common Prefix array Suffix 6 and 2 share “ALA” Suffix 2,8 share just “A”. lcp achieved for successive pairs . \$ 10 YALAM\$ 5 M\$ 9 MALAYALAM\$ 1 LAYALAM\$ 3 LAM\$ 7 AYALAM\$ 4 AM\$ 8 ALAYALAM\$ 2 ALAM\$ 6 10 5 9 1 3 7 4 8 2 6 - 0 0 1 0 2 0 1 1 3
192. 196. Example Text Position Suffix Array 3 1 1 0 2 0 1 0 0 lcp Array M M A L A Y A L A \$ 1 2 3 4 5 6 7 8 9 10 3 7 4 10 5 8 9 1 2 6 \$ 10 YALAM\$ 5 M\$ 9 MALAYALAM\$ 1 LAYALAM\$ 3 LAM\$ 7 AYALAM\$ 4 AM\$ 8 ALAYALAM\$ 2 ALAM\$ 6
193. 197. Pattern Search in Suffix Array <ul><li>All suffixes that share a common prefix appear in consecutive positions in the array. </li></ul><ul><li>Pattern P can be located in the string using a binary search on the suffix array. </li></ul><ul><li>Naïve Run-time = O (|P|  log n). </li></ul><ul><li>Improved to O (|P| + log n) [Manber&Myers93], and to O(|P|) [Abouelhoda et al. 02]. </li></ul>
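A sketch of this binary search in Python. The construction below is the naïve one (sorting full suffixes) for clarity; the positions returned are 1-based to match the figures above.

```python
def suffix_array(s):
    """Start positions (0-based) of the suffixes of s in lexicographic
    order. Naive O(n^2 log n) build; linear-time constructions exist."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def search(s, sa, pattern):
    """All occurrences of pattern via two binary searches on the suffix
    array, O(|P| log n). Returns sorted 1-based positions."""
    m = len(pattern)
    lo, hi = 0, len(sa)
    while lo < hi:                       # leftmost suffix with prefix >= P
        mid = (lo + hi) // 2
        if s[sa[mid]:sa[mid] + m] < pattern:
            lo = mid + 1
        else:
            hi = mid
    first, hi = lo, len(sa)
    while lo < hi:                       # past the last suffix with prefix = P
        mid = (lo + hi) // 2
        if s[sa[mid]:sa[mid] + m] <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[i] + 1 for i in range(first, lo))
```

All suffixes sharing the prefix P sit in one consecutive block of the array, which is exactly what the two binary searches delimit.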
194. 198. Known (amazing) Results <ul><li>Suffix trees can be constructed in O ( n ) time and O ( n × | Σ |) space [Weiner73, McCreight76, Ukkonen92]. </li></ul><ul><li>Suffix arrays can be constructed without using suffix trees in O ( n ) time [Pang&Aluru03]. </li></ul>
195. 199. More Applications <ul><li>Suffix-prefix overlaps in fragment assembly </li></ul><ul><li>Maximal and tandem repeats </li></ul><ul><li>Shortest unique substrings </li></ul><ul><li>Maximal unique matches [MUMmer] </li></ul><ul><li>Approximate matching </li></ul><ul><li>Phylogenies based on complete genomes </li></ul>
196. 200. Approximate set membership problem <ul><li>Suppose we have a set </li></ul><ul><li>S = {s 1 ,s 2 ,...,s m } ⊆ universe U </li></ul><ul><li>Represent S in such a way that we can quickly answer “ Is x an element of S ?” </li></ul><ul><li>To take as little space as possible, we allow false positives (i.e. x ∉ S , but we answer yes ) </li></ul><ul><li>If x ∈ S , we must answer yes . </li></ul>
197. 201. Bloom filters <ul><li>Consist of an array A of n bits (the space), and k independent random hash functions </li></ul><ul><li>h 1 ,…,h k : U --> {0,1,..,n-1} </li></ul><ul><li>1. Initially set the array to all 0 </li></ul><ul><li>2. For each s ∈ S, set A[h i (s)] = 1 for 1 ≤ i ≤ k </li></ul><ul><li>(an entry can be set to 1 multiple times, but only the first time has an effect ) </li></ul><ul><li>3. To check if x ∈ S , check whether all locations A[h i (x)] for 1 ≤ i ≤ k are set to 1 </li></ul><ul><li>If not, clearly x ∉ S. </li></ul><ul><li>If all A[h i (x)] are set to 1, we assume x ∈ S </li></ul>
198. 202. 0 0 0 0 0 0 0 0 0 0 0 0 Initial with all 0 1 1 1 1 1 x 1 x 2 Each element of S is hashed k times Each hash location set to 1 1 1 1 1 1 y To check if y is in S, check the k hash location. If a 0 appears , y is not in S 1 1 1 1 1 y If only 1s appear, conclude that y is in S This may yield false positive
199. 203. The probability of a false positive <ul><li>We assume the hash functions are random. </li></ul><ul><li>After all the elements of S are hashed into the Bloom filter, the probability that a specific bit is still 0 is p' = (1 - 1/n)^(km) ≈ e^(-km/n) </li></ul>
200. 204. <ul><li>To simplify the analysis, we can assume a fraction p = e^(-km/n) of the entries are still 0 after all the elements of S are hashed into the Bloom filter. </li></ul><ul><li>In fact, let X be the random variable counting those 0 positions. By a Chernoff bound, X is sharply concentrated around its expectation </li></ul><ul><li>It implies X/n will be very close to p with very high probability </li></ul>
201. 205. <ul><li>The probability of a false positive f is f = (1 - p)^k ≈ (1 - e^(-km/n))^k </li></ul><ul><li>To find the optimal k that minimizes f : </li></ul><ul><li>Minimizing f is equivalent to minimizing g = ln(f) </li></ul><ul><li>k = ln(2)*(n/m) </li></ul><ul><li>f = (1/2)^k = (0.6185..)^(n/m) </li></ul><ul><li>The false positive probability falls exponentially in n/m , the number of bits used per item!! </li></ul>
202. 206. <ul><li>A Bloom filter is like a hash table that simply uses one bit to keep track of whether an item hashed to the location. </li></ul><ul><li>If k=1 , it is equivalent to a hashing-based fingerprint system. </li></ul><ul><li>If n=cm for a small constant c, such as c=8 , then k=5 or 6 and the false positive probability is just over 2% . </li></ul><ul><li>It is interesting that when k is optimal, k=ln(2)*(n/m) , then p = 1/2. </li></ul><ul><li>An optimized Bloom filter looks like a random bit-string </li></ul>
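A minimal sketch of such a filter. Deriving the k hash functions by salting one strong hash (SHA-256) is our implementation choice, not something prescribed by the slides; any family of independent-looking hashes would do.

```python
import hashlib
import math

class BloomFilter:
    """n-bit array with k = ln(2) * n/m hash functions, as derived above."""

    def __init__(self, n_bits, n_items):
        self.n = n_bits
        self.k = max(1, round(math.log(2) * n_bits / n_items))
        self.bits = [0] * n_bits

    def _positions(self, item):
        # k "independent" hash functions via salting a single strong hash
        for salt in range(self.k):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.n

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        # never a false negative; false positives with probability ~(1/2)^k
        return all(self.bits[p] for p in self._positions(item))
```

With n = 8m bits per item, the optimal k rounds to 6 and the false positive rate is just over 2%, matching the figures quoted in the slide.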
203. 211. Deterministic Tools <ul><li>AVL Tree </li></ul><ul><li>Red-Black Tree </li></ul><ul><li>Fib. Heap </li></ul><ul><li>Splay Tree </li></ul><ul><li>Soft Heap </li></ul><ul><ul><li>NOT EASY TO IMPLEMENT </li></ul></ul>
204. 212. Range Searching <ul><li>S = set of geometric objects </li></ul><ul><li>Q = query object </li></ul><ul><li>Report/Count objects in S that intersect Q </li></ul>Query Q Report/Count answers
205. 213. Single-shot vs. Repetitive <ul><li>Query may be: </li></ul><ul><li>Single-shot (one-time). No need to preprocess </li></ul><ul><li>Repetitive-mode . Many queries are expected. Preprocess S into a data structure so that queries can be answered fast </li></ul>
206. 214. Orthogonal Range Searching in 1D <ul><li>S: Set of points on the real line. </li></ul><ul><li>Q = Query Interval [a,b] </li></ul>a b Which points of S lie inside the interval [a,b]?
207. 215. Orthogonal Range Searching in 2D <ul><li>S = Set of points in the plane </li></ul><ul><li>Q = Query Rectangle </li></ul>
208. 216. <ul><li>Build a balanced search tree where all data points are stored in the leaves . </li></ul>1D Range Query 7 7 19 15 12 8 2 4 5 2 4 5 8 12 15 2 4 5 7 8 12 15 19 query: O(log n+k) space: O(n) 6 17
209. 217. Querying Strategy <ul><li>Given interval [a,b], search for a and b </li></ul><ul><li>Find where the paths split, look at the subtrees in between </li></ul>Paths split a b Problem: linking the leaves does not extend to higher dimensions. Idea: if parents knew all their descendants, we wouldn’t need to link the leaves.
210. 218. Efficiency <ul><li>Preprocessing Time: O(n log n) </li></ul><ul><li>Space: O(n) </li></ul><ul><li>Query Time: O(log n + k) </li></ul><ul><li>k = number of points reported </li></ul><ul><li>Output-sensitive query time </li></ul><ul><li>Binary search tree can be kept balanced in O(log n) time per update in dynamic case </li></ul>
211. 219. 1D Range Counting <ul><li>S = Set of points on real line </li></ul><ul><li>Q= Query Interval [a,b] </li></ul><ul><li>Count points in [a,b] </li></ul><ul><li>Solution: At each node, store count of number of points in the subtree rooted at the node. </li></ul><ul><li>Query: Similar to reporting but add up counts instead of reporting points. </li></ul><ul><li>Query Time: O(log n) </li></ul>
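For a static point set the same bounds can be had from a plain sorted array with binary search (the balanced tree earns its keep once insertions and deletions are needed). A sketch reusing the leaf values from the example tree:

```python
import bisect

def range_count(sorted_pts, a, b):
    """Count points x with a <= x <= b in O(log n)."""
    return (bisect.bisect_right(sorted_pts, b)
            - bisect.bisect_left(sorted_pts, a))

def range_report(sorted_pts, a, b):
    """Report those points in O(log n + k), k = output size."""
    lo = bisect.bisect_left(sorted_pts, a)
    hi = bisect.bisect_right(sorted_pts, b)
    return sorted_pts[lo:hi]
```

The query [6, 17] drawn in the tree figure returns the four points 7, 8, 12, 15.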
212. 220. 2D Range queries <ul><li>How do you efficiently find points that are inside of a rectangle? </li></ul><ul><ul><li>Orthogonal range query ([ x 1 , x 2 ], [ y 1 ,y 2 ]): find all points ( x, y ) such that x 1 <x<x 2 and y 1 <y<y 2 </li></ul></ul>x y x 1 x 2 y 1 y 2
213. 221. Range trees <ul><ul><li>Canonical subset P ( v ) of a node v in a BST is a set of points (leaves) stored in a subtree rooted at v </li></ul></ul><ul><ul><li>Range tree is a multi-level data structure: </li></ul></ul><ul><ul><ul><li>The main tree is a BST T on the x -coordinate of points </li></ul></ul></ul><ul><ul><ul><li>Any node v of T stores a pointer to a BST T y ( v ) ( associated structure of v ), which stores canonical subset P ( v ) organized on the y -coordinate </li></ul></ul></ul><ul><ul><ul><li>2D points are stored in all leaves! </li></ul></ul></ul>BST on y-coords P ( v ) T y ( v ) T P ( v ) v BST on x-coords
214. 222. <ul><li>For each internal node v  T x let P( v ) be set of points stored in leaves of subtree rooted at v. </li></ul><ul><li> </li></ul><ul><li>Set P( v ) is stored with v as another balanced binary search tree T y ( v ) (descendants by y) on y-coordinate. (have pointer from v to T y ( v )) </li></ul>Range trees T x v P( v ) T y ( v ) P( v ) p 1 p 2 p 3 p 4 p 5 p 6 p 7 p 1 p 2 p 3 p 4 p 5 p 6 p 7 v T 4 p 7 p 5 p 6 T y ( v )
215. 223. <ul><li>The diagram below shows what is stored at one node. Show what is stored at EVERY node. Note that data is only stored at the leaves. </li></ul>T x v P( v ) T y ( v ) P( v ) p 1 p 2 p 3 p 4 p 5 p 6 p 7 p 1 p 2 p 3 p 4 p 5 p 6 p 7 v T 4 p 7 p 5 p 6 T y ( v ) Coordinates: p1 = (1, 2.5), p2 = (2, 1), p3 = (3, 0), p4 = (4, 4), p5 = (4.5, 3), p6 = (5.5, 3.5), p7 = (6.5, 2)
216. 224. Range trees The query time: Querying a 1D-tree requires O(log n + k) time. How many 1D trees (associated structures) do we need to query? At most 2 × height of T = 2 log n. Each 1D query requires O(log n + k') time, so the total query time = O(log^2 n + k). Answer to query = union of answers to subqueries: k = ∑ k'. Query: [x,x'] x x'
217. 225. Size of the range tree <ul><li>Size of the range tree : </li></ul><ul><ul><li>At each level of the main tree associated structures store all the data points once (with constant overhead): O ( n ). </li></ul></ul><ul><ul><li>There are O (log n ) levels. </li></ul></ul><ul><ul><li>Thus, the total size is O ( n log n ). </li></ul></ul>
218. 226. Building the range tree <ul><li>Efficient building of the range tree: </li></ul><ul><ul><li>Sort the points on x and on y (two arrays: X , Y ). </li></ul></ul><ul><ul><li>Take the median v of X and create a root; build its associated structure using Y. </li></ul></ul><ul><ul><li>Split X into sorted X L and X R , split Y into sorted Y L and Y R (s.t. for any p ∈ X L or p ∈ Y L , p.x < v.x, and for any p ∈ X R or p ∈ Y R , p.x ≥ v.x ). </li></ul></ul><ul><ul><li>Build recursively the left child from X L and Y L and the right child from X R and Y R. </li></ul></ul><ul><li>The running time is O ( n log n ). </li></ul>
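A compact sketch of the 2D structure and its counting query. As a simplification, each node's associated structure is a sorted y-list rather than a y-BST; for a static set this preserves the O(n log n) space and O(log² n) query bounds described above.

```python
import bisect

class RangeTree2D:
    """Main tree on x; every node keeps its canonical subset sorted by y."""

    def __init__(self, points):            # points: list of (x, y)
        self._root = self._build(sorted(points))

    def _build(self, pts):
        node = {
            "xmin": pts[0][0], "xmax": pts[-1][0],
            "ys": sorted(p[1] for p in pts),   # associated y-structure
            "left": None, "right": None,
        }
        if len(pts) > 1:
            mid = len(pts) // 2
            node["left"] = self._build(pts[:mid])
            node["right"] = self._build(pts[mid:])
        return node

    def count(self, x1, x2, y1, y2):
        """Count points with x1 <= x <= x2 and y1 <= y <= y2."""
        def go(node):
            if node is None or node["xmax"] < x1 or node["xmin"] > x2:
                return 0
            if x1 <= node["xmin"] and node["xmax"] <= x2:
                # canonical subset lies fully inside the x-range:
                # binary search its sorted y-list
                ys = node["ys"]
                return (bisect.bisect_right(ys, y2)
                        - bisect.bisect_left(ys, y1))
            return go(node["left"]) + go(node["right"])
        return go(self._root)
```

The query visits O(log n) canonical subsets and does one binary search in each, which is where the O(log² n) bound comes from.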
219. 227. Generalizing to higher dimensions <ul><li>A d-dimensional range tree can be built recursively from (d-1)-dimensional range trees. </li></ul><ul><li>Build a binary search tree on the coordinates of dimension d. </li></ul><ul><li>Build secondary data structures with (d-1)-dimensional range trees. </li></ul><ul><li>Space O(n log d-1 n). </li></ul><ul><li>Query Time O(log d n + k). </li></ul>
220. 228. References <ul><li>Internet </li></ul><ul><li>Prof. 盧錦隆: Computational Biology </li></ul><ul><li>Prof. 韓永楷: Randomized Algorithms </li></ul><ul><li>Prof. 潘雙洪: Computational Geometry </li></ul>