PyCon 2011 talk - ngram assembly with Bloom filters
  • 5,251 views
Upload Details

Uploaded as Microsoft PowerPoint

Usage Rights

© All Rights Reserved

  • Funding: MSU startup, USDA NIFA, DOE, BEACON, Amazon.

Presentation Transcript

  • Handling ridiculous amounts of data with probabilistic data structures
    C. Titus Brown
    Michigan State University
    Computer Science / Microbiology
  • Resources
    http://www.slideshare.net/c.titus.brown/
    Webinar: http://oreillynet.com/pub/e/1784
    Source: github.com/ctb/
    N-grams (this talk): khmer-ngram
    DNA (the real cheese): khmer
    khmer is implemented in C++ with a Python wrapper, which has been awesome for scripting, testing, and general development. (But man, does C++ suck…)
  • Lincoln Stein
    Sequencing capacity is outscaling Moore’s Law.
  • Hat tip to Narayan Desai / ANL
    We don’t have enough resources or people to analyze data.
  • Data generation vs data analysis
    It now costs about $10,000 to generate a 200 GB sequencing data set (DNA) in about a week.
    (Think: resequencing human; sequencing expressed genes; sequencing metagenomes, etc.)
    …x1000 sequencers
    Many useful analyses do not scale linearly in RAM or CPU with the amount of data.
  • The challenge?
    Massive (and increasing) data generation capacity, operating at a boutique level, with algorithms that are wholly incapable of scaling to the data volume.
    Note: cloud computing isn’t a solution to a sustained scaling problem!!
    (See: Moore’s Law slide)
  • Life’s too short to tackle the easy problems – come to academia!
    Easy stuff like Google Search
    Awesomeness
  • A brief intro to shotgun assembly
    It was the best of times, it was the wor
    , it was the worst of times, it was the
    isdom, it was the age of foolishness
    mes, it was the age of wisdom, it was th
    It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness
    …but for 2 bn+ fragments.
    Not subdivisible; not easy to distribute; memory intensive.
  • Define a hash function (word => num)
    def hash(word):
        assert len(word) <= MAX_K
        value = 0
        for n, ch in enumerate(word):
            value += ord(ch) * 128**n
        return value
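    A quick illustration (not from the slides) of what this hash does: each character is treated as a base-128 digit, so a word becomes a single integer, and each table in the filter then reduces that integer modulo its own (roughly coprime) size, giving one nearly independent slot per table. MAX_K is an assumption; the talk does not give its value.

    MAX_K = 12                 # assumed bound on n-gram length (illustrative)

    def hash(word):            # same function as on the slide
        assert len(word) <= MAX_K
        value = 0
        for n, ch in enumerate(word):
            value += ord(ch) * 128**n
        return value

    print(hash('abc'))         # 97 + 98*128 + 99*128**2 = 1634657

    # one integer, reduced modulo several different table sizes,
    # plays the role of several independent hash functions
    for size in (1001, 1003, 1005):
        print(size, hash('abc') % size)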
  • class BloomFilter(object):
    def __init__(self, tablesizes, k=DEFAULT_K):
        self.tables = [(size, [0] * size)
                       for size in tablesizes]
        self.k = k

    def add(self, word):       # insert; ignore collisions
        val = hash(word)
        for size, ht in self.tables:
            ht[val % size] = 1

    def __contains__(self, word):
        val = hash(word)
        return all(ht[val % size]
                   for (size, ht) in self.tables)
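    Two names used here and on later slides never appear in the transcript: DEFAULT_K, the n-gram length, and bf.allchars, the alphabet tried when extending a word. The definitions below are assumptions added so the sketches are runnable; the 8-character seeds used later ('foo bar ', 'the quic') suggest k=8.

    import string

    DEFAULT_K = 8      # assumed from the 8-character seeds used later in the talk

    # assumed alphabet for the one-character extensions tried by next_words();
    # the real khmer-ngram code may define this differently
    BloomFilter.allchars = (string.ascii_letters + string.digits +
                            string.punctuation + ' ')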
  • Storing words in a Bloom filter
    >>> x = BloomFilter([1001, 1003, 1005])
    >>> 'oogaboog' in x
    False
    >>> x.add('oogaboog')
    >>> 'oogaboog' in x
    True
    >>> x = BloomFilter([2])
    >>> x.add('a')
    >>> 'a' in x # no false negatives
    True
    >>> 'b' in x
    False
    >>> 'c' in x # …but false positives
    True
  • Storing text in a Bloom filter
    class BloomFilter(object):

        def insert_text(self, text):
            for i in range(len(text) - self.k + 1):
                self.add(text[i:i+self.k])
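    Concretely, insert_text slides a k-character window across the text and adds every window. For k=8 (the assumed word length), the first string used on the next slide contributes these n-grams, shown here for illustration:

    text = 'foo bar bazbif zap!'
    k = 8
    for i in range(len(text) - k + 1):
        print(repr(text[i:i+k]))
    # 'foo bar ', 'oo bar b', 'o bar ba', ' bar baz', ... (12 windows in all)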
  • def next_words(bf, word): # try all 1-ch extensions
        prefix = word[1:]
        for ch in bf.allchars:
            word = prefix + ch
            if word in bf:
                yield ch

    # descend into all successive 1-ch extensions
    def retrieve_all_sentences(bf, start):
        word = start[-bf.k:]
        n = -1
        for n, ch in enumerate(next_words(bf, word)):
            ss = retrieve_all_sentences(bf, start + ch)
            for sentence in ss:
                yield sentence
        if n < 0:
            yield start
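    The interactive examples on the following slides call retrieve_first_sentence, which is never shown in the transcript. A minimal sketch of what it presumably does, built on retrieve_all_sentences above (a guess, not the author's exact khmer-ngram code):

    def retrieve_first_sentence(bf, start):
        # return the first fully extended sentence the exhaustive traversal
        # yields; retrieve_all_sentences always yields at least `start` itself
        for sentence in retrieve_all_sentences(bf, start):
            return sentence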
  • Storing and retrieving text
    >>> x = BloomFilter([1001, 1003, 1005, 1007])
    >>> x.insert_text('foo bar bazbif zap!')
    >>> x.insert_text('the quick brown fox jumped over the lazy dog')
    >>> print retrieve_first_sentence(x, 'foo bar ')
    foo bar bazbif zap!
    >>> print retrieve_first_sentence(x, 'the quic')
    the quick brown fox jumped over the lazy dog
  • Sequence assembly
    >>> x = BloomFilter([1001, 1003, 1005, 1007])
    >>> x.insert_text('the quick brown fox jumped ')
    >>> x.insert_text('jumped over the lazy dog')
    >>> retrieve_first_sentence(x, 'the quic')
    the quick brown fox jumpedover the lazy dog
    (This is known as the de Bruijn graph approach to assembly; cf. Velvet, ABySS, SOAPdenovo)
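    For DNA the same machinery applies with a four-letter alphabet. A toy sketch (made-up reads and table sizes, not the khmer code) of inserting two overlapping reads and walking the k-mer graph:

    # assumes BloomFilter, insert_text and retrieve_first_sentence as sketched above
    bf = BloomFilter([1000003, 1000033, 1000037], k=8)
    bf.allchars = 'ACGT'                         # DNA alphabet

    reads = ['ATGGCGTCAAGCT', 'TCAAGCTTTGACC']   # overlap on TCAAGCT (k-1 = 7 chars)
    for read in reads:
        bf.insert_text(read)

    print(retrieve_first_sentence(bf, reads[0][:8]))
    # walks the shared (k-1)-mers and, barring false positives,
    # reconstructs ATGGCGTCAAGCTTTGACC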
  • Repetitive strings are the devil
    >>> x = BloomFilter([1001, 1003, 1005, 1007])
    >>> x.insert_text('nanana, batman!')
    >>> x.insert_text('my chemical romance: nanana')
    >>> retrieve_first_sentence(x, "my chemical")
    'my chemical romance: nanana, batman!'
  • Note, it’s a probabilistic data structure
    Retrieval errors:
    >>> x = BloomFilter([1001, 1003]) # small Bloom filter…
    >>> x.insert_text('the quick brown fox jumped over the lazy dog')
    >>> retrieve_first_sentence(x, 'the quic'),
    ('the quick brY',)
  • Assembling DNA sequence
    Can’t directly assemble with Bloom filter approach (false connections, and also lacking many convenient graph properties)
    But we can use the data structure to grok graph properties and eliminate/break up data:
    Eliminate small graphs (no false negatives!)
    Disconnected partitions (parts -> map reduce)
    Local graph complexity reduction & error/artifact trimming
    …and then feed into other programs.
    This is a data-reducing prefilter (see the sketch below).
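    A sketch of the "eliminate small graphs" idea (the concept, not the khmer implementation): walk outward from a read's first k-mer with a bounded breadth-first search over the Bloom filter, counting reachable k-mers; if only a handful are reachable, the read sits in a tiny disconnected graph and can be dropped. For brevity this only extends to the right; a real traversal also walks left.

    from collections import deque

    def neighbors(bf, kmer):
        # right extensions of `kmer` that are present in the filter
        for ch in bf.allchars:
            nxt = kmer[1:] + ch
            if nxt in bf:
                yield nxt

    def reachable_size(bf, kmer, limit=1000):
        # bounded BFS: count k-mers reachable from `kmer`, stopping at `limit`
        seen = {kmer}
        queue = deque([kmer])
        while queue and len(seen) < limit:
            for nxt in neighbors(bf, queue.popleft()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return len(seen)

    def keep_read(bf, read, min_size=50):        # threshold is illustrative
        # keep a read only if its local graph neighborhood is non-trivial
        return reachable_size(bf, read[:bf.k]) >= min_size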
  • Right, but does it work??
    Can assemble ~200 GB of metagenome DNA on a single 4xlarge EC2 node (68 GB of RAM) in 1 week ($500).
    …compare with not at all on a 512 GB RAM machine.
    Error/repeat trimming on a tricky worm genome: reduction from
    170 GB resident / 60 hrs to
    54 GB resident / 13 hrs.
  • How good is this graph representation?
    Very low false positive rates at ~2 bytes/k-mer;
    Nearly exact human genome graph in ~5 GB.
    Estimate we eventually need to store/traverse 50 billion k-mers (soil metagenome)
    Good failure mode: it’s all connected, Jim! (No loss of connections => good prefilter)
    Did I mention it’s constant memory? And independent of word size?
    …only works for de Bruijn graphs 
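    For context on the "~2 bytes/k-mer" figure, the standard Bloom filter estimate (a back-of-envelope calculation, not a number from the talk): at m/n = 16 bits per k-mer and the optimal number of hash functions h = (m/n) * ln 2, about 11, the false positive rate (1 - exp(-h*n/m))**h comes out near 5e-4.

    from math import exp, log

    bits_per_kmer = 16                      # ~2 bytes per k-mer, as on the slide
    h = round(bits_per_kmer * log(2))       # optimal number of hash functions: 11
    fp_rate = (1 - exp(-h / bits_per_kmer)) ** h
    print(h, fp_rate)                       # 11, ~4.6e-04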
  • Thoughts for the future
    Unless your algorithm scales sub-linearly as you distribute it across multiple nodes (hah!), or your problem size has an upper bound, cloud computing isn’t a long-term solution in bioinformatics.
    Synopsis data structures & algorithms (which incl. probabilistic data structures) are a neat approach to parsing problem structure.
    Scalable in-memory local graph exploration enables many other tricks, including near-optimal multinode graph distribution.
  • Groxel view of knot-like region / Arend Hintze
  • Acknowledgements:
    The k-mer gang:
    Adina Howe
    Jason Pell
    Rosangela Canino-Koning
    Qingpeng Zhang
    Arend Hintze
    Collaborators:
    Jim Tiedje (Il padrino)
    Janet Jansson, Rachel Mackelprang, Regina Lamendella, Susannah Tringe, and many others (JGI)
    Charles Ofria (MSU)
    Funding: USDA NIFA; MSU, startup and iCER; DOE; BEACON/NSF STC; Amazon Education.