Efficient representation of de Bruijn Graphs
Assembly data is big
For very large datasets, even after filtering, a hash table over all k-mers may be too big.
Why is a hash table big?
How can we do better?
What if we just wanted “approximate” occurrence?
What if we just want to know “if” a k-mer is present?
Bloom Filters
Originally designed to answer probabilistic membership queries:
Is element e in my set S?
If yes, always say yes
If no, say no with large probability
False positives can happen; false negatives cannot.
Bloom Filters
For a set of size N, store an array of M bits.
Use k different hash functions, {h0, …, hk-1}.
To insert e, set A[hi(e)] = 1 for 0 ≤ i < k.
To query for e, check if A[hi(e)] = 1 for 0 ≤ i < k.
Image by David Eppstein - self-made, originally for a talk at WADS 2007
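To make these operations concrete, here is a minimal Python sketch of a Bloom filter (the class name and the trick of salting Python's built-in hash() to get k hash functions are illustrative assumptions, not how production filters are implemented):

class BloomFilter:
    def __init__(self, m, k):
        self.m = m            # number of bits in the array A
        self.k = k            # number of hash functions
        self.bits = [0] * m   # the bit array A, initially all 0

    def _positions(self, e):
        # Derive k "different" hash functions by salting Python's hash();
        # real implementations use proper independent hash functions.
        return [hash((i, e)) % self.m for i in range(self.k)]

    def insert(self, e):
        for pos in self._positions(e):
            self.bits[pos] = 1

    def query(self, e):
        # A True answer may be a false positive; a False answer is always correct.
        return all(self.bits[pos] == 1 for pos in self._positions(e))

bf = BloomFilter(m=1 << 20, k=7)
bf.insert("ACGTACGTACGTACGTACGTACGTACG")
print(bf.query("ACGTACGTACGTACGTACGTACGTACG"))  # True (always: no false negatives)
print(bf.query("TTTTTTTTTTTTTTTTTTTTTTTTTTT"))  # almost certainly False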
Bloom Filters
If hash functions are good and sufficiently independent, then the probability of false positives is low and controllable.
How low?
False Positives
*analysis of Mitzenmacher and Upfal
Let q be the fraction of the m bits which remain 0 after n insertions.
The probability that a randomly chosen bit is 1 is 1 − q.
But we need a 1 in the position returned by each of the k different hash functions; the probability of this is (1 − q)^k.
We can derive a formula for the expected value of q, for a filter of m bits, after n insertions with k different hash functions:
E[q] = (1 − 1/m)^(kn)
False Positives
*analysis of Mitzenmacher and Upfal
Mitzenmacher & Upfal used the Azuma-Hoeffding inequality to prove (without assuming the probability of setting each bit is independent) that
Pr(|q − E[q]| ≥ λ/m) ≤ 2·exp(−2λ²/m)
That is, the random realizations of q are highly concentrated around E[q], which yields a false positive probability of:
Σ_t Pr(q = t)·(1 − t)^k ≈ (1 − E[q])^k = (1 − (1 − 1/m)^(kn))^k ≈ (1 − e^(−kn/m))^k
False Positives
This lets us choose optimal values to achieve a target false positive rate. For example, assume m & n are given. Then we can derive the optimal k:
k = (m/n) ln 2  ⇒  2^(−k) ≈ 0.6185^(m/n)
We can then compute the false positive probability:
p = (1 − e^(−((m/n) ln 2)·(n/m)))^((m/n) ln 2)  ⟹  ln p = −(m/n)(ln 2)²  ⟹  m = −n ln p / (ln 2)²
Given an expected number of elements and a desired false positive rate, we can compute the optimal size and number of hash functions.
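As a sanity check of these formulas, here is a small Python sketch (the function names are illustrative) that computes the optimal filter size and number of hash functions from n and a target false positive rate:

import math

def optimal_bloom_params(n, p):
    # m = -n ln(p) / (ln 2)^2 : bits needed for n elements at target FP rate p
    m = math.ceil(-n * math.log(p) / (math.log(2) ** 2))
    # k = (m/n) ln 2 : optimal number of hash functions
    k = round((m / n) * math.log(2))
    return m, k

def false_positive_rate(m, n, k):
    # p ≈ (1 - e^{-kn/m})^k
    return (1.0 - math.exp(-k * n / m)) ** k

m, k = optimal_bloom_params(n=1_000_000, p=0.01)
print(m, k)                                    # ~9.59 million bits (~1.2 MB), k = 7
print(false_positive_rate(m, 1_000_000, k))    # ≈ 0.01, as requested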
Detour: Bloom Filters & De Bruijn Graphs
How could this data structure be useful for representing a De Bruijn graph?
A given (k-1)-mer can only have 2·|Σ| neighbors: |Σ| incoming and |Σ| outgoing. For genomes, |Σ| = 4.
To navigate in the De Bruijn graph, we can simply query all possible successors and see which are actually present.
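A sketch of this neighbor enumeration (illustrative Python; the helper names are made up, and kmer_set stands for any structure answering k-mer membership, possibly with false positives, such as a Bloom filter):

ALPHABET = "ACGT"

def successors(kmer, kmer_set):
    # Candidate forward neighbors: drop the first base, append each possible base,
    # keep those the membership structure claims are present.
    return [kmer[1:] + c for c in ALPHABET if kmer[1:] + c in kmer_set]

def predecessors(kmer, kmer_set):
    # Candidate backward neighbors: prepend each possible base, drop the last base.
    return [c + kmer[:-1] for c in ALPHABET if c + kmer[:-1] in kmer_set]

def walk_forward(seed, kmer_set, max_steps=1000):
    # Starting from a k-mer known to be truly present, walk the graph forward
    # until we hit a tip or a branch point.
    path, cur = [seed], seed
    for _ in range(max_steps):
        nxt = successors(cur, kmer_set)
        if len(nxt) != 1:
            break
        cur = nxt[0]
        path.append(cur)
    return path

# Example with a plain Python set standing in for the Bloom filter (k = 4 for brevity):
kmers = {"ACGT", "CGTA", "GTAC"}
print(walk_forward("ACGT", kmers))  # ['ACGT', 'CGTA', 'GTAC']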
Bloom Filters & De Bruijn Graphs
How could this data structure be useful for representing a De Bruijn graph?
Say we have a Bloom filter B for all of the k-mers in our data set, and say I give you one k-mer that is truly present.
We now have a “navigational” representation of the De Bruijn graph (can return the set of neighbors of a node, but not select/iterate over nodes); why?
Bloom Filters & De Bruijn Graphs
But a Bloom filter still has false positives, right?
May return some neighbors that are not actually present.
Pell et al., PNAS 2012, use a lossy Bloom filter directly
Chikhi & Rizk, WABI 2012, present a lossless data structure based on Bloom filters
Salikhov et al., WABI 2013 extend this work and introduce the concept of “cascading” Bloom filters
First, some bounds
Critical False Positives
Chikhi & Rizk
[Figure: k-mers in the Bloom filter fall into three classes: “true” k-mers, k-mers encoded in the BF but not “reachable” during navigation, and critical false positives (false neighbors of true k-mers).]
Idea of Chikhi and Rizk
* slide courtesy of Salikhov, Sacomoto & Kucherov
Assume we want to represent a specific set T0 of k-mers with a Bloom filter B1
Key observation: in assembly, not all k-mers can be queried, only those having k-1 overlap with k-mers known to be in the graph.
The set T1 of “critical false positives” (false neighbors of true k-mers) is much smaller than the set of all false positives and can be stored explicitly
Storing B1 and T1 is much more space efficient than other exact methods for storing T0. Membership of w in T0 is tested by first querying B1 and, if w ∈ B1, checking that it is not in T1.
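A minimal sketch of this two-level membership test (illustrative Python; here B1 and T1 are any objects supporting `in`, e.g. a Bloom filter exposing __contains__ and a plain hash set):

def in_T0(w, B1, T1):
    # w is in T0 iff the Bloom filter claims it is present
    # AND it is not one of the explicitly stored critical false positives.
    return (w in B1) and (w not in T1)

# Tiny example with plain sets standing in for the structures:
B1 = {"ACGT", "CGTA", "GTAC", "TACG"}   # over-approximates T0; "TACG" is a critical FP
T1 = {"TACG"}                            # critical false positives, stored exactly
print(in_T0("CGTA", B1, T1))  # True
print(in_T0("TACG", B1, T1))  # False: caught by T1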
[Figure: Venn diagram of T0, the false positives of B1, and the critical false positives T1]
- Represent T0 by Bloom filter B1
- Compute T1 (‘critical false positives’) and represent it e.g. by a hash table
- Result (example): 13.2 bits/node for k=27 (of which 11.1 bits for B1 and 2.1 bits for T1)
* slide courtesy of Salikhov, Sacomoto & Kucherov
Improving on Chikhi and Rizk’s method
- Main idea: iteratively apply the same construction to T1, i.e. encode T1 by a Bloom filter B2 and a set of ‘false-false positives’ T2, then apply this to T2, etc.
- ☞ cascading Bloom filters
* slide courtesy of Salikhov, Sacomoto & Kucherov
[Figure: Venn diagram of T0, the false positives of B1, and the nested sets T1, T2, T3, T4, T5]
- Further encode T1 via a Bloom filter B2 and a set T2, where T2⊆T0 is the set of k-mers stored in B2 by mistake (‘false² positives’)
- Iterate the construction on T2: we obtain a sequence of sets T0, T1, T2, T3, … encoded by Bloom filters B1, B2, B3, B4, … respectively
- T0⊇T2⊇T4⊇… , T1⊇T3⊇T5⊇…
* slide courtesy of Salikhov, Sacomoto & Kucherov
Correctness
Lemma [correctness]: For a k-mer w, consider the smallest i such that w∉Bi+1. Then w∈T0 if i is odd and w∉T0 if i is even.
- if w∉B1 then w∉T0
- if w∈B1, but w∉B2, then w∈T0
- if w∈B1, w∈B2, but w∉B3, then w∉T0
- etc.
* slide courtesy of Salikhov, Sacomoto & Kucherov
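A sketch of the lookup implied by this lemma (illustrative Python; `filters` is the list [B1, B2, …, Bt] and `Tt` is the final, explicitly stored set used when only finitely many filters are kept):

def in_T0_cascade(w, filters, Tt):
    # Walk down the cascade B1, B2, ... until some filter rejects w;
    # by the lemma, the answer alternates with the depth reached.
    for i, B in enumerate(filters):       # i = 0 corresponds to B1
        if w not in B:
            return i % 2 == 1             # odd depth -> w ∈ T0, even depth -> w ∉ T0
    # All t filters accepted w: fall back to the explicitly stored set Tt.
    return (w in Tt) if len(filters) % 2 == 0 else (w not in Tt)

# Tiny example with plain sets standing in for the Bloom filters (t = 2):
B1 = {"AAA", "AAC", "CCC"}   # encodes T0 = {AAA, AAC}; "CCC" is a critical FP
B2 = {"CCC", "AAA"}          # encodes T1 = {CCC}; "AAA" slipped in by mistake
T2 = {"AAA"}                 # stored explicitly
print(in_T0_cascade("AAC", [B1, B2], T2))  # True: rejected by B2 (i = 1, odd)
print(in_T0_cascade("CCC", [B1, B2], T2))  # False: passes both filters, not in T2
print(in_T0_cascade("GGG", [B1, B2], T2))  # False: rejected by B1 (i = 0, even)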
Assuming an infinite number of filters
Let N = |T0| and let r = m_i/n_i be the same for every B_i; as in the earlier analysis, a filter with r bits per element has false positive rate ≈ c^r with c ≈ 0.6185. Then the total size is
|B1| + |B2| + |B3| + |B4| + |B5| + … = rN + 6rN·c^r + rN·c^r + 6rN·c^(2r) + rN·c^(2r) + … = N(1 + 6c^r) · r/(1 − c^r)
The minimum is achieved for r = 5.464, which yields a memory consumption of 8.45 bits/node
* slide courtesy of Salikhov, Sacomoto & Kucherov
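A quick numerical check of this claim (a sketch, assuming c ≈ 0.6185 as in the earlier false-positive analysis):

import math

c = 0.6185   # FP base: a filter with r bits per element has FP rate ≈ c**r

def bits_per_node(r):
    # total size per element of the infinite cascade: r(1 + 6c^r) / (1 - c^r)
    return r * (1 + 6 * c**r) / (1 - c**r)

# Brute-force minimization over a fine grid of r values.
best_r = min((r / 1000 for r in range(1000, 20001)), key=bits_per_node)
print(best_r, bits_per_node(best_r))   # ≈ 5.464, ≈ 8.45 bits/node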
Infinity is difficult to deal with ;)
- In practice we will store only a small finite number of filters B1, B2,…, Bt together with the set Tt stored explicitly
- t=1 ➟ Chikhi & Rizk’s method
- The estimation should be adjusted and the optimal value of r updated; example for t=4:
Table: Estimations for t=4. Optimal r and corresponding memory consumption
* slide courtesy of Salikhov, Sacomoto & Kucherov
Compared to Chikhi&Rizk’s method
Table: Space (bits/node) compared to Chikhi&Rizk for t=4 and different values of k.
* slide courtesy of Salikhov, Sacomoto & Kucherov
We can cut down a bit more …
- Rather than using the same r for all filters B1, B2,…, we can use different properly chosen coefficients r1,r2, …
- This allows saving another 0.2 – 0.4 bits/k-mer
* slide courtesy of Salikhov, Sacomoto & Kucherov
Experiments I: E. coli, varying k
- 10M E. coli reads of 100bp
- 3 versions compared: 1 Bloom (= Chikhi & Rizk), 2 Bloom (t=2) and 4 Bloom (t=4)
* slide courtesy of Salikhov, Sacomoto & Kucherov
Experiments II: Human dataset
- 564M Human reads of 100bp (~17X coverage)
* slide courtesy of Salikhov, Sacomoto & Kucherov
Experiments I (cont)
* slide courtesy of Salikhov, Sacomoto & Kucherov
Efficiently enumerating cFP
Chikhi & Rizk (2013): https://almob.biomedcentral.com/articles/10.1186/1748-7188-8-22
Note: Requires having the full set on disk, and being able to make multiple passes over it.
Bloom Filters & De Bruijn Graphs
So, we can make a very small representation of the dBG. But it’s navigational! We can also make them:
Dynamic & membership
and even weighted
Other AMQs (the CQF)
Approximate Multiset Representation
Works based on quotienting* & fingerprinting keys
Clever encoding allows low-overhead storage of element counts (use key slots to store values in base 2^r − 1; smaller values ⇒ fewer bits)
Careful engineering & use of efficient rank & select to resolve collisions leads to a fast, cache-friendly data structure
Let k be a key and h(k) a p-bit hash value (p = q + r). The high-order q bits of h(k) determine a position in an array of 2^q r-bit slots; the remaining r bits are the value stored in that slot (the fingerprint).
* Idea goes back at least to Knuth (TAOCP vol 3)
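A minimal sketch of the quotienting idea (illustrative Python; this only shows how a key’s hash is split into quotient and remainder, not the CQF’s actual slot encoding or collision resolution):

import hashlib

Q_BITS, R_BITS = 20, 8   # p = q + r = 28-bit hash: 2^20 slots, 8-bit fingerprints

def hash_p_bits(key, p):
    # Any good hash works; truncating SHA-1 to p bits is an arbitrary illustrative choice.
    h = int.from_bytes(hashlib.sha1(key.encode()).digest(), "big")
    return h & ((1 << p) - 1)

def quotient_remainder(key):
    h = hash_p_bits(key, Q_BITS + R_BITS)
    quotient = h >> R_BITS                 # high-order q bits: slot index in [0, 2^q)
    remainder = h & ((1 << R_BITS) - 1)    # low-order r bits: fingerprint stored in the slot
    return quotient, remainder

q, r = quotient_remainder("ACGTACGTACGTACGTACGTACG")
print(q, r)   # which slot the key maps to, and the fingerprint stored there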
The CQF
Approximate Multiset Representation
Works based on quotienting & fingerprinting keys
Careful encoding allows low-overhead storage of element counts
Careful engineering & use of efficient rank & select leads to a fast, cache-friendly data structure
Other efficient representations as well
In addition to the theoretical bounds, this paper introduced an algorithm for constructing the contigs of the compacted dBG efficiently (bcalm), and an efficient representation based on building the FM-index over these contigs (dbgFM).
CMSC423: Wrap-up and FAQ
End of Semester FAQ
1. When is the final?
• Thurs. May 14 (8-10 AM)
2. Where is the final?
• It will be made available on ELMS. I am working to optimize the format.
3. What content will be on the final?
• Technically, you are responsible for all material
• The final will cover content we have covered since the midterm
4. What will the format of the exam be?
• Same as the midterm: short answer & longer-form “thinking” questions. The final will not be proportionally longer; you will have more time per question than on the midterm.
5. How can I prepare for the final?
• Go over the lectures, go over your projects, go over the relevant chapters in the book, google material you still don’t get, ask us questions on Piazza. STUDY AND BE COMFORTABLE WITH DYNAMIC PROGRAMMING!
End of Semester FAQ
6. What grade will I get?
• I don’t know (yet)
• The class will be curved so that the median grade is a B, with +/- grades going in ~3-4 point increments from there.
• The P/F system for the semester is OPT-OUT; if you don’t opt out, you get a P or F.
• A P is anything D- or above
7. Other questions?
What we didn’t cover.
Most of bioinformatics and computational biology:
• all of “long read” technology and method development
• metagenomics
• biological network analysis
• “systems” biology (e.g. regulatory inference)
• biostatistics and statistical interpretation of genomics results
• modern approaches of machine learning in bioinformatics (deep learning)
• much, much more.