A Java implementation of a stable Bloom filter for filtering duplicates out of data streams.
Bloom filters can be used to detect duplicates in streams of data elements.
The most obvious approach would be to retain a list of the IDs of all elements seen so far. For large numbers of elements this is not feasible, since retaining all IDs would require a large amount of memory. A Bloom filter is a compromise: it works with a limited amount of memory and may therefore yield false positives.
The false positive rate (FPR) of a Bloom filter is given by FPR = (1 - p)^k, where k is the number of hash functions, m is the size of the filter, n is the number of added elements, and p = (1 - 1/m)^(k*n) is the probability that a given field in the filter is still unused after n elements have been added. As n increases, p converges to 0 and the FPR therefore converges to 1.
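As a quick illustration (parameter values chosen arbitrarily for this sketch), the FPR can be computed directly from the formula:

```java
public class FprExample {
    public static void main(String[] args) {
        int k = 4;            // number of hash functions
        long m = 1_000_000;   // size of the filter (number of fields)
        long n = 200_000;     // number of added elements

        // p: probability that a given field is still unused after n insertions
        double p = Math.pow(1.0 - 1.0 / m, (double) k * n);
        // FPR = (1 - p)^k
        double fpr = Math.pow(1.0 - p, k);
        System.out.printf("p = %.4f, FPR = %.4f%n", p, fpr); // p ≈ 0.4493, FPR ≈ 0.0920
    }
}
```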
Stable Bloom filters continuously "reset" random fields in the filter. Deng and Rafiei have shown that this stabilises the FPR [1]. The disadvantage of this approach is that it introduces false negatives.
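Conceptually, the reset step works as in the following sketch (a simplified illustration of the approach from [1], not this library's actual implementation; the cell layout, the parameter P, and the hashing are placeholder assumptions):

```java
import java.util.Random;

// Illustrative sketch of a stable Bloom filter's insert/check cycle,
// following Deng and Rafiei [1]. Not the implementation of this library.
class StableBloomSketch {
    private final int[] cells;  // cell counters (d-bit cells in the paper)
    private final int max;      // value a cell is set to on insertion ("Max")
    private final int p;        // number of cells decremented per insertion ("P")
    private final int k;        // number of hash functions
    private final Random random = new Random();

    StableBloomSketch(int size, int max, int p, int k) {
        this.cells = new int[size];
        this.max = max;
        this.p = p;
        this.k = k;
    }

    // Returns true if the element was possibly seen before (may be a false
    // positive; resets may also cause false negatives).
    boolean addAndCheck(Object element) {
        boolean duplicate = true;
        for (int i = 0; i < k; i++) {
            if (cells[index(element, i)] == 0) {
                duplicate = false; // an unset cell: element is definitely new
            }
        }
        // "Reset" step: decrement p randomly chosen cells ...
        for (int i = 0; i < p; i++) {
            int idx = random.nextInt(cells.length);
            if (cells[idx] > 0) {
                cells[idx]--;
            }
        }
        // ... then set the element's k cells to max.
        for (int i = 0; i < k; i++) {
            cells[index(element, i)] = max;
        }
        return duplicate;
    }

    // Placeholder double hashing; a real implementation would use
    // stronger, independent hash functions.
    private int index(Object element, int i) {
        int h = element.hashCode();
        return Math.floorMod(h + i * ((h >>> 16) | 1), cells.length);
    }
}
```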
Basic usage of a plain Bloom filter:

```java
BloomFilter<String> filter = BloomFilterBuilder.get().buildFilter();
filter.add("foo");
assert filter.mightContain("foo");  // added elements are always reported
assert !filter.mightContain("bar"); // may rarely fail due to a false positive
```
A counting Bloom filter additionally supports removing elements:

```java
CountingBloomFilter<String> filter = BloomFilterBuilder.get().buildCountingFilter();
filter.add("foo");
filter.add("bar");
filter.remove("foo"); // counters allow removal without clearing fields shared with other elements
assert !filter.mightContain("foo");
assert filter.mightContain("bar");
```
[1] Fan Deng and Davood Rafiei. 2006. Approximately detecting duplicates for streaming data using stable Bloom filters. In Proceedings of the 2006 ACM SIGMOD International Conference on Management of Data (SIGMOD '06). ACM, New York, NY, USA, 25-36. DOI: https://doi.org/10.1145/1142473.1142477