My Biased Coin recently discussed a new paper extending some work I'd done a few years back. I'll briefly mention this work at the end of the post, but first, here's another bite-sized result from [Alon, Matias, Szegedy] that is closely related.
Consider a numerical stream $\langle a_1, \ldots, a_m \rangle$, $a_j \in [n]$, that defines a frequency vector $f = (f_1, \ldots, f_n)$ where $f_i = |\{j : a_j = i\}|$ is the frequency of $i$ in the stream. Here's a simple sketch algorithm that allows you to approximate $F_2 = \sum_i f_i^2$. Let $x \in \{-1, +1\}^n$ be a random vector where the $x_i$ are 4-wise independent and unbiased. Consider $t = x \cdot f = \sum_i f_i x_i$ and note that $t$ can be computed incrementally as the stream arrives (given that $x$ is implicitly stored by the algorithm).

By the weaker assumption of 2-wise independence, we observe that $E[x_i x_j] = 0$ if $i \neq j$ and so:

$$E[t^2] = \sum_{i,j} f_i f_j E[x_i x_j] = \sum_i f_i^2 = F_2 \ .$$

By the assumption of 4-wise independence, we also observe that $E[x_i x_j x_k x_l] = 0$ unless $i=j, k=l$ or $i=k, j=l$ or $i=l, j=k$ and so:

$$E[t^4] = \sum_i f_i^4 + 3 \sum_{i \neq j} f_i^2 f_j^2 \leq 3 F_2^2 \quad \mbox{and hence} \quad \mathrm{Var}[t^2] \leq 2 F_2^2 \ .$$

Hence, if we repeat the process with independent copies of $x$, it's possible to show (via Chebyshev and Chernoff bounds) that by appropriately averaging the results, we get a value that is within a factor $1+\epsilon$ of $F_2$ with probability at least $1-\delta$. Note that it was lucky that we didn't need full coordinate independence because that would have required $\Omega(n)$ bits just to remember $x$. It can be shown that remembering $O(\log n)$ bits is sufficient if we only need 4-wise independence.
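To make this concrete, here's a toy one-pass implementation in Python. It's not code from [Alon, Matias, Szegedy]: I generate the 4-wise independent signs with the standard trick of evaluating a random degree-3 polynomial modulo a prime and taking the parity of the result (so the signs are unbiased only up to a negligible $1/P$ error), and the repetition counts are illustrative rather than optimized.

```python
import math
import random
import statistics

P = 2 ** 61 - 1  # a prime much larger than the universe size

def sign(coeffs, i):
    """+1/-1 from a random degree-3 polynomial mod P (Horner's rule).
    Parity of a 4-wise independent field element gives (essentially)
    unbiased, 4-wise independent signs."""
    v = 0
    for c in coeffs:
        v = (v * i + c) % P
    return 1 - 2 * (v & 1)

def ams_f2(stream, eps=0.2, delta=0.05):
    """One-pass median-of-means estimate of F2 = sum_i f_i^2."""
    k1 = math.ceil(8 / eps ** 2)             # copies to average (Chebyshev)
    k2 = 2 * math.ceil(math.log(1 / delta))  # groups for the median (Chernoff)
    # Each copy of x is stored implicitly as 4 coefficients: O(log n) bits.
    seeds = [[[random.randrange(P) for _ in range(4)] for _ in range(k1)]
             for _ in range(k2)]
    t = [[0] * k1 for _ in range(k2)]
    for i in stream:                         # update each t = x . f incrementally
        for g in range(k2):
            for c in range(k1):
                t[g][c] += sign(seeds[g][c], i)
    return statistics.median(sum(v * v for v in row) / k1 for row in t)

# Example: f_1 = 60, f_2 = 40, so F2 = 60^2 + 40^2 = 5200.
print(ams_f2([1, 1, 1, 2, 2] * 20))
```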
BONUS! The recent work… Having at least 4-wise independence seemed pretty important for getting a good bound on the variance of $t^2$. However, it was observed in [Indyk, McGregor] that the following also worked. First pick $y, z \in \{-1,+1\}^{\sqrt{n}}$ where $y$ and $z$ are independent and the coordinates of each are 4-wise independent. Then let the coordinate values of $x = y \otimes z$ be $x_{(i,j)} = y_i z_j$. It's no longer the case that the coordinates of $x$ are 4-wise independent but it's still possible to show that $\mathrm{Var}[t^2] = O(F_2^2)$ and this is good enough for our purposes.

In follow-up work by [Braverman, Ostrovsky] and [Chung, Liu, Mitzenmacher], it was shown that you can push this idea further and define $x$ based on $k$ random vectors of length $n^{1/k}$. The culmination of this work shows that the variance increases to at most $3^k F_2^2$ and the resultant algorithm uses $O(3^k \epsilon^{-2} \log n)$ space.
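A toy version of the $k$-fold construction (again my own rendering, with illustrative repetition counts; sign() is the same polynomial trick as above): the sign of a $k$-tuple is the product of $k$ independent 4-wise independent signs, one per coordinate. The product signs remain unbiased and pairwise independent, so $E[t^2] = F_2$ still holds; only the variance bound degrades by the $3^k$ factor.

```python
import random
import statistics

P = 2 ** 61 - 1

def sign(coeffs, i):
    """4-wise independent +/-1 values, as in the previous snippet."""
    v = 0
    for c in coeffs:
        v = (v * i + c) % P
    return 1 - 2 * (v & 1)

def tuple_sign(seed_list, tup):
    """Sign of a k-tuple: the product of k independent 4-wise signs,
    one per coordinate. Pairwise independence survives, so E[t^2] = F2;
    the variance grows by at most a 3^k factor."""
    s = 1
    for coeffs, i in zip(seed_list, tup):
        s *= sign(coeffs, i)
    return s

def f2_tuples(stream, k, k1=200, k2=6):
    """Median-of-means F2 estimate for a stream of k-tuples."""
    seeds = [[[[random.randrange(P) for _ in range(4)] for _ in range(k)]
              for _ in range(k1)] for _ in range(k2)]
    t = [[0] * k1 for _ in range(k2)]
    for tup in stream:
        for g in range(k2):
            for c in range(k1):
                t[g][c] += tuple_sign(seeds[g][c], tup)
    return statistics.median(sum(v * v for v in row) / k1 for row in t)

# Example with k = 2: f_(1,2) = 40, f_(2,1) = 20, so F2 = 1600 + 400 = 2000.
print(f2_tuples([(1, 2), (1, 2), (2, 1)] * 20, k=2))
```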
At this point, you’d be excused for asking why we all cared about such a construction. The reason is that it’s an important technical step in solving the following problem: given a stream of tuples from , can we determine if the
coordinates are independent? In particular, how “far” is the joint distribution from the product distribution defined by considering the frequency of each value in each coordinate separately. When “far” is measured in terms of the Euclidean distance, a neat solution is based on the above analysis. Check out the papers for the details.
If you want to measure independence in terms of the variational distance, check out [Braverman, Ostrovsky]. In the case , measuring independence in terms of the KL-divergence gives the mutual information. For this, see [Indyk, McGregor] again.
We’re presented with a numerical stream and we want to compute some small-space approximation of
where
is the frequency of
. In particular, we want to be able to return estimates
that satisfy
where is not known ahead of time. CR-Precis [Ganguly, Majumder] is a neat little deterministic stream algorithm that achieves this in
space. (Also check out [pg. 31, Gasieniec, Muthukrisnan] for a special case.)
Let $p_1 < p_2 < \ldots < p_t$ be the first $t$ prime numbers and define "hash" functions $h_j(i) = i \bmod p_j$ for $j \in [t]$. Based on these functions, the CR-Precis algorithm maintains $p_1 + p_2 + \ldots + p_t$ counters $\{q_{j,r} : j \in [t], \ 0 \leq r < p_j\}$ which are initially zero. When we observe $i$ in the stream we increment each of $q_{j, h_j(i)}$ for $j \in [t]$.
Say we’re interested in the frequency of . At the end of the day, we end up with
overestimates for
, i.e.,
. Hence, it makes sense to return
. How much of an overestimate can this be?
Note that for any , there are at most
functions
under which
collide, i.e.,
. This follows from the Chinese Remainder Theorem. Hence, we deduce that,
and therefore . Setting
gives the desired result.
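Here's a minimal Python rendering of the algorithm just described (the class and helper names are mine; finding the first $t$ primes by trial division is fine at this scale):

```python
def first_primes(t):
    """The first t primes, by trial division (fine for small t)."""
    primes = []
    c = 2
    while len(primes) < t:
        if all(c % p for p in primes):
            primes.append(c)
        c += 1
    return primes

class CRPrecis:
    """A toy CR-Precis sketch: row j holds p_j counters and uses
    h_j(i) = i mod p_j.  Entirely deterministic."""
    def __init__(self, t):
        self.primes = first_primes(t)
        self.rows = [[0] * p for p in self.primes]

    def update(self, i, delta=1):
        # delta = +1 for an insert, -1 for a delete (the sketch is linear)
        for row, p in zip(self.rows, self.primes):
            row[i % p] += delta

    def estimate(self, i):
        # every row overestimates f_i (when all f_i >= 0), so take the min
        return min(row[i % p] for row, p in zip(self.rows, self.primes))

# With t = eps^{-1} log2(n) rows: f_i <= estimate(i) <= f_i + eps * m.
sk = CRPrecis(t=10)
for i in [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]:
    sk.update(i)
print(sk.estimate(5))  # true frequency 3; never an underestimate
```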
But wait, there’s more! There are actually other deterministic algorithms for this problem that use only space: e.g. “Misra-Gries”, “Space-Saving”, and “Lossy-Counting” algorithms (see [Cormode, Hadjieleftheriou] for a survey and experimental comparison). What makes CR-Precis more useful is that it’s a sketch algorithm, i.e., it’s possible to transform the hash functions into a
matrix
in a natural way such that the algorithm described above is simply computing
. This confers numerous advantages to the algorithm including the ability to handle “deletes”, i.e., we consider a stream
and define
.
Assuming that all , the above algorithm is easily adapted to this new setting by decrementing the appropriate counters when processing
and incrementing when processing
.
As noted here, here, and here, the SODA accepts have been announced. There are numerous cool looking papers. Particularly relevant to this blog are:
- On the Exact Space Complexity of Sketching and Streaming Small Norms [Kane, Nelson, Woodruff]
- Streaming Algorithms for Extent Problems in High Dimensions [Agarwal, Sharathkumar]
- Efficiently Decodable Non-adaptive Group Testing [Indyk, Ngo, Rudra]
- Coresets and Sketches for High Dimensional Subspace Approximation Problems [Feldman, Monemizadeh, Sohler, Woodruff]
- 1-pass Relative-Error L_p-Sampling with Applications [Monemizadeh, Woodruff]
- Lower Bounds for Sparse Recovery [Do Ba, Indyk, Price]
- A Model of Computation for MapReduce [Karloff, Suri, Vassilvitskii]
(If you spot a version of any of the above papers online, please send me a link so I can add it. Cheers.)
UPDATE: Abstracts have now been posted. Also, the Geomblog has a “behind the scenes” look at the workings of this year’s SODA committee.