
A few weeks ago I attended a workshop on Coding, Complexity, and Sparsity at the University of Michigan. Thanks to the organizers (Anna Gilbert, Martin Strauss, Atri Rudra, Hung Ngo, Ely Porat, and S. Muthukrishnan) for not only putting together a great program but also for treating the speakers like celebrities! The place swarmed with autograph hunters, roads were closed, and security kept the paparazzi at bay… at least I think this was all for our benefit.

I gave a brief tutorial on data streams and my slides can be found here if you’re interested. One of the results I went through was for the \ell_0-sampling problem introduced by [Cormode, Muthukrishnan, Rozenbaum] and [Frahling, Indyk, Sohler]. See also [Monemizadeh and Woodruff] and [Jowhari, Saglam, Tardos]. Here the set-up is that you see a sequence of m updates to a length-n vector v. These updates can increment or decrement entries of v, although for the talk I assumed that the entries themselves always remained non-negative. E.g., for m=5, n=6 the sequence

(6,+),~ (3,+),~ (6,-),~ (5,+),~ (2,+)

would result in the vector v=(0,1,1,0,1,0). An algorithm for \ell_0-sampling should return an element chosen uniformly at random from the set \{i: v_i> 0\}. See the slides (or the original papers) for an algorithm using \textrm{polylog} (m,n) space. Anyhow, during the talk I mentioned there was a simple trick to determine whether
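As a concrete illustration of the update model (a minimal sketch, not from the original talk), here is the stream above applied to a vector, together with a naive offline \ell_0-sampler that stores all of v — the point of the streaming algorithms cited above is to achieve the same sampling guarantee in \textrm{polylog}(m,n) space:

```python
import random

def apply_updates(n, updates):
    """Apply a sequence of (index, sign) updates to a length-n vector.

    Indices are 1-based, matching the example in the post.
    """
    v = [0] * n
    for i, sign in updates:
        v[i - 1] += 1 if sign == '+' else -1
    return v

def l0_sample(v):
    """Offline l0-sample: a uniformly random index i with v_i > 0.

    Naive version for illustration only; it stores all of v, whereas
    the streaming algorithms use polylog(m, n) space.
    """
    support = [i + 1 for i, x in enumerate(v) if x > 0]
    return random.choice(support)

updates = [(6, '+'), (3, '+'), (6, '-'), (5, '+'), (2, '+')]
v = apply_updates(6, updates)
print(v)  # [0, 1, 1, 0, 1, 0]
print(l0_sample(v))  # one of 2, 3, 5, each with probability 1/3
```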

\ell_0(v):=|\{i: v_i> 0\}|=1 .

But I decided to leave it as an easy puzzle for which I’d give the answer later. Of course, I forgot. See the comments to this post for the answer.
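One standard trick — a sketch of what I believe the intended answer is, and valid under the talk’s assumption that entries stay non-negative — is to maintain the three sums S_0 = \sum_i v_i, S_1 = \sum_i i \cdot v_i, and S_2 = \sum_i i^2 \cdot v_i, each updatable in O(1) time per stream update. By Cauchy–Schwarz, S_1^2 \leq S_0 S_2, with equality exactly when v has at most one positive entry; so \ell_0(v)=1 iff S_0 > 0 and S_1^2 = S_0 S_2:

```python
def is_l0_one(updates):
    """Return True iff exactly one coordinate of v is positive.

    Maintains S0 = sum(v_i), S1 = sum(i*v_i), S2 = sum(i^2*v_i) in O(1)
    words of space. By Cauchy-Schwarz, S1^2 <= S0*S2, with equality iff
    the support of v has size at most one (valid when entries of v
    remain non-negative, as assumed in the talk).
    """
    s0 = s1 = s2 = 0
    for i, sign in updates:
        c = 1 if sign == '+' else -1
        s0 += c
        s1 += c * i
        s2 += c * i * i
    return s0 > 0 and s1 * s1 == s0 * s2
```

On the example stream above the support is \{2,3,5\}, so the test fails (S_0=3, S_1=10, S_2=38, and 100 \neq 114), while a stream touching a single index passes it.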


