
A rather long time ago, I mentioned that Krzysztof Onak and I were compiling a list of open research problems for data stream algorithms and related topics. We're starting with some of the problems that were explicitly raised at the IITK Workshop on Algorithms for Processing Massive Data Sets, but we'd also like to add questions from the rest of the community. Please email (mcgregor at if you have a question that you'd like us to include (see the previous list for some examples). I'll also be posting some of the problems here while we work on compiling the final document. Here's one now…

Given a stream \langle a_1, \ldots, a_m \rangle, how much space is required to approximate the length of the longest increasing subsequence up to a factor 1+\epsilon?
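For concreteness: offline, with the whole sequence in memory, the exact LIS length is computable in O(m \log m) time via patience sorting. A minimal sketch (not a streaming algorithm — it can store up to m values, which is precisely what the streaming question tries to beat):

```python
import bisect

def lis_length(stream):
    """Exact longest-increasing-subsequence length via patience sorting.

    tails[k] is the smallest possible final element of an increasing
    subsequence of length k+1 among the elements seen so far.
    """
    tails = []
    for a in stream:
        # Find the leftmost position whose tail is >= a and replace it;
        # if there is none, a extends the longest subsequence found so far.
        i = bisect.bisect_left(tails, a)
        if i == len(tails):
            tails.append(a)
        else:
            tails[i] = a
    return len(tails)
```

On a strictly increasing stream, tails grows to length m, so this uses \Omega(m) space — consistent with the \Omega(m) lower bound for exact streaming computation mentioned below.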

Background. [Gopalan, Jayram, Krauthgamer, Kumar] presented a single-pass algorithm that uses \tilde{O}((m/\epsilon)^{0.5}) space and a matching (in terms of m) lower bound for deterministic algorithms was proven by [Gal, Gopalan] and [Ergun, Jowhari]. Is there a randomized algorithm that uses less space or can the lower bound be extended to randomized algorithms? Very recently, [Chakrabarti] presented an “anti-lowerbound”, i.e., he showed that the randomized communication complexity of the communication problems used to establish the lower bounds is very small. Hence, if it is possible to extend the lower bound to randomized algorithms, this will require new ideas.

Note that solving the problem exactly is known to require \Omega(m) space [Gopalan, Jayram, Krauthgamer, Kumar] and [Sun, Woodruff]. The related problem of finding an increasing subsequence of length k has been resolved: \tilde{\Theta}(k^{1+1/(2^p-1)}) space is known to be necessary [Guha, McGregor] and sufficient [Liben-Nowell, Vee, Zhu], where p is the number of passes permitted. However, there are no results on finding “most” of the elements.

Here’s another bite-sized stream algorithm for your delectation. This time we want to simulate a random walk from a given node u in a graph whose edges arrive as an arbitrarily-ordered stream. I’ll allow you multiple passes and semi-streaming space, i.e., \tilde{O}(n) space where n is the number of nodes in the graph. You need to return the final vertex of a length t=o(n) random walk.

This is trivial if you take t passes: in each pass, pick a random neighbor of the node reached in the previous pass. Can you do it in fewer passes?
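The t-pass approach can be sketched as follows. This is a toy in-memory simulation: the list `edges` stands in for the edge stream, and each iteration of the loop corresponds to one pass over it.

```python
import random

def trivial_walk(edges, u, t, seed=None):
    """Length-t random walk from u using t 'passes' over the edge stream.

    edges: list of undirected edges (a, b), our stand-in for the stream.
    Each iteration scans the stream once to collect the current node's
    neighbors, then hops to a uniformly random one.
    """
    rng = random.Random(seed)
    v = u
    for _ in range(t):
        # One pass over the stream: collect the neighbors of v.
        nbrs = [b if a == v else a for (a, b) in edges if v in (a, b)]
        v = rng.choice(nbrs)
    return v
```

On a single-edge graph the walk just bounces back and forth, so its endpoint depends only on the parity of t — a handy sanity check.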

Well, here’s an algorithm from [Das Sarma, Gollapudi, Panigrahy] that simulates a random walk of length t in \tilde{O}(n) space while only taking O(\sqrt{t}) passes. As in the trivial algorithm, we build up the random walk sequentially. But rather than making a single hop of progress in each pass, we’ll construct the random walk by stitching together shorter random walks.

  1. We first compute short random walks from each node. Using the trivial algorithm, do a length \sqrt{t} walk from each node v and let T[v] be the end point.
  2. We can’t reuse short random walks (otherwise the steps in the random walk won’t be independent), so let S be the set of nodes whose short random walk we’ve already used. To start, let S\leftarrow \{u\}, v\leftarrow T[u], and \ell\leftarrow\sqrt{t}, where v is the vertex reached by the random walk constructed so far and \ell is the length of this walk.
  3. While \ell < t-\sqrt{t}
    1. If v\not \in S then set S\leftarrow S\cup \{v\}, v\leftarrow T[v], \ell \leftarrow \ell+\sqrt{t}. (Note that we add the old v to S before moving to T[v]: it is v’s short walk that has now been consumed.)
    2. Otherwise, sample \sqrt{t} edges (with replacement) incident on each node in S. Find the maximal path from v such that on the i-th visit to node x, we take the i-th edge that was sampled for node x. The path terminates either when a node in S is visited more than \sqrt{t} times or when we reach a node that isn’t in S. Reset v to be the final node of this path and increase \ell by the length of the path. (If we complete the length t random walk during this process we may terminate at this point and return the current node.)
  4. Perform the remaining O(\sqrt{t}) steps of the walk using the trivial algorithm.
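The steps above can be sketched as follows. Again this is an in-memory simulation, not a streaming implementation: `adj` maps each node to its neighbor list, and each short walk or batch of edge samples stands in for what would be gathered in O(\sqrt{t}) or O(1) passes over the stream.

```python
import math
import random

def stitched_walk(adj, u, t, seed=None):
    """In-memory sketch of the stitching algorithm of
    [Das Sarma, Gollapudi, Panigrahy].

    adj: dict mapping each node to a list of its neighbors.
    Returns the endpoint of a length-t random walk from u.
    """
    rng = random.Random(seed)
    w = max(1, math.isqrt(t))  # short-walk length, roughly sqrt(t)

    def short_walk(v, steps):
        for _ in range(steps):
            v = rng.choice(adj[v])
        return v

    # Step 1: a length-w short walk from every node; T[v] is its endpoint.
    T = {v: short_walk(v, w) for v in adj}

    # Step 2: consume u's short walk.
    S = {u}
    v, ell = T[u], w

    # Step 3: stitch short walks until fewer than w steps remain.
    while ell < t - w:
        if v not in S:
            # Consume v's short walk: mark it used, jump to its endpoint.
            S.add(v)
            v, ell = T[v], ell + w
        else:
            # Sample w edges (with replacement) at each node of S, then
            # follow them step by step; stop on leaving S, exhausting a
            # node's samples, or completing the length-t walk.
            samples = {x: [rng.choice(adj[x]) for _ in range(w)] for x in S}
            used = {x: 0 for x in S}
            while v in S and used[v] < w and ell < t:
                nxt = samples[v][used[v]]
                used[v] += 1
                v, ell = nxt, ell + 1

    # Step 4: at most O(sqrt(t)) remaining steps, taken directly.
    return short_walk(v, t - ell)
```

On a two-node graph every walk is forced, so the endpoint is determined by the parity of t regardless of the random choices — a convenient correctness check for the bookkeeping of \ell.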

So why does it work? First note that the maximum size of S is O(\sqrt{t}), because |S| is only incremented when \ell increases by at least \sqrt{t} and we know that \ell \leq t. The total space required to store the endpoints T is \tilde{O}(n). When we sample \sqrt{t} edges incident on each node in S, this requires \tilde{O}(|S|\sqrt{t})=\tilde{O}(t) space. Hence, since t=o(n), the total space is \tilde{O}(n). For the number of passes, note that whenever we need to take a pass to sample edges incident on S, we make \Omega(\sqrt{t}) hops of progress: either the path uses \Omega(\sqrt{t}) sampled edges, or we reach a node with an unused short walk, which extends the walk by \sqrt{t} in the next iteration. Hence, including the O(\sqrt{t}) passes used at the start and end of the algorithm, the total number of passes is O(\sqrt{t}).

Das Sarma et al. also present a trade-off result that reduces the space to \tilde{O}(n\alpha+\sqrt{t/\alpha}) for any \alpha\in (0,1] at the expense of increasing the number of passes to \tilde{O}(\sqrt{t/\alpha}). They then use this for estimating the PageRank vector of the graph.

Luca has just announced the accepted papers for FOCS. Papers that have direct or indirect connections to streaming and communication complexity include:

Abstracts can be found here. More pdfs can be found at My Brain is Open.
