
Continuing the report from the Dortmund Workshop on Algorithms for Data Streams, here are the happenings from Day 3. Previous posts: Day 1 and Day 2.

Michael Kapralov started the day with new results on computing large matchings in the semi-streaming model, one of my favorite pet problems. You are presented with a stream of unweighted edges on n nodes and want to approximate the size of the maximum matching given the constraint that you only have O(n polylog n) bits of memory. It’s trivial to get a 1/2 approximation by constructing a maximal matching greedily. Michael shows that it’s impossible to beat a 1-1/e factor even if the graph is bipartite and the edges are grouped by their right endpoint. In this model, he also shows a matching (no pun intended) 1-1/e approximation and an extension to a 1-e^{-p}p^{p-1}/(p-1)! approximation given p passes.
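
Since that greedy baseline is the benchmark everything here is measured against, here is a minimal sketch of it (my own code and variable names, not Michael's algorithm):

```python
def greedy_matching(edge_stream):
    """Maintain a maximal matching over a stream of unweighted edges.

    An edge is kept iff neither endpoint is already matched. Any
    maximal matching has at least half the size of a maximum matching,
    and the memory used is O(n) edge endpoints.
    """
    matched = set()    # endpoints already covered by the matching
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Example: on the path 1-2-3-4 arriving in this order, the greedy
# matching is [(1, 2), (3, 4)], which happens to be maximum here.
print(greedy_matching([(1, 2), (2, 3), (3, 4)]))
```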

Next up, Mert Sağlam talked about \ell_p sampling. Here the stream consists of a sequence of updates to an underlying vector \mathbf{x}\in {\mathbb R}^n and the goal is to randomly select an index where i is chosen with probability proportional to |x_i|^p. It’s a really nice primitive that gives rise to simple algorithms for a range of problems including frequency moments and finding duplicates. I’ve been including the result in recent tutorials. Mert’s result simplifies and improves an earlier result by Andoni et al.
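
For concreteness, here is the target distribution computed offline with full access to x; the whole point of the streaming results is to (approximately) sample from this distribution in polylog space under arbitrary updates. The code and names are mine.

```python
import random

def lp_sample(x, p):
    """Return an index i with probability proportional to |x_i|^p.

    This is the distribution an l_p sampler must approximate; here we
    cheat and read the whole vector, which a streaming algorithm
    cannot afford to do.
    """
    weights = [abs(xi) ** p for xi in x]
    return random.choices(range(len(x)), weights=weights, k=1)[0]

# With p = 2 the index of the coordinate 3.0 is returned with
# probability 9/14; with p = 1 it would be returned with probability 3/6.
print(lp_sample([0.0, 3.0, -1.0, 2.0], p=2))
```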

The next two talks focused on communication complexity, the evil nemesis of the honest data stream algorithm. First, Xiaoming Sun talked about space-bounded communication complexity. The standard method to prove a data stream memory lower bound is to consider two players corresponding to the first and second halves of the data stream. A data stream algorithm gives rise to a communication protocol where the players emulate the algorithm and transmit the memory state when necessary. In particular, multi-pass stream algorithms give rise to multi-round communication protocols. Hence a communication lower bound gives rise to a memory lower bound. However, in the standard communication setting we suppose that the two players may maintain unlimited state between rounds. The fact that stream algorithms can’t do this may lead to suboptimal data stream bounds. To address this, Xiaoming’s work outlines a communication model where the players may maintain only a limited amount of state between the sending of each message and establishes bounds on classical problems including equality and inner-product.

In the final talk of the day, Amit Chakrabarti extolled the virtues of Talagrand’s inequality and explained why every data stream researcher should know it. In particular, Amit reviewed the history on proving lower bounds for the Gap-Hamming communication problem (Alice and Bob each have a length n string and wish to determine whether the Hamming distance is less than n/2-√n or greater than n/2+√n) and ventured that the history wouldn’t have been so long if the community had had a deeper familiarity with Talagrand’s inequality. It was a really gracious talk in which Amit actually spent most of the time discussing Sasha Sherstov’s recent proof of the lower bound rather than his own work.

BONUS! Spot the theorist… After the talks, we headed off to Revierpark Wischlingen to contemplate some tree-traversal problems. If you think your observation skills are up to it, click on the picture below to play “spot the theorist.” It may take some time, so keep looking until you find him or her.

This week, I’m at the Workshop on Algorithms for Data Streams in Dortmund, Germany. It’s a continuation in spirit of the great Kanpur workshops from 2006 and 2009.

The first day went very well despite the widespread jet lag (if only the jet lag of those traveling from the east could cancel out that of those traveling from the west). Sudipto Guha kicked things off with a talk on combinatorial optimization problems in the (multiple-pass) data stream model. There was a nice parallel between Sudipto’s talk and a later talk by David Woodruff, and both were representative of a growing number of papers that have used ideas developed in the context of data streams to design more efficient algorithms in the usual RAM model. In the case of Sudipto’s talk, this was a faster algorithm to approximate b-matchings, while David’s result was a faster algorithm for least-squares regression.

Other talks included Christiane Lammersen presenting a new result for facility location in data streams; Melanie Schmidt talking about constant-size coresets for k-means and projective clustering; and Dan Feldman discussing the data stream challenges that arise when trying to transform real-time GPS data from your smart-phone into a human-readable diary of your life. I spoke about work on constructing a combinatorial sparsifier for an n^2-dimensional graph via a single random linear projection into roughly n dimensions. Rina Panigrahy wrapped things up with an exploration of different distance measures in social networks, i.e., how to quantify how closely connected you are to your favorite celebrity. This included proposing a new measure based on the probability that two individuals remain connected if every edge is deleted with some probability. He then related this to electrical resistance and spectral sparsification. He refused to be drawn on which of his co-authors had the closest connection to the Kardashians.

To be continued… Tomorrow, Suresh will post about day 2 across at the Geomblog.

In the second lecture, we discussed algorithms for graph streams. Here we observe a sequence of edges and the goal is to estimate properties of the underlying graph without storing all the edges. The slides are here:

We covered:

  1. Spanners: If you judiciously store only a subset of the edges, you can ensure that the distance between any two nodes in the subgraph is only a constant factor larger than the distance in the entire graph (a minimal sketch of the greedy rule appears after this list).
  2. Sparsifiers: Here you again remember only a subset of the edges, but you also assign weights to these edges such that the capacity of every cut is preserved up to a small factor. We show how to do this incrementally.
  3. Sketching: All this talk of remembering a subset of the edges is all very well, but what if we’re dealing with a fully dynamic graph where edges are both added and deleted? Can we still compute interesting graph properties if we can’t be sure the edges we remember won’t subsequently be deleted? You’ll have to see the slides to find out. Or see this SODA paper.
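
Here is a minimal sketch of the greedy spanner rule from item 1, for an insert-only stream; the stretch parameter t, the networkx dependency, and the function name are my own choices rather than anything from the slides.

```python
import networkx as nx

def greedy_stream_spanner(edge_stream, t):
    """Keep an arriving edge only if its endpoints are currently at
    distance greater than t in the stored subgraph H.

    Every discarded edge (u, v) is already spanned by a path of length
    at most t in H, and distances in H only shrink as edges arrive, so
    H is a t-spanner of the streamed graph.
    """
    H = nx.Graph()
    for u, v in edge_stream:
        H.add_nodes_from((u, v))
        try:
            d = nx.shortest_path_length(H, u, v)
        except nx.NetworkXNoPath:
            d = float("inf")
        if d > t:
            H.add_edge(u, v)
    return H

# Usage: the third edge of a triangle is discarded when t >= 2, since
# its endpoints are already at distance 2 in the stored subgraph.
spanner = greedy_stream_spanner([(1, 2), (2, 3), (1, 3)], t=3)
print(sorted(spanner.edges()))   # [(1, 2), (2, 3)]
```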

Lunch Break! If you want to fully recreate the experience of a French workshop, I suggest you now find yourself a plate of oysters and break for lunch. Lectures will resume shortly.

Just back from EPIT 2012 where I gave four lectures on data streams and a bit of communication complexity. Other lecturers discussed semi-definite programming, inapproximability, and quantum computing. Thanks to Iordanis Kerenidis, Claire Mathieu, and Frédéric Magniez for a great week.

Anyhow, I wanted to post my slides along with some editorial comments that may or may not be of interest. Here’s the first lecture:

The goal was to cover some of the basic algorithmic techniques such as different forms of sampling and random linear projections. The basic narrative was:

  1. Sampling: Sure, you can uniformly sample from the stream and try to make some inferences from the sample. But there are better ways to use your limited memory.
  2. Sketching: You can compute random linear projections of the data. Hash-based sketches like Count-Min and Count-Sketch are versatile and solve many problems including quantile estimation, range queries, and heavy hitters (a toy Count-Min implementation appears after this list).
  3. Sampling Again: A useful primitive enabled by sketching is non-uniform sampling, such as \ell_p sampling, where the probability of returning a stream element of type i is proportional to the p-th power of the frequency of i.
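
As promised in item 2, here is a toy Count-Min sketch. It is a minimal sketch assuming insert-only updates and using Python's built-in hash in place of a proper pairwise-independent family, so treat the parameters and class name as illustrative only.

```python
import random

class CountMinSketch:
    """d rows of w counters; each row hashes items with its own salt.

    A point query returns the minimum over rows. Each row overestimates
    the true frequency by at most roughly (stream length)/w due to
    collisions, so taking the minimum of d rows makes large errors unlikely.
    """
    def __init__(self, width, depth, seed=0):
        rng = random.Random(seed)
        self.width = width
        self.salts = [rng.getrandbits(64) for _ in range(depth)]
        self.rows = [[0] * width for _ in range(depth)]

    def _cells(self, item):
        for r, salt in enumerate(self.salts):
            yield r, hash((salt, item)) % self.width

    def update(self, item, count=1):
        for r, c in self._cells(item):
            self.rows[r][c] += count

    def query(self, item):
        return min(self.rows[r][c] for r, c in self._cells(item))

cms = CountMinSketch(width=256, depth=4)
for item in ["a", "b", "a", "c", "a"]:
    cms.update(item)
print(cms.query("a"))   # at least 3, and exactly 3 unless there are collisions
```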

I bookended the lecture with algorithms for frequency moments, first a suboptimal result via AMS sampling (from Alon et al. STOC 1996) and then a near-optimal result via \ell_p sampling (from Andoni et al. FOCS 2011 and Jowhari et al. PODS 2011). I was tempted to use another example but that seemed churlish given the importance of frequency moments in the development of data streams theory. To keep things simple I assumed algorithms had an unlimited source of random bits. Other than that I was quite happy that I managed to present most of the details without drowning in a sea of definitions and notation.
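
To make the first bookend concrete, here is a minimal sketch of a single AMS sampling estimator for F_k (my own code; a real algorithm averages many independent copies and takes medians of means to control the variance):

```python
import random

def ams_estimate(stream, k, seed=None):
    """One unbiased AMS estimator for F_k = sum_i f_i^k.

    Reservoir-sample a uniformly random position J of the stream, let
    r be the number of occurrences of stream[J] from position J to the
    end (inclusive), and output m * (r^k - (r-1)^k).
    """
    rng = random.Random(seed)
    m, chosen, r = 0, None, 0
    for x in stream:
        m += 1
        if rng.random() < 1.0 / m:   # position m becomes the sampled J
            chosen, r = x, 0         # restart the suffix count
        if x == chosen:
            r += 1
    return m * (r ** k - (r - 1) ** k)

# For the stream a, a, b we have F_2 = 2^2 + 1^2 = 5; averaging many
# independent estimators concentrates around that value.
estimates = [ams_estimate(["a", "a", "b"], k=2, seed=s) for s in range(2000)]
print(sum(estimates) / len(estimates))
```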

A few weeks ago I attended a workshop on Coding, Complexity, and Sparsity at the University of Michigan. Thanks to the organizers (Anna Gilbert, Martin Strauss, Atri Rudra, Hung Ngo, Ely Porat, and S. Muthukrishnan) for not only putting together a great program but also for treating the speakers like celebrities! The place swarmed with autograph hunters, roads were closed, and security kept the paparazzi at bay… at least I think this was all for our benefit.

I gave a brief tutorial on data streams and my slides can be found here if you’re interested. One of the results I went through was for the \ell_0-sampling problem introduced by [Cormode, Muthukrishnan, Rozenbaum] and [Frahling, Indyk, Sohler]. See also [Monemizadeh and Woodruff] and [Jowhari, Saglam, Tardos]. Here the set-up is that you see a sequence of m updates to a length n vector v. These updates can increment or decrement entries of v although for the talk I assumed that the entries themselves always remained non-negative. E.g., for m=5, n=6 the sequence

(6,+),~ (3,+),~ (6,-),~ (5,+),~ (2,+)

would result in the vector v=(0,1,1,0,1,0). An algorithm for \ell_0-sampling should return an element chosen uniformly at random from the set \{i: v_i> 0\}. See the slides (or the original papers) for an algorithm using \textrm{polylog} (m,n) space. Anyhow, during the talk I mentioned there was a simple trick to determine whether

\ell_0(v):=|\{i: v_i> 0\}|=1.

But I decided to leave it as an easy puzzle for which I’d give the answer later. Of course, I forgot. See the comments to this post for the answer.
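
For anyone who wants to play with the model, here is a minimal replay of the update semantics above (my own code; it does not give away the puzzle):

```python
def apply_updates(n, updates):
    """Replay (index, sign) updates on an initially-zero length-n
    vector v; indices are 1-based, as in the example above.
    """
    v = [0] * n
    for i, sign in updates:
        v[i - 1] += 1 if sign == "+" else -1
    return v

# The example from the post: m = 5 updates with n = 6.
print(apply_updates(6, [(6, "+"), (3, "+"), (6, "-"), (5, "+"), (2, "+")]))
# [0, 1, 1, 0, 1, 0], so l_0(v) = 3
```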


The sun is setting on La Rocca di Bertinoro. The pasta has run dry and the wifi signal is fading. Theorists are scurrying towards taxis and preparing themselves for their overnight flights. Sublinear Algorithms 2011 is all over. But fear not! There are still three more talks to report…

First Robi Krauthgamer discussed “Polylogarithmic Approximation for Edit Distance” [Andoni, Krauthgamer, Onak]. Given two length-n sequences, the edit distance is the minimum number of character operations (insertions, deletions, substitutions) needed to transform the first string into the second. Masek and Paterson showed that the edit distance can be computed exactly in O(n^2/\log^2 n) time. Recent work has focused on approximation in near-linear time. Robi presented an algorithm that achieves a (\log n)^{O(\epsilon^{-1})} approximation in n^{1+\epsilon} time. This was a significant improvement over the previous best approximation factor of 2^{\tilde{O}(\sqrt{\log n})}. [Andoni, Krauthgamer, Onak] is also the paper that sowed the seeds for the precision-sampling technique that Alex presented on Wednesday (one of my favorite talks from the workshop).
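
For reference, the quantity being approximated can be computed exactly by the textbook O(n^2) dynamic program that Masek-Paterson (and, in the approximate regime, this line of work) improves on; a minimal sketch:

```python
def edit_distance(s, t):
    """dp[i][j] = edit distance between the first i characters of s
    and the first j characters of t, allowing insertions, deletions,
    and substitutions.
    """
    n, m = len(s), len(t)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # delete s[i-1]
                           dp[i][j - 1] + 1,          # insert t[j-1]
                           dp[i - 1][j - 1] + cost)   # substitute or match
    return dp[n][m]

print(edit_distance("kitten", "sitting"))   # 3
```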


Next, Artur Czumaj gave a talk entitled “Planar Graphs: Random Walks and Bipartiteness Testing” [Czumaj, Monemizadeh, Onak, Sohler]. This is one of the few results for property testing in general graphs, i.e., not dense graphs or graphs with constant degree. In the dense graph model you can query entries of the adjacency matrix and need to accept input graphs with the property of interest and reject graphs if \epsilon n^2 entries of the adjacency matrix would need to be changed in order to have a graph that has the property. After about ten years of research we have a very good understanding of what is possible in this model. Attention has since focussed on the bounded-degree adjacency list model where every node has degree at most d and we need to reject graphs if \epsilon dn changes would be necessary to satisfy the property. In both models, testing bipartiteness has been an important problem: in the dense graph model the complexity is constant (we’re treating \epsilon as constant) and in the bounded-degree model the complexity is \Theta(\textrm{polylog}(n) \sqrt{n}). In the general graph model, there is no bound on the max-degree and we may query the graph by a) asking for the i-th neighbor of a specific node or b) asking for the degree of a specific node. In practice it seems like asking for a random neighbor of a specific node is sufficiently powerful. Artur showed that it was possible to test bipartiteness in planar graphs with a constant number of queries.

The very last talk was given by Ronitt Rubinfeld on “Testing Collections of Properties” [Levi, Ron, Rubinfeld]. Previous work on property testing of distributions considered single distributions or pairs of distributions: we’d take iid samples from the distributions and try to determine if, e.g., the distributions were identical or far from identical (i.e., the \ell_1 distance between the distributions is at least \epsilon). Ronitt considered a more general model in which there are m distributions D_1, \ldots, D_m over [n] and a (known or unknown) distribution \nu over [m]. On querying the distributions we get (i,x) where i\sim \nu and x \sim D_i. The goals included determining whether all D_i are identical or whether they can be clustered into k clusters such that within a cluster, all distributions are close. Results included a tight bound of \tilde{\Theta}(n^{2/3}m^{1/3}), for n>m, for testing whether all the distributions are identical. Using the observation that (i,x) are independent iff all the D_i are equal, this also gives a tight lower bound for independence testing, resolving an open question from previous work.

BONUS! More Photos… Here are a few more photos from the workshop.

By day we’d listen to theory talks in the Fresco room…

By night we’d talk about theory over dinner…

On our afternoon off, we admired cathedrals and talked more theory.

(Piotr reports on the last two talks of Day 4.)

After lunch, Christian Sohler served new results on property testing in hyperfinite graphs. A family of graphs is (k,\epsilon)-hyperfinite if from any graph in the family we can remove \epsilon n edges to obtain connected components of size at most k. For example, by the planar separator theorem, planar graphs qualify for this distinction. In the talk, Christian focused on graphs of bounded degree and constant k.

Christian presented a number of results, including the following surprising one: every property is testable on hyperfinite graphs. The basic idea is: for any graph G in the family, if we remove some \epsilon n edges from G, then the resulting graph G’ can be fully described by a distribution over graphs of size at most k, and one can approximate this distribution by random sampling. One interesting implication of his work is that computing statistics of “graph motifs” in large networks (a popular tool in network analysis) is justified, since such statistics essentially identify the graph itself.

The final talk of the day was by Oded Goldreich, who talked about sub-linear algorithms in graphs of bounded degree. He considered a model where one can ask for a specific (i.e., the i-th) neighbor of a given node. He considered the problem of *finding* (not just testing existence of) cycles and trees in a given graph. The first result was a (roughly) \sqrt{N}-time algorithm for finding cycles in N-vertex graphs that are \epsilon-far from being cycle-free. One can also generalize this result to finding cycles of length at least k, for a fixed k. He also showed that one cannot do (much) better, i.e., \sqrt{N} is essentially optimal. In contrast, he gave constant-time algorithms for finding trees of size at least k (for constant k).

The algorithms are based on the following observation linking one-sided property testing and finding structures that witness the violation of the property. Specifically, if such a tester rejects a graph, it must have found some “evidence” that the property does not hold. E.g., if we test for bipartiteness, then the violating structure is an odd cycle (a small non-bipartite subgraph). This leads to an algorithm that finds the desired structure.

To obtain an algorithm for finding cycles (or testing cycle-freeness), he used a reduction to bipartiteness testing. The idea is simple and very cute: each edge is replaced by a path of length two with probability 1/2, and kept intact otherwise. Clearly, any cycle-free graph is mapped into a cycle-free (and therefore bipartite) graph. The key observation is that a graph that is far from being cycle-free is mapped into a graph that is far from being bipartite.
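
Here is a minimal sketch of just the random reduction (my own code and names); the surrounding tester and the "far from bipartite" analysis are of course the substance of the paper.

```python
import random

def randomly_subdivide(edges, seed=None):
    """Each edge is independently either kept, or replaced by a path
    of length two through a fresh vertex, with probability 1/2 each.

    A cycle of length L in the input becomes a cycle of length
    L + (number of subdivided edges on it), which is odd with
    probability exactly 1/2 -- the parity coin that the reduction
    exploits.
    """
    rng = random.Random(seed)
    new_edges, fresh = [], 0
    for u, v in edges:
        if rng.random() < 0.5:
            mid = ("sub", fresh)          # fresh subdivision vertex
            fresh += 1
            new_edges += [(u, mid), (mid, v)]
        else:
            new_edges.append((u, v))
    return new_edges

# A triangle maps to an odd (non-bipartite) cycle with probability 1/2.
print(randomly_subdivide([(1, 2), (2, 3), (3, 1)], seed=0))
```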

(Piotr Indyk reports from Day 4 at Sublinear Algorithms. Includes contributions from Sariel Har-Peled.)

The first session of the day was dedicated to distributed algorithms. It was headlined by Roger Wattenhofer, who gave a very nice overview of the area. Distributed algorithms have recently found applications in the sub-linear world, e.g., see the talks on Monday. Distributed algorithms communicate along the edges of a graph, and their complexity is measured in rounds of communication.

The central character in Roger’s talk was the Maximal Independent Set (MIS) problem. Despite many years of research on this problem, some basic questions are still open (e.g., is there a deterministic algorithm with \textrm{polylog} n rounds?). Roger described a randomized algorithm that runs in O(\log n) rounds. The algorithm is local: a decision made at a given node depends only on actions of the nodes in its small neighborhood. He then proceeded to show that any algorithm for MIS requires \Omega(\log^* n) rounds. The proof used Ramsey theory, which was a nice coincidence, given that a workshop on this topic was held in parallel. The lower bound in fact holds even for the line graph. Unfortunately (if you are a pessimist), for the line graph there is a matching upper bound of O(\log^* n). Therefore, to improve the lower bound one needs a richer class of instances. Roger showed a construction of graphs for which one can show a lower bound of \sqrt{\log n} or \log \Delta (for the closely related problem of vertex cover). The graph was highly unusual, in that it was “highly symmetric” and “asymmetric” at the same time.
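
To make “local” concrete, here is a centralized simulation of a classical Luby-style randomized scheme (not necessarily the exact algorithm from the talk), which also runs in O(\log n) rounds with high probability; the adjacency format and names are mine.

```python
import random

def luby_style_mis(adj, seed=None):
    """Simulate synchronous rounds: each surviving node draws a random
    value, joins the independent set if its value beats all surviving
    neighbors, and then winners and their neighbors drop out.

    Winners in a round are pairwise non-adjacent, every removed
    non-winner has a neighbor in the set (so the result is maximal),
    and with high probability only O(log n) rounds are needed.

    adj: dict mapping each node to the set of its neighbors.
    """
    rng = random.Random(seed)
    alive, mis = set(adj), set()
    while alive:
        r = {v: rng.random() for v in alive}
        winners = {v for v in alive
                   if all(r[v] > r[u] for u in adj[v] if u in alive)}
        mis |= winners
        removed = set(winners)
        for v in winners:
            removed |= adj[v] & alive
        alive -= removed
    return mis

# A 5-cycle: any maximal independent set has exactly two vertices.
adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}
print(luby_style_mis(adj, seed=1))
```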

The next talk, by Pierre Fraigniaud, was on local distributed decision (i.e., decision problems). An example of such a problem is coloring validation: given a coloring, is it legal? Other examples include: checking whether there is at most one marked node, consensus, etc. In general, one can define the notion of a distributed language, which is a decidable collection of configurations. A configuration is accepted iff it is accepted by all nodes; in other words, the configuration is rejected if even a single node rejects it. Pierre presented several results on randomized algorithms for distributed decision, including a strong derandomization result (under certain conditions).

In the next session, Ning Xie talked about Local Computation Algorithms. Here the idea is that you are given the input, and the problem at hand generates a large output. For example, the input is a graph and the task at hand is computing a maximal independent set (MIS). Let us assume that we want to output only one specific bit of the output. The question is whether we can do this in sublinear time, and in a way that is consistent if we decide to read several bits of the output in some arbitrary order (i.e., think of the algorithm as a query data structure that outputs the desired bit). Since we need to be consistent across different queries, if we are going to use randomness, it needs to be compactly represented. To this end, one uses a k-wise independent sequence of random bits, which can be specified compactly with a “few” random bits and lets one implicitly generate a long sequence that can “cheat” many algorithms. Second, to get a local computation, we need to be able to break up the input quickly into small fragments, so that the fragment that defines the desired output bit (i.e., the one containing the vertex whose MIS membership we must report) is small. The main idea is to use Beck’s constructive version of the Lovász Local Lemma. For example, you run a randomized MIS algorithm (the parallel version) for a small number of iterations, and then you run a greedy algorithm on each remaining connected component. The randomized algorithm here adds a vertex v to the independent set with probability 1/(2d) (where d is the degree in the given regular graph), and keeps it if there are no collisions with its neighbors. After O(d \log d) iterations, one proves that each remaining connected component has size O(\log n) with high probability. Thus, given a query vertex, we can simulate this algorithm locally and run the greedy algorithm on the relevant connected component directly, getting a sublinear running time.

Yuichi Yoshida talked about constant-time approximation algorithms. He started by recalling the result of Raghavendra, who showed that if the Unique Games Conjecture is true, then for every constraint satisfaction problem (CSP) the best approximation ratio is attained by a certain simple semidefinite programming relaxation and a rounding scheme for it. He then showed that similar results hold for constant-time approximation algorithms in the bounded-degree model. He obtained the results in three steps. First, he showed that for every CSP, one can construct an oracle that provides access, in constant time, to a nearly optimal solution to a basic LP relaxation of the CSP. Using the oracle, he then showed a constant-time rounding scheme that achieves an approximation ratio coinciding with the integrality gap of the basic LP. Finally, he gave a generic conversion from integrality gaps of basic LPs to hardness results.

The last talk before lunch was by Vladimir Braverman. He focused on the following class of streaming problems: given a function G, estimate the statistic equal to the sum, over all i, of G(m_i), where m_i is the number of times element i appears in the stream. Alon, Matias, and Szegedy asked for which functions G it is possible to approximate this statistic using a single pass over the data and poly-logarithmic memory. Vladimir presented a precise characterization for all monotonically increasing functions G with G(0)=0. He also introduced the method of recursive sketching, which can be used, e.g., for approximating frequency moments.

(It’s Day 3 at Sublinear Algorithms and today Seshadhri is taking on blogging duties. Thanks Sesh!)

How to win Netflix: The survey of the day was “Compressed sensing to matrix completion and beyond” by Benjamin Recht. We started by discussing the basic matrix completion problem. Think of a matrix with the rows being movies and the columns being Netflix users. The entries are the respective ratings. Suppose we are given only a few of those ratings. Can we reasonably guess what the remaining entries are (and thereby win $1,000,000)? Formally, we have some k \times n matrix M with only some entries provided. We assume that the matrix is of low rank, meaning that M = LR, where L is k \times r and R is r \times n. The total number of free variables is r(k+n), which for small r is much smaller than the trivial bound of kn. Ben showed how techniques from compressed sensing could be used to attack this problem. His work gave a strong mathematical basis for certain heuristics that people had used to solve such problems.

The importance of being coherent: That segued into the next talk, the only board talk of the workshop. Michael Mahoney talked about “Fast Approximations for Matrix Coherence and Statistical Leverage”. The concept of matrix coherence was introduced by the compressed sensing folks (and the related notion of statistical leverage was present in earlier work by Michael). Consider a low-rank matrix M with the SVD M = U \Sigma V^* (so U is an n \times r matrix). The coherence is, up to normalization, the maximum length of the projection of a standard basis vector onto the subspace spanned by the r column vectors of U. Formally,

\mu(U) = \frac{n}{r}\max_{i \leq n}\| {\bf P}e_i\|^2

where the e_i's are the standard basis vectors and {\bf P} is the projection matrix onto the column space of U. Michael talked about fast algorithms to compute this quantity, involving many linear algebraic manipulations and the recent fast Johnson-Lindenstrauss algorithms.
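
As a sanity check on the definition, here is the exact (and slow) computation via an SVD; the fast approximations in the talk avoid exactly this kind of dense linear algebra. The function name and numerical-rank cutoff are mine.

```python
import numpy as np

def coherence(M, r=None):
    """Compute mu(U) = (n/r) * max_i ||P e_i||^2 exactly.

    If U has orthonormal columns spanning the top-r left singular
    subspace, then ||P e_i||^2 is just the squared norm of the i-th
    row of U, i.e. the i-th statistical leverage score.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    if r is None:
        r = int(np.sum(s > 1e-12 * s[0]))   # numerical rank (my cutoff)
    U = U[:, :r]
    leverage = np.sum(U ** 2, axis=1)        # ||P e_i||^2 for each row i
    n = M.shape[0]
    return (n / r) * leverage.max()

# A rank-1 matrix supported on a single row is maximally coherent:
M = np.outer([1.0, 0.0, 0.0, 0.0], [1.0, 2.0, 3.0])
print(coherence(M))   # 4.0 = n/r with n = 4, r = 1
```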

Big brother is watching you(r net traffic): After that, it was time for more traditional theory CS. Ely Porat talked about the classic problem of pattern matching in the streaming model. We are given a text T that we stream through, and a fixed pattern P of length m. The fingerprinting algorithm of Karp-Rabin solves this problem, but requires O(m) space (since it basically maintains a sliding window of that size). Ely’s aim was to get this down to \textrm{polylog} m space, so that it would be appropriate to be blogged about here. He discussed some clever tricks to break the pattern into smaller sub-patterns and use recursive fingerprinting schemes. One open problem was to get space lower bounds for this rather fundamental problem. He also mentioned the open problem of arranging his sabbatical for next year! Contact him for details.
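
As a baseline, here is a minimal sketch of the classic Karp-Rabin fingerprinting approach, which makes the O(m) space cost explicit (the window itself is buffered); the modulus and base are arbitrary choices of mine.

```python
def karp_rabin_matches(text, pattern, p=2**61 - 1, base=256):
    """Slide a length-m window over the text, maintaining a rolling
    hash mod a large prime; a hash match signals an occurrence (in the
    true Karp-Rabin algorithm the hash is chosen at random so that
    false positives are unlikely; it is fixed here for simplicity).
    Note the O(m) buffer for the current window -- the space cost the
    streaming algorithms aim to remove.
    """
    m = len(pattern)
    target = 0
    for ch in pattern:
        target = (target * base + ord(ch)) % p
    high = pow(base, m - 1, p)              # weight of the outgoing character

    window, h, matches = [], 0, []
    for i, ch in enumerate(text):
        window.append(ch)
        h = (h * base + ord(ch)) % p
        if len(window) > m:
            out = window.pop(0)
            h = (h - ord(out) * high * base) % p
        if len(window) == m and h == target:
            matches.append(i - m + 1)
    return matches

print(karp_rabin_matches("abracadabra", "abra"))   # [0, 7]
```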

How to find your Dacians: Alexandr Andoni gave a talk on “Precision and Sampling Algorithms”. The title was changed in real time to “Sublinear Algorithms via Precision Sampling” (and hence we attendees will never know what his original talk was about). After giving the motivation of the emperor of ancient Rome attempting to estimate the total number of Dacians in various provinces, Alex described a very cute sampling problem. We want to estimate the value of S = \sum_{1 \leq i \leq n} a_i, where a_i \in [0,1]. Unfortunately, we can only get an estimate \widetilde{a_i} of each a_i with precision u_i (so |\widetilde{a_i} - a_i| \leq u_i). Our power is that we can choose the precisions u_i, but we have to pay a cost of (1/n)\sum_i 1/u_i. How do we get an estimate \widetilde{S} satisfying

S-\epsilon < \widetilde{S} < (1+\epsilon)S + O(1)

for constant \epsilon? Alex gave a nifty algorithm that has a cost of O(\epsilon^{-3}\log n). This had applications in approximating the edit distance and also in streaming algorithms for large frequency moments.

How to bake your cake and eat it too: Nir Halman talked about “FPTASs for Stochastic Optimization Problems”. He introduced his talk using a baking analogy. First, he told us why the result (or cake) was going to be tasty, and then gave us the ingredients and recipe for the result. The problem was a stochastic version of the classic discrete knapsack problem. We have T objects, with the i-th object having volume v_i and profit p_i. The volume of the knapsack is B, and we can solve this problem using dynamic programming in O(BT) time. This is, of course, not polynomial, and the aim is to get a (1+\epsilon)-approximate solution while only paying an overhead of poly(1/\epsilon). The general framework of this result is strong enough to generate FPTASs for various versions of knapsack. One of the important cases Nir discussed was the stochastic version, where the volume v_i comes from some explicitly known distribution. Another interesting case was the non-linear knapsack, where we can choose integer copies of each element.
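
The O(BT) dynamic program that serves as the starting point is the textbook one; a minimal sketch (my own code), for the deterministic version only:

```python
def knapsack(volumes, profits, B):
    """best[b] = maximum profit achievable using total volume at most b.
    Processing objects one by one and scanning volumes downwards gives
    the classic pseudo-polynomial O(B*T) algorithm.
    """
    best = [0] * (B + 1)
    for v, p in zip(volumes, profits):
        for b in range(B, v - 1, -1):    # downwards: each object used at most once
            best[b] = max(best[b], best[b - v] + p)
    return best[B]

# Three objects, knapsack volume 6: taking the second and third gives 65.
print(knapsack([3, 4, 2], [30, 50, 15], B=6))   # 65
```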

The Arthur Cayley approach to odd-cycles: Asaf Shapira was next, telling us about “Testing Odd-Cycle Freeness of Boolean Functions”. Given a boolean function f, an odd cycle is a set of domain points x_1, x_2, \ldots, x_t such that t is odd, f(x_1) = \ldots = f(x_t) = 1 and \sum_i x_i = 0. This is a property that falls into the broad class of linear-invariant properties defined by simple constraints but, uncharacteristically, does not have a bounded constraint size. The main result is that this is testable in O(1/\epsilon^2) queries. Previous testers used general characterizations that gave the standard “tower of (1/\epsilon)” query complexity (Arnab talked about this result on the previous day). Asaf gave an excellent talk relating this problem to testing bipartiteness of the Cayley graph of f, which was really the heart of the result. He gave details on how to combine Fourier analysis ideas with a spectral approach, connecting the odd cycles in f to the bipartiteness (or lack thereof) of the Cayley graph. This was done extremely well, so even though he walked us through the technical approach, the main ideas were very clear (a rather tough thing to do, IMHO). Finally, he also explained that there exists a canonical poly(1/\epsilon)-query tester for this problem. This tester randomly samples x_1, \ldots, x_{\log (1/\epsilon)} and only queries the subspace spanned by them. It is an interesting open question whether for any linear-invariant property that is testable in q(\epsilon) queries, there exists a canonical tester running in poly(q(\epsilon)) time.

After lunch, it was time for the excursion to the ancient town of Ravenna. The highlights were the mosaics of the Basilica of San Vitale and the final resting place of Dante Alighieri. These involved little to no new theoretical results, and hence the blog post shall end here.

(Krzysztof Onak continues with Tuesday’s report for Sublinear Algorithms 2011.)

The second part of the day started with a session devoted to the Johnson-Lindenstrauss lemma. Nir Ailon presented different definitions of the problem. In the embedding version, one is given a set of n Euclidean vectors in \mathbb R^m and wants to map them to \mathbb R^d, where d is small and the lengths of vectors are preserved up to a factor of 1\pm \epsilon. It is known that a random projection on d=O(\epsilon^{-2} \log n) dimensions is likely to produce such vectors. The downsides of using a random projection are that both the running time and the required amount of randomness are \Omega(md). Nir has addressed the challenge of reducing these requirements in his papers [Ailon, Chazelle] and [Ailon, Liberty]. His papers employ the Fourier transform and error correcting codes. Nir also discussed a more recently discovered connection between this problem and the restricted isometry property in compressed sensing. This connection resulted in new algorithms for efficiently computing the Johnson-Lindenstrauss transform.

Jelani Nelson presented his recent paper with Daniel Kane. The goal is to give a distribution over sparse matrices that provides the properties required in the Johnson-Lindenstrauss lemma and can be applied efficiently to sparse vectors. Jelani’s work is a follow-up to the paper of [Dasgupta, Kumar, Sarlos]. Jelani showed two extremely simple methods for constructing such matrices. In the first, one selects a small number of entries in each column and sets them at random to -1 or 1. In the other, one partitions the rows into a number of buckets equal to the number of desired non-zero entries in each column; then, in each column, one chooses one non-zero entry in each bucket. The main advantage over the approach of Dasgupta et al. is that this way collisions between the non-zero entries of a column are avoided. One of the main open questions is to both derandomize the Johnson-Lindenstrauss transform and make it run fast for sparse vectors. Jelani also mentioned a recent follow-up by [Braverman, Ostrovsky, Rabani].
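
Here is my reading of the second (bucketed) construction as a minimal sketch; the 1/\sqrt{s} scaling is the standard normalization and the parameter names are mine, so treat the details as illustrative.

```python
import numpy as np

def sparse_jl_matrix(d, n, s, seed=0):
    """Build a d x n sketching matrix in which the d rows are split
    into s blocks, and every column gets exactly one +-1/sqrt(s) entry
    in a uniformly random row of each block. Each column thus has s
    non-zeros and no two of them can collide in the same row.
    """
    assert d % s == 0, "for simplicity, keep the block size an integer"
    rng = np.random.default_rng(seed)
    block = d // s
    S = np.zeros((d, n))
    for j in range(n):
        for b in range(s):
            row = b * block + rng.integers(block)
            S[row, j] = rng.choice([-1.0, 1.0]) / np.sqrt(s)
    return S

# Applying S to x touches only s rows per non-zero of x, and squared
# norms are preserved in expectation (concentration improves with d).
S = sparse_jl_matrix(d=64, n=1000, s=8)
x = np.zeros(1000)
x[[3, 17, 500]] = 1.0
print(np.linalg.norm(S @ x) ** 2, np.linalg.norm(x) ** 2)
```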

You can spend the following coffee break admiring one of the well preserved frescoes:

Sofya Raskhodnikova talked about the problems of testing and reconstructing Lipschitz functions [Jha, Raskhodnikova 2011]. A function f:X \to \mathbb R is c-Lipschitz if for every x,y \in X, we have |f(x) - f(y)| \le c \cdot |x-y|. In the testing problem, one wants to verify that a function is approximately Lipschitz (for some fixed c). In the reconstruction problem, one wants to design a “local filter” through which the queries to the function are asked. If the distance of f to the property of being Lipschitz is \Delta, then the filter provides access to a function f' such that f' is Lipschitz and the distance between f and f' is not much larger than \Delta. Sofya mentioned applications of this research to program analysis and data privacy. In her presentation, she focused on the case when the domain X is a hypercube. She gave both positive and negative results, and mentioned that the latter were greatly simplified by techniques from a recent paper of [Blais, Brody, Matulef].

C. Seshadhri talked about his recent result on submod… Well, Sesh eventually decided to talk about approximating the length of the longest increasing subsequence [Saks, Seshadhri]. Sesh showed that an additive \delta n-approximation to the length of the longest increasing subsequence can be computed in (1/\delta)^{O(1/\delta)} \cdot (\log n)^c time, where n is the length of the sequence, \delta is an arbitrary parameter, and c is a constant. Before his work, the smallest \delta for which a sublinear algorithm was known was 1/2. This result also implies a better approximation algorithm for the distance to monotonicity. Sesh showed a simplified version of his algorithm that runs in (\log n)^{O(1/\delta)} time. The algorithm mimics a dynamic program, recursively searches for good splitters, and applies some boosting to obtain a better approximation than n/2. Sesh also complained that working on the paper and making everything work was painful. He even explicitly asked: “Was it really worth it?” Sesh, I think it was. This is a really nice result!
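
For reference, the quantity being approximated can be computed exactly (but not sublinearly) by the standard patience-sorting method; a minimal sketch, with my own names:

```python
import bisect

def lis_length(seq):
    """Exact length of the longest strictly increasing subsequence in
    O(n log n) time: tails[k] is the smallest possible last element of
    an increasing subsequence of length k + 1 seen so far.
    """
    tails = []
    for x in seq:
        k = bisect.bisect_left(tails, x)
        if k == len(tails):
            tails.append(x)
        else:
            tails[k] = x
    return len(tails)

# An additive delta*n approximation of this value is what the
# sublinear algorithm returns.
print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))   # 4 (e.g. 1, 4, 5, 9)
```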

At night we experienced earthquakes, with the strongest of magnitude 3.1. It is pretty clear that they were caused by the impression left by today’s talks.

A screenshot from www.emsc-csem.org:

