Algorithms and Computation Theory (ACT) Seminar, Spring 2005



The Spring 2004 seminar schedule can be found here.

The seminar meets on Fridays at 11 a.m. in ACES 3.408.

SPRING 2005 SCHEDULE

Fri, Jan. 21 Peter Bro Miltersen, University of Aarhus
Topologically Constrained Bounded Width Computation
Mon, Jan. 24 Amnon Ta-Shma, Tel-Aviv University
If NP Languages are Hard in the Worst-Case Then it is Easy to Find Their Hard Instances
Fri, Jan. 28 Seth Pettie, Max Planck Institut fuer Informatik
Implicit and Low-Redundancy Data Structures
Fri, Feb. 4 Organizational Meeting
Fri, Feb. 11 Adam Klivans (Embeddings Talk)
Johnson-Lindenstrauss Lemma and Applications
Fri, Feb. 18 Anup Rao (Embeddings Talk)
Indyk's Survey on Embeddings and Algorithmic Applications
Fri, Feb. 25 Ned Dimitrov (Embeddings Talk)
Embedding Graphs into Subtrees
Fri, Mar. 4 Matteo Frigo, IBM Austin Research Lab
Cache Oblivious Stencil Computations
Fri, Mar. 11 Mitul Tiwari (Embeddings Talk)
Discussing "A Tight Bound on Approximating Arbitrary Metrics by Tree Metrics (FRT 2003)"
Fri, Mar. 18 SPRING BREAK
Fri, Mar. 25 Andrew Mills
Discussing "Locally Decodable Codes with 2 Queries and Polynomial Identity Testing for Depth 3 Circuits" by Z. Dvir and A. Shpilka.
Fri, Apr. 1 Yu Sun (Embeddings Talk)
Discussing "Approximate Nearest Neighbours: Towards Removing the Curse of Dimensionality" by Indyk and Motwani
Fri, Apr. 8 Vladimir Trifonov
An O(log n log log n) Space Algorithm for Undirected st-Connectivity
Tue, Apr. 12 Prahladh Harsha, Microsoft/TTI-Chicago
Short PCPs verifiable in polylogarithmic time
Fri, Apr. 15 Adam Kalai, TTI-Chicago
Blind online optimization: gradient descent without a gradient
Fri, Apr. 22 Bruno Codenotti, IIT-Pisa
Market Equilibrium and Optimization (or, When Does The "Invisible Hand" Help Computation?)
Fri, Apr. 29 Anindya Patthak, UT-Austin
Dinur's PCP via Gap Amplification


For more information on the ACT seminar please contact Adam Klivans (klivans "at" cs.utexas.edu) or Vijaya Ramachandran (vlr "at" cs.utexas.edu).


Peter Bro Miltersen, University of Aarhus: "Topologically Constrained Bounded Width Computation"

Abstract: We survey characterizations of subclasses of NC^1 defined by bounded depth circuits (such as AC^0 and ACC^0) in terms of bounded width circuits with topological constraints (such as upwards planarity and planarity).


Amnon Ta-Shma, Tel-Aviv University: "If NP Languages are Hard in the Worst-Case Then it is Easy to Find Their Hard Instances"

Abstract: We prove that if NP is not contained in BPP, i.e., if some NP-complete language is worst-case hard, then for every probabilistic algorithm trying to decide the language, there exists some polynomially samplable distribution that is hard for it. That is, the algorithm often errs on inputs from this distribution. This is the first worst-case to average-case reduction for NP of any kind. We stress, however, that this does not mean that there exists one fixed samplable distribution that is hard for all polynomial time probabilistic algorithms, which is a prerequisite assumption needed for OWF and cryptography (even if not a sufficient assumption). We elaborate on the difference between these two notions. We also show, under a plausible complexity assumption, how to derandomize the samplable distribution and to generate deterministically a short list of instances that is guaranteed to contain a hard instance for the algorithm.


Seth Pettie, Max Planck Institut fuer Informatik: "Implicit and Low-Redundancy Data Structures"

Abstract: The value system of many algorithm designers today can be neatly summarized as "Asymptotic running time trumps all!" Other measures of interest, such as space consumption, use of random bits, empirical running time, and simplicity, are of decidedly lesser value. In reality, of course, it is frequently those other measures that are most important. In this talk I'll discuss the field of low-redundancy data structures, where the chief priority is space consumption and the main theoretical question is: to what extent can space be minimized without affecting (asymptotic) running time? Our main technical contribution is a tight time-redundancy tradeoff for priority queues supporting the decrease-key operation. Joint work with Christian W. Mortensen.


Adam Klivans, "Johnson-Lindenstrauss Lemma and Applications."

I will present two proofs of the Johnson-Lindenstrauss lemma, which states that points in Euclidean space can be projected into a much lower dimensional Euclidean space with little distortion. The first proof, by Dasgupta and Gupta, uses a random projection matrix where each entry is chosen independently from a Gaussian; the second, due to Achlioptas, shows that the projection matrix can be a random {-1,+1} matrix. I will sketch applications in machine learning.
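
To make the projection concrete, here is a minimal sketch in Python of both flavors of projection matrix. It is illustrative only; numpy and the usual target dimension k = O(log n / eps^2) are my assumptions, not part of the talk.

# A minimal sketch of a Johnson-Lindenstrauss-style random projection.
import numpy as np

def random_project(X, k, sign_matrix=False, seed=0):
    """Project the rows of X (an n x d array) into k dimensions."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    if sign_matrix:
        # Achlioptas-style projection: i.i.d. +1/-1 entries.
        R = rng.choice([-1.0, 1.0], size=(d, k))
    else:
        # Gaussian projection in the spirit of Dasgupta-Gupta.
        R = rng.standard_normal((d, k))
    return X @ R / np.sqrt(k)

# Usage: with k = O(log n / eps^2), pairwise distances among the n rows
# are preserved up to a (1 +/- eps) factor with high probability.
X = np.random.default_rng(1).standard_normal((100, 1000))
Y = random_project(X, k=64)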


Anup Rao, "Indyk's Survey on Embeddings and Algorithmic Applications."

I will try to give a survey of the work to date on low-distortion geometric embeddings, following the writeup by Indyk (link at the end of the abstract). I will show a few easy constructions that give non-trivial bounds, try to give a feel for several ways in which low-distortion embeddings can be used to solve algorithmic problems, and talk about the progress made so far in finding these embeddings in various scenarios.


Ned Dimitrov, "Embedding Graphs into Subtrees"

Abstract: We will define the concept of "average stretch," which is closely related to distortion. In the main part of the talk we will show that every n-vertex weighted multigraph has optimum average stretch at most exp(sqrt(log n log log n)). This result implies that there exist probabilistic embeddings of any n-vertex weighted multigraph into a set of subtrees with distortion exp(sqrt(log n log log n)). If we have time, we will also show a lower bound on probabilistic embeddings of graphs into subtrees. At the end of the talk, we will discuss applications and give a quick survey of current related results. The talk is based on the paper "A Graph-Theoretic Game and its Applications to the k-Server Problem" by Alon, Karp, Peleg, and West, which can be found in the SIAM Journal on Computing, Volume 24, No. 1.
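
For concreteness, one standard way to formalize these quantities (my paraphrase, not necessarily the exact definitions used in the talk): for a spanning subtree $T$ of a connected weighted multigraph $G = (V,E)$,

  $stretch_T(u,v) = d_T(u,v) / d_G(u,v)$ for an edge $(u,v) \in E$, and
  $avgstretch_G(T) = (1/|E|) \sum_{(u,v) \in E} d_T(u,v) / d_G(u,v)$,

and the "optimum average stretch" of $G$ presumably refers to the best value of this average over choices of subtree $T$.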


Matteo Frigo, "Cache Oblivious Stencil Computations"

Abstract: A stencil describes the computation of a grid point at time T+1 as a function of neighboring grid points at time T. This computational pattern arises frequently in scientific computing, for example in explicit finite-difference methods for solving differential equations.
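
For a sense of the pattern (a toy example, not the algorithm of the talk), here is a minimal one-dimensional, 3-point stencil in Python; the cache-oblivious algorithm reorganizes exactly this kind of loop nest.

# A minimal 1-D, 3-point stencil: each interior cell at time T+1 is a fixed
# function of its neighborhood at time T, as in an explicit finite-difference
# scheme. (Illustrative only; not the cache-oblivious algorithm of the talk.)
def step(u, alpha=0.1):
    """One explicit time step of a heat-equation-like update."""
    v = u[:]                      # values at the new time level
    for i in range(1, len(u) - 1):
        v[i] = u[i] + alpha * (u[i - 1] - 2.0 * u[i] + u[i + 1])
    return v

grid = [0.0] * 100
grid[50] = 1.0                    # initial impulse
for t in range(1000):             # naive time loop; the talk's algorithm
    grid = step(grid)             # instead traverses the (space x time) grid
                                  # recursively to reduce cache misses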

In this talk, we discuss a novel algorithm for stencil computations that applies to arbitrary stencils in n-dimensional spaces. On a machine with an "ideal cache" of size Z, for sufficiently large problems, the algorithm computes P grid elements while incurring O(P / Z^{1/n}) cache misses, which matches the lower bound of [Hong and Kung, 1981]. The algorithm is "cache oblivious": it does not contain the cache size Z as a parameter.

The algorithm was applied to LBMHD, a hydrodynamics program from the HPCC suite of DARPA benchmarks, obtaining a speedup of 7x with respect to the original code.

Joint work with Volker Strumpen.


Mitul Tiwari, discussing "A Tight Bound on Approximating Arbitrary Metrics by Tree Metrics (FRT 2003)."

Fakcharoenphol, Rao, and Talwar (STOC 2003) showed that any n-point metric space can be embedded into a distribution of tree metrics with expected stretch O(log n). They exploited a graph-cutting technique given by Calinescu, Karloff, and Rabani to improve upon Bartal's result of O(log n loglog n) stretch. The trees constructed are not spanning trees (or subtrees) of the original graph/metric.

In this talk we discuss the graph-cutting technique of Calinescu, Karloff, and Rabani and show how it is used to generate a distribution of trees over the given n points with the desired properties.
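
For reference, the guarantee discussed here is usually stated as follows (a standard formulation, paraphrased): a distribution $D$ over tree metrics embeds the metric $(X,d)$ on the n points with expected stretch $O(\log n)$ if

  $d_T(u,v) \ge d(u,v)$ for every tree $T$ in the support of $D$, and
  $E_{T \sim D}[d_T(u,v)] \le O(\log n) \cdot d(u,v)$ for every pair $u, v \in X$.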


Andy Mills, discussing "Locally Decodable Codes with 2 Queries and Polynomial Identity Testing for Depth 3 Circuits" by Z. Dvir and A. Shpilka.

In this work, they study two seemingly unrelated notions. Locally Decodable Codes (LDCs) are codes that allow the recovery of each message bit from a constant number of entries of the codeword. Polynomial Identity Testing (PIT) is one of the fundamental problems of algebraic complexity: we are given a circuit computing a multivariate polynomial and we have to determine whether the polynomial is identically zero. They improve known results on locally decodable codes and on polynomial identity testing and show a relation between the two notions.


Yu Sun, discussing "Approximate Nearest Neighbours: Towards Removing the Curse of Dimensionality" by Indyk and Motwani.

The nearest neighbor search (NNS) problem is the following: given a set of n points P in some metric space X, answer queries that require finding the point in P closest to a query point q in X. The problem is of major importance to a variety of applications, such as data mining, information retrieval, machine learning and pattern recognition. In those applications, the dimensionality of the metric space usually ranges from tens to thousands. Recently, there has been increasing interest in avoiding the curse of dimensionality by resorting to eps-approximate nearest neighbor search (eps-NNS): given a query q in X, find a point p in P such that for all p' in P, d(p,q) \leq (1 + eps) d(p',q).

This paper focuses on d-dimensional Euclidean space and the approximate version of the problem. It presents two algorithmic results for eps-NNS that significantly improve the known bounds: (1) preprocessing cost polynomial in n and d, and a truly sublinear query time (for eps > 1); and (2) query time polynomial in log n and d, and only a mildly exponential preprocessing cost.

To obtain the above results, the authors first use a data structure called ring-cover trees to reduce eps-NNS to a new problem called point location in equal balls (PLEB). Then they give two solutions to PLEB, namely, the bucketing method and locality-sensitive hashing. In this talk we discuss the ring-cover data structure and how it was used to reduce eps-NNS to PLEB. We also discuss the bucketing method and locality-sensitive hashing technique.
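
As a toy illustration of the locality-sensitive hashing idea (bit sampling over the Hamming cube), here is a Python sketch. The sketch is mine; it omits the parameter choices and the reduction through PLEB that the paper actually uses.

# Toy LSH for 0/1 vectors: hash by a random subset of coordinates, so that
# nearby points collide in some table with good probability while far points
# rarely do. (Illustrative only, not the paper's full eps-NNS data structure.)
import random

def make_hash(dim, k, seed):
    """Hash a 0/1 vector to the tuple of k randomly chosen coordinates."""
    rng = random.Random(seed)
    coords = [rng.randrange(dim) for _ in range(k)]
    return lambda p: tuple(p[i] for i in coords)

def build_tables(points, dim, k=8, L=10):
    """Build L independent hash tables over the data points."""
    tables = []
    for seed in range(L):
        h = make_hash(dim, k, seed)
        table = {}
        for idx, p in enumerate(points):
            table.setdefault(h(p), []).append(idx)
        tables.append((h, table))
    return tables

def query(tables, points, q, dist):
    """Return the index of the closest colliding candidate (may miss)."""
    candidates = {i for h, table in tables for i in table.get(h(q), [])}
    return min(candidates, key=lambda i: dist(points[i], q), default=None)

The real data structure tunes k and L as functions of n and eps and amplifies the gap between collision probabilities of near and far points; the sketch above only shows the hashing mechanics.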


Vladimir Trifonov, An O(log n log log n) Space Algorithm for Undirected st-Connectivity

Undirected st-connectivity is the problem of checking whether two given vertices of an undirected graph are connected by a path. Solving this problem in linear time and space is easy using depth-first search. Reducing the space complexity to o(n) is a more challenging task. Savitch gave an algorithm which solves the problem in space O(log^2 n), using what is essentially repeated squaring. More recently, Nisan et al. gave an algorithm which uses O(log^{3/2} n) space, and Armoni et al. reduced this to O(log^{4/3} n) space. Both of these algorithms rely on ideas from the O(log n)-space randomized algorithm of Aleliunas et al. and the PRG for space-bounded computation of Nisan.
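
For background, here is a minimal sketch of Savitch's repeated-squaring idea. It is illustrative only: a genuine O(log^2 n)-space implementation reuses O(log n) bits per recursion level rather than relying on language-level recursion.

# REACH(u, v, k) asks whether v is reachable from u in at most 2^k steps,
# by guessing a midpoint and recursing on two half-length subproblems.
import math

def reach(adj, u, v, k):
    if u == v:
        return True
    if k == 0:
        return v in adj[u]
    # Try every vertex as the midpoint of a walk of length at most 2^k.
    return any(reach(adj, u, w, k - 1) and reach(adj, w, v, k - 1)
               for w in range(len(adj)))

# Usage: st-connectivity on an n-vertex graph with k = ceil(log2(n)).
adj = {0: {1}, 1: {0, 2}, 2: {1}, 3: set()}   # path 0-1-2, isolated vertex 3
k = max(1, math.ceil(math.log2(len(adj))))
print(reach(adj, 0, 2, k))   # True
print(reach(adj, 0, 3, k))   # False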

In this talk we present an algorithm which solves undirected st-connectivity in space O(log n log log n) and which employs a non-trivial simulation of an efficient parallel algorithm for undirected st-connectivity due to Chong and Lam. Our algorithm is a significant step towards the optimal O(log n) bound, which was obtained recently by Reingold with techniques involving the conversion of the given graph into an expander.


Prahladh Harsha, Short PCPs verifiable in polylogarithmic time

ABSTRACT: The study of efficient probabilistic methods for verifying proofs was initiated in the works of Babai et al. [BFLS] and Feige et al. [FGLSS] with very different motivations and emphases. The work of Babai et al. considered the direct motivation of verifying proofs highly efficiently. Their motivation led them to emphasize the time taken by the verifier and the length of the proof in the new format. In contrast, Feige et al. established a dramatic connection between efficient probabilistically checkable proofs (PCPs) and the inapproximability of optimization problems. Most succeeding works have focused on the latter. On the other hand, very few works have focused on the length of the PCP, and in fact no later work seems to have returned to the extreme efficiency of the verifier. This is unfortunate because the latter efficiency parameters are significant in the context of proof verification.

Motivated by the recent progress in the length of PCPs due to Ben-Sasson et al. [BGHSV] and Ben-Sasson and Sudan [BS], in this work we revisit the study of efficient PCPs. We show that every language in NP has a probabilistically checkable proof of proximity (i.e., a proof asserting that an instance is ``close'' to a member of the language), where the verifier's running time is polylogarithmic in the input size and the length of the probabilistically checkable proof is only polylogarithmically larger than the length of the classical proof. If the verifier is restricted further in its query complexity and only allowed q queries, then the proof size blows up by a factor of exp((log n)^{c/q}) where the constant c depends only on the language. Our results thus give efficient (in the sense of running time) versions of the shortest known PCPs.

Of technical interest in our proof is a new complete problem for NEXP based on constraint satisfaction problems with very low complexity constraints, and techniques to arithmetize such constraints over fields of small characteristic.

Joint work with Eli Ben-Sasson, Oded Goldreich, Madhu Sudan, and Salil Vadhan.


Adam Kalai, Blind online optimization: gradient descent without a gradient

Abstract: We consider a fun problem in repeated optimization with extremely little feedback. We must choose a sequence of points x_1, x_2, ..., from some fixed convex feasible set. After we make the t'th decision, we find out *only* our reward, f_t(x_t), and no other information about the reward function f_t. Moreover, the functions f_1, f_2, ..., may potentially be completely unrelated.

We give a very simple randomized algorithm that achieves regret, with respect to the best single decision in hindsight, approaching 0. Specifically, (f_1(x_1)+...+f_t(x_t))/t approaches max_x (f_1(x)+...+f_t(x))/t at an inverse-polynomial rate, assuming only that the functions f_i are concave and bounded. The key idea is to approximate the gradient of a function by evaluating it at a *single* point.
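
Here is a minimal sketch of that single-point gradient estimate as I understand it; the function names and constants below are illustrative, not the paper's.

# Estimate the gradient of f at x from a single function evaluation: sample u
# uniformly from the unit sphere and evaluate f once at the perturbed point
# x + delta*u. The quantity (d/delta) * f(x + delta*u) * u is an unbiased
# estimate of the gradient of a smoothed version of f.
import math, random

def one_point_gradient_estimate(f, x, delta):
    d = len(x)
    # Draw u uniformly from the unit sphere in R^d.
    u = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(ui * ui for ui in u))
    u = [ui / norm for ui in u]
    y = [xi + delta * ui for xi, ui in zip(x, u)]
    fy = f(y)                       # the only feedback we ever observe
    return [(d / delta) * fy * ui for ui in u]

# The online algorithm then takes a small step in the direction of this noisy
# estimate (projected back onto the feasible set) in place of a true gradient.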

These types of limited-feedback decision-making problems occur in a variety of settings. For example, Toyota would like to decide how many cars to produce (of each type) or how much to spend advertising (on each of a number of channels), but only finds out its profit which may be affected by external factors such as the economy. Other examples include minimizing one's cholesterol by changing the parameters of one's diet, or, more importantly, designing paper airplanes that will fly as far as possible.

Joint work with Abie Flaxman and Brendan McMahan.


Bruno Codenotti, Market Equilibrium and Optimization (or, When Does The "Invisible Hand" Help Computation?)

Abstract: The computation of equilibria in several game theoretic and economic settings is an emerging topic in theoretical computer science. This talk will address the problem of computing the market equilibrium, and will analyze its interplay with optimization, in the light of Adam Smith's "invisible hand" property. Indeed, certain optimization characterizations of the equilibrium can be seen as formal expressions of the invisible hand. I will discuss conditions under which the market equilibrium problem becomes a tractable optimization problem, and interpret several recent algorithmic results in relationship to these conditions.

(The talk will be based on joint works with K. Varadarajan and S. Pemmaraju)


Anindya Patthak, Dinur's PCP via Gap Amplification

Probabilistically Checkable Proofs (PCPs) are NP witnesses that allow efficient probabilistic verification based on probing a few bits of the NP witness. The celebrated results of Arora and Safra (AS92) and Arora, Lund, Motwani, Sudan, and Szegedy (ALMSS92) assert that probing a constant number of bits suffices, and this gives an alternate characterization of NP. The PCP theorem is known for its applications to inapproximability results and has several other uses in coding theory and in cryptography. In spite of the importance of the PCP theorem, the original proof is quite long and largely algebraic in nature. In this talk we will see how to get a hardness result for a Gap Constraint Graph Satisfiability Problem (which implies the PCP theorem). Informally speaking, the Constraint Graph Satisfiability Problem is the following: given a graph $(V,E)$ and constraints on its edges over a finite alphabet $\Sigma$ (i.e., for each $e \in E$ a constraint $c_e: \Sigma^2 \rightarrow \{0,1\}$), find an assignment $V \rightarrow \Sigma$ that satisfies all the constraints. The gap version comes with the promise that either all the constraints can be satisfied or at most a fraction $\alpha < 1$ of them can be satisfied. We will show that there is a constant $\alpha < 1$ for which the gap version is NP-hard. The analysis is more combinatorial in flavor and relies on random walks on expander graphs.
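
As a toy illustration of the objects involved (notation mine, not the paper's), here is a constraint graph instance and the satisfied fraction that the gap version talks about.

# A constraint graph instance: a list of edges, a 0/1-valued constraint per
# edge over a finite alphabet, and an assignment of alphabet symbols to
# vertices. The gap version asks whether all constraints can be satisfied
# or at most an alpha fraction can be.
def satisfied_fraction(edges, constraints, assignment):
    """edges: list of (u, v); constraints[i](a, b) -> bool; assignment: dict."""
    ok = sum(1 for i, (u, v) in enumerate(edges)
             if constraints[i](assignment[u], assignment[v]))
    return ok / len(edges)

# Example over the alphabet {0, 1, 2}: every edge demands unequal endpoint
# symbols, i.e., graph 3-coloring written as a constraint graph.
edges = [(0, 1), (1, 2), (0, 2)]
constraints = [lambda a, b: a != b] * 3
print(satisfied_fraction(edges, constraints, {0: 0, 1: 1, 2: 2}))  # 1.0
print(satisfied_fraction(edges, constraints, {0: 0, 1: 0, 2: 1}))  # ~0.667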

The talk is based on a recent paper by Irit Dinur.

Click here for the paper.