In the second semester, the seminar is on Thursdays.
This year we also had Omer Reingold’s Pseudorandomness seminar.
Date | Speaker | Title

November
 | | Extractors and Condensers from Univariate Polynomials
12/11/2006 [at …] | | Lower Bounds for Multilinear Arithmetic Formulas
 | | No meeting due to the STOC'07 deadline

December
3/12/2006 [at Zi 261!] | Shachar Lovett | New Locally Decodable Codes and Private Information Retrieval Schemes
 | Yael Tauman Kalai |
[at Zi 261!] | |
[at Zi 261!] | | Derandomized Curve Samplers (Cont'd)
 | Or Sheffet | On the Randomness Complexity of Samplers, and the Usefulness of Expanders

January
[at Zi 261] | |
[at Zi 261] | Asaf Nussbaum | If NP languages are hard on the worst-case, then it is easy to find their hard instances
[at Zi 261] | Asaf Nussbaum | Zero-One Laws

February
[at Zi 261] | Alex Samorodnitsky |
 | | Relativization, diagonalization and stuff
[at Zi 1] | | Relativization, diagonalization and stuff (Cont'd)
25/2/2007 | |

March
4/3/2007 | | PURIM
[at Zi 261!] | |
[at Zi 261!] | | No better ways to generate hard NP instances than picking uniformly at random (+bonus talk on April 1st)

April
5/4/2007 | | Passover
 | Shachar Lovett |
19/4/2007 | |
26/4/2007 | | Faculty Excursion

May
[at Zi 261!] | Sanjeev Arora | Geometry and Approximation Algorithms for NP-hard Optimization Problems
[at Zi 261!] | Avi Wigderson |
18/5/2007 | |
24/5/2007 | |
31/5/2007 | | Random Structures and Algorithms, May 28th – June 1st (Tel-Aviv University)

June
7/6/2007 | | No meeting this week
14/6/2007 | |
 | | Different Approaches to Complexity in Mathematics (Technion, June 12-19)
 | Eden Chlamtac |
28/6/2007 | |

July
 | Eden Chlamtac | Coloring, Independent Sets and SDP Hierarchies

August
 | | Multipoint evaluation of multivariate polynomials in small characteristic
Title: Extractors and Condensers from Univariate Polynomials
Speaker:
In this talk we will discuss the new extractor and condenser constructions of Guruswami, Umans, and Vadhan. The condenser, which is based on Parvaresh-Vardy codes, is relatively simple to describe and is mostly self-contained. It is the first lossless condenser that is optimal up to constant factors in both the seed and output lengths, and can thus be used directly to construct extractors that match the current best known construction of Lu, Reingold, Vadhan, and Wigderson.
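As a reminder (these are the standard definitions, not anything specific to this paper), the two objects discussed above can be stated as follows:

    \mathrm{Ext}\colon \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m \text{ is a } (k,\epsilon)\text{-extractor if } H_\infty(X) \ge k \;\Rightarrow\; \mathrm{Ext}(X, U_d) \approx_\epsilon U_m,
    C\colon \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m \text{ is a } (k \to k',\epsilon)\text{-condenser if } H_\infty(X) \ge k \;\Rightarrow\; C(X, U_d) \approx_\epsilon Y \text{ for some } Y \text{ with } H_\infty(Y) \ge k',

and the condenser is lossless when k' = k + d, i.e., no min-entropy is lost.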
Title: Lower Bounds for Multilinear Arithmetic Formulas
Speaker:
Arithmetic circuits are the standard model for computing polynomials.
Arithmetic formulas are circuits in which no intermediate computation is reused, that is, every gate has fan-out one. A circuit or formula is multilinear if every gate in it computes a multilinear polynomial.
We will try to go over the main ideas of two papers [1,2] of Raz concerning multilinear arithmetic formulas.
[1] proves a super-polynomial lower bound on the size of any multilinear arithmetic formula for the determinant (or the permanent).
[2] proves a super-polynomial separation between multilinear formula size and multilinear circuit size.
More specifically, [2] constructs a polynomial f in F[x_1,...,x_N] that is computed by a poly(N)-size, O(log^2 N)-depth multilinear circuit, but is not computed by any poly(N)-size multilinear arithmetic formula.
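As a reminder of why the determinant and the permanent fit this model, both are multilinear in the matrix entries:

    \det(X) = \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma) \prod_{i=1}^{n} x_{i,\sigma(i)}, \qquad
    \mathrm{per}(X) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} x_{i,\sigma(i)},

so every monomial contains each variable x_{i,j} with degree at most one.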
[1] Multi-Linear Formulas for Permanent and Determinant are of Super-Polynomial Size, R. Raz.
[2] Separation of Multilinear Circuit and Formula Size, R. Raz.
Title:
Speaker:
I will present a part of a beautiful paper by Guruswami and Indyk from STOC’03.
This paper deals with list decoding and focuses on the running time of the decoder.
The authors construct error-correcting codes that can be encoded and list decoded (from a fraction of errors arbitrarily close to 1) in time linear in the length of the message.
Prior to this work, it was only known how to construct codes that can be:
· uniquely decoded in linear time
· list decoded in n·polylog(n) time [e.g., Reed-Solomon codes]
· list decoded from erasures in linear time
The construction is based on expanders.
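As a quick reminder of the underlying notion (the standard definition, not anything specific to this paper):

    C \subseteq \Sigma^n \text{ is } (\rho, L)\text{-list decodable if for every } w \in \Sigma^n:\; |\{ c \in C : \Delta(c, w) \le \rho n \}| \le L,

where Δ denotes Hamming distance; a list-decoding algorithm must output this list of at most L codewords.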
Title: New Locally Decodable Codes and Private Information Retrieval Schemes
Speaker: Shachar Lovett
The paper shows how to construct 3-query Locally Decodable Codes (LDCs) encoding n bits to N = exp(n^{1/t}), for values of t such that 2^t − 1 is prime (such primes are called Mersenne primes). The largest known Mersenne prime is 2^32582657 − 1. Under the conjecture that the number of Mersenne primes is infinite, the construction gives N = exp(n^{O(1/log log n)}).
The best constructions known prior to this paper had length exp(n^{log log q/(q log q)}) for q-query LDCs.
Title:
Speaker: Yael Tauman Kalai
I am going to present a paper by Vitaly Feldman, Subhash Khot, Parikshit Gopalan and Ashok Kumar Ponnuswami, from FOCS 2006.
This paper addresses well-studied problems concerning the learnability of parities and halfspaces in the presence of classification noise. In this talk, I will focus only on one of the results in the paper. The result is that under the uniform distribution, learning parities with adversarial classification noise reduces to learning parities with random classification noise. In particular, together with the parity learning algorithm of Blum, Kalai, and Wasserman, this gives the first nontrivial algorithm for learning parities with adversarial noise.
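For concreteness, the noise models referred to above (in the uniform-distribution setting) can be written as follows; this is the standard formulation rather than anything taken from the paper itself:

    \text{examples } (x, \chi_s(x) \oplus e), \qquad x \sim \{0,1\}^n \text{ uniform}, \qquad \chi_s(x) = \langle s, x \rangle \bmod 2,

where under random classification noise each e is an independent Bernoulli(η) bit, while under adversarial classification noise an arbitrary η-fraction of the labels may be flipped; the goal is to recover the unknown parity s.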
Title: Derandomized Curve Samplers
Speaker:
Let F^n be some vector space and let A be any subset of F^n.
Loosely speaking, a sampler is a randomized procedure that outputs a set of points s_1,…,s_m such that, with high probability, the number of these points that fall inside A is proportional to A's size, i.e., about m·|A|/|F^n|.
A curve sampler is a sampler with the additional property that the points it outputs lie on a low-degree curve in F^n. For a long time, the only known construction of a curve sampler was to simply pick a random low-degree curve in F^n. Recently, Amnon Ta-Shma and Chris Umans gave an incredibly elegant and simple construction of a curve sampler that is much more randomness efficient (in fact, it is close to optimal in the number of random bits).
We will show their construction of randomness efficient curve samplers.
Remark: In their paper, Ta-Shma and Umans use these new curve samplers to construct objects known as lossless condensers. We will not go into the connection between lossless condensers and curve samplers in the lecture.
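To make the definition concrete, here is a minimal Python sketch of the naive, fully random sampler mentioned above: it spends fresh randomness on every point, which is exactly the cost that randomness-efficient samplers avoid. The field size, dimension, and membership oracle below are illustrative placeholders, not parameters from the Ta-Shma and Umans construction.

    import random

    def naive_sampler(in_A, q, n, m):
        """Estimate |A| / q^n by querying m independent uniform points of F^n.

        in_A : membership oracle for the set A (a function from points of F^n to bool)
        q    : field size, n : dimension, m : number of sample points
        """
        hits = 0
        for _ in range(m):
            point = tuple(random.randrange(q) for _ in range(n))  # a fresh uniform point of F^n
            if in_A(point):
                hits += 1
        return hits / m  # with high probability, close to |A| / q^n

    # Toy usage: A = points whose first coordinate is 0, so |A| / q^n = 1/q.
    if __name__ == "__main__":
        q, n = 7, 4
        estimate = naive_sampler(lambda p: p[0] == 0, q, n, m=10000)
        print(estimate)  # should be close to 1/7

A curve sampler would instead spend its randomness on a single low-degree curve and output points along it, so the outputs are correlated but the density estimate still concentrates.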
Title: On the Randomness Complexity of Samplers, and the Usefulness of Expanders
Speaker: Or Sheffet
In this talk we will discuss several known results regarding the randomness complexity of samplers, derived via expanders. Mainly, they follow from the two main applications of expanders: the Expander Mixing Lemma (EML) and the Expander Random Walk theorem (ERW). Given a function f : [N] → [0,1], we wish to estimate its average, E[f]. Samplers are probabilistic Turing machines with oracle access to f that, with high probability, produce a close estimate of E[f] using only a few queries to the values of f.
We discuss the importance of the randomness complexity of samplers, i.e., the number of random bits they use. As presented in [Gol06], the randomness complexity affects the number of queries a sampler makes when given access to a single weak random source.
We discuss the [CEG95] lower bound on the randomness complexity of samplers. Using the EML, we present a sampler that uses exactly log N random bits (a version of this sampler appears in [Gol97]). Using the ERW, we present Zuckerman's construction of a disperser (and an extractor) [Zuc06].
Students who are new to samplers and/or expanders and the two main tools (EML, ERW) are particularly invited to come and learn. Students who already know about samplers and expanders are welcome to come, help me in the lecture, and feast (I am also in charge of the refreshments for this talk, and I can guarantee that they will be good...).
All results are taken from:
[CEG95] Lower bounds for sampling algorithms for estimating the average
[Gol97] A sample of samplers - a computational perspective on sampling (survey)
[Gol06] Another motivation for reducing the randomness complexity of algorithms
[Zuc06] Linear degree extractors and the inapproximability of max clique and chromatic number
Title:
Speaker:
In the last few years there have been a few papers focusing on derandomization of AM, that is, proving that AM = NP under some hardness assumption. Such a result would make it easier to prove that a given problem is in NP: it would suffice to design a constant-round interactive proof instead of a (non-interactive) NP proof. In particular, it would imply that Graph Non-Isomorphism is in NP, and thus that Graph Isomorphism is in NP ∩ coNP.
In this talk we will show one such derandomization result for AM, due to Miltersen and Vinodchandran. Their proof is particularly interesting, since it takes a different approach to derandomization than the usual one. While such derandomization results usually use the hardness assumption to construct sufficiently strong pseudorandom generators and then apply them to obtain the required derandomization, Miltersen and Vinodchandran use hitters instead of pseudorandom generators. They also show that this approach could be used to show that BPP = P, although it would require stronger hardness assumptions than the known result of Impagliazzo and Wigderson.
The paper is by P. B. Miltersen and N.V. Vinodchandran, originally from FOCS’99.
Title:
Speaker:
I will show an approach suggested by Valiant in 1977 for proving super-linear circuit lower bounds. The approach relies on the notion of matrix rigidity. I will define this notion and show how it relates to circuit lower bounds.
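For reference, the standard definition of rigidity, together with a paraphrase of Valiant's connection to circuits; treat the exact parameters below as indicative rather than as the precise statement to be shown in the talk:

    R_M(r) \;=\; \min_{\mathrm{rank}(A) \le r} \; \bigl|\{ (i,j) : A_{ij} \ne M_{ij} \}\bigr|,

i.e., the minimum number of entries of M that must be changed to bring its rank down to r. Valiant showed that an explicit matrix with R_M(εn) ≥ n^{1+δ} (for constants ε, δ > 0) cannot be computed by linear circuits of size O(n) and depth O(log n).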
Title: If NP languages are hard on the worst-case, then it is easy to find their hard instances
Speaker: Asaf Nussbaum
The paper is by Danny Gutfreund, Ronen Shaltiel and Amnon Ta-Shma.
Assuming P ≠ NP, NP-complete problems are hard to solve on worst-case instances, but are there heuristics that successfully solve NP-complete problems on (most) inputs that occur 'in practice'?
Due to the difficulty of analyzing real-life distributions, such heuristics are required to perform well under arbitrary 'reasonable' distributions D, where the latter are modeled as (arbitrary) distributions that can be sampled efficiently.
The main result of [GSTS] rules out the existence of such heuristics: given an arbitrary algorithm A, they explicitly construct a simple distribution D_A that produces hard instances for A with constant probability. Their result holds both for deterministic heuristics (if NP ≠ P) and for probabilistic ones (assuming NP ≠ RP).
Title: Zero-One Laws
Speaker: Asaf Nussbaum
Random graphs exhibit remarkable combinatorial structure. In particular, many natural graph properties are satisfied either almost surely or hardly ever (namely, with probability tending either to 1 or to 0 as the size of the random graph grows). The famed 0/1-law of random graphs establishes such a phenomenon not for specific prescribed properties, but rather for an entire class of graph properties: first-order properties. This means that any graph property specified by a first-order formula (where variables stand for vertices and the only relation is adjacency) holds either almost surely or hardly ever on random G(n,p) graphs, for any constant choice of p. This unexpected connection between combinatorics and logic was proved independently by Fagin (in the West) and by Glebskii, Kogan, Liagonkii and Talanov (in the USSR).
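A small example of the kind of property the theorem covers: the first-order sentence

    \forall x\, \forall y\; \bigl( x \ne y \;\rightarrow\; \exists z\, ( z \sim x \,\wedge\, z \sim y ) \bigr)

("every two distinct vertices have a common neighbor") mentions only vertices and adjacency, and indeed it holds almost surely in G(n,p) for any constant p; the 0/1-law guarantees such a dichotomy for every first-order sentence, without analyzing each one individually.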
Title:
Speaker: Alex Samorodnitsky
We will discuss the notion of Gowers Uniformity norms, and argue that they are natural extensions of the "usual" notion of maximal Fourier coefficient of a function. The latter notion quantifies the best approximation of the function by a linear function. Gowers norms (presumably) do the same for low-degree polynomials.
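For concreteness, here is the first nontrivial Gowers norm for a bounded function f : F_2^n → [−1,1], written in the standard way (a reminder, not new material from the talk):

    \|f\|_{U^2}^{4} \;=\; \mathbb{E}_{x,h_1,h_2}\bigl[ f(x)\, f(x+h_1)\, f(x+h_2)\, f(x+h_1+h_2) \bigr] \;=\; \sum_{\alpha} \hat{f}(\alpha)^{4},

so for bounded f, a large U^2 norm is equivalent to a large maximal Fourier coefficient, i.e., good correlation with a linear function. The higher norms ||f||_{U^k} average f over k-dimensional parallelepipeds and play the analogous role for polynomials of degree k−1.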
Title: Relativization, diagonalization and stuff
Speaker:
In the talk we will try to understand the basic concept of relativization. We will start with a review of the notion of diagonalization and present some non-trivial diagonalizations: the nondeterministic time hierarchy theorem and Ladner's theorem (if P ≠ NP then there is a set in NP \ P that is not NP-complete). We will then see the famous result of Baker, Gill and Solovay: P vs. NP cannot be resolved via diagonalization. We will see how to determine whether a proof relativizes. Finally, we will prove IP = PSPACE. This result surprised the community, since it had previously been shown that such a result cannot be obtained via relativizing techniques; the proof was the first truly non-relativizing proof.
Title:
Speaker:
It turns out that several interesting hard problems, particularly the sparsest cut problem, can be reduced to the problem of finding the best embedding of a metric into ℓ_1. In recent years, approximation algorithms for these problems were obtained by taking a metric that corresponds to an instance, embedding it into an ℓ_2^2 metric using an SDP, and then proving that any ℓ_2^2 metric can be embedded into ℓ_1 with small distortion. This kindled interest in the question of how well ℓ_2^2 metrics can be embedded into ℓ_1.
It was conjectured by Goemans and Linial that any ℓ_2^2 metric embeds into ℓ_1 with constant distortion (namely, the distortion does not depend on the number of points in the metric space). I will discuss a paper of Khot and Vishnoi that disproves the conjecture by showing a non-constant integrality gap for the sparsest cut problem. Interestingly, the paper first shows a hardness result for the sparsest cut problem, and then applies the reduction to a special instance to get the integrality gap. By applying other hardness reductions (such as for max-cut) to this instance, the paper shows tight integrality gaps for other interesting problems.
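For reference, the metrics in question (standard definitions): a metric d on n points is an ℓ_2^2 (negative-type) metric if

    d(i,j) \;=\; \| v_i - v_j \|_2^{2} \quad \text{for some vectors } v_1, \dots, v_n,

with these squared distances themselves satisfying the triangle inequality; an embedding of d into ℓ_1 has distortion C if, after scaling, d(i,j) ≤ ||u_i − u_j||_1 ≤ C·d(i,j) for all pairs i, j.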
Title: No better ways to generate hard NP instances than picking uniformly at random
Speaker:
In his seminal paper from '84, Levin showed that there exist 'average-case complete problems'. These are NP-complete problems such that, if some problem in NP is hard on average with respect to some distribution taken from a particular family of distributions (namely, P-computable distributions), then the former problem is also hard on average with respect to some distribution taken from that family.
One weakness of this result is that this family of distributions is quite restricted. In other words, in order to establish the hardness of such a complete problem, one has to assume the hardness of NP under some distribution taken from a restricted family, while it is possible that NP is hard on average only with respect to distributions outside of this restricted family.
In a paper from '90 whose title is the title of this talk, Impagliazzo and Levin show that it is in fact sufficient to assume the hardness of NP under samplable distributions (presumably a much wider family of distributions) in order to establish the hardness of these complete problems. That is, a problem complete for NP with P-computable distributions is also complete for NP with samplable distributions.
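For concreteness, the two families of distributions mentioned above (standard definitions): an ensemble {D_n} is P-computable if its cumulative distribution function

    x \;\mapsto\; \Pr_{y \sim D_n}[\, y \le x \,] \qquad (\text{say, under the lexicographic order})

is computable in time polynomial in n, and it is P-samplable if some probabilistic polynomial-time algorithm, on input 1^n, outputs a sample distributed according to D_n. Every P-computable ensemble is P-samplable (by binary search on the cumulative distribution function), but the converse is believed to be false.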
Title:
Speaker: Shachar Lovett
PRIMES is the following decision problem: Is the given number prime or composite?
The first deterministic polynomial time algorithm for PRIMES was found in 1976 by Gary Miller, but his algorithm assumed the Generalized Riemann Hypothesis, an unproven conjecture in number theory.
In 1980, Michael Rabin found a randomized variant of Miller's algorithm that works under no unproven conjectures, thus obtaining the Miller-Rabin algorithm and showing that PRIMES is in coRP.
In 1983, a breakthrough was achieved when Adleman, Pomerance and Rumely gave a deterministic algorithm, based on Miller's algorithm, that on input n runs in time (log n)^{O(log log log n)}.
Still, no unconditional algorithm running in time poly(log n) was known.
This was the state of affairs until 2002, when Manindra Agrawal, Neeraj Kayal and Nitin Saxena discovered a deterministic polynomial-time algorithm for PRIMES that does not depend on any unproven conjectures, thus establishing that PRIMES is in P.
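As a concrete illustration of the randomized test mentioned above, here is a minimal Python sketch of the Miller-Rabin test in its standard textbook form (not the exact formulation from any of the papers): a prime input is always accepted, while a composite input is rejected in each round with probability at least 3/4.

    import random

    def miller_rabin_round(n: int) -> bool:
        """One round of Miller-Rabin: True means 'probably prime', False means 'composite'."""
        if n < 2:
            return False
        if n % 2 == 0:
            return n == 2
        # Write n - 1 = 2^r * d with d odd.
        r, d = 0, n - 1
        while d % 2 == 0:
            r += 1
            d //= 2
        a = random.randrange(2, n - 1) if n > 4 else 2  # random base
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            return True
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                return True
        return False  # the base a witnesses that n is composite

    def is_probable_prime(n: int, rounds: int = 20) -> bool:
        """Error probability at most 4^(-rounds) on composite inputs; primes always pass."""
        return all(miller_rabin_round(n) for _ in range(rounds))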
Title: Geometry and Approximation Algorithms for NP-hard Optimization Problems
Speaker: Sanjeev Arora, Princeton
Semidefinite programming (SDP) has been key to the design of approximation algorithms for NP-hard problems.
Analysis of SDP seems to call for geometric arguments, old and new.
In the first half of the talk I will survey this area broadly, focusing on what role geometry plays.
In the second half of the talk, I will give details of the main result from my joint paper with Rao and Vazirani in 2004, which resulted in new approximation algorithms for graph partitioning problems such as sparsest cut. I will also outline how this result has proved useful in other applications, including a new embedding of ℓ_1 into ℓ_2 that improves Bourgain's bound of O(log n) from 1985 to almost O(√log n).
The presentation will be self-contained.
Title:
Speaker: Avi Wigderson, Institute for Advanced Study
In the talk I'll discuss the "number-on-forehead" model of communication, in which k players attempt to jointly compute a function of k arguments, where each player sees k−1 of them. This nontrivial generalization of the standard 2-party communication model, introduced by Chandra, Furst and Lipton, captures many computational models, and lower bounds for it have had (and will have) important implications. I'll describe some of this motivation, old results, and some new work with Emanuele Viola, especially on the one-way and bounded-round versions of this model, as well as the possibility of a strong direct-product theorem for it.
Title:
Speaker: Eden Chlamtac, Princeton
The Unique Games Problem is a constraint satisfaction problem over arbitrary domains which can be viewed as a generalization of MAX-CUT and of systems of two-variable linear equations over finite fields.
The Unique Games Conjecture (UGC) states that it is NP-hard to distinguish between UG instances where almost all constraints are simultaneously satisfiable and ones where very few constraints are satisfiable. This conjecture has been shown to imply a number of inapproximability results which are not known to follow from more standard complexity assumptions.
I will present a recent paper (STOC 06) of Charikar, Makarychev and Makarychev, in which they give two approximation algorithms for Unique Games. The approximation guarantees in both algorithms come very close to hardness results which follow from UGC, as was shown by Khot, Kindler, Mossel and O'Donnell.
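For reference, here is the usual formulation of a Unique Games instance (the standard definition, not a statement from the paper): the input is a graph G = (V, E), a label set [k], and a permutation π_{uv} of [k] for every edge (u,v); a labeling ℓ : V → [k] satisfies the edge (u,v) if

    \ell(v) \;=\; \pi_{uv}(\ell(u)),

and the goal is to satisfy as many edges as possible. The UGC asserts that for every ε, δ > 0 there is a k for which it is NP-hard to distinguish instances where a (1−ε)-fraction of the edges can be satisfied from instances where at most a δ-fraction can.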
Title: Coloring, Independent Sets and SDP Hierarchies
Speaker: Eden Chlamtac, Princeton
In this talk I will discuss two related problems. First, given a graph which has chromatic number k (but without an explicit k-coloring), find a legal coloring which uses as few colors as possible. Second, given a graph containing an independent set of size γn (for some positive constant γ), find as large an independent set as possible. Both problems may also be considered in the hypergraph setting.
Much of the work on these problems has involved the use of semidefinite programming (SDP). While this approach seems to have inherent limitations (as demonstrated by lower bounds on integrality gaps), there is hope that hierarchies of SDP relaxations may lead to better approximation algorithms. (This is not clear a priori; for other problems, such as Vertex Cover, this possibility has been strongly ruled out.)
I will discuss some classical SDP-based algorithms for both problems in the graph and hypergraph setting, as well as recent work where I have shown some improvement over previous algorithms (in certain settings) using a constant level of the Lasserre SDP hierarchy.
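For context, the basic SDP relaxation that much of this line of work starts from is the vector coloring relaxation of Karger, Motwani and Sudan; it is stated here as background, not as the specific relaxation used in the new results:

    \text{find unit vectors } v_1, \dots, v_n \text{ with } \langle v_i, v_j \rangle \le -\tfrac{1}{k-1} \text{ for every edge } (i,j).

Every properly k-colorable graph admits such vectors (map each color class to a vertex of a regular simplex), and the smallest feasible k can be computed by semidefinite programming. Hierarchies such as Lasserre strengthen this by adding consistency constraints over larger and larger sets of vertices.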
Title:
Speaker:
Pseudorandom Generators are often viewed as the "computational analogue" of extractors. In fact, it is known that every "black-box" construction of a PRG yields an extractor. This raises the natural question of whether every construction of an extractor yields a PRG. In this talk we will give an answer to this question for Hitting Set Generators and dispersers, which are the one-sided error versions of PRGs and extractors. More specifically, the paper defines a broad family of dispersers (called "Reconstructive Dispersers"), such that any construction of a reconstructive disperser yields an HSG.
Title: Multipoint evaluation of multivariate polynomials in small characteristic
Speaker:
I will present a new algorithm by Chris Umans that solves the following problem: given a multivariate polynomial f(X_1,...,X_m) with individual degrees < d and N = d^m evaluation points a_1,...,a_N in F_q^m, compute all N values f(a_j). The new algorithm runs in time almost linear in N when the characteristic of the field is sufficiently small (previous algorithms for this problem required time >> N^{1.5}).
One application of this new algorithm is an optimal (up to logarithmic factors) algorithm for modular composition of univariate polynomials (computing f(g(x)) mod h(x) for univariate f, g, h). This in turn improves the running time of the best polynomial factorization algorithm.
The nice thing about this algorithm is that it uses the same trick that was used both in the list-decoding algorithm of Guruswami and Rudra and in the recent extractor construction of Guruswami, Umans and Vadhan. This makes it the third time this trick has been used to give an almost optimal solution to a long-standing open problem. The presentation of the algorithm will assume some familiarity with field extensions of F_q.
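For scale, here is a minimal Python sketch of the naive baseline that the abstract implicitly compares against: evaluating f separately at every point, which costs on the order of N·d^m field operations rather than time almost linear in N. The dense dictionary representation and the prime field Z/qZ below are illustrative simplifications, not choices made in Umans' algorithm.

    from itertools import product

    def naive_multipoint_eval(coeffs, points, q):
        """Evaluate a multivariate polynomial at many points, one point at a time.

        coeffs : dict mapping exponent tuples (e_1,...,e_m), each e_i < d, to coefficients
                 (the field is taken to be the prime field Z/qZ for simplicity)
        points : list of m-tuples over Z/qZ
        Returns the list of values f(a) for each point a.
        """
        values = []
        for a in points:
            total = 0
            for exps, c in coeffs.items():
                term = c
                for x, e in zip(a, exps):
                    term = (term * pow(x, e, q)) % q
                total = (total + term) % q
            values.append(total)
        return values

    # Toy usage over F_5: f(X1, X2) = 3*X1^2*X2 + X2 + 4
    if __name__ == "__main__":
        f = {(2, 1): 3, (0, 1): 1, (0, 0): 4}
        pts = list(product(range(5), repeat=2))  # all 25 points of F_5^2
        print(naive_multipoint_eval(f, pts, 5))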