The seminar meets every Sunday, 14:00 – 15:30, in the Pekeris room.
During the fall semester the seminar was on Tuesdays, 11:00 – 13:00.
During the spring semester the seminar was on Sundays, 16:00 – 17:30.
Note: The seminar has a Google Calendar, called
Weizmann Students’ Theory Seminar.
November
- Shachar Lovett
- Ariel Yadin: Suen's inequality

December
- Ariel Gabizon: An introduction to function fields and algebraic geometric codes
- 11/12/2007: Hanukkah
- Klim Efremenko: Approximating General Metric Distances Between a Pattern and a Text
- Zeev Dvir: Hardness-Randomness Tradeoffs for Bounded Depth Arithmetic Circuits

January
- Swastik Kopparty: The Homomorphism Domination Exponent
- 8/1/2008: No meeting
- Amir Yehudayoff: Representing an Arithmetic Circuit as a Sum of `Structured Polynomials'
- Vinod Vaikuntanathan
- Chandan Dubey: Approximating the bandwidth via volume respecting embeddings

February
- Chandan Dubey: Approximating the bandwidth via volume respecting embeddings (cont'd)
- 12/2/2008: IPAM Expanders Workshop
- Or Meir
- Shachar Lovett

March
- Or Meir
- Dana Moshkovitz [at Zi261!]
- Jakob Nordström: Towards an Optimal Separation of Space and Length in Resolution
- Adi Shraibman: Disjointness is hard in the multi-party number-on-the-forehead model
- Adi Shraibman: Disjointness is hard in the multi-party number-on-the-forehead model (cont'd)
- Madhu Sudan

April
- Dave Xiao: Barriers to Proving Hardness of Improper Learning from Worst-case Assumptions
- 13/4/2008
- 20/4/2008: Passover
- Noam Livne: Lower Bounds for Secret Sharing and Non-Shannon Inequalities

May
- Uri Feige: The Cover Time of Random Walks
- Ariel Gabizon (chair): STOC'08 rehearsal talks
- 18/5/2008
- 25/5/2008: No meeting

June
- Shachar Lovett
- Ran Raz [Tuesday, 2pm]: Two query PCP with sub-constant error
- Daniel Glasner: A preemptive algorithm for maximizing disjoint paths on trees
- Robi Krauthgamer [Ziskind 1]: Optimal hierarchical decompositions for congestion minimization in networks
- Guy Kindler: Cubical tilings with sphere-like surface area

July
- Avinatan Hassidim: Quantum Multi Prover Interactive Proofs with Communicating Provers
- Avi Ben-Aroya: A combinatorial construction of almost-Ramanujan graphs using the Zig-Zag product
- Subhash Khot [at Zi261!]
- Oded Goldreich: Two recent works in property testing
Title:
Speaker: Shachar Lovett
We will discuss the problem of constructing pseudorandom generators that fool low-degree polynomials over finite fields. I will describe the result of Bogdanov and Viola from FOCS 2007, which shows that the sum of d independent epsilon-biased sources fools polynomials of degree d assuming the Inverse Gowers Conjecture, and also my result, which shows that the sum of 2d independent epsilon-biased sources fools degree-d polynomials without assuming any unproven conjecture.
On the way, I will try to give some basic facts regarding the Inverse Gowers Conjecture.
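As a rough illustration of the statement (my own hypothetical Python sketch, not the construction from the talk): a random multiset of size O(n/eps^2) is eps-biased with high probability, and XORing several independent samples from it can be compared against the uniform distribution on a fixed low-degree polynomial over F_2.

```python
import numpy as np

n, eps, d = 8, 0.1, 2
rng = np.random.default_rng(0)

# Naive eps-biased distribution: the uniform distribution over a random multiset
# of size O(n/eps^2) is eps-biased with high probability.
S = rng.integers(0, 2, size=(int(20 * n / eps**2), n), dtype=np.uint8)

def gen_sample(num_copies):
    """XOR (= sum over F_2) of `num_copies` independent samples from S."""
    idx = rng.integers(0, len(S), size=num_copies)
    return np.bitwise_xor.reduce(S[idx], axis=0)

def f(x):
    """A fixed degree-2 polynomial over F_2 (arbitrary example)."""
    return (x[0] & x[1]) ^ (x[2] & x[3]) ^ x[4]

def bias(samples):
    """Estimate E[(-1)^f(x)] over the given samples."""
    return sum((-1) ** int(f(x)) for x in samples) / len(samples)

uniform = rng.integers(0, 2, size=(20000, n), dtype=np.uint8)
generator = [gen_sample(2 * d) for _ in range(20000)]
print("bias under uniform:  ", bias(uniform))
print("bias under generator:", bias(generator))
```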
Title: Suen's inequality
Speaker: Ariel Yadin
In many applications in Probability, Combinatorics and Computer Science, we wish to calculate the probability that N events do not occur. If the events are independent, this is just the product of the probabilities that each of the events does not occur. But in most cases the events are not independent.
Suen's inequality is an upper and lower bound on the ratio between the actual probability, and the "independent case". That is, we can assume the events are indeed independent, and Suen's inequality will provide us with the error term. The special feature of Suen's inequality is that the error depends only on the two point correlations; that is, on the quantities Pr[A and B] over all events A,B. Suen's inequality can be applied in a much more general setting than Janson's inequality, and is usually stronger than the Lovasz Local Lemma.
In this talk, we pursue three different objectives:
(1) Prove Suen's inequality.
(2) Show a nice example of how "continuous" techniques from calculus (such as differentiation and integration) can be used to give simple proofs of combinatorial results. Janson's proof of Suen's inequality is such an example.
(3) Give a simple example of a combinatorial / probabilistic problem where Suen's inequality immediately provides a solution.
Title: An introduction to function fields and algebraic geometric codes
Speaker: Ariel Gabizon
I begin with some motivation:
Univariate polynomials give us a simple way to construct codes that have an optimal tradeoff between size, distance and block length. These are the well-known Reed-Solomon codes, where the message is interpreted as a univariate polynomial over a field k and is encoded by its evaluations on the elements of k.
The main disadvantage of RS codes is that the alphabet has to be large, specifically, as large as the output block length. This corresponds to the fact that a univariate polynomial cannot have more evaluation points than the size of the underlying field.
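For concreteness, here is a minimal Python sketch of Reed-Solomon encoding as just described (a toy illustration; the field size, degree bound and message below are arbitrary choices of mine):

```python
# Toy Reed-Solomon encoder over the prime field F_q (q and k chosen arbitrarily).
q = 97   # field size; also the block length, since we evaluate at all of F_q
k = 5    # message length = number of polynomial coefficients (degree < k)

def rs_encode(message):
    """Encode a message (k field elements) as the evaluations of the
    corresponding degree-<k polynomial at every point of F_q."""
    assert len(message) == k
    def poly_eval(x):
        return sum(c * pow(x, i, q) for i, c in enumerate(message)) % q
    return [poly_eval(x) for x in range(q)]

codeword = rs_encode([3, 1, 4, 1, 5])
# Two distinct messages agree on at most k-1 evaluation points,
# so the code has distance q - k + 1 (the optimal, Singleton, tradeoff).
print(codeword[:10])
```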
Constructions of so-called algebraic geometric codes circumvent this disadvantage by using objects known as algebraic function fields.
Loosely speaking, the elements of a function field are objects that behave like univariate polynomials while having more evaluation points than the underlying field's size.
Formally, a function field is an algebraic extension of the rational function field k(X). This is in contrast to the more standard field k(X,Y) of rational functions in two variables, which is a transcendental extension of k(X).
This definition may spark an intuition of objects 'in-between' univariate and bivariate polynomials.
From a geometric perspective, an element of a function field roughly corresponds to evaluating a bivariate (or more generally, multivariate) polynomial on the points of a curve, i.e., the set of zeros of an irreducible polynomial f(x,y), rather than on the whole plane.
I will give the basic definitions and facts relating to function fields and algebraic-geometric codes, and try to give an intuitive justification for the definitions.
It is recommended to review the notions of a ring and an ideal before the lecture.
Title: Approximating General Metric Distances Between a Pattern and a Text
Speaker: Klim Efremenko
Let T = t_0…t_{n-1} be a text and P = p_0…p_{m-1} a pattern taken from some finite alphabet set S, and let d be a metric on S.
We consider the problem of calculating the sum of distances between the symbols of P and the symbols of substrings of T of length m, for all possible offsets.
We present an ε-approximation algorithm for this problem which runs in time O((1/ε^2)·n·polylog(n,|S|)). This algorithm is based on a low-distortion embedding of metric spaces into normed spaces (specifically, into l_∞), which is done as a preprocessing stage. The algorithm is also based on a sampling technique.
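To make the problem concrete, here is a hypothetical Python sketch of the exact (non-approximate) quantity the algorithm estimates, computed naively in O(nm) time; the alphabet and metric are arbitrary examples of mine:

```python
# Naive O(n*m) computation of the distance sums that the algorithm approximates.
# For each offset i, D[i] = sum_j d(T[i+j], P[j]).

def distance_sums(T, P, d):
    n, m = len(T), len(P)
    return [sum(d(T[i + j], P[j]) for j in range(m)) for i in range(n - m + 1)]

# Example: symbols are integers and d is the absolute-difference metric.
T = [1, 5, 2, 8, 3, 7, 4]
P = [2, 3, 4]
print(distance_sums(T, P, lambda a, b: abs(a - b)))
```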
Title: Hardness-Randomness Tradeoffs for Bounded Depth Arithmetic Circuits
Speaker: Zeev Dvir
I'll talk about a recent joint work with Amir Shpilka and Amir Yehudayoff showing that lower bounds for bounded depth arithmetic circuits imply derandomization of polynomial identity testing for bounded depth arithmetic circuits. More formally, if there exists an explicit polynomial f(x_1,...,x_m) that cannot be computed by a depth-d arithmetic circuit of small size, then there exists an efficient deterministic algorithm to test whether a given depth d-8 circuit is identically zero or not (assuming the individual degrees of the tested circuit are not too high). In particular, if we are guaranteed that the tested circuit computes a multilinear polynomial then we can perform the identity test efficiently. To the best of our knowledge this is the first hardness-randomness tradeoff for bounded depth arithmetic circuits.
The above results are obtained using the arithmetic Nisan-Wigderson generator of [KabanetsImpagliazzo04] together with a new theorem on bounded depth circuits, which is the main technical contribution of our work. This theorem deals with polynomial equations of the form P(x_1,...,x_n,y) ≡ 0 and shows that if P has a circuit of depth d and size s, and if the polynomial f(x_1,...,x_n) satisfies P(x_1,...,x_n,f(x_1,...,x_n)) ≡ 0, then f has a circuit of depth d+3 and size O(s·r + m^r), where m is the total degree of f and r is the degree of y in P. In the other direction, we observe that the methods of [KabanetsImpagliazzo04] imply that if we can derandomize polynomial identity testing for bounded depth circuits then NEXP does not have bounded depth arithmetic circuits. That is, either NEXP ⊄ P/poly or the Permanent is not computable by polynomial-size bounded depth arithmetic circuits.
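For context, the randomized test being derandomized here is the classical Schwartz-Zippel identity test: evaluate the circuit at a random point and declare it nonzero if the value is nonzero. A hedged Python sketch of that test, with the circuit represented as a black-box function (a simplification of mine, not the paper's deterministic algorithm):

```python
import random

def is_probably_zero(circuit, num_vars, degree, trials=20, p=2**61 - 1):
    """Randomized polynomial identity test (Schwartz-Zippel).
    If the polynomial is nonzero and has total degree <= degree, a random
    evaluation over a set of size p is nonzero except with prob. degree/p."""
    for _ in range(trials):
        point = [random.randrange(p) for _ in range(num_vars)]
        if circuit(point) % p != 0:
            return False          # witnessed a nonzero value: definitely nonzero
    return True                   # all evaluations zero: identically zero w.h.p.

# Example: (x+y)^2 - x^2 - 2xy - y^2 is identically zero.
print(is_probably_zero(lambda v: (v[0] + v[1])**2 - v[0]**2 - 2*v[0]*v[1] - v[1]**2,
                       num_vars=2, degree=2))
```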
Title: The Homomorphism Domination Exponent
Speaker: Swastik Kopparty (MIT)
For fixed graphs F and G, we study the relationship between the number of homomorphisms from F to T and from G to T for an arbitrary graph T. Define the homomorphism domination exponent of F and G, denoted by hde(F,G), to be the largest constant c such that |Hom(F,T)| ≥ |Hom(G,T)|c for all T. The decidability of "hde(F,G) ≥ 1" is an important open question in database theory (known as the containment problem for conjunctive queries under bag semantics).
We give algorithms to compute hde(F,G) for certain families of F and G. Our techniques are based on entropy and linear programming, and generalize the entropy variants of Shearer's lemma due to Radhakrishnan, Friedgut, Kahn and others.
Joint work with Ben Rossman (MIT).
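As a toy illustration of the quantities involved (not the paper's algorithm), the following hypothetical Python sketch counts homomorphisms by brute force and upper-bounds hde(F,G) by the log-ratio over a few small target graphs:

```python
import itertools, math

def hom_count(F, T):
    """Count graph homomorphisms from F to T (graphs given as (num_vertices, edge_list))."""
    nF, EF = F
    nT, ET = T
    ET = {frozenset(e) for e in ET}
    count = 0
    for phi in itertools.product(range(nT), repeat=nF):
        if all(frozenset((phi[u], phi[v])) in ET for (u, v) in EF):
            count += 1
    return count

# F = path on 3 vertices, G = a single edge; targets T = a few small graphs.
F = (3, [(0, 1), (1, 2)])
G = (2, [(0, 1)])
targets = [(3, [(0, 1), (1, 2), (0, 2)]),            # triangle
           (4, [(0, 1), (1, 2), (2, 3), (3, 0)]),     # 4-cycle
           (4, [(0, 1), (0, 2), (0, 3)])]             # star K_{1,3}

# hde(F,G) is the largest c with |Hom(F,T)| >= |Hom(G,T)|^c for all T;
# minimizing the log-ratio over sample targets gives an upper bound on it.
print(min(math.log(hom_count(F, T)) / math.log(hom_count(G, T)) for T in targets))
```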
Title: Representing an Arithmetic Circuit as a Sum of `Structured Polynomials'
Speaker: Amir Yehudayoff
We will study the structure of polynomials computed by arithmetic circuits.
For simplicity, we will focus on syntactically multilinear circuits.
We will show a certain kind of argument that was used to prove lower bounds for various models in [Raz04a, Raz04b, RY07].
Roughly speaking, the arguments are based on the observation that any function that can be computed by a multilinear arithmetic circuit of size s can be written as the sum of at most s polynomials that have some specific structure.
If time permits, we will describe the proofs of several lower bounds using such arguments.
Title:
Speaker: Vinod Vaikuntanathan (MIT)
A fundamental question in cryptography is whether it is possible to reduce the security of cryptographic schemes to the worst-case hardness of problems. To do this, one has to show that solving a problem is as hard on average as it is in the worst case. Ajtai in 1996 showed exactly such a result for a class of problems on integer lattices.
We will present a simplified proof of Ajtai's theorem, due to Micciancio and Regev, and some of the amazing cryptography that it generates.
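As a small, hedged illustration of the flavor of this connection (a sketch of an Ajtai-style SIS hash function with arbitrary toy parameters of mine, not the proof from the talk): collisions in the function below yield short vectors in a random lattice, so finding collisions on average is at least as hard as the underlying lattice problem.

```python
import numpy as np

# Ajtai-style hash: h_A(x) = A x mod q for x in {0,1}^m, with random A in Z_q^{n x m}.
# A collision x != x' gives a nonzero z = x - x' in {-1,0,1}^m with A z = 0 (mod q),
# i.e. a short vector in the lattice {z : A z = 0 mod q}.
n, m, q = 8, 64, 17       # toy parameters (m > n log q, so the function compresses)
rng = np.random.default_rng(1)
A = rng.integers(0, q, size=(n, m))

def hash_sis(x):
    """x: 0/1 vector of length m -> vector in Z_q^n."""
    return tuple((A @ np.asarray(x)) % q)

x = rng.integers(0, 2, size=m)
print(hash_sis(x))
```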
Title: Approximating the bandwidth via volume respecting embeddings
Speaker: Chandan Dubey
A linear arrangement of an n-vertex graph is a one-to-one mapping of its vertices to the integers {1,...,n}. The bandwidth of a linear arrangement is the maximum difference between the mapped values of adjacent vertices. The problem of finding a linear arrangement with the smallest possible bandwidth is NP-hard. We present a randomized algorithm that runs in nearly linear time and outputs a linear arrangement whose bandwidth is within a poly-logarithmic multiplicative factor of optimal. The algorithm is based on a new notion, called volume respecting embeddings.
This presentation is based on the paper by Uriel Feige from STOC 1998. Interestingly, this paper still gives the best known approximation guarantee for bandwidth.
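To pin down the definition, here is a hypothetical Python sketch that computes the exact bandwidth of a small graph by brute force over all linear arrangements (exponential time, illustration only; the algorithm in the talk is entirely different):

```python
import itertools

def bandwidth(n, edges):
    """Exact bandwidth of an n-vertex graph by trying all linear arrangements."""
    best = n
    for perm in itertools.permutations(range(1, n + 1)):
        # perm[v] is the integer assigned to vertex v
        width = max(abs(perm[u] - perm[v]) for (u, v) in edges)
        best = min(best, width)
    return best

# Example: the 6-cycle has bandwidth 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
print(bandwidth(6, edges))
```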
Title: Three expander based constructions of codes
Speaker: Or Meir
For a long time, coding theory has sought constructions of good error-correcting codes that have very efficient encoding and decoding algorithms. However, until 1994, most of the known good codes were algebraic codes based on polynomials, and their encoding and decoding algorithms usually required time at least O(n log n). This situation changed in 1994, when Sipser and Spielman gave constructions of error-correcting codes based on expander graphs, with decoding algorithms that run in time O(n). Their construction inspired a sequence of works on this subject, yielding better and better codes that have linear-time encoding and decoding algorithms.
In this talk, I'll survey three works in this line: the original paper of Sipser and Spielman from 1994, the follow-up paper of Spielman on codes that have both linear-time encoding and decoding algorithms, and the paper of Venkatesan Guruswami and Piotr Indyk that yielded such codes with optimal rate.
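As a hedged illustration of the flavor of expander-based decoding, here is a generic bit-flipping decoder for a code given by a sparse parity-check matrix (a simplification in the spirit of Sipser-Spielman, not their exact algorithm; the Hamming-code example is my own toy choice):

```python
import numpy as np

def bit_flip_decode(H, word, max_rounds=100):
    """Bit-flipping decoding for a code with 0/1 parity-check matrix H:
    repeatedly flip a bit that violates more than half of the checks it appears in.
    When H comes from a good expander (as in Sipser-Spielman), this provably
    corrects a constant fraction of errors in linear time."""
    y = word.copy()
    deg = H.sum(axis=0)                       # number of checks each bit appears in
    for _ in range(max_rounds):
        syndrome = (H @ y) % 2                # 1 marks an unsatisfied parity check
        if not syndrome.any():
            return y                          # all checks satisfied: done
        unsat = H.T @ syndrome                # per bit: unsatisfied checks containing it
        bad = np.flatnonzero(unsat * 2 > deg)
        if len(bad) == 0:
            break                             # stuck (word too corrupted)
        y[bad[0]] ^= 1                        # flip one offending bit
    return y

# Toy example: the [7,4] Hamming code's parity-check matrix, one flipped bit.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
received = np.zeros(7, dtype=int)             # the all-zero codeword...
received[3] ^= 1                              # ...with one bit flipped
print(bit_flip_decode(H, received))
```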
Title:
Speaker: Shachar Lovett
Cheese, wine and a report on interesting papers from STACS'08.
Title:
Speaker: Dana Moshkovitz
We'll go over some results presented at the IPAM Expander Workshop: near Euclidean sections of L1 (Avi Wigderson), applications of “local expansion” to extremal graph theory (Benny Sudakov), problems on unbalanced expanders (Omer), finding edge disjoint paths in expanders deterministically and online (Noga Alon), certifying quasi-randomness of hypergraphs (Luca Trevisan) and more…
Title: Towards an Optimal Separation of Space and Length in Resolution
Speaker: Jakob Nordström (Royal Institute of Technology, Stockholm, Sweden)
Most state-of-the-art satisfiability algorithms today are variants of the DPLL procedure augmented with clause learning. The main bottleneck for such algorithms, other than the obvious one of time, is the amount of memory used. In the field of proof complexity, the resources of time and memory correspond to the length and space of resolution proofs. There has been a long line of research trying to understand these proof complexity measures, but while strong results have been proven on length our understanding of space is still quite poor. For instance, it remains open whether the fact that a formula is provable in short length implies that it is also provable in small space or whether on the contrary these measures are unrelated in the sense that short proofs can be "arbitrarily complex" with respect to space.
In this talk, we present some evidence that the true answer should be that the latter case holds, and provide a possible roadmap for how such an optimal separation result could be obtained. We do this by proving a tight bound of Θ(√n) on the space needed for so-called pebbling contradictions over pyramid graphs of size n. This yields the first polynomial lower bound on space that is not a consequence of a corresponding lower bound on width, another well-studied measure in resolution, as well as an improvement of the weak separation in (Nordström 2006) of space and width from logarithmic to polynomial.
Joint work with Johan Håstad, Royal Institute of Technology.
Title: Disjointness is hard in the multi-party number-on-the-forehead model
Speaker: Adi Shraibman
We show that disjointness requires randomized communication Ω(n^{1/2k}/((k-1)·2^{k-1}·2^{2^{k-1}})) in the general k-party number-on-the-forehead model of complexity. The previous best lower bound was Ω((log n)/(k-1)).
Joint work with Troy Lee from Rutgers University.
Title:
Speaker: Madhu Sudan (MIT)
A basic goal in Property Testing is to identify a minimal set of features that make a property testable.
For the case when the property to be tested is membership in a binary linear error-correcting code, Alon et al. [AKKLR] had conjectured that the presence of a single low-weight codeword in the dual, together with ``2-transitivity'' of the code (i.e., the code is invariant under a 2-transitive group of permutations on its coordinates), suffices to get local testability. The motivation for this conjecture is that it captures most natural families of linear locally testable codes, especially low-degree polynomials, Reed-Muller codes, and BCH codes. Furthermore, 2-transitivity seemed to be the (only?) essential element in the analysis of local testability.
We introduce a richer class of properties (codes), where codewords are multivariate functions and the code is invariant under affine transformations of the domain. On the one hand these classes lead to generalizations and unifications of the earlier results. On the other hand they lead to a rich class of 2-transitive properties. In this talk we will focus on the negative implications. We show that the AKKLR conjecture is false, by giving an affine-invariant property (code) that has low-weight codewords in its dual, but is not locally testable.
Joint work with Elena Grigorescu (MIT) and Tali Kaufman (MIT & IAS).
Title: Barriers to Proving Hardness of Improper Learning from Worst-case Assumptions
Speaker: Dave Xiao (Princeton)
Computational learning theory, and in particular PAC learning, was introduced in 1984 by Valiant and has since become a major area of research in theoretical and applied computer science. One natural question that was posed at the very inception of the field is whether there are classes of functions that are hard to learn. Here it is important to make a distinction between proper and improper learning: in proper learning one is required to output a hypothesis that is comparable to the function being learned (e.g. if we are trying to learn k-DNF then the hypothesis should also be a k-DNF), while in improper learning the hypothesis can be more complex (e.g. if we are trying to learn k-DNF then the hypothesis could be a circuit of size n^c).
Computational limitations to proper and improper learning have been extensively studied, starting with the seminal works of Pitt-Valiant (JACM '88) and Kearns-Valiant (JACM '94). However, while the hardness of proper learning can be based on worst case assumptions on the power of NP (e.g. NP≠RP), all known limitations on improper learning are based on cryptographic assumptions (e.g., the existence of one-way functions).
It is natural to ask whether this gap is inherent, specifically: is it possible to base hardness of improper learning on worst case assumptions such as NP≠RP?
Unfortunately this problem has remained open and seems difficult to solve. In this work we show that it is related to questions of average-case vs. worst-case complexity and weak cryptographic primitives. In particular, we prove the following: if there exists a black-box reduction R from deciding a language L to learning small circuits, then there exists a reduction R' from deciding L to inverting a family of auxiliary-input one-way functions (as defined by Ostrovsky-Wigderson). Furthermore, if R is non-adaptive, then in fact we can adapt the results of Akavia et al. to show that this implies L is contained in coAM. As a corollary, if L is NP-complete then PH collapses and so such a reduction is unlikely to exist. Our results hold even in the stronger model of agnostic learning.
This is joint work with Benny Applebaum and Boaz Barak.
Title: Lower Bounds for Secret Sharing and Non-Shannon Inequalities
Speaker: Noam Livne
A secret sharing scheme enables a dealer to distribute shares (pieces of information) between parties, such that predefined coalitions can reveal some secret (a random variable), but other coalitions cannot reveal any information on the secret. A central issue in secret sharing schemes is the size of the shares: in all known secret sharing schemes the size of the shares is exponential in the number of participants. However, the best known lower bound on the size of the shares is Ω(n/log n) (where n is the number of parties). We will show this lower bound. We will then show that, essentially, the "known" entropy inequalities cannot improve these lower bounds, and that this requires "non-Shannon inequalities" (we will explain what these are). Such inequalities were found in the last decade.
If time permits, we will show a result by Beimel, Padro and myself, which circumvents this impossibility result (albeit in a different, "weaker" setting). This result indeed uses a non-Shannon inequality.
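For readers unfamiliar with the object being bounded, here is a hedged Python sketch of the classical Shamir threshold scheme, just to illustrate what a secret sharing scheme is (it is not related to the lower-bound techniques of the talk):

```python
import random

P = 2**61 - 1   # a prime larger than the secret and the number of parties

def share(secret, n, t):
    """Shamir t-out-of-n sharing: hide the secret as the constant term of a
    random degree-(t-1) polynomial and hand party i the evaluation at i."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return {i: f(i) for i in range(1, n + 1)}

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t of the shares."""
    secret = 0
    for i, yi in shares.items():
        num = den = 1
        for j in shares:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=42, n=5, t=3)
print(reconstruct({i: shares[i] for i in [1, 3, 5]}))   # any 3 shares recover 42
```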
Title: The Cover Time of Random Walks
Speaker: Uri Feige
The cover time of a random walk on a finite graph is the expected number of
steps taken until all vertices are visited. This talk will survey some of the known
bounds on the cover time and some of the techniques that are used in order to
establish these bounds. The techniques include connections with electrical
resistance, the use of minimum weight spanning trees, and the use of
probabilistic arguments. The known bounds refer to the maximum and minimum
possible cover time on general graphs (expressed as a function of the number of
vertices in the graph), as well as on special families of graphs such as trees
and regular graphs. The question of designing an efficient deterministic
algorithm that on any given graph produces an accurate estimate of the cover
time is still open.
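As a small illustrative sketch (my own Monte Carlo estimate with assumed parameters, rather than the analytic bounds surveyed in the talk), the cover time can be estimated empirically:

```python
import random

def cover_time_once(adj, start=0):
    """Run one random walk and return the number of steps until all vertices are visited."""
    visited = {start}
    v, steps = start, 0
    while len(visited) < len(adj):
        v = random.choice(adj[v])
        visited.add(v)
        steps += 1
    return steps

def estimate_cover_time(adj, trials=2000):
    return sum(cover_time_once(adj) for _ in range(trials)) / trials

# Example: a cycle on n vertices, whose cover time is known to be n(n-1)/2.
n = 12
cycle = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
print(estimate_cover_time(cycle))   # should be close to 12*11/2 = 66
```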
STOC'08 rehearsal talks
A special meeting dedicated to STOC'08 rehearsal talks. The plan is to meet in room 261 at 16:00, go over the schedule for the conference and point out some of the conference highlights. Then we split into two rooms: 261 and Pekeris. A tentative schedule for the double session is below.
Room 261 Session Chair:
- Games for Exchanging Information (Gillat Kol and Moni Naor)
- Combinatorial Construction of Locally Testable Codes (Or Meir)
- Sketching in Adversarial Environments (Ilya Mironov, Moni Naor and Gil Segev)
Pekeris Session Chair: Amir Yehudayoff
- Unconditional Pseudorandom Generators for Low Degree Polynomials (Shachar Lovett)
- Inverse Conjecture for the Gowers Norm is False (Roy Meshulam, Shachar Lovett and Alex Samorodnitsky)
Title:
Speaker: Shachar Lovett
Linial and Nisan [LN90] conjectured that every depth-d Boolean circuit with AND/OR gates, using M gates, is fooled by any k-wise independent distribution, for k = (log M)^{d-1}. However, they could only show it for k = √M·log M.
I will show a very beautiful result of Bazzi (FOCS '07), showing that CNF/DNF formulas with M clauses are fooled by any (log M)^2-wise independent distribution.
The result uses several reductions, each of which is beautiful in its own right, considering the representation of a DNF in various bases: the standard basis, the polynomial {0,1} basis, and the polynomial {-1,1} basis (i.e., the Fourier basis).
Let g(x) = g(x_1,...,x_n) be some DNF formula on the variables with M clauses. We consider g as a polynomial over the reals, g: {0,1}^n → R.
1. He first uses duality of linear programming to show that proving that a function g(x) is fooled by k-wise independence is equivalent to finding polynomials of degree k over the reals bounding g, i.e., polynomials g_u, g_l such that for every x, g_u(x) ≥ g(x) ≥ g_l(x) (a precise statement appears after this list).
2. He then shows that if you can find a low-degree polynomial f(x) such that f(x) = 0 whenever g(x) = 0, then you can build the polynomials g_u, g_l. Thus we have reduced to the problem of finding such an f(x).
3. In order to build f(x) he constructs a special truncation of the high Fourier frequencies of DNFs. To analyze this, he needs to analyze various functions related to the DNF; the analysis here relies strongly on their Fourier expansion.
4. Finally the proof reduces to bounding the tail L_2 norm of certain other DNFs related to the original DNF. He uses the Linial-Mansour-Nisan bound on the L_2 norm of the Fourier tail of DNFs, which follows from the Håstad switching lemma.
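For reference, the duality in step 1 is the sandwiching-polynomials characterization, which (in my paraphrase, not a quote from the talk) reads:

```latex
% g : \{0,1\}^n \to \mathbb{R} is \varepsilon-fooled by every k-wise independent
% distribution if and only if there exist real polynomials g_l, g_u of degree
% at most k such that
g_l(x) \le g(x) \le g_u(x) \quad \text{for all } x \in \{0,1\}^n,
\qquad
\mathbb{E}_{x \sim U}\!\left[g_u(x) - g(x)\right] \le \varepsilon,
\qquad
\mathbb{E}_{x \sim U}\!\left[g(x) - g_l(x)\right] \le \varepsilon .
```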
Title: Two Query PCP with Sub-Constant Error
Speaker: Ran Raz
We show that the NP-complete language 3SAT has a PCP verifier that makes two queries to a proof of almost-linear size and achieves sub-constant probability of error o(1). The verifier performs only projection tests, meaning that the answer to the first query determines at most one accepting answer to the second query.
Previously, by the parallel repetition theorem, there were PCP Theorems with two-query projection tests, but only (arbitrarily small) constant error and polynomial size. There were also PCP Theorems with sub-constant error and almost-linear size, but a constant number of queries that is larger than 2.
As a corollary, we obtain a host of new results. In particular, our theorem improves many of the hardness of approximation results that are proved using the parallel repetition theorem. A partial list includes the following:
1) 3SAT cannot be efficiently approximated to within a factor of 7/8+o(1), unless P=NP. This holds even under almost-linear reductions. Previously, the best known NP-hardness factor was 7/8+ε for any constant ε>0, under polynomial reductions (Håstad).
2) 3LIN cannot be efficiently approximated to within a factor of 1/2+o(1), unless P=NP. This holds even under almost-linear reductions. Previously, the best known NP-hardness factor was 1/2+ε for any constant ε>0, under polynomial reductions (Håstad).
3) A PCP Theorem with amortized query complexity 1+o(1) and amortized free bit complexity o(1). Previously, the best known amortized query complexity and free bit complexity were 1+ε and ε, respectively, for any constant ε>0 (Samorodnitsky and Trevisan).
4) Clique cannot be efficiently approximated to within a factor of n^{1-o(1)}, unless ZPP=NP. Previously, a hardness factor of n^{1-ε} for any constant ε>0 was known, under the assumption that P≠NP (Håstad and Zuckerman).
One of the new ideas that we use is a new technique for doing the composition step in the (classical) proof of the PCP Theorem, without increasing the number of queries to the proof. We formalize this as a composition of new objects that we call Locally Decode/Reject Codes (LDRC). The notion of LDRC was implicit in several previous works, and we make it explicit in this work. We believe that the formulation of LDRCs and their construction are of independent interest.
This is joint work with Dana Moshkovitz.
Title: A Preemptive Algorithm for Maximizing Disjoint Paths on Trees
Speaker: Daniel Glasner
We consider the online version of the maximum vertex disjoint path problem when the underlying network is a tree. In this problem, a sequence of requests arrives in an online fashion, where every request is a path in the tree. The online algorithm may accept a request only if it does not share a vertex with a previously accepted request. The goal is to maximize the number of accepted requests. It is known that no online algorithm can have a competitive ratio better than Ω(log n) for this problem, even if the algorithm is randomized and the tree is simply a line. Obviously, it is desirable to beat the logarithmic lower bound. Adler and Azar [SODA 1999] showed that if preemption is allowed (namely, previously accepted requests may be discarded, but once a request is discarded it can no longer be accepted), then there is a randomized online algorithm that achieves constant competitive ratio on the line. In the current work we present a randomized online algorithm with preemption that has constant competitive ratio on any tree. Our results carry over to the related problem of maximizing the number of accepted paths subject to a capacity constraint on vertices (in the disjoint path problem this capacity is 1). Moreover, if the available capacity is at least 4, randomization is not needed and our online algorithm becomes deterministic.
This is a joint work with Yossi Azar and Uri Feige.
Title: Optimal Hierarchical Decompositions for Congestion Minimization in Networks
Speaker: Robi Krauthgamer
I will describe a remarkable STOC 2008 paper by Harald Raecke (best paper award co-winner):
"Optimal Hierarchical Decompositions for Congestion Minimization in Networks"
http://www.dcs.warwick.ac.uk/~harry/pdf/opthierarchical.pdf
The paper designs a new recursive decomposition technique that approximates cuts in a graph (in a certain sense). This remarkable concept leads, quite easily, to improved approximation algorithms for two seemingly unrelated problems: oblivious routing with minimum congestion, and minimum bisection in graphs.
Title: Can cubic tiles be sphere-like?
Speaker: Guy Kindler
The unit cube tiles R^d by Z^d, in the sense that its translations by vectors from Z^d cover R^d.
It is natural to ask what is the minimal surface area of a body that tiles R^d by Z^d.
The volume of any such body should clearly be at least 1, and therefore its surface area must be at least that of a unit-volume ball, which is of order √d.
The surface area of the cube, however, is of order d, and no better tiling was known.
In this work we use a random construction to show that the optimal surface area is indeed Θ(√d), namely there exist bodies that tile R^d as a cube would, but have sphere-like surface areas.
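A one-line calculation (my addition; standard isoperimetry, not part of the abstract) explains the √d lower bound quoted above. Writing v_d for the volume of the unit ball in R^d and r_d for the radius of the unit-volume ball:

```latex
% Surface area of the unit-volume ball in R^d: from v_d r_d^d = 1,
\text{surface} \;=\; d\, v_d\, r_d^{\,d-1} \;=\; d\, v_d^{1/d}
\;\approx\; \sqrt{2\pi e\, d} \;=\; \Theta(\sqrt{d}),
% using v_d = \pi^{d/2} / \Gamma(d/2 + 1), so that v_d^{1/d} \approx \sqrt{2\pi e / d}.
```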
Tiling problems were considered for well over a century,
but this particular tiling problem was also recently considered in Computer Science
because of its relations with the Unique Games conjecture and with the Parallel Repetition problem.
Indeed, our current result follows from the same idea that was used recently by Raz in his counterexample for the strong Parallel Repetition conjecture.
This is joint work with Ryan O'Donnell, Anup Rao and Avi Wigderson.
Title: Quantum Multi Prover Interactive Proofs with Communicating Provers
Speaker: Avinatan Hassidim
We introduce a variant of Quantum Multi Prover Interactive Proofs (QMIP), where
the provers do not share entanglement, the communication between the verifier
and the provers is quantum, but the provers are unlimited in the classical
communication between them.
At first, this model may seem very weak, as provers who exchange information seem to be equivalent in power to a single prover. This, in fact, is not the case: we show that any language in NEXP can be recognized in this model efficiently, with just two provers and two rounds of communication, with a constant completeness-soundness gap.
The main idea is not to bound the information the provers exchange with each
other, as in the classical case, but rather to prove that any ``cheating''
strategy employed by the provers has constant probability to diminish the
entanglement between the verifier and the provers by a constant amount.
Detecting such a reduction gives us the soundness proof. Similar ideas and techniques may help with other models of quantum MIP, including the dual question of non-communicating provers with unlimited entanglement.
Joint work with Michael Ben-Or and Haran Pilpel.
Title: A combinatorial construction of almost-Ramanujan graphs using the Zig-Zag product
Speaker: Avraham Ben-Aroya (Tel-Aviv University)
Reingold, Vadhan and Wigderson [FOCS'00] introduced the graph Zig-Zag product. This product combines a large graph and a small graph into one graph, such that the resulting graph inherits its size from the large graph, its degree from the small graph and its spectral gap from both. Using this product they gave the first fully-explicit combinatorial construction of expander graphs. They showed how to construct D-regular graphs having spectral gap 1-O(D^{-1/3}). In the same paper, they posed the open problem of whether a similar graph product could be used to achieve the almost-optimal spectral gap 1-O(D^{-1/2}).
In this paper we propose a generalization of the Zig-Zag product that combines a large graph and several small graphs. The new product gives a better relation between the degree and the spectral gap of the resulting graph. We use the new product to give a fully-explicit combinatorial construction of D-regular graphs having spectral gap 1-D^{-1/2+o(1)}.
Joint work with Amnon Ta-Shma.
Title:
Speaker: Subhash Khot (NYU)
This talk will review some recent results proving lower bounds on the distortion
needed to embed specific classes of finite metrics into L1. This includes
earth-mover metrics, edit distance metrics and the so-called negative type
metrics.
Title: Two recent works in property testing
Speaker: Oded Goldreich
I plan to talk about two recent works by Dana Ron and myself.
Both works are in the area of property testing.
1. "Algorithmic Aspects of Property Testing in the Dense Graphs Model" initiates a refined study of the query complexity of testing in this model, and focuses on the relation between adaptive and non-adaptive testers.
2. "On Proximity Oblivious Testing" defines and studies a restricted class of testers; specifically, testers that repeatedly invoke a basic test that is not given the proximity parameter as input.
The works are available at http://www.wisdom.weizmann.ac.il/~oded/p_testAA.html and http://www.wisdom.weizmann.ac.il/~oded/p_testPOT.html, respectively.