The seminar is on Sundays.

November
| Ariel Gabizon | Isomorphism of Graphs with Bounded Eigenvalue Multiplicity |
| Dana Moshkovitz | |
| Zeev Dvir | Extractors for a Constant Number of Polynomial Min-Entropy Independent Sources |
| Amir Yehudayoff | Estimation of a certain exponential sum arising in complexity theory |

December
13/12/2005 [Tuesday!] | Adam Smith | Evolving Notions of Security for Quantum Protocols |
| Elad Verbin | Markov-Chain Monte-Carlo (MCMC) Methods in Theoretical Computer Science: An overview |
26/12/2005 | | Hanuka |

January
2/1/2006 | | |
9/1/2006 | | |
| Hovav Shacham | |
| Guy Rothblum | |
| Ronen Gradwohl | |

February
| Ariel Gabizon | |
| Gidi Amir | Biased Graph Processes and The Emergence of The Giant Component |
[at 1pm!] | Ariel Yadin | Everyday Inequalities |

March
[Thursday!] | Omer Reingold | Tutorial on Black-Box Separations |
| Emmanuelle Lebhar | A doubling dimension threshold O(loglog n) for augmented graphs navigability |
13/3/2006 | | Ta'anit Esther |
[Sunday!] | Moni Naor, Danny Harnik | How to prove that Minicrypt=Cryptomania (in the future) [Note: The talk will be at room 5 in Feinberg!] |
| Hovav Shacham | Finding small roots of modular polynomials |

April
| Zeev Dvir | Semi-direct product in groups and Zig-zag product in graphs: connections and applications |
6/4/2006 [Thursday!] | Mira Meyerovich | |
16/4/2006 | | Passover |
23/4/2006 | | |
| Dana Moshkovitz | Reducing Randomness for Sampling – Part I |

May
| Dana Moshkovitz | Reducing Randomness for Sampling – Part II |
14/5/2006 | | |
| Noam Livne | Proving Lower Bounds for Secret Sharing Schemes Using the Entropy Function |
| Dana Moshkovitz | |

June
| Dana Moshkovitz | |
| Amir Yehudayoff | On the Efficiency of Local Decoding Procedures for Error-Correcting Codes |
18/6/2006 | | |
25/6/2006 | | |

July
| Hovav Shacham | |
[room 261] | Anup Rao | |
24/7/2006 [Monday!] | Ariel Gabizon | Stepanov's elementary method for proving Weil's bounds |
| Or Meir | Graph Isomorphism for Planar Graphs |

August
| Oded Goldreich | Pseudorandomness -- an overview |
| Oded Goldreich | Pseudorandomness -- an overview (cont'd) |

September
| Michael Schapira | Truthful Randomized Mechanisms for Combinatorial Auctions |
24/9/2006 | | Rosh Ha'Shana |
Title: Isomorphism of Graphs with Bounded Eigenvalue Multiplicity
Speaker: Ariel Gabizon
The graph isomorphism problem is one of the most interesting problems in NP whose complexity is undetermined. It is believed not to be NP-complete, and it is not known whether it is in coNP or in P. However, some very nice algorithms have been discovered for restricted classes of graphs. Perhaps the best example is that of Luks (1980), which shows how to test isomorphism in polynomial time when the degrees of the vertices are bounded by a constant. In this talk we show how to solve the following problem:
Let A and A' be the adjacency matrices of graphs G and G' on n vertices. Both A and A' have n eigenvalues, which are not necessarily distinct. If no eigenvalue appears more than c times, for some constant c, then we can check in polynomial time whether G and G' are isomorphic.
The paper is by Babai, Grigoryev and Mount from 1982.
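As a warm-up to why eigenvalues are relevant at all, here is a minimal sketch (not the Babai-Grigoryev-Mount algorithm itself): the spectrum of the adjacency matrix is an isomorphism invariant, so comparing sorted eigenvalue lists is a cheap necessary condition. Cospectral non-isomorphic graphs exist, so a match proves nothing by itself; the theorem above says that when every eigenvalue has multiplicity at most c, the remaining ambiguity can be resolved in polynomial time.

```python
# Spectral check: a *necessary* condition for graph isomorphism.
import numpy as np

def spectra_match(A, B, tol=1e-8):
    """Return False only if the graphs are certainly non-isomorphic."""
    ev_a = np.sort(np.linalg.eigvalsh(A))  # eigvalsh: symmetric matrices
    ev_b = np.sort(np.linalg.eigvalsh(B))
    return np.allclose(ev_a, ev_b, atol=tol)

# Path on 3 vertices vs. triangle: different spectra, so not isomorphic.
P3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
K3 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
print(spectra_match(P3, K3))  # False
```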
Title:
Speaker: Dana Moshkovitz
The PCP Theorem [AS92, ALMSS92] is one of the great discoveries of complexity theory. It states that all NP languages have Probabilistically Checkable Proofs, that is, proofs that can be efficiently (probabilistically) verified by reading only a constant number of their symbols (picked in some random manner).
A few months ago, Irit Dinur presented a new proof of the theorem. It is a truly beautiful proof, considerably simpler than previous ones. Also, it can be used to obtain the shortest PCPs known to date.
We'll outline the entire construction, give the appropriate intuitions and background, and focus on the composition step. This step is not the novel part of the current proof, but:
1. It's a key step, involving important (and relatively new) notions, such as that of an "assignment tester" or a "PCP of proximity" [DR04, BGHSV04].
2. It was not covered in detail in Irit's talk (she, naturally, preferred to show the proof of the new amplification step).
Title: Extractors for a Constant Number of Polynomial Min-Entropy Independent Sources
Speaker: Zeev Dvir
A randomness extractor is a function transforming several independent sources of randomness into an almost uniform output. In order for the output of the extractor to be close to uniform the input sources must satisfy some lower bound on their min-entropy. Much effort has been made in order to construct extractors for as few sources as possible with as low min-entropy as possible.
Recently, Anup Rao gave a beautiful construction of an extractor for O(1) n-bit sources, each with min-entropy at least n^r for any constant 0<r<1 (the number of sources depends on r). Rao's construction is interesting for two reasons. The first is that this is the only construction that works for a constant number of sources with sub-linear entropy, and the second is that this construction does not rely on results from additive number theory and uses only "classical" extractor techniques.
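To make the object concrete, here is a minimal sketch of a much older multi-source extractor (not Rao's construction): the Chor-Goldreich inner-product extractor, which outputs one nearly uniform bit from two independent weak n-bit sources whose min-entropies sum to noticeably more than n. The toy "weak source" below is an assumption made purely for illustration.

```python
# Two-source extractor sketch: Ext(x, y) = <x, y> mod 2.
import random

def inner_product_extractor(x_bits, y_bits):
    """Extract one bit from two independent weak sources."""
    return sum(a & b for a, b in zip(x_bits, y_bits)) % 2

# Toy weak source: an n-bit string whose first half is adversarially
# fixed (min-entropy ~ n/2). The extracted bit is still nearly unbiased.
n = 16
def weak_source():
    return [1] * (n // 2) + [random.randint(0, 1) for _ in range(n // 2)]

samples = [inner_product_extractor(weak_source(), weak_source())
           for _ in range(100000)]
print(sum(samples) / len(samples))  # close to 0.5
```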
Title: Estimation of a certain exponential sum arising in complexity theory
Speaker: Amir Yehudayoff
The paper is by J. Bourgain.
Bourgain bounds the correlation between two functions using exponential sums.
Then, using the epsilon-discriminator lemma of Hajnal, Maass, Pudlak, Szegedy and Turan,
he proves an exponential lower bound on the size of a certain depth 3 circuit
(a Maj o Mod_q o And_d circuit; that is, a majority of mod q of ands).
Title: Evolving Notions of Security for Quantum Protocols
Speaker: Adam Smith
The talk will discuss some of the difficulties of defining and proving security of cryptographic protocols in a context where the adversary (and occasionally the participants) have quantum computers.
First, we'll discuss existing classical security proofs -- not all of them go through when the adversary has a quantum computer. We'll explain the difficulties as well as some of the techniques that have been used to 'patch' these proofs. Second, we'll discuss new, truly 'quantum' protocols and reductions.
Familiarity with the basic notation of quantum computing will be helpful but not necessary.
Title: Markov-Chain Monte-Carlo (MCMC) Methods in Theoretical Computer Science: An overview
Speaker: Elad Verbin (Tel-Aviv University)
In this lecture I will give an overview of the topic of Markov Chain Monte Carlo (MCMC) algorithms.
The field of MCMC algorithms studies Markov chains on combinatorial objects and tries to upper-bound their so-called mixing time (the time it takes them to "mix" to the uniform distribution, or close to it). This is in contrast to ergodic theory, developed in the first half of the 20th century, which dealt only with the (existence of a) limit distribution, without studying the rate of convergence to it.
I will do my best to give enough background and intuition into MCMC so that the listeners will be able to identify "themes" that relate to MCMC in their problems, and to adopt the MCMC "way of thinking". The lecture will try to give intuition without being technical. Only elementary background in probability theory will be assumed.
We will discuss the theory of MCMC, as opposed to practical (and often heuristic) applications of it. The theoretical study of MCMC started in the '80s, and rapid progress has been made since then. Some of the successes of this field are:
(i) an efficient algorithm for approximating (to arbitrary precision) the permanent of a non-negative matrix;
(ii) an efficient algorithm for approximating the volume of an n-dimensional convex body (represented by a membership oracle) that runs in time polynomial in n;
(iii) an efficient algorithm for approximating the number of proper k-colorings of a graph, for large enough k.
Here is an example: Let G be a graph with maximum degree D, and let k be an integer such that k>D+1. You wish to uniformly sample a legal coloring of the graph G with k colors (this is called a k-coloring). It is easy to see that there is at least one such legal coloring, and it is easy to find one. (Note: the problem is not NP-hard because we are coloring not with the minimal number of colors, but with a number of colors that is guaranteed to suffice.) But how can we pick a random legal coloring? We take the (arbitrary) coloring that we found, and we make small local random changes to it repeatedly, while making sure it stays a legal coloring. A process of this form is called a Markov chain (in this case, the chain is defined on the colorings of the graph). With an appropriate Markov chain, one can prove that if we run for a sufficiently large number of steps then we will get a coloring from a distribution that is (almost) uniform. The problem is how many steps this actually requires: is it polynomial? The answer to this question depends, of course, on the Markov chain chosen, that is, on the type of "local, random changes".
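A minimal sketch of a chain of this form, with one standard choice of "small local random change" (assuming k > D+1 so that a greedy start exists; the step count below is a placeholder, since bounding the true mixing time is exactly the hard question):

```python
# Glauber-style dynamics for sampling proper k-colorings.
import random

def greedy_coloring(adj, k):
    """Find some legal coloring (always possible since k > max degree)."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(k) if c not in used)
    return color

def chain_step(adj, color, k):
    """Pick a random vertex and color; recolor only if it stays legal."""
    v = random.choice(list(adj))
    c = random.randrange(k)
    if all(color[u] != c for u in adj[v]):
        color[v] = c

# 5-cycle (D = 2), k = 4 colors; run the chain for a heuristic number
# of steps. The stationary distribution is uniform over legal colorings.
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
color = greedy_coloring(adj, 4)
for _ in range(10000):
    chain_step(adj, color, 4)
print(color)  # an (approximately) uniform proper 4-coloring
```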
Title:
Speaker: Hovav Shacham
Moni Naor observed that every identity-based encryption (IBE) scheme gives rise to a signature scheme. For example, BLS signatures are the signatures obtained from Boneh-Franklin IBE. For BLS signatures and Boneh-Franklin IBE, only a heuristic argument of security is known, in the so-called random oracle model.
At Eurocrypt 2005, Brent Waters gave the first practical IBE scheme secure without random oracles. In this talk, we consider the signature scheme obtained from Waters IBE.
Waters signatures are provably secure (without random oracles) under the computational Diffie-Hellman assumption.
Furthermore, the scheme yields short signatures -- 320 bits -- and, like BLS, has algebraic structure that is useful for constructing extensions and variants.
Title:
Speaker: Guy Rothblum
In a recent work, Applebaum, Ishai, and Kushilevitz studied the parallel time-complexity of basic cryptographic primitives such as one-way functions (OWFs) and pseudorandom generators (PRGs). They considered the possibility of computing instances of these primitives by NC0 circuits, in which each output bit depends on a constant number of input bits, and showed overwhelming evidence for the possibility of cryptography in NC0.
We will survey their surprising theorem, which states that every one-way function computable in NC1 can be compiled into a one-way function in which each output bit depends on a constant number of input bits.
Title:
Speaker: Ronen Gradwohl
In this talk we will consider the following question: given some balanced Boolean function f on n variables, how much influence do the individual variables have on the outcome? In particular, do there exist functions in which the influence of every variable is small?
Ben-Or and Linial defined the TRIBES function, in which every variable has influence $\Theta(\log n / n)$, and conjectured that this was the best possible. Kahn, Kalai and Linial proved this conjecture by showing that in every Boolean function there exists a variable with influence $\Omega(\log n / n)$.
In the talk we will prove both of these results (although most of the time will be spent on the latter). On the way, we will survey the main tools from Fourier analysis of Boolean functions used in the proof of KKL.
Bibliography:
· Michael Ben-Or and Nati Linial. Collective Coin Flipping. In "Randomness and Computation," 1990.
· Jeff Kahn, Gil Kalai and Nati Linial. The Influence of Variables on Boolean Functions. In FOCS, 1988.
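For concreteness, here is a tiny brute-force sketch of the central definition (my illustration, not taken from the talk): the influence of variable i on f is the probability, over a uniformly random input x, that flipping x_i changes f(x).

```python
# Influence of a variable on a Boolean function, by exhaustive counting.
from itertools import product

def influence(f, n, i):
    """Pr over uniform x of f(x) != f(x with bit i flipped)."""
    count = 0
    for x in product([0, 1], repeat=n):
        y = list(x)
        y[i] ^= 1
        if f(x) != f(tuple(y)):
            count += 1
    return count / 2 ** n

majority = lambda x: int(2 * sum(x) > len(x))
n = 7
print([influence(majority, n, i) for i in range(n)])
# Every variable of Majority has the same influence (~ 1/sqrt(n));
# KKL says *some* variable always has influence Omega(log n / n).
```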
Title:
Speaker: Ariel Gabizon
The rate of a code C: F^k → F^n is k/n. Intuitively, this is the amount of information transferred per symbol.
The list decoding problem with (constant) parameter ε>0 is as follows: given a received word r, we wish to find, in poly(n) time, all the codewords that disagree with r in at most a (1-ε)-fraction of the coordinates. In particular, this implies that there can be at most poly(n) codewords in any Hamming ball of relative radius (1-ε).
Given this requirement, we would like to maximize the rate R of the code. It can be shown that we must have R ≤ ε. Non-constructively, we can get rate R = ε - o(1). Guruswami and Sudan gave a list decoding algorithm for Reed-Solomon codes that works with R = ε² << ε. For a long time, attempts to go past the O(ε²) 'barrier' were unsuccessful. Recently, breakthrough works of Parvaresh and Vardy, shortly afterwards improved by Guruswami and Rudra, achieved R = ε - ε' for any ε'>0.
In the talk we will review standard Berlekamp-Welch Reed-Solomon decoding and the Guruswami-Sudan list-decoding algorithm. Then we will go over the ideas in these two new works.
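Since the talk starts from Berlekamp-Welch, here is a compact sketch of that decoder over a small prime field (my illustrative implementation, not the talk's notation): with at most e = (n-k)/2 errors, solve the linear system Q(x_i) = r_i·E(x_i) for an error-locator polynomial E of degree e and a polynomial Q of degree < e+k; then the message polynomial is f = Q/E.

```python
# Berlekamp-Welch unique decoding of Reed-Solomon codes over GF(p).
p = 97  # a small prime field

def solve_mod_p(M, rhs):
    """Gaussian elimination over GF(p); returns one solution or None."""
    rows, cols = len(M), len(M[0])
    A = [row[:] + [b] for row, b in zip(M, rhs)]  # augmented matrix
    pivots, r = [], 0
    for c in range(cols):
        pr = next((i for i in range(r, rows) if A[i][c] % p), None)
        if pr is None:
            continue
        A[r], A[pr] = A[pr], A[r]
        inv = pow(A[r][c], p - 2, p)
        A[r] = [v * inv % p for v in A[r]]
        for i in range(rows):
            if i != r and A[i][c]:
                A[i] = [(x - A[i][c] * y) % p for x, y in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    if any(row[-1] and not any(row[:-1]) for row in A):
        return None  # inconsistent system
    sol = [0] * cols
    for i, c in enumerate(pivots):
        sol[c] = A[i][-1]
    return sol

def poly_divmod(num, den):
    """Polynomial division over GF(p); coefficients low-to-high."""
    num, q = num[:], [0] * (len(num) - len(den) + 1)
    inv = pow(den[-1], p - 2, p)
    for i in range(len(num) - len(den), -1, -1):
        q[i] = num[i + len(den) - 1] * inv % p
        for j, d in enumerate(den):
            num[i + j] = (num[i + j] - q[i] * d) % p
    return q, num  # quotient, remainder

def bw_decode(xs, rs, k):
    """Recover the degree < k message polynomial from n noisy points."""
    n = len(xs)
    e = (n - k) // 2
    M = [[pow(x, j, p) for j in range(e + k)] +         # Q coefficients
         [(-r * pow(x, j, p)) % p for j in range(e)]    # E coefficients
         for x, r in zip(xs, rs)]
    rhs = [r * pow(x, e, p) % p for x, r in zip(xs, rs)]
    sol = solve_mod_p(M, rhs)
    if sol is None:
        return None
    Q, E = sol[:e + k], sol[e + k:] + [1]  # E is monic of degree e
    f, rem = poly_divmod(Q, E)
    return f if not any(rem) else None

xs = list(range(10))
rs = [(5 + 3 * x + 7 * x * x) % p for x in xs]  # f(x) = 5 + 3x + 7x^2
rs[2], rs[8] = 11, 0                            # two errors (e_max = 3)
print(bw_decode(xs, rs, 3))                     # [5, 3, 7]
```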
Title: Biased Graph Processes and The Emergence of The Giant Component
Speaker: Gidi Amir
A random graph process, G_1(n), is a sequence of graphs on n vertices which begins with the edgeless graph, and where at each step a single edge is added according to a uniform distribution on the missing edges. It is well known that in such a process a giant component (of linear size) typically emerges after (1+o(1))n/2 edges (a phenomenon known as "the double jump"), i.e., at time t=1 when using a timescale of n/2 edges in each step.
We consider a generalization of this process, G_K(n), which gives a weight of size 1 to missing edges between pairs of isolated vertices, and a weight of size K ∈ [0,∞) otherwise. This corresponds to a case where links are added between n initially isolated settlements, and the probability of a new link in each step is biased according to whether or not its two endpoint settlements are still isolated.
Using methods of Spencer and Wormald, we describe the typical emergence time of a giant component in this process, t_c(K), as the singularity point of a solution to a set of differential equations. We proceed to analyze these differential equations and obtain properties of G_K; in particular, we show that t_c(K) decreases from 3/2 to 0 as K increases from 0 to ∞, and that t_c(K) = (4/√(3K))(1 + o(1)).
Joint work with Ori Gurel-Gurevich, Eyal Lubetzky and Amit Singer.
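A small simulation sketch of the process (my illustration, not the paper's method): propose a uniform missing edge and accept it with probability proportional to its weight (rejection sampling, fine for small n), tracking component sizes with union-find.

```python
# Simulating the biased graph process G_K(n).
import random

def simulate(n, K, steps):
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    degree, size = [0] * n, {v: 1 for v in range(n)}
    edges, giant = set(), []
    while len(edges) < steps:
        u, v = random.sample(range(n), 2)            # propose missing edge
        if (min(u, v), max(u, v)) in edges:
            continue
        w = 1.0 if degree[u] == 0 and degree[v] == 0 else K
        if random.random() * max(1.0, K) > w:        # accept w.p. w/max(1,K)
            continue
        edges.add((min(u, v), max(u, v)))
        degree[u] += 1; degree[v] += 1
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            size[rv] += size.pop(ru)
        giant.append(max(size.values()))
    return giant

# Timescale of n/2 edges per time unit. With K = 1 (the uniform process)
# the giant component emerges near t = 1, i.e. after ~ n/2 edges.
g = simulate(2000, 1.0, 3000)
print(g[999], g[2999])  # small near the critical time; linear in n by t = 3
```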
Title: Everyday Inequalities
Speaker: Ariel Yadin
We plan to review and prove very basic inequalities that are frequently used: Jensen, Markov, Chebyshev, estimates for (1+x)^n, Chernoff, Azuma, Shearer, Hölder and Cauchy-Schwarz, Janson, ...
A link to Ariel’s
inequalities file: http://www.wisdom.weizmann.ac.il/~ayadin/Ineq.pdf.
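For reference, standard statements of a few of the inequalities on the list (my formulations, in their most common forms):

```latex
\begin{align*}
&\text{Jensen: } \varphi(\mathbb{E}[X]) \le \mathbb{E}[\varphi(X)]
  \quad\text{for convex } \varphi,\\
&\text{Markov: } \Pr[X \ge a] \le \mathbb{E}[X]/a
  \quad\text{for } X \ge 0,\ a > 0,\\
&\text{Chebyshev: } \Pr\bigl[\,|X - \mathbb{E}[X]| \ge a\,\bigr]
  \le \mathrm{Var}[X]/a^2,\\
&\text{Chernoff--Hoeffding: }
  \Pr\Bigl[\,\Bigl|\tfrac{1}{n}\textstyle\sum_i X_i - p\Bigr| \ge \varepsilon\,\Bigr]
  \le 2e^{-2\varepsilon^2 n}
  \quad\text{for i.i.d.\ } X_i \in \{0,1\},\ \mathbb{E}[X_i] = p.
\end{align*}
```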
Title: Tutorial on Black-Box Separations
Speaker: Omer Reingold
A practice talk for a tutorial on black-box separations, to be given at TCC 2006.
Title: A doubling dimension threshold O(loglog n) for augmented graphs navigability
Speaker: Emmanuelle Lebhar (LIP, ENS Lyon, France)
In his seminal work, Kleinberg showed how to augment meshes using random edges so that they become navigable; that is, greedy routing computes paths of poly-logarithmic expected length between any pair of nodes. This raises the crucial question of determining whether such an augmentation is possible for all graphs.
In this paper, we answer this question negatively by exhibiting a threshold on the doubling dimension, above which an infinite family of Cayley graphs cannot be augmented to become navigable, whatever the distribution of random edges is. Precisely, it was known that graphs of doubling dimension at most O(log log n) are navigable. We show that for doubling dimension >> log log n, an infinite family of Cayley graphs cannot be augmented to become navigable.
Our proof relies on a result of independent interest: we show that the analysis of the worst-case performance of greedy routing on graphs augmented by arbitrary distributions of edges can be reduced to the analysis under a symmetric distribution. Finally, we complete our result by studying the special case of square meshes, which we prove can always be augmented to become navigable.
Joint work with Pierre
Fraigniaud and Zvi Lotker.
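To make "augmented" and "navigable" concrete, here is a toy sketch of Kleinberg's setting on a ring (my illustration; the paper's Cayley-graph setting is more general): each node gets one long-range link chosen with probability proportional to 1/distance, and greedy routing always moves to the neighbor closest to the target.

```python
# Greedy routing on a ring augmented with harmonic long-range links.
import random

def dist(a, b, n):
    return min((a - b) % n, (b - a) % n)

def augment(n):
    """One long-range contact per node, Pr[v] ~ 1/dist(u, v)."""
    links = {}
    for u in range(n):
        others = [v for v in range(n) if v != u]
        weights = [1.0 / dist(u, v, n) for v in others]
        links[u] = random.choices(others, weights)[0]
    return links

def greedy_route(n, links, s, t):
    hops = 0
    while s != t:
        candidates = [(s - 1) % n, (s + 1) % n, links[s]]
        s = min(candidates, key=lambda v: dist(v, t, n))  # closest to t
        hops += 1
    return hops

n = 1000
links = augment(n)
trials = [greedy_route(n, links, random.randrange(n), random.randrange(n))
          for _ in range(200)]
print(sum(trials) / len(trials))  # polylog(n) expected hops
```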
Title: How to prove that Minicrypt=Cryptomania (in the future)
Speaker: Moni Naor and Danny Harnik
Arguably the most significant question in the intersection of cryptography and complexity is whether Minicrypt=Cryptomania. One way to phrase the question is whether public-key cryptography is a fluke that can be based only on very special types of functions, or whether in principle it is possible to base it on any one-way function.
In this talk we will present two approaches for showing the latter. The first, due to Rudich, is based on secret sharing over general access structures. The second, due to Harnik and Naor (and continuing Harnik's presentation in the Sunday seminar), is based on compression of NP problems. That is, given a long NP instance with a relatively short witness, the goal is to come up with a shorter instance having the same solution.
Title: Finding small roots of modular polynomials
Speaker: Hovav Shacham
Consider a modulus N of unknown factorization. How do we go about finding roots of a polynomial f(x) modulo N? (Or, more generally, modulo some factor of N?)
In this talk, we give Coppersmith's method for computing, in polynomial time, roots of the equation f(x) = 0 mod b, where b divides N and b > N^β, that are small: less than (roughly) N^(β²/deg(f)).
We begin by presenting the Lenstra-Lenstra-Lovász approximation algorithm for the shortest-vector problem on lattices. We then present Coppersmith's method. This method uses the LLL algorithm to obtain from the modular polynomial f(x) a polynomial g(x) that has the same small root x_0 over the integers; we also discuss Howgrave-Graham's sufficient condition for a polynomial to have this property.
Finally, we consider some
applications of Coppersmith's method to cryptanalyzing certain classes of RSA
keys.
Our exposition is a simplified
version of that given in Alexander May's thesis.
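For reference, the sufficient condition in question, in its standard form (my statement of the lemma): a polynomial with a small root modulo b whose coefficients are also small must have that root over the integers.

```latex
\text{If } g(x) = \textstyle\sum_i g_i x^i \text{ has } \omega \text{ monomials},\quad
g(x_0) \equiv 0 \pmod{b},\quad |x_0| \le X,\quad
\Bigl(\textstyle\sum_i |g_i X^i|^2\Bigr)^{1/2} < \frac{b}{\sqrt{\omega}},
\quad\text{then } g(x_0) = 0 \text{ over } \mathbb{Z}.
```

(The proof is one line: |g(x_0)| ≤ Σ|g_i|X^i ≤ √ω·‖g(xX)‖ < b, and a multiple of b smaller than b in absolute value must be 0.)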
Title: Semi-direct product in groups and Zig-zag product in graphs: connections and applications
Speaker: Zeev Dvir
The talk will present a paper by Alon, Lubotzky and Wigderson from 2001 showing a connection between the Zig-Zag product of [RVW] and a well-known group-theoretic operation called the semi-direct product. Using this connection and results on the Zig-Zag product, the paper answers an open question in group theory.
I will not assume any previous knowledge of group theory.
Title:
Speaker: Mira Meyerovich
The goal of steganography is to hide the existence of communication.
The sender must disguise his secret messages as an innocent-looking covertext. For example, the sender can insert a secret letter into a photo and post it on the Internet. Real world stegosystems are often broken because they make invalid assumptions about the covertext distribution. All provably secure stegosystems in the literature assume that, at a minimum, the sender is able to sample the covertext distribution. We examine whether it is possible to weaken this assumption. By modeling the covertext distribution as a stateful Markov process, we create a sliding scale between real world and provably secure stegosystems. We also show that insufficient knowledge of past states can have catastrophic results.
Joint work with Anna Lysyanskaya.
Title: Reducing Randomness for Sampling
Speaker: Dana Moshkovitz
This talk will cover basic topics in derandomization.
We will consider the following sampling problem:
F^n is some finite vector space over a field F. We have access to a membership oracle for some subset A ⊆ F^n. We wish to estimate the size of A via an efficient probabilistic algorithm.
The algorithm is given as input an accuracy parameter ε and a confidence parameter δ, and outputs an estimate m that approximates |A|/|F^n| well with high probability; specifically, |m - |A|/|F^n|| ≤ ε with probability at least 1-δ.
We would like to minimize the amount of randomness the algorithm uses (which is a function of |F|, n, 1/δ and 1/ε).
We will discuss several commonly known sampling techniques, such as: standard independent sampling, pairwise independent sampling and almost pairwise independent sampling (via Cayley expanders). We’ll see the connection between samplers and extractors and analyze the randomness needed for each of the sampling techniques.
We will also apply Fourier analysis to analyze a very simple and relatively randomness-efficient sampler from a recent paper with Ran Raz.
As usual, notes for the talk will be available for anyone to read and/or photocopy.
References:
· Survey on samplers: Oded Goldreich, A Sample of Samplers: A Computational Perspective on Sampling, 1997, ECCC TR97-020.
· Almost k-wise independence:
· Connection to extractors: David Zuckerman, Randomness-Optimal Oblivious Sampling, 1997 (preliminary version in STOC’96).
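A minimal sketch of one technique mentioned above, pairwise-independent sampling over F_p (the n = 1 case of the problem, my illustration): the p points a + i·b mod p, for a single random pair (a, b), are pairwise independent, so Chebyshev's inequality bounds the estimation error while only two random field elements are spent, versus a fresh element per sample for independent sampling.

```python
# Pairwise-independent sampling over GF(p).
import random

p = 10007                       # a prime; the field is GF(p)
A = set(range(p // 3))          # the set whose density we estimate
in_A = lambda x: x in A         # the membership oracle

def pairwise_independent_estimate(num_samples):
    """Estimate |A|/p from the pairwise-independent points a + i*b."""
    a, b = random.randrange(p), random.randrange(p)
    hits = sum(in_A((a + i * b) % p) for i in range(num_samples))
    return hits / num_samples

print(len(A) / p)                       # true density, ~ 0.333
print(pairwise_independent_estimate(p)) # estimate from 2 random elements
```

Pairwise independence holds because for i ≠ j the map (a, b) → (a+ib, a+jb) is a bijection on F_p², so any two sample points are jointly uniform.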
Title: Proving Lower Bounds for Secret Sharing Schemes Using the Entropy Function
Speaker: Noam Livne
The entropy function, introduced by Shannon, plays a central role in the lower bounds we will present.
The talk will consist of three parts:
1) basic notions in secret sharing;
2) entropy and related functions and their properties;
3) lower bound for secret sharing.
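For orientation, the basic definitions and the standard facts such proofs lean on (my summary, not necessarily the talk's exact lemmas): in a perfect scheme the shares of an authorized set determine the secret, so H(S | authorized shares) = 0, while an unauthorized set learns nothing, so H(S | unauthorized shares) = H(S); combining these with the properties below yields bounds such as H(S_i) ≥ H(S) for every non-redundant participant i.

```latex
H(X) = -\sum_x \Pr[X=x]\,\log \Pr[X=x], \qquad
H(X \mid Y) = H(X,Y) - H(Y), \qquad
0 \le H(X \mid Y) \le H(X).
```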
Title:
Speaker: Dana Moshkovitz
Talks/pictures and chocolates from Ran, straight from
Title: On the Efficiency of Local Decoding Procedures for Error-Correcting Codes
Speaker: Amir Yehudayoff
We present the paper On the Efficiency of Local Decoding Procedures for Error-Correcting Codes, by Jonathan Katz and Luca Trevisan.
In this paper Katz and Trevisan define locally decodable codes, and give lower bounds for the size of such codes. Their lower bounds are the best known.
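For reference, the object being bounded, in the standard form of the definition (my statement): a code is locally decodable if any single message bit can be recovered, with good probability, from a few queries to a corrupted codeword.

```latex
C : \{0,1\}^n \to \Sigma^m \text{ is } (q,\delta,\varepsilon)\text{-locally decodable if there is a randomized decoder } D
\text{ making at most } q \text{ queries such that for all } x,\ i,\ \text{and all } y \text{ with } \Delta(y, C(x)) \le \delta m:\quad
\Pr\bigl[D^{y}(i) = x_i\bigr] \ge \tfrac{1}{2} + \varepsilon .
```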
Title:
Speaker: Hovav Shacham
In a blind signature scheme, a user interacts with a signer to obtain a signature on a message of his choice, in such a way that the signer does not learn anything about the message he is signing. (Blind signatures have applications, for example, in anonymous e-cash.)
We describe
Fischlin's recent construction of two-move concurrent blind signatures based on
general assumptions. The construction
makes use, as a building block, of unique non-interactive zero-knowledge proofs
-- i.e., NIZKs in which there is a bijection between witnesses and proofs. We
describe Fischlin's construction, based on general assumptions, of a UNIZK for
CircuitSAT.
Title:
Speaker: Anup Rao
We give polynomial-time, deterministic randomness extractors for sources generated in small space, where we model space s sources on {0,1}^n as sources generated by width 2^s branching programs: For every constant δ>0, we can extract .99δn bits that are exponentially close to uniform (in variation distance) from space s sources of min-entropy δn, where s=O(n). In addition, assuming an efficient deterministic algorithm for finding large primes, there is a constant η>0 such that for any β>n^(-η), we can extract m=(δ-β)n bits that are exponentially close to uniform from space s sources with min-entropy δn, where s=O(β³n). Previously, nothing was known for δ ≤ 1/2, even for space 0.
Joint work with Jesse Kamp, Salil Vadhan and David Zuckerman.
Title: Stepanov's elementary method for proving Weil's bounds
Speaker: Ariel Gabizon
Let F be a field of size q, where q is odd. When choosing a random element t from F\{0}, the probability of t being a quadratic residue, i.e., a square of another element in F, is exactly 1/2. Let f be a polynomial of degree d << q that is not of the form f = c·g(t)² for any constant c and polynomial g. Suppose we pick a random element t from F and apply f to it. What is the probability that f(t) will be a quadratic residue? It turns out that this probability is also very close to 1/2; specifically, it is 1/2 ± d/q^(1/2).
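An empirical check of this claim is easy to run (my illustration): Euler's criterion says a nonzero x is a quadratic residue mod an odd prime q iff x^((q-1)/2) = 1 mod q, so we can just count.

```python
# Checking that f(t) is a quadratic residue about half the time.
q = 10007  # an odd prime

def is_qr(x):
    """Euler's criterion, for x != 0 mod q."""
    return pow(x, (q - 1) // 2, q) == 1

f = lambda t: (t**3 + 3*t + 1) % q   # degree 3; odd degree, so not c*g^2
count = sum(1 for t in range(q) if f(t) != 0 and is_qr(f(t)))
print(count / q, 0.5, 3 / q**0.5)    # estimate, target, error bound d/sqrt(q)
```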
This result has many applications in theoretical computer science. It was originally derived from André Weil's important theorem known as the 'Riemann hypothesis for curves over finite fields'. Weil's proof, published in 1948, is considered very difficult and uses deep ideas from algebraic geometry. Twenty years later, Sergei Stepanov gave elementary proofs of some of Weil's most significant results, including the one described above. We will show Stepanov's proof of this result.
Title: Graph Isomorphism for Planar Graphs
Speaker: Or Meir
The Graph Isomorphism problem is one of today's big question marks for algorithms and complexity. On the one hand, it has resisted all attempts to solve it for quite a long time, and thus is conjectured not to be in P. On the other hand, there is theoretical evidence that the problem is not NP-complete; in fact, it is not even known to be P-complete.
In contrast to the lack of knowledge regarding the general problem, polynomial-time algorithms have been found for many special cases of Graph Isomorphism. In this talk we will present an O(n log n) algorithm for the special case of planar graphs.
Title: Pseudorandomness -- an overview
Speaker: Oded Goldreich
A fresh view at the
"question of randomness" was taken in the theory of computing: It has
been postulated that a distribution is pseudorandom if it cannot be told apart
from the uniform distribution by an efficient procedure.
The paradigm,
originally associating efficient procedures with polynomial-time algorithms,
has been applied also with respect to a variety of limited classes of such
distinguishing procedures.
Starting with the
general paradigm, I will present the case of general-purpose pseudorandom
generators (running in polynomial-time and withstanding any polynomial-time
distinguisher), as well as derandomization-aimed generators (used towards the
derandomization of complexity classes such as BPP), generators withstanding
space-bounded distinguishers, and some special-purpose generators.
The talk will have a
significant intersection with my lectures on the subject during this year's
complexity class.
But, the style and
details will be different.
Title: Truthful Randomized Mechanisms for Combinatorial Auctions
Speaker: Michael Schapira
The field of Algorithmic Mechanism Design attempts to design efficient mechanisms for decentralized computerized settings. The combinatorial auctions problem has gained the status of the paradigmatic problem of this field. We design two computationally-efficient incentive-compatible mechanisms for combinatorial auctions with general bidder preferences. Both mechanisms are randomized, and are incentive-compatible in the universal sense. This is in contrast to recent previous work that only addresses the weaker notion of incentive compatibility in expectation. The first mechanism obtains an O(√m)-approximation of the optimal social welfare for arbitrary bidder valuations -- this is the best approximation possible in polynomial time. The second one obtains an O(log² m)-approximation for a subclass of bidder valuations that includes all submodular bidders. This improves over the best previously obtained incentive-compatible mechanism for this class, which only provides an O(√m)-approximation.
Based on a recent joint work with Shahar Dobzinski and Noam Nisan.