Theory of Computation Student Seminar

Oded Nir and I are organizing a special seminar for TCS students from Israeli academic institutions.
The seminar's main goal is for theory students to get to know one another and to provide a conducive atmosphere for discussing our research (or anything else).


Some technicalities:



You can add the seminar calendar to your Google account using the following link. 



You are more than welcome to share the seminar with any students who might be interested, sign up to give a talk,
write to us about anything, and, most of all, join us! We would love to see you there!


Inbar (inbar DOT ben-yaacov AT weizmann DOT ac DOT il)

and Oded (odednir123 AT gmail DOT com)


Probabilistically Checkable Arguments for all NP

Abstract: A probabilistically checkable argument (PCA) is a computational relaxation of PCPs, where soundness is guaranteed to hold only for false proofs generated by a computationally bounded adversary. The advantage of PCAs is that they can overcome limitations of PCPs. A succinct PCA has proof length that is polynomial in the witness length (and is independent of the non-deterministic verification time). Such succinctness is impossible to achieve for PCPs, under standard complexity assumptions. Bronfman and Rothblum (ITCS 2022) constructed succinct PCAs that are publicly-verifiable and have constant query complexity. Their scheme works only for the subclass of NP relations that can be verified by small-depth circuits, while assuming the sub-exponential hardness of the learning with errors (LWE) problem.


We construct PCAs for all NP under any one of the following assumptions: (1) polynomial hardness of LWE; (2) O(1)-LIN; or (3) Quadratic Residuosity (QR) and sub-exponential Decisional Diffie-Hellman (DDH). Our scheme is publicly-verifiable, adaptively sound, and has constant query complexity.


One of the main ingredients in our PCA scheme is a new RAM Delegation construction that satisfies several desirable properties that may be of independent interest. Our RAM Delegation scheme supports RAM programs that perform both read and write operations, and it has a strong soundness guarantee. Previous constructions only supported a subset of these properties. Our scheme has an additional prover efficiency property. The space complexity of the honest prover is only proportional to the number of locations the RAM program writes to (and not to its running-time). Our construction can be realized under the same set of assumptions as in the PCA construction.



Past Talks:



Spiking Neural Networks Through the Lens of Streaming Algorithms

Lecture abstract: In her talk, Yael will tell us about her research with Merav Parter and Cameron Musco, in which they study biological neural networks from an algorithmic perspective.


They consider the model of spiking neural networks (SNN), a simple yet biologically plausible model that captures the spiking behavior observed in real neural networks.

 

In this setting, she will focus on two results: the algorithmic aspects of time measurement and counting, and the connection between biological neural networks and streaming algorithms.

The Communication Overhead of Payment Computation is Exponential

Abstract: In the basic setting of algorithmic mechanism design, the goal is to design an algorithm that solves an optimization problem whose input is distributed among rational agents. We assume that the output of the algorithm matters to the agents, so game theoretic considerations should be taken into account. An example that effectively illustrates this scenario is provided by auctions. An auction is composed of two functions: an allocation function f that specifies which bidders get which items, and a payment function P that specifies the payment of each bidder. It is a common approach to focus on the design of f and neglect the role of the payment function P.
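As a concrete illustration of this split into an allocation function f and a payment function P, here is a minimal sketch (a standard textbook example, not taken from the talk) of a single-item second-price (Vickrey) auction; the function names and the three-bidder example are illustrative.

```python
def vickrey_auction(bids):
    """Single-item second-price auction, written as a pair (f, P):
    f allocates the item, P determines the payments."""
    # f: allocate the item to the highest bidder
    winner = max(range(len(bids)), key=lambda i: bids[i])
    # P: the winner pays the second-highest bid; all others pay 0
    second_price = max(b for i, b in enumerate(bids) if i != winner)
    payments = [0] * len(bids)
    payments[winner] = second_price
    return winner, payments

print(vickrey_auction([10, 7, 4]))  # -> (0, [7, 0, 0])
```

Here the payment rule P is what makes truthful bidding a dominant strategy; the question below is how much extra communication computing such a P can cost on top of computing f.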

Fadel and Segal [Journal of Economic Theory, '09] question this approach through the lens of communication complexity: they ask, can it be that computing the allocation function and the payments is significantly harder than computing the allocation function alone? They show that for every allocation function f, the communication overhead of computing its payment function P is at most exponential. It was later shown by Babaioff, Blumrosen, Naor and Schapira [EC'08] that the overhead of payment computation can be at least linear in the number of players. The question of whether the exponential upper bound of Fadel and Segal is tight remained wide open.

In this work, we solve this question in the affirmative by providing two different proofs, where each proof highlights a different source of hardness of payment computation. We complement our negative results by providing efficient protocols for payment computation in several domains.


Based on joint work with Shahar Dobzinski.

How to Recover a Secret with O(n) Additions

Abstract: Threshold cryptography is typically based on the idea of secret-sharing a private key s \in F “in the exponent” of some cryptographic group G, or more generally, encoding s in some linearly homomorphic domain. In each invocation of the threshold system (e.g., for signing or decrypting) an “encoding” of the secret is recovered, and so the complexity, measured as the number of group multiplications over G, is equal to the number of F-additions that are needed to reconstruct the secret. Motivated by this scenario, we initiate the study of n-party secret-sharing schemes whose reconstruction algorithm makes a minimal number of additions. The complexity of existing schemes either scales linearly with n \log |F| (e.g., Shamir, CACM’79) or, at least, quadratically with n independently of the size of the domain F (e.g., Cramer-Xing, EUROCRYPT ’20). This leaves open the existence of a secret-sharing scheme whose recovery algorithm performs only O(n) additions.
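As a rough illustration of the cost model (a back-of-the-envelope calculation, not part of the abstract), consider Shamir reconstruction carried out “in the exponent” of G:

s = \sum_{i=1}^{n} \lambda_i s_i
\qquad\Longrightarrow\qquad
g^{s} = \prod_{i=1}^{n} \bigl(g^{s_i}\bigr)^{\lambda_i},

where the \lambda_i are Lagrange coefficients. Computing each (g^{s_i})^{\lambda_i} by square-and-multiply takes O(\log |F|) group multiplications, for a total of O(n \log |F|), which is the n \log |F| scaling mentioned above. By contrast, a scheme whose reconstruction uses only O(n) additions over F would translate into only O(n) group multiplications.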

We resolve the question in the affirmative and present such a near-threshold secret sharing scheme that provides privacy against unauthorized sets of density at most τ_p, and correctness for authorized sets of density at least τ_c, for any given arbitrarily close constants τ_p < τ_c. Reconstruction can be computed by making at most O(n) additions and, in addition,
(1) the share size is constant,
(2) the sharing procedure also makes only O(n) additions, and
(3) the scheme is a blackbox secret-sharing scheme, i.e., the sharing and reconstruction algorithms work universally for all finite abelian groups F. 

Prior to our work, no such scheme was known even without features (1)–(3) and even for the ramp setting where τ_p and τ_c are far apart. As a byproduct, we derive the first blackbox near-threshold secret-sharing scheme with linear-time sharing. We also present several concrete instantiations of our approach that seem practically efficient (e.g., for threshold discrete-log-based signatures). Our constructions are combinatorial in nature. We combine graph-based erasure codes that support “peeling-based” decoding with a new randomness extraction method that is based on inner-product with a small-integer vector. We also introduce a general concatenation-like transform for secret-sharing schemes that allows us to arbitrarily shrink the privacy-correctness gap with a minor overhead. Our techniques enrich the secret-sharing toolbox and, in the context of blackbox secret sharing, provide a new alternative to existing number-theoretic approaches.


Based on joint work with Benny Applebaum and Benny Pinkas

Doubly-Efficient Interactive Proofs for Distribution Properties

Abstract: Suppose we have access to a small number of samples from an unknown distribution, and would like to learn facts about the distribution.

An untrusted data server claims to have studied the distribution and makes assertions about its properties. Can the untrusted data server prove that its assertions are approximately correct? Can a short efficiently verifiable proof be generated in polynomial time?

 

We study doubly-efficient interactive proof systems that can be used to verify properties of an unknown distribution over a domain [N]. On top of efficient verification, our focus is on proofs that the honest prover can generate in polynomial time. More generally, the complexity of generating the proof should be as close as possible to the complexity of simply running a standalone analysis to determine whether the distribution has the property.

 

Our main result is a new 2-message doubly-efficient interactive proof protocol for verifying any label-invariant distribution property (any property that is invariant to re-labeling of the elements in the domain of the distribution). The sample complexity, communication complexity and verifier runtime are all \widetilde{O}({\sqrt{N}}). The proof can be generated in quasi-linear \widetilde{O}(N) time and sample complexities (the runtimes of the verifier and the honest prover hold under a mild assumption about the property's computational complexity). This improves on prior work, where constructing the proof required super-polynomial time [Herman and Rothblum, STOC 2022].  Our new proof system is directly applicable to proving (and verifying) several natural and widely-studied properties, such as a distribution's support size, its Shannon entropy, and its distance from the uniform distribution. For these (and many other) properties, the runtime and sample complexities for generating the proof are within $\polylog (N)$ factors of the complexities for simply determining whether the property holds.


Based on joint work with Guy Rothblum


Lifting Theorem for Information Complexity

Abstract: Lifting is a powerful technique in communication complexity. Using lifting theorems, researchers have proved (and unified) many important separations and lower bounds. Lifting theorems have also been used in the proofs of lower bounds in other areas such as proof complexity, circuit complexity and data structures. Generally speaking, using lifting theorems one proves communication complexity lower bounds by first proving a lower bound in the simpler model of query complexity and then “lifting” it to the harder model of communication complexity.
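Schematically (as a generic template, not the specific statement proved in this talk), a lifting theorem composes an outer function f with a small two-party gadget g and relates the two models:

F(x_1,\dots,x_n,\,y_1,\dots,y_n) \;=\; f\bigl(g(x_1,y_1),\dots,g(x_n,y_n)\bigr),
\qquad
\mathrm{CC}(F) \;\approx\; Q(f)\cdot \mathrm{CC}(g),

where Q(f) is the query (decision-tree) complexity of f, \mathrm{CC} denotes communication complexity, and g is a suitable gadget such as index or inner product.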


In this talk we prove a lifting theorem for a measure called “information complexity”. Information complexity is an information-theoretic measure of communication; simply put, it measures the amount of information that two parties need to communicate in order to solve some problem. Information complexity has been useful in proving (and greatly simplifying) many lower bounds in communication complexity. Moreover, information complexity is interesting in its own right as, for example, it has connections to parallel repetition of communication problems (the direct sum problem).


Based on joint work with Or Meir

Color Fault-Tolerant Spanners

Abstract: We initiate the study of spanners in arbitrarily vertex- or edge-colored graphs (with no “legality” restrictions) that are resilient to failures of entire color classes. When a color fails, all vertices/edges of that color crash. An f-color fault-tolerant (f-CFT) t-spanner of a colored n-vertex graph G is a subgraph H that preserves distances up to factor t, even in the presence of at most f color faults. This notion generalizes the well-studied notions of f-vertex/edge fault-tolerant (f-V/EFT) spanners. The size of an f-V/EFT spanner crucially depends on the number f of vertex/edge faults to be tolerated. In the colored variants, even a single color fault can correspond to an unbounded number of vertex/edge faults.
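In symbols (a direct formalization of the definition above), H \subseteq G is an f-CFT t-spanner if for every set F of at most f colors and every pair of vertices u, v,

\mathrm{dist}_{H \setminus F}(u,v) \;\le\; t \cdot \mathrm{dist}_{G \setminus F}(u,v),

where G \setminus F (and likewise H \setminus F) denotes the graph obtained by removing all vertices/edges whose color belongs to F.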


The key conceptual contribution of this work is in showing that the size (number of edges) required by an f-CFT spanner is in fact comparable to its uncolored counterpart, with no dependency on the size of color classes. We provide optimal bounds on the size required by f-CFT (2k−1)-spanners. Our upper bounds are based on a generalization of the blocking set technique of [Bodwin and Patel, PODC 2019] for analyzing the (exponential-time) greedy algorithm for FT spanners. We complement them by providing efficient constructions of CFT spanners with similar size guarantees, based on the algorithm of [Dinitz and Robelle, PODC 2020].


Based on joint work with Shay Sapir and Elad Tzalik

Reaching Consensus for Asynchronous Distributed Key Generation

Abstract: We give a protocol for Asynchronous Distributed Key Generation (A-DKG) that is optimally resilient (can withstand f < n/3 faulty parties), has a constant expected number of rounds, has Õ(n^3) expected communication complexity, and assumes only the existence of a PKI. Prior to our work, the best A-DKG protocols required Ω(n) expected rounds and Ω(n^4) expected communication.

Our A-DKG protocol relies on several building blocks that are of independent interest. We define and design a Proposal Election (PE) protocol that allows parties to retrospectively agree on a valid proposal after enough proposals have been sent from different parties. With constant probability the elected proposal was proposed by a non-faulty party. In building our PE protocol, we design a Verifiable Gather protocol which allows parties to communicate which proposals they have and have not seen in a verifiable manner. The final building block to our A-DKG is a Validated Asynchronous Byzantine Agreement (VABA) protocol. We use our PE protocol to construct a VABA protocol that does not require leaders or an asynchronous DKG setup. Our VABA protocol can be used more generally when it is not possible to use threshold signatures.


Based on joint work with Ittai Abraham, Philipp Jovanovic, Mary Maller, Sarah Meiklejohn, and Alin Tomescu

Moderate Dimension Reduction for k-Center Clustering

Abstract: The Johnson-Lindenstrauss (JL) Lemma introduced the concept of dimension reduction via a random linear map, which has become a fundamental technique in many computational settings. For a set of n points in \mathbb{R}^d and any fixed \epsilon>0, it reduces the dimension d to O(\log n) while preserving, with high probability, all the pairwise Euclidean distances within factor 1+\epsilon. Perhaps surprisingly, the target dimension can be lower if one only wishes to preserve the optimal value of a certain problem, e.g., max-cut or k-means. However, for some notorious problems, like diameter (aka furthest pair), dimension reduction via the JL map to below O(\log n) does not preserve the optimal value within factor 1+\epsilon.
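For intuition, here is a minimal Python sketch (not from the paper) of the JL-style reduction discussed above, using a random Gaussian map; the constant 8 in the target dimension and the function names are illustrative choices.

```python
import numpy as np

def jl_reduce(points, eps, seed=0):
    """Map n points in R^d to k = O(log n / eps^2) dimensions with a random
    Gaussian matrix; with high probability all pairwise distances are
    preserved up to a (1 +/- eps) factor (the standard JL guarantee)."""
    rng = np.random.default_rng(seed)
    n, d = points.shape
    k = int(np.ceil(8 * np.log(n) / eps**2))   # illustrative constant
    A = rng.normal(size=(k, d)) / np.sqrt(k)   # scaling keeps distances unbiased
    return points @ A.T

# Example: compare one pairwise distance before and after the projection
X = np.random.default_rng(1).normal(size=(100, 10_000))
Y = jl_reduce(X, eps=0.2)
print(np.linalg.norm(X[0] - X[1]), np.linalg.norm(Y[0] - Y[1]))
```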


We propose to focus on another regime, of \emph{moderate dimension reduction}, where a problem's value is preserved within factor \alpha=O(1) (or even larger) using target dimension \log n / \mathrm{poly}(\alpha).

We establish the viability of this approach and show that the famous k-center problem is \alpha-approximated when reducing to dimension O(\tfrac{\log n}{\alpha^2}+\log k). Along the way, we address the diameter problem via the special case k=1. Our result extends to several important variants of k-center (with outliers, capacities, or fairness constraints), and the bound improves further with the input's doubling dimension.


While our \mathrm{poly}(\alpha)-factor improvement in the dimension may seem small, it actually has significant implications for streaming algorithms and easily yields an algorithm for k-center in dynamic geometric streams that achieves O(\alpha)-approximation using space \mathrm{poly}(kdn^{1/\alpha^2}). This is the first algorithm to beat O(n) space in high dimension d, as all previous algorithms require space at least \exp(d).

Furthermore, it extends to the k-center variants mentioned above.


Based on joint work with Shaofeng H.-C. Jiang, and Robert Krauthgamer

Non-interactive Universal Arguments

Abstract: In 2002, Barak and Goldreich introduced the notion of a universal argument and constructed an interactive universal argument for non-deterministic computations based on polynomially hard collision-resistant hash functions. Since then, and especially in recent years, there have been tremendous developments in the construction of non-interactive succinct arguments for deterministic computations under standard hardness assumptions. However, the constructed succinct arguments can be proven universal only under sub-exponential assumptions.

Assuming polynomially hard fully homomorphic encryption and a widely believed worst-case complexity assumption, we prove a general lifting theorem showing that all existing non-interactive succinct arguments can be made universal. The required complexity assumption is that non-uniformity does not allow arbitrary polynomial speedup. In the setting of uniform adversaries, this extra assumption is not needed.


Based on joint work with Nir Bitansky, Omer Paneth and Tomer Solomon

IOPs with Inverse Polynomial Soundness Error

Abstract: We show that every language in NP has an Interactive Oracle Proof (IOP) with inverse polynomial soundness error and small query complexity. This achieves parameters that surpass all previously known PCPs and IOPs. Specifically, we construct an IOP with perfect completeness, soundness error 1/n, round complexity O(loglog n), proof length poly(n) over an alphabet of size O(n), and query complexity O(loglog n). This is a step forward in the quest to establish the sliding-scale conjecture for IOPs (which would additionally require query complexity O(1)).


Our main technical contribution is a \emph{high-soundness small-query} proximity test for the Reed--Solomon code. We construct an IOP of proximity for Reed--Solomon codes, over a field F with evaluation domain L and degree d, with perfect completeness, soundness error (roughly) max\{1-\delta, O(\rho^{1/4})\} for \delta-far functions, round complexity O(loglog d), proof length O(|L|/\rho) over F, and query complexity O(loglog d); here \rho = (d+1)/|L| is the code rate. En route, we obtain a new high-soundness proximity test for bivariate Reed--Muller codes.


The IOP for NP is then obtained via a high-soundness reduction from NP to Reed--Solomon proximity testing with rate \rho = 1/poly(n) and distance \delta = 1-1/poly(n) (and applying our proximity test). Our constructions are direct and efficient, and hold the potential for practical realizations that would improve the state-of-the-art in real-world applications of IOPs.


Based on joint work with Alessandro Chiesa and Eylon Yogev.

On a Problem in Communication Complexity and Additive Combinatorics

Abstract: In this talk I will focus on the Exactly-N problem studied in communication complexity, which turns out to be characterized by well-known problems in additive combinatorics. The Exactly-N problem in the number-on-forehead (NOF) communication setting asks k players, each of whom can see every input but their own, whether the k input numbers add up to N. Chandra, Furst, and Lipton (STOC 1983) introduced the Exactly-N problem and showed that its k-party NOF communication complexity exactly corresponds to the number of colors needed to color [N]^{k−1} avoiding monochromatic corners. The latter is a generalization of van der Waerden’s famous coloring problem concerning arithmetic progressions (APs).
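For reference, the standard definitions behind this correspondence are (stated schematically, not quoted from the talk):

\text{Exactly-}N:\ \text{given } x_1,\dots,x_k \in [N],\ \text{decide whether } x_1 + \cdots + x_k = N,
\qquad
\text{corner in } [N]^{k-1}:\ \{\,x,\ x+\lambda e_1,\ \dots,\ x+\lambda e_{k-1}\,\},\ \lambda \neq 0,

where player i sees every x_j except x_i, and e_1,\dots,e_{k-1} are the standard basis vectors; a coloring of [N]^{k-1} is corner-free if no such pattern is monochromatic, and, as stated above, the k-party NOF complexity of Exactly-N is governed by the minimum number of colors in such a coloring.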


Until recently, the best-known corner-free colorings came via a reduction to the constructions of AP-free sets. New results by Linial and Shraibman (CCC 2021) and Green (New Zealand Journal of Mathematics 2021) changed this picture by providing improved protocols for Exactly-N in the k=3 case. The communication complexity point of view is key to these improvements.


In our recent work, we design an explicit protocol for Exactly-N that works for any number of players. Based on that, we give an improved protocol, which matches Green’s construction for k=3 players but works for any number of players. Consequently, our protocol yields the first improvement in the higher-order terms of higher-dimensional corner-free colorings since the original construction of Rankin from 1961.


Based on joint work with Toniann Pitassi, Suhail Sherif, Morgan Shirley, and Adi Shraibman


Locally Verifiable Distributed SNARGs

Abstract: In distributed certification, we have a general, incomplete network graph, and we wish to certify some property of the network. To that end, a prover assigns each node of the network a certificate, and the nodes then communicate with one another and decide whether to accept or reject. We require soundness and completeness: the property holds if and only if there exists an assignment of certificates to the nodes that causes all nodes to accept. Our goal is to minimize the length of the certificates, as well as the communication between the nodes of the network.


Distributed certification has been extensively studied in the distributed community, but it has so far only been studied in the information-theoretic setting, where the prover and the network nodes are computationally unbounded. In this work we introduce and study computationally bounded distributed certification: we define locally verifiable distributed SNARGs (LVD-SNARGs), which are an analog of SNARGs for distributed networks, and which circumvent known hardness results for information-theoretic distributed certification by requiring both the prover and the verifier to be computationally efficient (namely, PPT algorithms).


We give two LVD-SNARG constructions: the first allows us to succinctly certify any network property in P, using a global prover that can see the entire network; the second construction gives a distributed prover, which succinctly certifies the execution of any efficient distributed algorithm. Our constructions rely on non-interactive batch arguments for NP (BARGs) and on RAM SNARGs, which have recently been shown to be constructible from standard cryptographic assumptions.


Based on joint work with Elette Boyle, Ran Cohen, Tal Moran, and Rotem Oshman


Dual Systolic Graphs


Abstract: We define a family of graphs we call dual systolic graphs. This definition comes from graphs that are duals of systolic simplicial complexes. Our main result is a sharp (up to constants) isoperimetric inequality for dual systolic graphs. The first step in the proof is an extension of the classical isoperimetric inequality of the Boolean cube. The isoperimetric inequality for dual systolic graphs, however, is exponentially stronger than the one for the Boolean cube. Interestingly, we know that dual systolic graphs exist, but we do not yet know how to efficiently construct them. We, therefore, define a weaker notion of dual systolicity. We prove the same isoperimetric inequality for weakly dual systolic graphs, and at the same time provide an efficient construction of a family of graphs that are weakly dual systolic. We call this family of graphs clique products. We show that there is a non-trivial connection between the small set expansion capabilities and the threshold rank of clique products, and believe they can find further applications.


Based on joint work with Amir Yehudayoff

Worst-case to Expander-case Reductions


Abstract: In recent years, the expander decomposition method was used to develop many graph algorithms, resulting in major improvements to longstanding complexity barriers.

This powerful hammer has led the community to (1) believe that most problems are as easy on worst-case graphs as they are on expanders, and (2) suspect that expander decompositions are the key to breaking the remaining longstanding barriers in fine-grained complexity.


We set out to investigate the extent to which these two things are true (and for which problems).

Towards this end, we put forth the concept of worst-case to expander-case self-reductions.

We design a collection of such reductions for fundamental graph problems, verifying belief (1) for them. The list includes $k$-Clique, $4$-Cycle, Maximum Cardinality Matching, Vertex-Cover, and Minimum Dominating Set.

Interestingly, for most (but not all) of these problems the proof is via a simple gadget reduction, not via expander decompositions, showing that this hammer is effectively useless against the problem and contradicting (2).


Based on joint work with Amir Abboud

Lower Bounds for Pseudo-Deterministic Counting in a Stream

Abstract: Many streaming algorithms provide only a high-probability relative approximation. These two relaxations, of allowing approximation and randomization, seem necessary: for many streaming problems, both relaxations must be employed simultaneously, to avoid an exponentially larger (and often trivial) space complexity. A common drawback of these randomized approximate algorithms is that independent executions on the same input have different outputs that depend on their random coins. Pseudo-deterministic algorithms combat this issue, and for every input, they output with high probability the same “canonical” solution.

We consider perhaps the most basic problem in data streams, of counting the number of items in a stream of length at most $n$. Morris's counter [CACM, 1978] is a randomized approximation algorithm for this problem that uses $O(\log\log n)$ bits of space, for every fixed approximation factor (greater than $1$). Goldwasser, Grossman, Mohanty and Woodruff [ITCS 2020] asked whether pseudo-deterministic approximation algorithms can match this space complexity. Our main result answers their question negatively, and shows that such algorithms must use $\Omega(\sqrt{\log n / \log\log n})$ bits of space.
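For context, the sketch below shows the basic form of Morris's counter (the fixed-approximation-factor variant mentioned in the abstract additionally needs standard tweaks, such as averaging several copies or changing the base); the point is that only the exponent X ≈ log_2(count) is stored, which takes O(log log n) bits.

```python
import random

def morris_counter(stream_length):
    """Approximate counting a la Morris: store only X, which stays around
    log2 of the true count, i.e., O(log log n) bits of space."""
    X = 0
    for _ in range(stream_length):
        if random.random() < 2.0 ** (-X):  # increment X with probability 2^-X
            X += 1
    return 2 ** X - 1                      # unbiased estimate of the count

print(morris_counter(100_000))  # typically on the order of the true count
```

The lower bound in this work shows that no pseudo-deterministic algorithm can match this doubly-logarithmic space.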

Our approach is based on a problem that we call Shift Finding, and may be of independent interest. In this problem, one has query access to a shifted version of a known string $F\in\{0,1\}^{3n}$, which is guaranteed to start with $n$ zeros and end with $n$ ones, and the goal is to find the unknown shift using a small number of queries. We provide for this problem an algorithm that uses $O(\sqrt{n})$ queries. It remains open whether $poly(\log n)$ queries suffice; if true, then our techniques immediately imply a nearly-tight $\Omega(\log n/\log\log n)$ space bound for pseudo-deterministic approximate counting.


Based on joint work with Vladimir Braverman, Robert Krauthgamer and Aditya Krishnan

Advisor-Verifier-Prover Games and the Hardness of Information Theoretic Cryptography

Abstract: A major open problem in information-theoretic cryptography is to obtain a super-polynomial lower bound for the communication complexity of basic cryptographic tasks. This question is wide open even for very powerful non-interactive primitives such as private information retrieval (or locally-decodable codes), general secret sharing schemes, conditional disclosure of secrets, and fully-decomposable randomized encoding (or garbling schemes). In fact, for all these primitives we do not even have super-linear lower bounds. Furthermore, it is unknown how to relate these questions to each other or to other complexity-theoretic questions.


In this note, we relate all these questions to the classical topic of query/space trade-offs, lifted to the setting of interactive proof systems. Specifically, we consider the following Advisor-Verifier-Prover (AVP) game: First, a function $f$ is given to the advisor, who computes an advice string $a$. Next, an input $x$ is given to the verifier and to the prover, who claims that $f(x)=1$. The verifier should check this claim via a single round of interaction based on the private advice $a$ and without having any additional information on $f$. We focus on the case where the prover is laconic and communicates only a constant number of bits, and mostly restrict attention to the simplest, purely information-theoretic setting, where all parties are allowed to be computationally unbounded. The goal is to minimize the total communication complexity, which is dominated by the length of the advice plus the length of the verifier's query.
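Schematically, one execution of the AVP game described above looks as follows (a paraphrase of the text, not a formal definition from the note):

a \leftarrow \mathrm{Adv}(f); \qquad q \leftarrow V(x,a); \qquad b \leftarrow P(f,x,q) \in \{0,1\}^{O(1)}; \qquad V(x,a,q,b) \in \{\text{accept},\text{reject}\},

with completeness (if f(x)=1, the honest prover makes V accept) and soundness (if f(x)=0, no prover makes V accept, except with small probability), and with cost measured as |a| + |q|.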


As our main result, we show that a super-polynomial lower bound for AVPs implies a super-polynomial lower bound for a wide range of information-theoretic cryptographic tasks. In particular, we present a communication-efficient transformation from any of the above primitives into an AVP protocol. Interestingly, each primitive induces some additional property over the resulting protocol. Thus AVP games form a new common yardstick that highlights the differences between all the above primitives.


Equipped with this view, we revisit the existing (somewhat weak) lower bounds for the above primitives, and show that many of these lower bounds can be unified by proving a single counting-based lower bound on the communication of AVPs, whereas some techniques are inherently limited to specific domains. The latter is shown by proving the first polynomial separations between the complexity of secret-sharing schemes and conditional disclosure of secrets, and between the complexity of randomized encodings and conditional disclosure of secrets.


Based on joint (and yet-unpublished) work with Benny Applebaum


YOSOfier: A Compiler for YOSO MPC

Abstract: In this work, we explore the problem of perfectly-secure multi-party computation (MPC) in the You Only Speak Once (YOSO) model [Gentry et al.], where security is guaranteed against even a computationally unbounded adversary and the output is always correct. YOSO MPC protocols in the literature are inspired by protocols in the standard model. Hence, we ask the natural question: can we construct a generic security-preserving compiler from secure protocols in the standard model to secure protocols in the YOSO model?


We provide a positive answer to the above question. Specifically, we design a generic security-preserving compiler capable of compiling a secure protocol in the standard Universally Composable (UC) model to a secure protocol in the YOSO model. Our compiler is security-preserving even against a computationally unbounded adversary and assumes the following:

1. The UC-secure protocol has some natural mechanisms of input provision.

2. The YOSO protocol participants have access to an ideal transcript re-randomizing functionality. This is a natural primitive to build YOSO MPC protocols on. For instance, [Yakoubov et al., 2023] use a similar functionality to construct round-optimal YOSO MPC protocols.


We also get rid of the above two assumptions for the compilation of perfectly-secure MPC protocols in the UC model by designing a perfectly-secure Verifiable Secret Sharing (VSS) protocol and a perfectly-secure view re-randomization protocol in the YOSO model.


In the talk, I will present our compiler in two parts:

1. Instantiating our compiler for a very simple toy MPC protocol based on lines over finite fields (see the sketch below).

2. Presenting our main results as generalizations of the toy compiler.

Neither part assumes any background in security or multi-party computation.
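As a hint of what “lines over finite fields” refers to, here is a minimal Python sketch of sharing a secret via a random line over F_p (degree-1, Shamir-style sharing); this is one natural reading of the toy protocol's main ingredient, not code from the talk, and the prime and function names are illustrative.

```python
import random

P = 2**61 - 1  # an illustrative prime; the field is F_P

def share_on_a_line(secret, n):
    """Hide `secret` as the value at x=0 of a random line y = secret + r*x
    over F_P; party i receives the point (i, y_i)."""
    r = random.randrange(P)
    return [(i, (secret + r * i) % P) for i in range(1, n + 1)]

def reconstruct(share_a, share_b):
    """Any two points determine the line; evaluate it at x = 0."""
    (xa, ya), (xb, yb) = share_a, share_b
    slope = (yb - ya) * pow(xb - xa, -1, P) % P
    return (ya - slope * xa) % P

shares = share_on_a_line(42, n=5)
print(reconstruct(shares[0], shares[3]))  # -> 42
```

A single share is uniformly distributed and so reveals nothing about the secret, while any two shares suffice to reconstruct it, which is the basic phenomenon such line-based toy protocols exploit.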


Based on joint work with Arpita Patra, Protik Paul and Shravani Patil

Statistically-Sender-Private Oblivious-Transfer from LPN and Derandomization

Abstract: We construct a two-message oblivious transfer protocol with statistical sender privacy (SSP OT) based on the Learning Parity with Noise (LPN) Assumption and a standard Nisan-Wigderson style derandomization assumption. Beyond being of interest in their own right, SSP OT protocols have proven to be a powerful tool toward minimizing the round complexity in a wide array of cryptographic applications, from proof systems, through secure computation protocols, to hard problems in statistical zero knowledge (SZK).

The protocol is plausibly post-quantum secure. The only other constructions with plausible post-quantum security are based on the Learning with Errors (LWE) Assumption. Lacking the geometric structure of LWE, our construction and analysis rely on a different set of techniques.

Technically, we first construct an SSP OT protocol in the common random string model from LPN alone, and then derandomize the common random string. Most of the technical difficulty lies in the first step. Here we prove a robustness property of the inner-product randomness extractor against a certain type of linear splitting attack. A caveat of our construction is that it relies on the so-called low-noise regime of LPN. This aligns with our current complexity-theoretic understanding of LPN, which is known to imply hardness in SZK only in the low-noise regime.


Based on joint work with Nir Bitansky

Random Walks on Rotating Expanders (or: Free Ramanujan Graphs)

Abstract: Random walks on expanders are extremely useful in ToC. Unfortunately, though, they have an inherent cost: the spectral expansion of a Ramanujan graph deteriorates exponentially with the length of the walk; put differently, a Ramanujan graph, when squared, is no longer Ramanujan, and the gap from the Ramanujan bound grows exponentially with the power.
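To see the exponential deterioration concretely, here is the standard back-of-the-envelope calculation (using the usual Ramanujan bound; not taken from the talk). If G is d-regular Ramanujan, its nontrivial eigenvalues are at most 2\sqrt{d-1} in absolute value, so for the t-th power

\lambda\bigl(G^{t}\bigr) \;\le\; \bigl(2\sqrt{d-1}\bigr)^{t} \;=\; 2^{t}\,(d-1)^{t/2},
\qquad\text{whereas the Ramanujan bound for a } d^{t}\text{-regular graph is } 2\sqrt{d^{t}-1} \;\approx\; 2\,d^{t/2},

so the ratio between the two is roughly 2^{t-1}, i.e., it grows exponentially in the length t of the walk.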

In this talk, we will see how this exponential cost can be reduced to linear by applying a permutation after each random step. These permutations are tailor-made to the graph at hand, requiring no randomness to generate. Our proof is established using the powerful framework of finite free probability and interlacing families that was introduced around ten years ago by Marcus, Spielman, and Srivastava.

We will also talk about matrices forming free random variables and how they can help us make use of the entire spectrum of the graph (and not only the extreme eigenvalues).


Based on joint work with Gil Cohen

Verifying The Unseen: Interactive Proofs for Label-Invariant Distribution Properties

Abstract: Given i.i.d. samples from an unknown distribution over a large domain [N], approximating several basic quantities, including the distribution's support size, its entropy, and its distance from the uniform distribution, requires \Theta(N/\log N) samples [Valiant and Valiant, STOC 2011]. Suppose, however, that we can interact with a powerful but untrusted prover who knows the entire distribution (or a good approximation of it). Can we use such a prover to approximate (or rather, to approximately verify) such statistical quantities more efficiently? In the talk, we review some fundamental results in the field of distribution testing, as well as present the known proof systems for verifying some properties of distributions.

Lastly, we also present our result: the support size, the entropy, and the distance from the uniform distribution can all be approximately verified via a 2-message interactive proof, where the communication complexity, the verifier's running time, and the sample complexity are \tilde{O}(\sqrt{N}). For all these properties, the sample complexity is tight up to \polylog N factors (for any interactive proof, regardless of its communication complexity or verification time).

More generally, we give a tolerant interactive proof system with the above sample and communication complexities for verifying a distribution's proximity to any label-invariant property (any property that is invariant to re-labeling of the elements in the distribution's support). The verifier's running time in this more general protocol is also \tilde{O}(\sqrt{N}), under a mild assumption about the complexity of deciding, given a compact representation of a distribution, whether it is in the property or far from it.


Based on joint work with Guy Rothblum