Rutgers/DIMACS Theory of Computing Seminar
Spring 2014

 

Place: CoRE 301 (NOTE THE NEW ROOM FOR THIS SEMESTER)
Time: Wednesdays 11:00 am -- 12:00 noon
Organizers: Swastik Kopparty and Shubhangi Saraf


Directions

For those arriving by train at New Brunswick station, the best way to get to the seminar room is by Rutgers bus. The directions are available by clicking here.
For those driving in, the best strategy is to pre-arrange to pick up a parking tag from one of the organizers upon arrival, and to park in Lot 64. For directions to Lot 64, click on this link.

Links

·      The Rutgers discrete math seminar.

·      Other Math and CS seminars at Rutgers.

·      Theory of Computing seminars from previous semesters.

 

Schedule

January 22

Yuval Filmus

IAS

Talk cancelled due to snow

January 29

Aleksandar Nikolov

Rutgers University

Approximating Hereditary Discrepancy via Small Width Ellipsoids

 

The discrepancy of a hypergraph is the minimum attainable value, over two-colorings of its vertices, of the maximum absolute imbalance of any hyperedge. The hereditary discrepancy of a hypergraph, defined as the maximum discrepancy of a restriction of the hypergraph to a subset of its vertices, is a measure of its complexity. Lovasz, Spencer and Vesztergombi (1986) related the natural extension of this quantity to matrices to rounding algorithms for linear programs, and gave a determinant-based lower bound on the hereditary discrepancy. Matousek (2011) showed that this bound is tight up to a polylogarithmic factor, leaving open the question of actually computing the bound. Recent work by Nikolov, Talwar and Zhang (2013) showed a polynomial-time O(log^3 n)-approximation to hereditary discrepancy, as a by-product of their work in differential privacy. In this work, we give a direct and simple O(log^{3/2} n)-approximation algorithm for this problem. We show that up to this approximation factor, the hereditary discrepancy of a matrix A is characterized by the optimal value of a simple geometric convex program that seeks to minimize the largest infinity norm of any point in an ellipsoid containing the columns of A.
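
As a concrete illustration of the two definitions above (mine, not part of the talk), the brute-force sketch below computes the discrepancy and hereditary discrepancy of a tiny hypergraph directly; both routines are exponential-time and exist only to make the quantities tangible.

```python
# Brute-force (hereditary) discrepancy of a tiny hypergraph, straight
# from the definitions in the abstract. Exponential time; intuition only.
from itertools import combinations, product

def discrepancy(edges, vertices):
    """Minimum over +/-1 colorings of the maximum absolute edge imbalance."""
    vs = sorted(vertices)
    best = float("inf")
    for signs in product((-1, 1), repeat=len(vs)):
        color = dict(zip(vs, signs))
        worst = max((abs(sum(color[v] for v in e)) for e in edges), default=0)
        best = min(best, worst)
    return best

def hereditary_discrepancy(edges, vertices):
    """Maximum discrepancy over all restrictions to vertex subsets."""
    best = 0
    for k in range(len(vertices) + 1):
        for S in combinations(sorted(vertices), k):
            restricted = [tuple(v for v in e if v in S) for e in edges]
            best = max(best, discrepancy(restricted, S))
    return best

triangle = [(0, 1), (1, 2), (0, 2)]
print(discrepancy(triangle, {0, 1, 2}))             # 2: some edge is monochromatic
print(hereditary_discrepancy(triangle, {0, 1, 2}))  # 2
```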

 

Joint work with Kunal Talwar.

February 5

 

Avi Wigderson

IAS

Talk cancelled due to freezing rain

 

Rescheduled to the discrete math seminar:

Feb 18, 2pm – 3pm, Hill 525.

Click here for more information

February 12

Bernard Chazelle

Princeton University

Algorithmic Tools for Self-Organization

 

Just as physics speaks the language of mathematics, the life sciences speak the language of algorithms. The difference lies in the loss of symmetry and the high descriptive and historical complexity of the systems commonly found in social and biological organisms. This talk will discuss the power of "natural algorithms" through the lens of "influence systems," a broad family of high-dimensional self-organizing systems for which algorithmic tools can do what differential equations cannot.

February 19

Joshua Grochow

Univ. of Toronto

New connections between lower bounds on algorithms, circuits, and proofs

 

NP versus coNP is the very natural question of whether, for every graph that doesn't have a Hamiltonian path, there is a short proof of this fact. One of the arguments for the utility of proof complexity is that by proving lower bounds against stronger and stronger proof systems, we "make progress" towards proving NP is different from coNP. However, until now this argument has been more the expression of a philosophy or hope, as there is no known proof system for which lower bounds imply computational complexity lower bounds.

 

We remedy this situation by introducing a very natural algebraic proof system*, which has very tight connections to (algebraic) circuit complexity. We show that any super-polynomial lower bound on any Boolean tautology in our proof system implies that the permanent does not have polynomial-size algebraic constant-free circuits over any ring (VNP^0 \neq VP^0). Note that, prior to our work, essentially all implications went in the opposite direction: a circuit complexity lower bound implying a proof complexity lower bound.

 

This allows us to give a very simple and completely self-contained proof that NP not in coMA implies that ParityP/poly differs from NC^1/poly, by applying the preceding result over F_2. Our proof system also has the advantage that it begins to make clear and precise how much harder it is to prove lower bounds in algebraic proof complexity than in algebraic circuit complexity. It thus helps explain why it has been so difficult to prove true size lower bounds on algebraic proof systems, and thereby also lower bounds on AC^0[p]-Frege systems (a more than 30-year-old barrier). Finally, we flip this difference in hardness on its head, to suggest a new way to transfer techniques from algebraic circuit complexity, such as the method of partial derivatives, to prove lower bounds in algebraic proof complexity. Joint work in progress with Toniann Pitassi.

 

* - As with many algebraic proof systems, a proof is checkable in probabilistic polynomial time.
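
As a rough illustration of what probabilistic checkability can look like (a sketch of mine, not the talk's proof system): verifying that two small arithmetic circuits compute the same polynomial can be done by evaluating both at random points of a large prime field, by the Schwartz-Zippel lemma.

```python
# Randomized polynomial identity testing (Schwartz-Zippel), the standard
# mechanism behind probabilistically checkable algebraic claims. The toy
# "circuits" below are illustrative only.
import random

P = 2_147_483_647  # a large prime; we work over the field F_P

def circuit_lhs(x, y):
    return (x + y) * (x + y) % P             # computes (x + y)^2

def circuit_rhs(x, y):
    return (x * x + 2 * x * y + y * y) % P   # computes x^2 + 2xy + y^2

def probably_identical(f, g, nvars=2, trials=20):
    """If f != g as degree-d polynomials, a random evaluation exposes the
    difference with probability >= 1 - d/P per trial (Schwartz-Zippel)."""
    for _ in range(trials):
        point = [random.randrange(P) for _ in range(nvars)]
        if f(*point) != g(*point):
            return False
    return True

print(probably_identical(circuit_lhs, circuit_rhs))  # True: they agree
```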

February 26

NO SEMINAR

 

March 5

Anindya De

IAS

Central limit theorem for Gaussian chaos and deterministic counting for polynomial threshold functions

 

It is a well-known fact that linear combinations of Gaussians are Gaussian. What about polynomial combinations? In this talk, we will see a new central limit theorem for polynomials of Gaussians. The techniques will draw from the so-called "Stein's method" in probability theory and from Malliavin calculus. As the main application, we will give the first deterministic polynomial-time algorithm for approximate counting of satisfying assignments of any constant-degree PTF, with subconstant error. This theorem will involve a new decomposition procedure for polynomials which might be of interest in other applications as well.
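
To make the opening claim concrete, here is a quick Monte Carlo sketch (illustrative only, not from the talk): a linear form in independent Gaussians has excess kurtosis near 0, while already the degree-2 polynomial XY does not, so it cannot be Gaussian.

```python
# Sampling check: linear combinations of Gaussians look Gaussian
# (excess kurtosis ~ 0), a degree-2 polynomial like X*Y does not (~ 6).
import random, statistics

N = 200_000
xs = [random.gauss(0, 1) for _ in range(N)]
ys = [random.gauss(0, 1) for _ in range(N)]

def excess_kurtosis(samples):
    m = statistics.fmean(samples)
    s2 = statistics.fmean((v - m) ** 2 for v in samples)
    m4 = statistics.fmean((v - m) ** 4 for v in samples)
    return m4 / (s2 * s2) - 3.0   # exactly 0 for a Gaussian

linear = [x + 2 * y for x, y in zip(xs, ys)]   # still Gaussian: ~0.0
quad = [x * y for x, y in zip(xs, ys)]         # degree-2 Gaussian chaos: ~6.0
print(excess_kurtosis(linear), excess_kurtosis(quad))
```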

 

Joint work with Rocco Servedio.

March 12

Siu Man Chan

Princeton University

Monotone Lower Bounds via Fourier Analysis

 

We will discuss lower bounds on combinatorial models of computation under monotone restrictions, using the Fourier analysis techniques introduced by Potechin. We will prove tight size bounds on monotone switching networks for the NP-complete problem of k-clique, and (if time permits) for an explicit monotone problem, by connecting the P-complete problem of generation with reversible pebble games. The latter result gives alternative proofs of the separations of m-NC from m-P and of m-NC^i from m-NC^{i+1}, different from Raz–McKenzie (Combinatorica '99).

March 19

NO SEMINAR

SPRING BREAK

March 26

Ran Raz

Weizmann Institute, IAS

How to Delegate Computations: The Power of No-Signaling Proofs

 

The Martians built an amazingly fast computer and ran it to answer the great question of life, the universe and everything. They claim that the answer is 42. Will they be able to convince us that 42 is the right answer, assuming that we do not have sufficient computational power to run the computation ourselves, and that we do not trust the Martians?

 

The problem of delegating computation is a central problem in cryptography. It considers a setting where one party, the delegator, wishes to delegate the computation of a function to another party, the worker. The challenge is that the delegator may not trust the worker, and thus it is desirable to have the worker "prove" that the computation was done correctly.

 

We show a curious connection between that problem and the model of multi-prover interactive proofs that are sound against no-signaling (cheating) strategies, a model that was studied in relation to multi-prover interactive proofs with provers that share quantum entanglement, and is motivated by the physical principle that information cannot travel faster than light.

 

We prove that the class of languages that have polynomial-time multi-prover interactive proofs that are sound against no-signaling strategies is exactly EXP. We use this to give a 1-round delegation protocol for every language in EXP.

 

Joint work with Yael Tauman Kalai and Ron Rothblum.

April 2

Aaron Roth

U. Penn.

Privately Solving Allocation Problems

 

In this talk, we'll consider the problem of privately solving the classical allocation problem: informally, how to allocate items so that most people get what they want. Here, the data that we want to keep private is the valuation function of each person, which specifies how much they like each bundle of goods. This problem hasn't been studied before, and for good reason: it's plainly impossible to solve under the constraint of differential privacy. The difficulty is that publishing what each person i receives in a high-welfare allocation might necessarily reveal a lot about the preferences of person i, which is exactly what we are trying to keep private! What we show is that under a mild relaxation of differential privacy (in which we require that no adversary who learns the allocation of all people j != i -- but crucially not the allocation of person i -- should be able to learn much about the valuation function of person i) the allocation problem is solvable to high accuracy, in some generality. Our solution makes crucial use of Walrasian equilibrium prices, which we use as a low-information way to privately coordinate a high-welfare allocation.
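
As a toy illustration of prices coordinating an allocation (a sketch of mine, with none of the paper's privacy machinery): for unit-demand buyers, one can scan a small price grid until every buyer can take an item from their demand set without conflicts. The `values` table below is hypothetical.

```python
# Find market-clearing prices for a tiny unit-demand market by brute force:
# at the right prices, buyers' individual best choices form an allocation.
from itertools import product

values = {                       # values[buyer][item]
    "alice": {"apple": 3, "book": 1},
    "bob":   {"apple": 2, "book": 2},
}
items = ["apple", "book"]

def demand(buyer, prices):
    """Items maximizing utility value - price (empty if all are negative)."""
    utils = {i: values[buyer][i] - prices[i] for i in items}
    best = max(utils.values())
    return [i for i in items if utils[i] == best and utils[i] >= 0]

for ps in product(range(4), repeat=len(items)):
    prices = dict(zip(items, ps))
    picks = {}
    for buyer in values:          # greedily match buyers within demand sets
        available = [i for i in demand(buyer, prices) if i not in picks.values()]
        if not available:
            break
        picks[buyer] = available[0]
    else:                         # everyone got a demanded item: prices clear
        print("clearing prices:", prices, "allocation:", picks)
        break
```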

 

This is joint work with Justin Hsu, Zhiyi Huang, Tim Roughgarden, and Steven Wu.

April 9

Aditya Bhaskara

Google

Smoothed Analysis of Tensor Decomposition

 

Low-rank decomposition of tensors is a powerful tool that can be used to learn the parameters of a variety of probabilistic mixture models. However, tensors pose significant algorithmic challenges, and tensor analogs of much of the matrix algebra toolkit are unlikely to exist because of hardness results. Efficient decomposition in the overcomplete case (where rank exceeds dimension) is particularly challenging. We introduce a smoothed analysis model for studying these questions and develop an efficient algorithm for tensor decomposition in the highly overcomplete case (rank polynomial in the dimension). In this setting, we also show that our algorithm is robust to inverse polynomial error, a property necessary for learning applications.
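
For readers new to tensor decomposition, the sketch below shows the classical Jennrich / simultaneous-diagonalization idea (not the authors' smoothed-analysis algorithm, and restricted to the easier undercomplete case, rank below dimension): random contractions of a 3-tensor reduce decomposition to an eigenproblem.

```python
# Jennrich-style decomposition of T = sum_k a_k (x) b_k (x) c_k:
# contract T along two random vectors and diagonalize.
import numpy as np

rng = np.random.default_rng(0)
n, r = 5, 3
A = rng.standard_normal((n, r))            # hidden factors (columns)
B = rng.standard_normal((n, r))
C = rng.standard_normal((n, r))
T = np.einsum("ik,jk,lk->ijl", A, B, C)    # the observed 3-tensor

x, y = rng.standard_normal(n), rng.standard_normal(n)
M1 = np.einsum("ijl,l->ij", T, x)          # = A diag(C^T x) B^T
M2 = np.einsum("ijl,l->ij", T, y)          # = A diag(C^T y) B^T

# M1 M2^+ = A diag(ratios) A^+, so its top-r eigenvectors recover the
# columns of A (up to order and scaling), generically over the randomness.
eigvals, eigvecs = np.linalg.eig(M1 @ np.linalg.pinv(M2))
recovered = np.real(eigvecs[:, np.argsort(-np.abs(eigvals))[:r]])

# Sanity check: each recovered direction is parallel to some column of A.
cos = np.abs((recovered.T @ A) /
             (np.linalg.norm(recovered, axis=0)[:, None]
              * np.linalg.norm(A, axis=0)))
print(np.round(cos.max(axis=1), 3))        # all close to 1.0
```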

 

We use our decomposition algorithm to obtain results for learning multi-view models and mixtures of axis-aligned Gaussians where there are many more "components" than dimensions, in a smoothed analysis framework. We believe this is an appealing way to analyze realistic instances of learning problems, since it allows us to overcome many of the (hardness-related) limitations of using tensor methods.

 

Joint work with Moses Charikar, Ankur Moitra and Aravindan Vijayaraghavan.

April 16

Yuval Filmus

IAS

Monotone submodular maximization over a matroid

 

Maximizing a monotone submodular function over the bases of a matroid is a fundamental algorithmic question, which generalizes the well-known NP-complete problem MaxCover. A few years ago, Calinescu, Chekuri, Pál and Vondrák developed a sophisticated algorithm for the problem, the continuous greedy algorithm, which attains the optimal approximation ratio 1-1/e. Their algorithm combines a Frank--Wolfe phase with lossless rounding. We offer an alternative algorithm attaining the same approximation ratio. Our algorithm is based on a neglected algorithmic paradigm, non-oblivious local search, in which local search is run with respect to an auxiliary objective function. Our auxiliary objective function is also monotone and submodular, and satisfies the following property: a local optimum with respect to the auxiliary objective function is a (1-1/e)-approximate *global* optimum with respect to the original objective function.
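
The swap-based local search skeleton involved is easy to state; the sketch below (mine, for a uniform matroid and a coverage objective) runs *oblivious* local search on the objective f itself, whereas the talk's non-oblivious variant runs the same swap loop on a carefully chosen auxiliary objective instead.

```python
# Local search over bases of the uniform matroid (all k-element sets):
# swap one element for another while the objective strictly improves.
def coverage(sets, chosen):
    """A monotone submodular objective: number of elements covered."""
    covered = set()
    for i in chosen:
        covered |= sets[i]
    return len(covered)

def local_search(sets, k, objective):
    ground = list(range(len(sets)))
    base = set(ground[:k])                  # start from an arbitrary base
    improved = True
    while improved:
        improved = False
        for out in list(base):
            for inn in ground:
                if inn in base:
                    continue
                cand = (base - {out}) | {inn}
                if objective(sets, cand) > objective(sets, base):
                    base, improved = cand, True
    return base

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
best = local_search(sets, 2, coverage)
print(best, coverage(sets, best))           # {0, 2}, covering all 6 elements
```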

 

Joint work with Justin Ward (University of Warwick).

April 23

Bireswar Das

DIMACS/IIT Gandhinagar

Succinct Encodings of Graph Isomorphism

 

It is well known that problems whose instances are encoded succinctly with circuits or formulas generally suffer an exponential complexity blow-up compared to their original complexity.

 

We introduce a new way of encoding graph problems, based on CNF or DNF formulas. We show that, contrary to the other existing succinct models, there are examples of problems whose complexity does not increase when encoded in the new form, or increases only to an intermediate complexity class, falling short of the full exponential blow-up.

 

We also study the complexity of the succinct versions of the Graph Isomorphism problem. We show that all the versions are hard for PSPACE. Although the exact complexity of these problems is not known, we show that under most existing succinct models the different versions of the problem are equivalent. We also give an algorithm for the DNF-encoded version of GI whose running time depends only on the size of the succinct representation.
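
To fix ideas about succinct encodings in general (an illustrative toy of mine, not the paper's CNF/DNF constructions): a graph on 2^k vertices can be specified by a small predicate on the k-bit names of its vertices, exponentially smaller than the explicit edge list.

```python
# A succinctly encoded graph: the k-dimensional hypercube on 2^k vertices,
# given by a tiny edge predicate instead of an explicit adjacency list.
k = 3

def phi(u, v):
    """Edge iff the k-bit vertex names differ in exactly one position."""
    return bin(u ^ v).count("1") == 1

# The succinct object is just `phi`; expanding it yields the full graph,
# which is exponentially larger than the predicate's description.
edges = [(u, v) for u in range(2 ** k)
                for v in range(u + 1, 2 ** k) if phi(u, v)]
print(len(edges))   # k * 2^(k-1) = 12 edges of the 3-cube
```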

 

This is joint work with Patrick Scharpfenecker and Jacobo Torán.

April 30

Timothy Naumovitz

Rutgers University

Approximating distance to monotonicity in the streaming setting

 

Given a sequence of integers, one natural problem to consider is the LIS problem: finding the length of the longest increasing subsequence. The complement of this problem is known as the distance to monotonicity problem; the distance to monotonicity of a sequence of integers is the minimum number of values that would have to be altered to make the sequence monotone. While one can look at these problems in other settings, we will focus on the streaming setting, where the sequence of integers is given to us one element at a time and the goal is to minimize the amount of space used.
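
For contrast with the streaming setting, the offline baseline is simple; the sketch below (mine, for intuition only) computes the longest non-decreasing subsequence by patience sorting in O(n log n) and reads off the distance to monotonicity as its complement, whereas the talk's algorithms must approximate this in polylogarithmic space.

```python
# Offline LIS / distance-to-monotonicity via patience sorting. The kept
# non-decreasing subsequence can always be extended to a monotone sequence
# by altering the remaining values, so distance = n - (longest such subseq).
import bisect

def lnds_length(seq):
    """Length of the longest non-decreasing subsequence."""
    tails = []   # tails[i] = least possible tail of a subsequence of length i+1
    for v in seq:
        pos = bisect.bisect_right(tails, v)
        if pos == len(tails):
            tails.append(v)
        else:
            tails[pos] = v
    return len(tails)

def distance_to_monotonicity(seq):
    return len(seq) - lnds_length(seq)

seq = [1, 5, 2, 3, 7, 4]
print(lnds_length(seq), distance_to_monotonicity(seq))   # 4, 2
```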

 

It has been shown by Gopalan, Jayram, Krauthgamer, and Kumar that computing these quantities exactly in the streaming setting cannot be done using sublinear space, so we instead settle for finding a multiplicative approximation. Previously, Saks and Seshadhri gave a randomized polylogarithmic-space algorithm to approximate the distance to monotonicity, and we match this with a polylogarithmic-space algorithm in the deterministic case. Additionally, we provide nontrivial lower bounds in both the deterministic and randomized cases, which will be discussed if time permits.

 

This is joint work with Michael Saks.