Seminars & Colloquia Calendar


DIMACS Theory of Computing Seminar

Time-Space Hardness of Learning Sparse Parities

Avishay Tal (IAS)

Location: CoRE 301
Date & time: Wednesday, 01 March 2017 at 11:00AM - 11:11AM

How can one learn a parity function, i.e., a function of the form $$f(x) = a_1 x_1 + a_2 x_2 + \cdots + a_n x_n \pmod{2}$$, where $$a_1, \ldots, a_n \in \{0,1\}$$, from random examples?
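
To make the setup concrete, here is a minimal Python sketch of the example oracle (illustrative only; the variable names and the choice of $$n$$ are mine, not from the talk):

    import random

    n = 8  # illustrative dimension
    # Hidden coefficient vector a in {0,1}^n defining the parity f.
    a = [random.randint(0, 1) for _ in range(n)]

    def parity(coeffs, x):
        # f(x) = a_1 x_1 + ... + a_n x_n (mod 2)
        return sum(c * xi for c, xi in zip(coeffs, x)) % 2

    def random_example():
        # One labeled example: a uniformly random input x with its label f(x).
        x = [random.randint(0, 1) for _ in range(n)]
        return x, parity(a, x)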

One approach is to gather $$O(n)$$ random examples and perform Gaussian elimination. This requires memory of size $$O(n^2)$$ and $$\mathrm{poly}(n)$$ time. Another approach is to go over all $$2^n$$ possible parity functions and verify each candidate against $$O(n)$$ random examples. This requires only $$O(n)$$ memory, but $$O(2^n \cdot n)$$ time.
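
The following Python sketch illustrates both regimes (it reuses parity and random_example from the sketch above; a didactic illustration, not code from the work):

    from itertools import product

    def learn_by_elimination(examples):
        # Gaussian elimination over GF(2) on an augmented matrix built from
        # ~n examples; the matrix alone occupies O(n^2) bits of memory.
        rows = [list(x) + [y] for x, y in examples]
        n = len(rows[0]) - 1
        r = 0
        for c in range(n):
            pivot = next((i for i in range(r, len(rows)) if rows[i][c]), None)
            if pivot is None:
                continue  # no pivot in this column; a free variable remains
            rows[r], rows[pivot] = rows[pivot], rows[r]
            for i in range(len(rows)):
                if i != r and rows[i][c]:
                    rows[i] = [u ^ v for u, v in zip(rows[i], rows[r])]
            r += 1
        guess = [0] * n
        for row in rows[:r]:
            guess[row.index(1)] = row[-1]  # leading 1 marks the pivot column
        return guess

    def learn_by_enumeration(n, num_checks=64):
        # Try all 2^n candidate coefficient vectors; each candidate needs
        # only O(n) bits of state, but the loop takes O(2^n * n) time.
        for guess in product([0, 1], repeat=n):
            if all(parity(guess, x) == y
                   for x, y in (random_example() for _ in range(num_checks))):
                return list(guess)
        return None

With about $$n$$ fresh random examples, the elimination learner recovers the hidden vector with high probability; the enumeration learner returns the same answer, but its running time grows as $$2^n$$.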

In recent work, Raz [FOCS 2016] showed that any algorithm with memory of size much smaller than $$n^2$$ must spend exponential time to learn a parity function. In other words, fast learning requires good memory.
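
In quantitative form, the tradeoff reads as follows (a paraphrase of Raz's theorem; the exact constants are in the paper): any algorithm that learns $$n$$-bit parities from random examples uses memory $$\Omega(n^2)$$ or requires $$2^{\Omega(n)}$$ samples, and hence $$2^{\Omega(n)}$$ time.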

In this work, we show that the learning task remains time-space hard even when the parity function is known to be extremely sparse, i.e., when only $$\log n$$ of the $$a_i$$'s are nonzero. That is, we show that any algorithm with linear-size memory and polynomial time fails to learn $$\log n$$-sparse parities.
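
To see why sparsity might be expected to help, note the counting bound: there are only $$\binom{n}{\log n} \le n^{\log n} = 2^{\log^2 n}$$ parities of sparsity $$\log n$$, so enumeration over candidates runs in quasi-polynomial (rather than exponential) time, and a single candidate occupies only $$O(\log^2 n)$$ bits. The result above shows that even this dramatically smaller search space cannot be exploited by an algorithm confined to linear memory and polynomial time.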

Consequently, the classical tasks of learning linear-size DNF formulae, linear-size decision trees, and logarithmic-size juntas are all time-space hard: a $$\log n$$-sparse parity is a $$\log n$$-junta and is computable by a decision tree, or a DNF formula, of linear size, so each of these classes inherits the hardness.

Based on joint work with Gillat Kol and Ran Raz.

Special Note to All Travelers

Directions: see the campus map and driving directions page. If you need information on public transportation, you may want to check the New Jersey Transit page.

Unfortunately, cancellations do occur from time to time. Feel free to call our department at 848-445-6969 before embarking on your journey. Thank you.