Event Details

Derandomizing Probabilistic Algorithms means Proving Circuit Lower Bounds

Presenter: Dr. Valentine Kabanets - Department of Computer Science, University of California, San Diego
Supervisor: R. Nigel Horspool, Chair, Department of Computer Science

Date: Thu, March 6, 2003
Time: 13:30 - 14:30
Place: Engineering Office Wing Building (EOW), Room # 430

ABSTRACT

"What is the power of randomness in algorithm design?" On the one hand, many computational problems have very efficient and simple probabilistic algorithms. On the other hand, certain probabilistic algorithms can be successfully derandomized, i.e., replaced by equivalent deterministic algorithms, without significant decrease in efficiency. The most famous example is the recent discovery of a deterministic polynomial-time algorithm for Primality Testing [AKS'02].
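To make the contrast concrete: a classic example of a simple, efficient probabilistic algorithm is the Miller-Rabin primality test, whose answers are correct with overwhelming probability; the AKS result cited above showed that this particular problem can also be solved deterministically in polynomial time. The sketch below is the standard Miller-Rabin test (not the AKS algorithm), included here only as an illustration:

```python
import random

def miller_rabin(n, rounds=20):
    """Probabilistic primality test.
    Returns False if n is definitely composite, True if n is
    probably prime (error probability at most 4**-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)   # random base
        x = pow(a, d, n)                 # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True
```

Each round consists of one modular exponentiation, so the test runs in polynomial time in the bit-length of n; the difficulty the abstract alludes to is removing the randomness while keeping that efficiency.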

Can every probabilistic algorithm be derandomized? Recent research suggests that the answer is Yes, since exponential lower bounds on the circuit complexity of certain computational problems would imply universal derandomization [IW'97]. However, proving even superpolynomial circuit lower bounds is a notoriously difficult task. Is there any other way to achieve derandomization?

In this talk, I will show that proving circuit lower bounds is indeed necessary. In particular, designing a deterministic polynomial-time algorithm to test whether the determinant of a given symbolic matrix is identically zero is as hard as proving superpolynomial circuit lower bounds. Thus, progress in algorithm design is dependent on progress in circuit complexity.
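The symbolic-determinant problem mentioned above does have a simple probabilistic solution, via polynomial identity testing (the Schwartz-Zippel lemma): substitute random values for the variables over a large prime field and compute the resulting numeric determinant; a nonzero result proves the symbolic determinant is not identically zero, and each trial errs with probability at most deg/p. The sketch below (with hypothetical helper names) illustrates this randomized test, which the talk argues cannot be derandomized without proving circuit lower bounds:

```python
import random

def det_mod_p(matrix, p):
    """Determinant of a square integer matrix mod prime p,
    via Gaussian elimination with Fermat inverses."""
    n = len(matrix)
    a = [[x % p for x in row] for row in matrix]
    det = 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if a[r][col]), None)
        if pivot is None:
            return 0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det % p
        det = det * a[col][col] % p
        inv = pow(a[col][col], p - 2, p)  # modular inverse (p prime)
        for r in range(col + 1, n):
            f = a[r][col] * inv % p
            for c in range(col, n):
                a[r][c] = (a[r][c] - f * a[col][c]) % p
    return det % p

def symbolic_det_is_zero(entries, num_vars, p=2**61 - 1, trials=5):
    """Randomized identity test for a symbolic determinant.
    entries[i][j] is a function mapping a variable assignment
    (a list of num_vars field elements) to a field element.
    Always answers True when det is identically zero; otherwise
    each trial wrongly evaluates to zero with probability <= deg/p."""
    n = len(entries)
    for _ in range(trials):
        point = [random.randrange(p) for _ in range(num_vars)]
        numeric = [[entries[i][j](point) for j in range(n)]
                   for i in range(n)]
        if det_mod_p(numeric, p) != 0:
            return False  # nonzero evaluation: not identically zero
    return True
```

For example, the matrix [[x, y], [x, y]] has determinant identically zero, while [[x, y], [y, x]] has determinant x^2 - y^2, which a random evaluation detects with near certainty. The talk's thesis is that finding a deterministic polynomial-time replacement for this random-substitution step is as hard as proving superpolynomial circuit lower bounds.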

[Joint work with Russell Impagliazzo (UCSD).]

Note: Dr. Kabanets is a candidate for a faculty position in the Department of Computer Science.