Complexity Theory By Degrees?
Dr. Kenneth Regan
University at Buffalo
January 29, 2004
12:40 - 1:40pm
Bailey Hall 312
Abstract

Computational complexity theory is the study of the amount of time, memory, gates, and other machine resources needed to solve computational problems. Current knowledge of computing requirements is far less precise than the ancient Greeks' knowledge of astronomy. One of the seven "Millennium Prize Problems" carrying a $1M bounty from the Clay Mathematics Institute (www.claymath.org) is the "P versus NP Question". This boils down to whether there is a vastly better algorithm for deciding whether a given Boolean formula in n variables is a tautology than the grade-school method of trying all 2^n rows of its truth table, which uses 2^n gates and hence 2^n time. Thousands of other computational problems are known to be equivalent to this one.
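
To make the grade-school method concrete, here is a minimal sketch in Python (not part of the talk) of tautology checking by exhaustive truth-table evaluation; representing the formula as an arbitrary function of n Boolean arguments is an illustrative assumption, not anything stated in the abstract.

    from itertools import product

    def is_tautology(formula, n):
        # Try all 2^n rows of the truth table; a tautology must be true on every row.
        return all(formula(*row) for row in product([False, True], repeat=n))

    # Example: "x or not x" is a tautology in one variable; "x and y" is not.
    print(is_tautology(lambda x: x or not x, 1))   # True
    print(is_tautology(lambda x, y: x and y, 2))   # False

The loop visits all 2^n assignments, which is exactly the 2^n-time behavior the P versus NP question asks whether we can beat.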

Much of the uncertainty stems from the current lack of good connections between computational complexity and long-studied mathematical notions of complexity. Thirty years ago a connection was proved between the number of gates needed to compute a function f(x_1,...,x_n) and the geometric degree of the mapping induced by the first partial derivatives of f. However, the connection "poops out" at n log n gates, intuitively because Euclidean geometry is a "lossy" medium for graphing multi-variable equations. We consider a new "non-lossy" geometry pioneered by Grothendieck in the 1950s, and ask whether it can make the connection stand up to n^2 or even 2^n gates and thus help solve the P vs. NP question.
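
For readers who want the shape of the thirty-year-old result, the display below is a rough rendering of Strassen's degree bound combined with the Baur-Strassen theorem, which appears to be the connection alluded to above; identifying it as such, and the notation C(f) for gate count, \Gamma_{\nabla f} for the graph of the gradient map, d for the degree of f, and the constant c, are my assumptions rather than statements from the abstract.

    C(f) \;\ge\; c \cdot \log_2 \deg \Gamma_{\nabla f},
    \qquad
    \Gamma_{\nabla f} \;=\; \{(x,\ \partial f/\partial x_1(x),\ \dots,\ \partial f/\partial x_n(x))\}.

Under the standard Bezout estimate, \deg \Gamma_{\nabla f} \le d^n, so \log_2 \deg \Gamma_{\nabla f} \le n \log_2 d; when f has degree polynomial in n, the lower bound therefore cannot exceed order n log n, which is the point at which the connection "poops out" above.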

Lunch will be provided at 12:00 in Steinmetz 237, and a discussion session for students will follow the talk, also in Steinmetz 237.