TOPICS IN REAL AND COMPLEX NUMBER COMPLEXITY THEORY

problem, see [98], was the following: 'Can a zero of n complex polynomial equations in n unknowns be found approximately, on the average, in polynomial time with a uniform algorithm?' After a lot of work on the problem by different authors, a major breakthrough was obtained by Beltrán and Pardo, see [11, 13]. They showed how to solve polynomial systems by a uniform randomized homotopy algorithm that runs in polynomial time on the average. Further important progress based on that work was made by Bürgisser and Cucker in [28], where a deterministic algorithm for Smale's problem running in pseudo-polynomial time was designed. For a more accurate account of the history of the problem we refer to the survey [12].

For the purposes of the present paper we are interested in some particular aspects arising from the above discussion. They lead into different directions of complexity theory, with respect both to the classical Turing model and to real number complexity theory. In the rest of this section we discuss a problem resulting from the above approach that leads to a hard combinatorial optimization problem in classical complexity theory. The following sections then deal with the problem of deciding solvability of such polynomial systems; as we shall see, this is a task at the heart of real and complex number complexity theory.

For the moment let us restrict ourselves to considering polynomial systems of the form F : C^n → C^n, F := (f_1, ..., f_n) over the complex numbers. Here, each component polynomial f_i is supposed to have a degree d_i ∈ N. Since the system has as many equations as variables, it generically has finitely many zeros. For a successful application of homotopy methods the choice of the starting system G is of huge importance.
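The idea of such homotopy methods can be illustrated by a minimal, hypothetical sketch: deform a start system G with known zeros into the target system F along H(x, t) = (1 − t)·γ·G(x) + t·F(x) and follow each zero numerically with a Newton corrector. The univariate polynomials F, G, the step counts, and the random-looking constant γ (the usual "gamma trick" to avoid singular paths) are all assumptions for illustration; this is not the Beltrán-Pardo algorithm itself.

```python
# Hypothetical sketch of homotopy continuation for a single polynomial.
# Target F has roots 1 ± i; start system G has the known roots ±1.

def F(x):            # target polynomial x^2 - 2x + 2
    return x * x - 2 * x + 2

def dF(x):
    return 2 * x - 2

def G(x):            # start system x^2 - 1 with known zeros +1, -1
    return x * x - 1

def dG(x):
    return 2 * x

GAMMA = 0.6 + 0.8j   # generic complex constant keeping the paths nonsingular

def H(x, t):         # straight-line homotopy: H(., 0) = GAMMA * G, H(., 1) = F
    return (1 - t) * GAMMA * G(x) + t * F(x)

def dH(x, t):
    return (1 - t) * GAMMA * dG(x) + t * dF(x)

def track(x, steps=200, newton_iters=10):
    """Follow one zero of G along the homotopy to a zero of F."""
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):   # Newton corrector at parameter t
            x = x - H(x, t) / dH(x, t)
    return x

# Each start zero of G is tracked to a zero of F (here 1 + i and 1 - i,
# in some order).
roots = [track(complex(s)) for s in (1, -1)]
```

If G had fewer zeros than F, some zeros of F would be missed; if it had more, paths would be tracked in vain, which is exactly the point made above about matching zero counts.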
One aspect is that if the zero structure of G differs significantly from that of the target system F (for example, with respect to its cardinality), then many zeros of G are followed in vain, thus wasting computation time. A common idea for choosing G therefore is that it should have, as far as possible, the same number of zeros as F. There are different ways to estimate the number of complex zeros of a square system F : C^n → C^n. A first classical result is Bézout's theorem, which bounds the number of isolated zeros from above by d := ∏_{i=1}^{n} d_i. Though this number is easily calculated and a system G with d many isolated zeros is easily found, the disadvantage is that it often drastically overestimates the number of zeros of F. A prominent example is the computation of eigenvalues and eigenvectors of an (n, n)-matrix M, formulated via the polynomial system Mx − λx = 0, ‖x‖₂² − 1 = 0 in the variables (x, λ) ∈ C^{n+1}. All n + 1 equations have degree 2, so the Bézout number 2^{n+1} is exponential in n, whereas clearly only n solutions exist. To repair this disadvantage one might try to use better bounds on the number of solutions. A famous theorem by Bernstein [15] determines, for generic systems, the exact number of zeros in (C*)^n. Though it gives the exact number, applying this theorem algorithmically suffers from another drawback: in order to compute the bound one has to calculate so-called mixed volumes. The latter is a computational problem that is expected to be even much harder than solving problems in NP, because in suitable formulations it leads to #P-hard problems.² Thus, at least in general, one has to be careful about computing this bound for the target system F in order to construct G. A third approach has been used as well, relying on so-called multi-homogeneous Bézout numbers; see [61, 80] for more. Here, the idea is to obtain better estimates

² A bit more information about the class #P can be found at the end of Section 4.
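The gap between the Bézout bound and the true solution count in the eigenvalue example above can be made concrete with a small, hypothetical sketch (the function name and the choice n = 10 are assumptions for illustration):

```python
# Bézout bound for the eigenvalue system Mx - λx = 0, ‖x‖² - 1 = 0
# versus the number of eigenvalues an (n, n)-matrix actually has.
from math import prod

def bezout_number(degrees):
    # Bézout's theorem: a square system has at most prod(d_i) isolated zeros.
    return prod(degrees)

n = 10
# n + 1 equations in the variables (x, λ): each row of Mx - λx = 0 has
# degree 2 (the λ·x terms), and ‖x‖² - 1 = 0 has degree 2 as well.
degrees = [2] * (n + 1)

print(bezout_number(degrees))   # 2^(n+1) = 2048, although M has only n = 10 eigenvalues
```

The bound 2^{n+1} = 2048 against only n = 10 eigenvalues shows why a start system G built from the raw Bézout number would track almost all of its paths in vain here.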
