Part 1
Classical Computation
1. Turing machines
Note. In this section we address the abstract notion of computability, of
which we only need a few basic properties. Therefore our exposition here
is very brief. For the most part, the omitted details are simply exercises
in programming a Turing machine, which is but a primitive programming
language. A little programming experience (in any language) suffices to see that these tasks are doable but tedious.
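To make this concrete, here is a rough sketch, written in Python rather than in any formal Turing machine notation from the text: a tiny simulator plus a three-rule machine that adds 1 to a binary number. The blank symbol, the state names, and the least-significant-bit-first convention are our own illustrative choices.

```python
# A toy Turing machine simulator (our own illustration, not the formal
# definition used in the text). The transition table `delta` maps
# (state, symbol) -> (new_state, symbol_to_write, head_move).

BLANK = "_"

def run(delta, tape, state="start", accept="halt", max_steps=10_000):
    """Run the machine given by `delta` on the input string `tape`."""
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == accept:
            break
        symbol = cells.get(head, BLANK)
        state, write, move = delta[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(BLANK)

# A three-rule machine that adds 1 to a binary number written with the
# least significant bit first.
increment = {
    ("start", "1"):   ("start", "0", "R"),  # propagate the carry
    ("start", "0"):   ("halt",  "1", "R"),  # absorb the carry and stop
    ("start", BLANK): ("halt",  "1", "R"),  # carry ran off the end
}

print(run(increment, "110"))  # "110" is 3 (LSB first); prints "001", i.e. 4
```

Even this toy example suggests why routine programming tasks, when spelled out at the level of a transition table, are doable but tedious.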
Informally, an algorithm is a set of instructions; using it, “we need only to
carry out what is prescribed as if we were robots: neither understanding, nor
cleverness, nor imagination is required of us” [39]. Applying an algorithm
to its input (initial data), we get some output (result). (It is quite possible that the computation never terminates for some inputs; in that case we get no result.)
Usually inputs and outputs are strings. A string is a finite sequence
of symbols (characters, letters) taken from some finite alphabet. Therefore,
before asking for an algorithm that, say, factors polynomials with integer
coefficients, we should specify the encoding, i.e., specify some alphabet A
and the representation of polynomials by strings over A. For example, each
polynomial may be represented by a string formed by digits, the letter x, and the signs + and ×. In the answer, two factors can be separated by a special delimiter, etc.
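For instance, one possible encoding along these lines (a purely illustrative sketch, not the convention fixed in the text) could look as follows in Python, with * standing in for × and | playing the role of the special delimiter between factors:

```python
# Illustrative encoding of integer polynomials as strings over the finite
# alphabet {0, ..., 9, x, +, *, |}, where * stands in for the sign × and
# | is the special delimiter between factors in an answer. A polynomial
# is given by its list of coefficients, constant term first.

def encode_poly(coeffs):
    """Encode [c0, c1, c2, ...] as a string such as '2+3*x+1*x*x'."""
    terms = []
    for power, c in enumerate(coeffs):
        if c == 0:
            continue
        terms.append(str(c) + "*x" * power)   # repeat '*x' `power` times
    return "+".join(terms) if terms else "0"

def encode_factorization(factors):
    """Encode the answer: factors separated by the delimiter '|'."""
    return "|".join(encode_poly(f) for f in factors)

# x^2 + 3x + 2 factored as (x + 1)(x + 2); negative coefficients would
# require adding a '-' sign to the alphabet.
print(encode_poly([2, 3, 1]))                   # 2+3*x+1*x*x
print(encode_factorization([[1, 1], [2, 1]]))   # 1+1*x|2+1*x
```

Any comparable convention would do just as well; the only requirement is that the alphabet and the representation be fixed in advance.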
One should be careful here because sometimes the encoding becomes
really important. For example, if we represent large integers as bit strings (in
binary), it is rather easy to compare them (to find which of two given integers
is larger), but multiplication is more difficult. On the other hand, if an