subproblem whose proof represents progress towards proving the
goal problem. Iterating this step led to a "chaining" of
subproblems. LT was able to prove 38 of the first 52 theorems in
Chapter 2 of Principia Mathematica. More importantly, it introduced several basic ATP
concepts (e.g., chaining) and artificial intelligence (AI) techniques (e.g., list
processing), and it initiated the field of ATP.
The Geometry Theorem-proving Machine (GTM) (see Gelernter
[Gel], Gelernter et al.
[GHl]) (1959) was a landmark project in its
time in the AI field. At a 1956 conference, Minsky made the
observation that the diagram that traditionally accompanies plane
geometry problems is a simple model for the theorem that could
greatly prune the proof search. Inspired by this idea, Herbert
Gelernter and Nathaniel Rochester at IBM organized a project to
build the GTM. The diagram proved to be a powerful heuristic.
The technique for its use is to work backwards ("backward
chaining") from the conclusion (goal) towards the premises, creating
new subgoals whose truth implies (by the chain just created) the
final goal. When working backwards from goal G one locates in
memory the inference rules that conclude G and takes the
hypotheses of each inference rule as a new subgoal. Of course,
every hypothesis of some single inference rule must be established
in order to establish G. (Note that this can be diagrammed by
arcs emanating from a node marked G; this begins a proof search
tree. This tree is an AND/OR tree in that from some nodes
emanate arcs all of which must be pursued successfully and from
other nodes emanate arcs only one of which must be pursued
successfully.) Not all of the subgoals need be true. The geometry
model was used to say which were true; the others were dropped
as subgoals. This should be contrasted with forward chaining
(reasoning forward from premises via modus ponens), where all
conclusions are true. The GTM was capable of proving most high