proving. Unfortunately, we are not very sophisticated yet in use of
examples, but some work is being done here. (See Bledsoe [Bl1]
for use of examples to instantiate set variables.)
Models and counterexamples -- used to reject wrong paths of
pursuit. Unlike an example, every true statement about the prob-
lem is true in the model, so a statement false in the model can be
abandoned in the proof search. We previously noted that the
geometry theorem proving machine [GH1] provides the classic
example of use of models in ATP; also see Reiter [Re1]. Counterex-
amples serve the same role of using semantic information to reject
false statements, thus truncating erroneous search subtrees. Even
if one cannot devise a suitable model for the entire problem set-
ting, one might hope to find counterexamples for specific situa-
tions if such exist. (See Ballantyne and Bledsoe [BB1] for illustra-
tion of use of counterexamples.)
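To make the pruning role of a model concrete, the following sketch checks each candidate subgoal against a small finite interpretation and discards any subgoal that is false there, the same filtering the geometry machine performed with its diagram. The predicate names and the toy model are assumptions made for this illustration only; nothing here is drawn from the systems cited.

# Illustrative sketch of semantic pruning against a finite model.
# The model and predicate names are assumptions for this example only.

MODEL_COLLINEAR = {("A", "B", "C")}   # collinear holds only for this triple

def holds_in_model(statement):
    """Evaluate a ground statement (predicate, argument tuple) in the model."""
    pred, args = statement
    if pred == "collinear":
        return tuple(args) in MODEL_COLLINEAR
    if pred == "equal":
        return args[0] == args[1]
    return False   # predicates not interpreted here are treated as false

def prune(subgoals):
    """Abandon every subgoal that is false in the model; since every true
    statement about the problem holds in the model, such a subgoal cannot
    be a theorem and its search subtree is cut off."""
    return [s for s in subgoals if holds_in_model(s)]

candidates = [("collinear", ("A", "B", "C")),   # true in the model, kept
              ("equal", ("A", "B"))]            # false in the model, pruned
print(prune(candidates))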
Analogy -- the use of a similar reasoning pattern in a related
situation to guide search. This is a very important concept on which
little has been done, because progress is hard to come by. For
comprehensive results here one needs a good characterization of
reasoning chains, including a metric over these chains. The best
known work was done by Kling [Kl1] some years ago (1971), with
interesting but less well-known work done by Munyer [Mu1]. A very
simple use of analogy using syntactic symmetry occurred in the
Gelernter et al. geometry theorem prover [GH1]. There a simple
uniform replacement of points (point labels) for original points in
a proof could yield a new result, because of the frequent
occurrence of symmetric premises.
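As a rough illustration of that last point (the proof representation, statements, and labels below are invented for this sketch, not taken from [GH1]), a uniform relabeling of points that leaves the premises unchanged can be applied to every step of a completed proof to yield the symmetric result without further search.

# Illustrative sketch of analogy by syntactic symmetry: a permutation of
# point labels that fixes the premises carries a finished proof to a new one.
# The statements and labels here are assumptions made for this example.

def rename(statement, perm):
    """Apply a point-label permutation to one statement (predicate, args)."""
    pred, args = statement
    return (pred, tuple("".join(perm.get(ch, ch) for ch in a) for a in args))

def transfer_proof(proof, perm):
    """Relabel every step of a proof uniformly under the permutation."""
    return [rename(step, perm) for step in proof]

# A proof (as a list of statements) about isosceles triangle ABC; swapping
# B and C is a symmetry of the premises, so the relabeled list is again a
# proof, now of the mirrored conclusion.
proof = [("congruent", ("AB", "AC")),
         ("equal_angle", ("ABC", "ACB"))]
print(transfer_proof(proof, {"B": "C", "C": "B"}))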
Learning -- the improvement of performance by automated
retention of prior knowledge or processing thereof. Very little
progress has been made here. Low-level semi-automated learning
takes place in the Boyer-Moore system summarized in this review