A Summary of Strategy Used by Argonne's Automated Deduction Software

The diverse strategies were formulated with the objective of sharply increasing program performance. In various combinations, the strategies have been used in the automated reasoning programs developed at Argonne, most recently Otter. In virtually all of our successes, one or more strategies played a key role. Experiments strongly suggest that, without strategy, an automated reasoning program offers little for attacking deep questions.

Contents

  1. Restriction Strategies
  2. Direction Strategies
  3. Look-Ahead Strategies
  4. Redundancy-Control Strategies

Restriction Strategies

Restriction strategies are designed to block paths of reasoning. For example, it is typically profitable to block paths of reasoning that involve axioms alone; judicious use of the set of support strategy achieves this goal. The use of restriction strategies is virtually required for attacking deep questions and hard problems, for otherwise the space of conclusions that can be drawn grows wildly. Regarding the possible loss of completeness, although the fact is not obvious, far more proofs of almost any given theorem exist than intuition suggests; blocking some paths of reasoning therefore seldom blocks them all.

Set of Support

The set of support strategy asks the researcher to choose, from among the clauses characterizing the question under study, those to be placed in the initial set of support list, called list(sos) in Otter. The corresponding restriction prevents the program from applying an inference rule to a set of clauses all of which lie in the complement of the set of support. Each clause that is deduced and retained is (with Otter) added to list(sos). Experimentation suggests that the most effective choice for the initial set of support is the join of the special hypothesis and the denial of the theorem under study; the next best choice is just the denial of the theorem. For example, the special hypothesis of the theorem asserting the provability of commutativity for groups in which the square of every element x is the identity e is the clause equivalent of xx = e.
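
As a minimal sketch (the option settings and clause forms are illustrative rather than prescriptive, and the product and inverse are here written as f and g), an Otter input for this example might place the group axioms in list(usable) and place the special hypothesis, together with the denial of commutativity, in list(sos):

  set(para_into).               % paramodulation, for equality reasoning
  set(para_from).

  list(usable).                 % axioms: never the sole parents of a conclusion
    f(e,x) = x.                 % left identity
    f(g(x),x) = e.              % left inverse
    f(f(x,y),z) = f(x,f(y,z)).  % associativity
  end_of_list.

  list(sos).                    % the initial set of support
    f(x,x) = e.                 % special hypothesis: the square of every element is e
    f(a,b) != f(b,a).           % denial of commutativity for two particular elements
  end_of_list.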

Weighting

Weighting (as a restriction strategy) can be used to block all paths of reasoning that rely on a formula or equation whose complexity exceeds an upper bound assigned by the researcher, max_weight for Otter. The complexity is first determined, if possible, by weight templates chosen by the researcher and included in the appropriate weight_list, either purge_gen or pick_and_purge. If no template applies, then the complexity is determined by symbol count.
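
A hedged Otter sketch (the bound and the template are illustrative only and reuse the group notation of the earlier sketch, with g the inverse; in practice both require tuning to the problem at hand):

  assign(max_weight, 16).   % purge any retained conclusion whose weight exceeds 16

  weight_list(purge_gen).   % templates consulted when computing weights for purging
    weight(g(g(x)), 12).    % count a doubly applied inverse as heavy
  end_of_list.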

Subtautology

The subtautology strategy blocks the use of clauses that contain, as a proper subterm, a term that is itself true (a tautologous subterm). For example, in equivalential calculus, it can be used to prevent the program from using expressions that contain e(x,x) as a proper subexpression.

Resonance-Restriction

The resonance-restriction strategy restricts the program's reasoning by discarding conclusions that do not match (ignoring variables) one of the patterns chosen by the researcher, patterns called resonators. The resonators are typically placed in weight_list(purge_gen). One source for such resonators is the steps of some known proof of a theorem related to that under study.
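
A hedged sketch of the corresponding Otter input (the two patterns are hypothetical stand-ins for steps of a known, related proof, and the particular values would require tuning):

  assign(max_weight, 8).                     % low enough that nonmatching conclusions are purged

  weight_list(purge_gen).
    weight(e(e(x,y),e(e(y,z),e(x,z))), 1).   % resonator: conclusions matching it receive a small weight
    weight(e(e(x,y),e(y,x)), 1).             % another resonator; variables are treated as indistinguishable
  end_of_list.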

Distinct Variables

The limited distinct variable strategy is designed to prevent the program from retaining any clause containing more than k distinct variables, where k is the upper bound chosen by the researcher. One typically uses this strategy in conjunction with a max_weight larger than might ordinarily seem advisable, for its use curtails the retention of long expressions.
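
In Otter, a minimal sketch (the particular values are illustrative):

  assign(max_distinct_vars, 3).   % discard any conclusion containing more than 3 distinct variables
  assign(max_weight, 40).         % a bound larger than might otherwise seem advisable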

Proof-Checking Filter

The proof-checking filter strategy is designed to discard all formulas that do not have one of a set of chosen ancestor histories. For example, one might input a proof in detail, including the precise history of each deduced step. Then, using demodulation, one can instruct the program to discard any conclusion lacking an appropriate ancestor history. The strategy is useful when the intention is to restrict a program's reasoning to refereeing a given proof.

Maximum Weight Reduction

The max_weight reduction strategy is designed to restrict a program's reasoning further during a run by reducing the acceptable complexity of retained conclusions. In particular, the researcher may conjecture that an upper bound of k on the complexity of retained conclusions is needed at the start but that, if k is maintained throughout the run, the program will drown in information; the strategy therefore replaces k by a smaller bound j after some number of clauses have been chosen as the focus of attention to drive the reasoning.
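
A hedged sketch, assuming Otter's change_limit_after and new_max_weight parameters (which reduce the bound after a chosen number of clauses have been selected as the focus of attention):

  assign(max_weight, 24).            % the initial bound k
  assign(change_limit_after, 100).   % after 100 given clauses have driven the reasoning ...
  assign(new_max_weight, 12).        % ... replace the bound by the smaller j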

Passive List Dependency

The passive list dependency strategy prevents the program from reasoning from conclusions that depend on some subset of the statements characterizing the question under consideration. One can use the strategy by, first, choosing appropriate subsets of the characterizing statements; second, making runs with the chosen subsets in the initial set of support; and, third, extracting from those runs the conclusions judged unwanted and placing them in the passive list. If an unwanted conclusion is deduced, its copy in the passive list will cause it to be subsumed and discarded immediately. Only unit clauses are allowed as members of the passive list.
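
A minimal sketch (the unit below is a purely illustrative stand-in for a conclusion found, in preliminary runs, to depend only on an unwanted subset of the hypotheses):

  list(passive).       % consulted only for forward subsumption and unit conflict
    f(a,f(a,a)) = a.   % an unwanted unit conclusion; any deduced copy is subsumed and discarded
  end_of_list.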

Singly Connectedness

The singly connectedness strategy (not implemented in Otter) is designed to block a program from pursuing more than one of a set of paths that all arrive at the same conclusion. For example, with binary resolution as the rule of inference, and with the unit clauses P, Q, and R and the nonunit clause -P | -Q | -R | S, where ``|'' denotes logical or and ``-'' denotes logical not, there exist six different paths that terminate in the conclusion S.

Direction Strategies

Direction strategies are designed to direct the program in its choice of where next to focus its attention. Although the choice can be made by level saturation (equivalently, first come, first served, or breadth first), in the majority of cases success requires taking into account the syntax or (even better) the semantics of the available information. For example, the reasoning can be directed by focusing in order on the least complex bit of information available, where least complex can be determined by symbol count or by weight templates the researcher supplies. Without judicious direction of the reasoning, a program can be trapped in a wide valley of similar and uninteresting information, among other obstacles.

Level Saturation

Level saturation is in the obvious sense the most naive way to direct a program's reasoning, instructing the program to focus on the clauses in the order they are retained, first come, first served (or, equivalently, breadth first). Still, the strategy has (surprisingly) proved of use occasionally.
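
In Otter, a one-line sketch:

  set(sos_queue).   % select given clauses in the order they were retained (breadth first)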

Unit Preference

The unit preference strategy is designed mainly to cause a program to prefer unit clauses (clauses consisting of a single literal and, hence, free of logical or) to nonunit clauses when selecting the clauses with which to reason. The strategy is not implemented in Otter.

Weighting

The weighting strategy, in addition to its use in restricting reasoning, can be used to direct a program's reasoning by choosing for the focus of attention the least complex clause available. Complexity can be determined solely by symbol count, or it can be determined by weight templates (symbol patterns) that the researcher supplies. For Otter, variables in a weight template are treated as indistinguishable.
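
A hedged sketch (the templates are illustrative and reuse the group notation of the earlier sketch, with f the product and g the inverse):

  weight_list(pick_given).   % templates consulted when choosing the next focus of attention
    weight(g(x), 5).         % count an inverse as somewhat heavy
    weight(f(x,g(y)), 9).    % count a product involving an inverse as heavier still
  end_of_list.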

Ratio

The ratio strategy is designed to combine directing a program's reasoning by focusing on the least complex clause available with the use of level saturation. For example, if the pick_given_ratio (in Otter) is assigned the value 3, then three clauses are chosen by complexity as the focus of attention to drive the reasoning for every one clause chosen breadth first.
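
In Otter, a one-line sketch:

  assign(pick_given_ratio, 3).   % 3 clauses chosen by weight for every 1 chosen first come, first served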

Resonance

The resonance strategy is designed to take advantage of symbol patterns found in one success to direct a program's reasoning in its attempt to obtain a second success. For example, steps of a completed proof can be used as resonators (weight templates), instructing the program to prefer for the focus of attention any formula or equation that resembles, with variables ignored, one of the chosen resonators.
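
A hedged sketch (the two patterns are hypothetical stand-ins for steps of a completed, related proof; placing them in pick_and_purge makes them count both when choosing the focus of attention and when purging by weight):

  weight_list(pick_and_purge).
    weight(e(x,e(e(y,x),y)), 2).   % resonator: a step of the earlier proof, variables indistinguishable
    weight(e(e(x,y),e(y,x)), 2).   % another such step
  end_of_list.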

Tail

The tail strategy is designed to enable a program to prefer for the focus of attention two-place formulas or equations whose second argument is short. For example, in equivalential calculus, formulas are expressed in the form e(s,t) for terms s and t. In the study of this area of logic, the tail strategy prefers formulas whose right-hand term t is short.

Recursive Tail

The recursive tail strategy acts like the tail strategy except that, where the latter focuses on just the major function in a formula, the former gives preference to expressions with short tails at any level. For example, in equivalential calculus, whose formulas are expressed in the form e(s,t), the recursive tail strategy prefers formulas with a short second argument (of the function e) even when that occurrence of e is deeply nested.

Hints

The hints strategy (not currently available in the Argonne version of Otter) is designed to direct a program's reasoning by giving preference to formulas or equations that match a user-chosen symbol pattern included in the form of a weight template. This strategy also gives preference to formulas or equations subsumed by or subsuming one of the included weight templates.

Last-in First-out

The last-in first-out strategy is in the obvious sense the opposite of level saturation, for it directs the reasoning by focusing on the most recently retained clause, rather than on the earliest retained clause that has not yet been chosen to drive the reasoning.
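
In Otter, a one-line sketch:

  set(sos_stack).   % select the most recently retained clause as the next focus of attention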

Look-Ahead Strategies

These strategies are designed to enable the program to draw conclusions ahead of the time that they would ordinarily be drawn. Without their use, a program can be forced to wait many CPU-hours or forever to draw a needed conclusion. For but one example, the needed conclusion may require focusing on a retained bit of information whose complexity would otherwise prevent the program from considering it for far too long.

Hot List

The hot list strategy was designed to cope with the too-often-encountered delay in focusing on a retained conclusion. For example, far too frequently, a retained conclusion exists that, if used to draw additional conclusions, would enable a program to complete its assignment, but a substantial delay in CPU time occurs for lack of any mechanism for promptly choosing that key conclusion to drive the program's reasoning. The hot list strategy addresses the problem by having the researcher choose some set of clauses to be placed in the hot list, a list whose elements are immediately considered with each newly retained clause, even before that clause is stored in the appropriate place in the database. For example, if one is studying rings in which the cube of x is x for all x, one might place the corresponding clause asserting that xxx = x in the hot list. That clause is then immediately considered with each newly retained clause, enabling the program to look ahead rather than waiting for the newly retained clauses to be chosen as the focus of attention.
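
A hedged sketch for the ring example (the product is here written as f, so f(x,f(x,x)) = x expresses xxx = x):

  assign(heat, 1).     % consider hot-list clauses with each newly retained clause

  list(hot).
    f(x,f(x,x)) = x.   % the special hypothesis, applied immediately to every new clause
  end_of_list.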

Dynamic Hot List

The dynamic hot list strategy was designed to enable the program to adjoin new clauses to the hot list during the run. For example, the researcher might wish certain items to be placed in the hot list if and when those items are proved.
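
A hedged sketch, assuming Otter's dynamic_heat_weight parameter (a newly retained clause whose pick_given weight is at most the assigned value is adjoined to the hot list):

  assign(heat, 1).
  assign(dynamic_heat_weight, 3).   % sufficiently light new clauses join the hot list during the run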

Redundancy-Control Strategies

These strategies are designed to reduce the obstacle of redundancy within the retained information. Two types of redundant information can be present. The obvious type is exemplified by the two statements 0+x = x and 0+a = a, where x is a variable and a is a constant; the first clearly captures the second, and, for a program such as Otter, the second is in general unneeded. The not-so-obvious type is exemplified by the two statements 0+x = x and 0+(0+x) = x; the second is ordinarily of little interest when the first is present. If either type of redundancy is left unchecked, the program will almost certainly drown in unneeded information.

Subsumption

Subsumption was designed to prevent a reasoning program from retaining information that is obviously redundant, specifically, information that is logically captured by already-existing information. For example, the general statement asserting that people are female or male logically captures the specific statement asserting that Gail is female or male. The specific statement is unneeded because of the nature of the type of reasoning (inference rules) employed by automated reasoning programs such as Otter. Subsumption enables the program to discard the specific statement in the presence of the more general statement. A variation of subsumption, back subsumption, enables the program to discard retained information when newly deduced information is logically more general.
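
Written as Otter clauses, a minimal sketch (process_input merely asks that input clauses be subjected to the same processing as deduced ones):

  set(process_input).            % apply subsumption and the other retention tests to input clauses

  list(sos).
    female(x) | male(x).         % the general statement
    female(gail) | male(gail).   % the specific statement; subsumed by the clause above and discarded
  end_of_list.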

Demodulation

Demodulation, designed to enable a program to simplify and canonicalize information by applying demodulators (rewrite rules), removes less obviously redundant information. Indeed, although 0+(0+x) = x is not logically captured by 0+x = x, the former is usually unneeded and in a sense redundant in the presence of the latter. Demodulators can be included in the input, and they can also be adjoined during the run.
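
A minimal sketch (the sum is here written as f, so f(0,x) = x expresses 0+x = x):

  set(dynamic_demod).   % permit suitable derived equalities to be adjoined as demodulators

  list(demodulators).
    f(0,x) = x.         % an input demodulator: rewrite 0+x to x wherever it appears
  end_of_list.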

These activities are projects of the Mathematics and Computer Science Division of Argonne National Laboratory.

Larry Wos / [email protected]