Description
Both intelligent backtracking and AND-parallel execution require information about the dependencies between body literals. This information can be derived either at compile time by a static analysis or at run time. Although a run-time analysis is more precise than a static (hence worst-case) analysis, it incurs considerable overhead at run time and is thus inefficient. Therefore, this thesis has emphasized the use of compile-time analysis to improve run-time performance.
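As a small illustration of such dependencies, consider the following clause; the predicate names are invented for this sketch and are not taken from the thesis:

    % If List is bound when average/2 is called, sum/2 and length/2 share
    % no unbound variable and are mutually independent, while divide/3
    % consumes the bindings S and N that they produce and so depends on both.
    average(List, Avg) :-
        sum(List, S),
        length(List, N),
        divide(S, N, Avg).

Intelligent backtracking and AND-parallel execution both rely on exactly this producer/consumer information about variable bindings.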
A methodology for Static Data Dependency Analysis (SDDA) was developed. The SDDA is based on a worst-case analysis of variable bindings. To perform the SDDA, only one declaration, describing the worst-case activation, is required for each procedure that can be invoked directly from the top-level query; this extra work is easily handled by the programmer. The cost of performing the SDDA is shown to be comparable to the cost of compiling a Prolog program. The output of the SDDA is a collection of data dependency graphs, one for each clause of the program. From these graphs, both intelligent backtracking and AND-parallel execution can be derived.
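A possible form of the required declaration is sketched below for a quicksort program; the entry_mode/1 directive and the mode symbols g (ground) and f (free) are assumptions made for this sketch, not the actual syntax used in the thesis.

    % Worst-case activation of the top-level procedure: first argument
    % ground, second argument an unbound (free) variable.
    :- entry_mode(qsort(g, f)).

    qsort([], []).
    qsort([Pivot|Rest], Sorted) :-
        partition(Pivot, Rest, Small, Large),
        qsort(Small, SortedSmall),
        qsort(Large, SortedLarge),
        append(SortedSmall, [Pivot|SortedLarge], Sorted).

    % Data dependency graph derived for the recursive clause (sketch):
    %   partition/4 -> qsort(Small, SortedSmall)   (via Small)
    %   partition/4 -> qsort(Large, SortedLarge)   (via Large)
    %   qsort(Small, SortedSmall) -> append/3      (via SortedSmall)
    %   qsort(Large, SortedLarge) -> append/3      (via SortedLarge)
    % The two recursive calls share no unbound variable and are independent.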
A scheme for compiling intelligent backtracking based on the SDDA has been designed. To take full advantage of the dependency graphs, three different types of backtracking are distinguished. At run time, when a subgoal fails, the backtrack literal is determined from the type of backtracking and its corresponding backtracking path. Execution with this intelligent backtracking is simulated on a sequential Prolog machine and requires modifications to both the hardware and the compiler. The scheme is shown to be very effective in improving the execution of Prolog programs.
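The following sketch illustrates how a dependency graph can redirect backtracking; the clause and its literals are invented for this example, and the three backtracking types defined in the thesis are not reproduced here.

    % Assume all three arguments are unbound when solve/3 is called.
    solve(X, Y, Z) :-
        p(X),
        q(Y),
        r(X, Z).

    % Dependency graph: p(X) -> r(X, Z) via X; q(Y) has no consumer here.
    % If r(X, Z) fails, chronological backtracking would retry q(Y), which
    % cannot change any binding that r depends on. Intelligent backtracking
    % instead follows the backtracking path to p(X), the most recent
    % producer of a variable consumed by r(X, Z).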
A scheme to exploit AND-parallelism is also proposed. It includes generating parallel-executable tasks from the SDDA, coordinating co-operating processes through a set of message protocols, and exploiting both intelligent backtracking and parallel backtracking. It is shown that Prolog has potential for parallel processing because of its procedural invocation, non-deterministic execution, concise syntax, single-assignment variable bindings, and local variable scoping.
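The effect on the quicksort clause above can be sketched as follows; the &/2 parallel-conjunction operator is borrowed from later AND-parallel systems such as &-Prolog and is used here only as shorthand, since the thesis coordinates such tasks through its message protocols rather than through source-level annotation.

    % The SDDA shows that the two recursive calls are independent once
    % partition/4 has bound Small and Large, so they can be executed as
    % two parallel tasks; append/3 waits for both to succeed.
    qsort([], []).
    qsort([Pivot|Rest], Sorted) :-
        partition(Pivot, Rest, Small, Large),
        ( qsort(Small, SortedSmall) & qsort(Large, SortedLarge) ),
        append(SortedSmall, [Pivot|SortedLarge], Sorted).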