Abstract
Typically there are significant differences between the initial formulation of an algorithm and its ultimate implementation. For example, the minimum path between two nodes in a weighted digraph can be found by enumerating all paths between the two nodes and choosing the smallest. This approach can easily be formulated as a recursively defined function, which may in turn be implemented in a standard way. This is significantly different from Dijkstra's algorithm, the favored shortest-path implementation. On one side, close to the problem statement, there is an initial, simply formulated, but often inefficient algorithm. On the other side, nearer to the final implementation, is an efficient algorithm. The study of the connection between these two is the subject of this paper. It will be assumed that the initial formulation of an algorithm is a recursive definition in a standard form. The recursive definition, though sufficient to provide the value of the function anywhere in its domain, is non-deterministic as to which of a variety of sequential implementations is to be used to determine that value. These implementations correspond to the various orders of substitution that are equally valid in evaluating such a definition. Some orders of evaluation become possible only if the primitive functions entering into the recursive definition have appropriate properties. Different orders of evaluation will result in different memory requirements, but will not cause significant time differences in the resultant implementations. This dependence of memory requirements on the order of evaluation is one of the main subjects of this paper. Implementation of the recursive definition generally requires the repetitive execution of similar operations.
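As a minimal sketch of the kind of initial formulation the abstract describes, the following recursive definition finds the minimum path length by enumerating all simple paths (the graph representation and function name here are illustrative assumptions, not taken from the paper):

```python
import math

def shortest(graph, u, v, visited=frozenset()):
    """Length of the shortest path from u to v by exhaustive recursion.

    graph maps each node to a dict of {neighbor: edge_weight}.
    Exponential time in general; shown only as the simple initial
    formulation, in contrast to an efficient method such as Dijkstra's.
    """
    if u == v:
        return 0
    visited = visited | {u}  # avoid cycles by tracking visited nodes
    best = math.inf
    for w, cost in graph[u].items():
        if w not in visited:
            best = min(best, cost + shortest(graph, w, v, visited))
    return best

g = {"a": {"b": 1, "c": 4},
     "b": {"c": 2},
     "c": {}}
# shortest(g, "a", "c") -> 3, via the path a -> b -> c
```

The definition is sufficient to determine the value anywhere in its domain, yet it leaves open the order in which the recursive calls are expanded, which is exactly the non-determinism discussed above.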
If it can be shown that some pairs of these operations will yield the same or similar intermediate results at different points in the computation, then only one such intermediate result need be computed and remembered. It may then be accessed from memory when needed again instead of being recomputed. This can happen many times, and in a sufficiently systematic way, that a significant time saving can be realized. Whether this situation arises depends on properties of the primitive functions that compose the recursive definition. A significant class of problems for which time-efficient implementations are available is explored in this paper.
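The compute-once-and-remember idea above can be sketched with a standard recursive definition whose subproblems repeat; Fibonacci is used here only as a hypothetical stand-in for such a definition, with Python's `functools.lru_cache` supplying the memory:

```python
from functools import lru_cache

def fib_naive(n):
    # Repeats identical recursive calls exponentially many times.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Each intermediate result is computed once; later occurrences
    # are accessed from memory instead of being recomputed.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

# fib_memo(40) -> 102334155, in linear rather than exponential time
```

The time saving here rests on a property of the primitive operation (addition depends only on the argument values), which is the kind of condition on primitive functions the abstract refers to.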