At the same time, because these analyses execute while a program is running, they are computationally constrained, since any overhead imposed by an analysis is reflected in the program's execution time. As a result, online dynamic analyses are largely limited to measuring simple events. Such constraints present a challenge: is it possible to construct sophisticated program transformations from simple dynamic analyses?
In this talk, we show that sophisticated program transformations are possible without proving complex program properties. Instead, dynamic analysis of simple properties can sufficiently approximate complex ones when coupled with runtime support such as speculation. We describe two complex automatic program optimizations based on simple dynamic analyses that yield order-of-magnitude speedups.
The first is a specializer. While a program is running, the specializer automatically determines which heap locations hold constant values and selects portions of the program to recompile, using these heap constants to seed constant propagation, loop unrolling, and other classic optimizations. The specializer achieves speedups of up to 500% on programs that depend heavily on constant heap values.
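The flavor of this optimization can be conveyed with a toy sketch. Nothing below is from the actual system: the names (`interpolate`, `specialize`) and the use of source generation in place of a real recompiler are illustrative assumptions.

```python
# Toy sketch of runtime specialization on observed heap constants.
# A real system would recompile machine code and guard against the
# "constant" heap values changing; here we just generate Python source.

def interpolate(coeffs, x):
    # Generic code: re-reads coeffs (heap data) on every call.
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

def specialize(coeffs):
    # If profiling shows `coeffs` never changes, "recompile" with the
    # heap values baked in: constant propagation plus full loop unrolling.
    n = len(coeffs)
    body = " + ".join(f"({c!r}) * x**{n - 1 - i}" for i, c in enumerate(coeffs))
    namespace = {}
    exec(f"def specialized(x):\n    return {body}", namespace)
    return namespace["specialized"]

coeffs = [2, 0, 1]                 # observed to be constant at runtime
fast = specialize(coeffs)
assert fast(3) == interpolate(coeffs, 3) == 19
```

In a speculative setting, the specialized code would be paired with a guard that falls back to (or re-derives) the generic version if the monitored heap locations are ever written.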
The second transformation is an incrementalizer, known as Ditto. Ditto monitors data structure invariant checks as they execute, dynamically constructing models of their computation. When a check is invoked again, Ditto automatically reuses applicable portions of previous invocations, recomputing only the portions that depend on new input. It speeds up common invariant checks by an asymptotic factor.
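The reuse idea can be sketched with a hand-written memoized invariant check. This is not Ditto itself: the `Node` class, the version counters, and the manual invalidation are assumptions standing in for Ditto's automatic dependency tracking.

```python
# Sketch of incrementalized invariant checking (illustrative only).
# Results are cached per subtree; a re-invocation reuses cached results
# for unchanged subtrees instead of re-traversing them.

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right
        self.version = 0                     # bumped when the node changes

_cache = {}                                  # (id(node), version) -> (ok, lo, hi)

def check_bst(node):
    """Return (invariant holds, min value, max value) for the subtree."""
    if node is None:
        return (True, None, None)
    key = (id(node), node.version)
    if key in _cache:                        # reuse a prior invocation's work
        return _cache[key]
    ok_l, lo_l, hi_l = check_bst(node.left)
    ok_r, lo_r, hi_r = check_bst(node.right)
    ok = (ok_l and ok_r
          and (hi_l is None or hi_l <= node.value)
          and (lo_r is None or node.value <= lo_r))
    lo = lo_l if lo_l is not None else node.value
    hi = hi_r if hi_r is not None else node.value
    _cache[key] = (ok, lo, hi)
    return _cache[key]

root = Node(5, Node(3), Node(8))
assert check_bst(root)[0]
root.right.value = 4                         # violate the BST invariant
root.right.version += 1                      # invalidate the mutated node...
root.version += 1                            # ...and its ancestor, by hand here;
                                             # Ditto tracks such dependencies itself
assert not check_bst(root)[0]
```

On the second check, the untouched left subtree is answered from the cache; only the mutated path is recomputed, which is what yields the asymptotic improvement when small mutations occur between checks of a large structure.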
These two optimizations indicate that complex transformations are possible despite the limited complexity afforded to the online dynamic analyses that drive them. We find that these results can be achieved when (1) the transformation is tailored to a domain and (2) dynamic analyses are carefully constructed to exploit the common-case behavior of programs in that domain.