Floating-point numbers are widely used to approximate real arithmetic in applications from domains such as scientific computing, graphics, and finance. However, encoding a real number in a finite number of bits requires an approximate representation, so the result of a floating-point computation typically contains numerical error. To minimize the chance of problems, developers without an extensive background in numerical analysis are likely to use the highest available precision throughout the whole program. While more robust, this can increase execution time, memory traffic, and energy consumption. In this work, we propose a dynamic analysis technique to assist developers in tuning the precision of floating-point programs with the objective of decreasing execution time. Our technique consists of two phases. The first phase, called Blame Analysis, is a white-box analysis that reduces the precision search space. The second phase is a black-box search algorithm, based on Delta Debugging, that finds a type configuration minimizing the set of variables required to be in higher precision, such that the program, when transformed according to this configuration, produces a sufficiently accurate output and runs at least as fast as the original. Our preliminary evaluation on ten programs from the GSL library and the NAS benchmarks shows that our algorithm infers type configurations that yield a 4.56% execution time speedup on average, and up to 40%. In addition, Blame Analysis speeds up the analysis time of the second phase by 9x on average.
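The second phase described above can be illustrated with a minimal sketch. Note this is a simplified greedy variant rather than the paper's Delta Debugging algorithm, and all names (`search`, `run_program`, `is_accurate`, the "hi"/"lo" labels) are hypothetical, introduced only for illustration:

```python
import struct

def search(variables, run_program, is_accurate):
    """Greedy sketch: try demoting each variable in turn from high ("hi")
    to low ("lo") precision; keep a demotion only if the transformed
    program still produces an accurate enough output."""
    config = {v: "hi" for v in variables}   # start with everything in high precision
    for v in variables:
        trial = dict(config)
        trial[v] = "lo"                     # tentatively lower this variable
        if is_accurate(run_program(trial)):
            config = trial                  # demotion accepted
    return config

def to_float32(x):
    # Round a Python float (double precision) to single precision.
    return struct.unpack("f", struct.pack("f", x))[0]

def run_program(config):
    # Toy stand-in for a floating-point program: the configuration decides
    # whether each "variable" is rounded to single precision.
    a = 1.0 / 3.0
    b = 2.0 / 7.0
    if config["a"] == "lo":
        a = to_float32(a)
    if config["b"] == "lo":
        b = to_float32(b)
    return a + b

EXACT = 1.0 / 3.0 + 2.0 / 7.0             # all-double reference output

def is_accurate(result, threshold=1e-6):
    return abs(result - EXACT) <= threshold

print(search(["a", "b"], run_program, is_accurate))
```

With the loose `1e-6` threshold both variables can be demoted; tightening the threshold (e.g. to `1e-9`) forces variables back into high precision, mirroring how the accuracy requirement constrains the search.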