Testing with manually written test cases often yields poor coverage and misses many corner-case bugs and security vulnerabilities. Automated test generation techniques based on static or symbolic analysis usually do not scale beyond small program units. We propose predictive testing, a new method for amplifying the effectiveness of existing test cases using symbolic analysis. We assume that a software system has an associated test suite consisting of test inputs together with program invariants, expressed as assert statements that the software must satisfy when executed on those inputs. Predictive testing uses a combination of concrete and symbolic execution, similar to concolic execution, on the provided test inputs to discover whether any of the assertions encountered along a test execution path could be violated for some closely related inputs. We extend predictive testing to catch bugs related to memory-safety violations, integer overflows, and string-related vulnerabilities.
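To make the core idea concrete, the following is a minimal sketch of predictive testing, not PRETEX's actual implementation: the path condition and assertions recorded along one concrete execution are modeled as Python predicates over a single integer input, and a brute-force search over a small domain stands in for a real constraint solver. The function names (`unit`, `predict_violation`) and the example assertion are hypothetical.

```python
def unit(x):
    """Unit under test: returns the branch conditions and assertions
    recorded along the path that the concrete input x exercises."""
    path_cond = []
    asserts = []
    if x > 0:                                 # concrete run takes this branch
        path_cond.append(lambda v: v > 0)     # symbolic path condition
        asserts.append(lambda v: v != 7)      # an assertion deep in the body
    return path_cond, asserts

def predict_violation(concrete_input, domain=range(-100, 100)):
    """Check whether any assertion on the executed path can be violated by
    a closely related input that satisfies the same path condition.
    Returns a witness input, or None if no violation is found."""
    path_cond, asserts = unit(concrete_input)
    for v in domain:
        if all(c(v) for c in path_cond):      # v follows the same path
            for a in asserts:
                if not a(v):                  # assertion fails for v
                    return v
    return None

# The concrete test input 5 passes, yet predictive testing reports that
# input 7, which follows the same execution path, violates the assertion.
print(predict_violation(5))
```

In the real technique, the brute-force loop is replaced by an automated theorem prover that solves the path condition conjoined with the negated assertion.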

Furthermore, we propose a novel technique that leverages the results of unit testing to hoist assertions located deep inside the body of a unit function to its beginning. This lets predictive testing encounter assertions more often during test executions and thereby significantly amplifies the effectiveness of testing. We have implemented predictive testing in a tool called PRETEX, and our initial experiments on several open-source programs show that it can effectively discover bugs missed by normal testing. PRETEX uses symbolic analysis and automated theorem proving internally, but all of this complexity remains hidden from the user behind a testing usage model. For this reason, we expect that PRETEX will be easy to integrate into existing software engineering processes and will be usable even by unsophisticated developers.
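The hoisting idea can be illustrated with a toy example, again hypothetical rather than PRETEX's actual transformation: an assertion buried in the body of a unit is rewritten over the unit's inputs and moved to the function entry, so that any test calling the unit reaches it immediately.

```python
def unit_original(x):
    """Original unit: the assertion sits deep inside the body, so a test
    only exercises it after the preceding computation runs."""
    y = x - 3
    # ... other work could happen here before the assertion is reached ...
    assert y != 0, "division guard deep inside the body"
    return 10 // y

def unit_hoisted(x):
    """Hoisted unit: the deep assertion, re-expressed over the input x
    (y != 0 with y = x - 3 is equivalent to x != 3), moves to the entry."""
    assert x != 3, "hoisted precondition, equivalent to y != 0 below"
    y = x - 3
    assert y != 0, "original assertion, now unreachable in a failing state"
    return 10 // y
```

Both versions compute the same result on valid inputs; the hoisted form simply fails at the boundary of the unit, which is where predictive testing symbolically checks assertions against closely related inputs.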
