Test Suite Evaluation Based on Reachability Analysis of Property State Space

ABSTRACT

Coverage metrics answer the question of whether we have adequately checked a given software program. For example, statement coverage metrics measure how many lines of code were executed and how often. Path coverage metrics measure the frequency of execution of interleaving branches of code. Data flow coverage metrics account for computational and predicate uses along data paths. In recent years, researchers have introduced effective static analysis techniques for checking the correctness of software programs. Consequently, we have started to see more formal properties embedded in software programs. In this paper, we present a novel coverage criterion and metrics that evaluate test suites based on their coverage of program properties rather than of structural program elements. We consider properties expressed as propositional and temporal formulas over several predicates. We measure the property states covered by a given test suite against an approximation of the reachable state space of the property. Our results show that property coverage can reveal deficiencies in test suites where other coverage techniques cannot. Our approach can also suggest test cases that complete the test suite and may uncover bugs.

Appendices

Please find the appendices referred to in the body of the paper here: http://research-fadi.aub.edu.lb/pbcovfiles/PBCOVAppendices.pdf

IMPLEMENTATION

We implemented a tool that computes the PBCOV metric using an instrumentation that dynamically evaluates the atomic predicate terms of a property. The implementation works for C programs and makes use of available open source tools. We used the Crest [1] tool to interface with the CIL [3] code manipulation framework and the Yices [2] SMT solver. We also used the ABC [4] model checker.

Source code preparation

The user needs to include a header file that provides the needed macros, breakpoint functions, and pragmas, and must specify properties using the provided PBCOV_ASSERT macro. The macro flags the property to be processed later on.

CIL instrumentation

We instrument the source code, which now contains the properties, using CIL. CIL decomposes all conditions in the code into nested if-statement structures while preserving the logic. The conditions in these if-statements are exactly the terms described before. We capture and process the block of code consisting of the if-statements corresponding to the property provided by the user. We fragment the nested if-statements in this block so that each holds only an atomic predicate term, as explained in Section V. We instrument these terms to collect their values and their symbolic versions at run time.

Reporting missing states

We collect the different term permutations and the value of the conjunction of the properties in a file. This dynamic information is used to build a description of the property as a simple truth table.

We construct a Boolean circuit with one Boolean variable per atomic predicate term and a symbolic Boolean formula specifying the property. We then construct a sequential circuit claiming the equivalence of the symbolic Boolean description of the property and the dynamically collected truth table, and we pass the circuit to the ABC model checker.

The ABC model checker reports the missing states either in symbolic form or as an enumeration of states. These missing states are counterexamples to the formula. We pass the counterexamples, together with the original SMT form of the property, to Yices. We read the output of Yices and discard the unsatisfiable missing states from the reachable state space.

INSTRUCTIONS

The following setup was done on Linux Fedora 10. It requires a working gcc and a Java JRE.

Install prerequisites
  1. Download Crest from http://code.google.com/p/crest/
  2. Download the files “crest.h” and “crest.cc” from the downloads section below (crest.tar.gz) and replace the existing files in ~/crest-0.1.1/src/libcrest
  3. Download the file crestInstrument.ml from the downloads section below (crest.tar.gz) and replace the existing file in ~/crest-0.1.1/cil/src/ext
  4. Compile and install Crest
  5. Download and install ABC from http://www.eecs.berkeley.edu/~alanmi/abc/abc70930.tar.gz
Sample Run

This sample run is intended to confirm that the environment is appropriately configured. In the following, we detail instructions to run the sort example listed in Section III of the paper.

  1. Create a folder sort in ~/crest-0.1.1/bin
  2. Download the tar ball crest_bin.tar.gz and place its contents in ~/crest-0.1.1/bin
  3. Place a link to the abc executable in ~/crest-0.1.1/bin
  4. In the folder sort, place the contents of sort.tar.gz
  5. Change directory to ~/crest-0.1.1/bin/sort and run the script “run”

Note that in any code to be run, the assert expression must be placed in the two macros consecutively, PBCOVASSERTEVAL and PBCOVASSERTINST, in this order.

TCAS results replication

In the following we list the steps required to replicate the results in Section VII of the paper; the results themselves are available in the downloads section below (tcas-results.tar.gz). The prerequisites are the ATAC coverage tool (http://invisible-island.net/atac/atac.html) and gcov (available by default on Linux).

  1. Create a folder in your home directory called “tcas” and place in it the contents of the “tcas.tar” tar ball from below
  2. Create a folder called testing to house the results and place in it the contents of the “results_dir.tar.gz” tar ball from below; this only contains a script, a Java tool, and the required directory structure
  3. Fix the paths in the script “master”: the paths that need fixing are the crest bin directory, the tcas directory, and the testing directory; this can be done by modifying the variable “home” in the script
  4. Run the script “tcasdirbin” inside ~/crest-0.1.1/bin to produce the directory structure inside the bin directory
  5. Run the script “master” inside the testing directory on the specified property number to generate results inside the folder

DOWNLOADS

In this section we list the files needed to run the implementation as described above.

  1. crest_bin.tar.gz: http://research-fadi.aub.edu.lb/pbcovfiles/crest_bin.tar.gz

     This tar ball contains the necessary files that should be available in ~/crest-0.1.1/bin

  2. crest.tar.gz: http://research-fadi.aub.edu.lb/pbcovfiles/crest.tar.gz

     This tar ball contains the necessary files (crest.cc, crest.h, crestInstrument.ml) that should be modified in Crest before compiling and installing it

  3. results_dir.tar.gz: http://research-fadi.aub.edu.lb/pbcovfiles/results_dir.tar.gz

     This tar ball contains the necessary directory structure for the results directory along with the “master” script that generates the results, and a compiled Java utility tool

  4. sort.tar.gz: http://research-fadi.aub.edu.lb/pbcovfiles/sort.tar.gz

     This tar ball contains the necessary files to be placed in the sort directory in ~/crest-0.1.1/bin to run the sort example in the paper

  5. tcas_dir_bin: http://research-fadi.aub.edu.lb/pbcovfiles/tcas_dir_bin

     This script generates the directory structure for TCAS in ~/crest-0.1.1/bin, and it must be run from that path

  6. tcas_dir.tar: http://research-fadi.aub.edu.lb/pbcovfiles/tcas_dir.tar

     This tar ball contains the 41 versions of tcas along with the original test suite; it can be deployed anywhere, but its path is required by the “master” script

  7. tcas-results.tar.gz: http://research-fadi.aub.edu.lb/pbcovfiles/tcas-results.tar.gz

     This tar ball contains the complete and detailed results of TCAS; they have the same directory structure as in results_dir.tar.gz

REFERENCES

[1] J. Burnim and K. Sen. Heuristics for scalable dynamic test generation. In ASE ’08: Proceedings of the 2008 23rd IEEE/ACM International Conference on Automated Software Engineering, pages 443–446, 2008.

[2] B. Dutertre and L. De Moura. A fast linear-arithmetic solver for DPLL(T). In Computer Aided Verification, pages 81–94. Springer, 2006.

[3] G. C. Necula, S. McPeak, S. P. Rahul, and W. Weimer. CIL: Intermediate language and tools for analysis and transformation of C programs. In Proceedings of the 11th International Conference on Compiler Construction, London, UK, 2002.

[4] Berkeley Logic Synthesis and Verification Group. ABC: A system for sequential synthesis and verification, release 70930.

 
pbcov.txt · Last modified: 2020/07/13 12:31 (external edit)