An Adaptive Strategy for Improving the Performance of Genetic Programming-based Approaches to Evolutionary Testing

José Carlos Bregieiro Ribeiro
Mário Zenha-Rela
Francisco Fernández de Vega
Polytechnic Institute of Leiria (IPL), Morro do Lena, Alto do Vieiro, Leiria, Portugal
[email protected]

University of Coimbra (UC), CISUC, DEI, 3030-290 Coimbra, Portugal
[email protected]

University of Extremadura (UNEX), C/ Sta Teresa de Jornet, 38, Mérida, Spain
[email protected]

Abstract

This paper proposes an adaptive strategy for enhancing Genetic Programming-based approaches to automatic test case generation. Its main contribution is an adaptive Evolutionary Testing methodology for promoting the introduction of relevant instructions into the generated test cases by means of mutation; the instructions from which the algorithm can choose are ranked, and their rankings are updated every generation according to the feedback obtained from the individuals evaluated in the preceding generation. The experimental studies developed show that the proposed adaptive strategy improves the algorithm's efficiency considerably, while introducing a negligible computational overhead.

1. Introduction
Software testing is an expensive process. In industry, it is often performed manually, and locating suitable test data can be time-consuming, difficult, and expensive. Automating the test data generation process is vital to advance the state-of-the-art in software testing; Evolutionary Testing addresses this challenge by means of Evolutionary Algorithms.
2. Framework Overview

Search Space = the input domain of the test object.

Strongly-Typed GP is particularly suited for unit-test cases in strongly-typed languages such as Java.
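To illustrate how strong typing constrains test-case construction, the following sketch grows a type-correct method-call sequence from a hypothetical instruction set for a Java-like Stack class; the signature names and the sequence representation are illustrative assumptions, not taken from the paper.

```python
import random

# Hypothetical typed instruction set for a Java-like Stack class.
# Each entry is (name, parameter types, return type).
SIGNATURES = [
    ("Stack()", [], "Stack"),               # constructor
    ("push(Stack,int)", ["Stack", "int"], "void"),
    ("pop(Stack)", ["Stack"], "int"),
    ("intLiteral", [], "int"),              # terminal producing an int value
]

def random_test_case(length, rng=random):
    """Grow a type-correct call sequence: an instruction is only eligible
    once values of all its parameter types have already been produced."""
    available = set()   # types of the values produced so far
    sequence = []
    while len(sequence) < length:
        legal = [s for s in SIGNATURES if all(p in available for p in s[1])]
        name, params, ret = rng.choice(legal)
        sequence.append(name)
        if ret != "void":
            available.add(ret)
    return sequence
```

In a Strongly-Typed GP setting the same type information also constrains crossover and mutation points, so every generated individual remains compilable.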
3. Methodology Overview

Our Approach: employing evolutionary algorithms for generating and evolving test cases for the structural unit-testing of object-oriented programs.

Test data quality evaluation involves instrumenting the test object, executing it with the generated test cases, and tracing the structures traversed in order to derive coverage metrics.
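The evaluation step above can be sketched as follows; the test object (faulty_abs) and helper names are hypothetical, and Python's tracing hook stands in for the bytecode instrumentation of a Java test object.

```python
import dis
import sys

def faulty_abs(x):
    """A stand-in test object; a real run would target an instrumented
    Java class instead of a Python function."""
    if x < 0:
        return -x
    return x

def trace_lines(func, *args):
    """Execute the test object with one test case, recording the source
    lines traversed."""
    hit = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            hit.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return hit

def line_coverage(func, test_cases):
    """Derive a coverage metric: the fraction of the test object's
    executable lines exercised by the whole generated test suite."""
    all_lines = {ln for _, ln in dis.findlinestarts(func.__code__)
                 if ln is not None}
    covered = set()
    for args in test_cases:
        covered |= trace_lines(func, *args)
    return len(covered & all_lines) / len(all_lines)
```

Adding a test case that exercises the negative branch raises the suite's coverage score, which is precisely the signal the evolutionary search optimizes.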
4. An Adaptive Evolutionary Testing Strategy

Adaptive Evolutionary Algorithms are distinguished by their dynamic manipulation of selected parameters during the course of evolving a problem solution; this makes them more reactive to the unanticipated particulars of the problem.

Our adaptive Evolutionary Testing methodology promotes the introduction of relevant instructions into the generated test cases by means of mutation. The instructions from which the algorithm can choose are ranked, and their rankings are updated every generation according to the feedback obtained from the individuals evaluated in the preceding generation. The goal is to encourage diversity and test case feasibility.

Let the constraint selection ranking of constraint c in generation g be denoted ρc,g. At the beginning of each generation, ρc,g is updated on the basis of three factors. The main purpose of the runtime exceptions caused factor λ is to penalize the ranking of constraints corresponding to instructions that caused runtime exceptions to be thrown in the preceding generation. The main purpose of the runtime exceptions caused by ancestors factor σ is to penalize the ranking of constraints that participated in the composition of sub-trees which caused runtime exceptions to be thrown in the preceding generation. The main purposes of the constraint diversity factor γ are to allow constraints to recover their ranking if they have been used infrequently, and to penalize the ranking of constraints that have been selected too often.
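A minimal sketch of the ranking update and of ranking-biased selection follows. The concrete update formula is not reproduced here, so the multiplicative/additive combination of λ, σ and γ and their default values are assumptions for illustration only.

```python
import random

def update_rankings(rankings, exceptions, ancestor_exceptions, usage_counts,
                    lam=0.5, sigma=0.75, gamma=0.1):
    """Update each constraint's selection ranking from the feedback of the
    preceding generation (factor weights are illustrative assumptions)."""
    mean_usage = sum(usage_counts.values()) / len(usage_counts)
    updated = {}
    for c, rho in rankings.items():
        if c in exceptions:           # factor lambda: the instruction threw
            rho *= lam
        if c in ancestor_exceptions:  # factor sigma: it appeared in a
            rho *= sigma              # sub-tree that threw an exception
        if usage_counts[c] < mean_usage:
            rho += gamma              # factor gamma: infrequent use recovers
        else:
            rho = max(rho - gamma, 0.0)  # over-use is penalized
        updated[c] = rho
    return updated

def select_constraint(rankings, rng=random):
    """Roulette-wheel selection biased towards highly ranked constraints."""
    total = sum(rankings.values())
    pick = rng.uniform(0, total)
    acc = 0.0
    for c, rho in rankings.items():
        acc += rho
        if acc >= pick:
            return c
    return c
```

With this scheme, a constraint that threw an exception in the last generation becomes less likely to be chosen as a mutation target, while a rarely used one gradually regains attractiveness.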
Results

The adaptive strategy outperformed the static approach for 28.4% of the methods tested, whereas the latter surpassed the former in only 5.9% of the situations. The computational overhead introduced was negligible: the time overhead introduced by the adaptation procedure was a mere 0.19%.

Conclusions

The proposed adaptive strategy improves the efficiency of Strongly-Typed GP-based Evolutionary Testing at its core problem: finding a set of input data (test cases) that satisfies a certain test criterion. This strategy allows mitigating the negative effects of including a large number of entries into the test cluster, and provides a higher degree of freedom when defining the test cluster by minimizing the impact of redundant, irrelevant or erroneous choices.
GECCO 2009. Montreal, Canada