The paper A Multi-Objective GRASP for Partial Classification, recently submitted to Soft Computing, provided only a brief summary of the results. In particular, no graphs showing the differences between parameter settings are presented, and the fronts and rules produced by the algorithms with the best settings are not given. This page, together with the attached Excel files, provides some of these details.

Parameter tuning


The choice of dataset and class of interest had little impact on how the parameter settings affected the performance of NSGA II. Best results were obtained with a population size of 100 or 200, a crossover rate of 20% or 40%, and a mutation rate of 10% or 20%. Summary spreadsheets and graphs are attached in this zip file.


Since SPEA2 and NSGA II are similar algorithms, it is unsurprising that parameter settings affected the quality of the search in similar ways for each algorithm. Best results were achieved with an archive size of 100 or 200, a population size between 50 and 200, and crossover and mutation rates similar to those for NSGA II. Spreadsheets giving the results of parameter tuning in more detail are provided in this zip file.

Note that the archive size here refers to the SPEA2 archive, which is integral to the algorithm; indeed, parent solutions are selected exclusively from this archive. When the archive size is increased to 500, performance drops, since less effort is spent optimizing each of the solutions it contains. A separate, external archive is used to store all the non-dominated solutions found during the search, and it is on the basis of the contents of this archive that the algorithms are evaluated.
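The external archive described above can be maintained with a simple dominance test. The sketch below is a minimal illustration, assuming rules are represented by their (coverage, confidence) objective values with both objectives maximized; it is not the paper's implementation.

```python
def dominates(a, b):
    """True if point a dominates b when maximizing both objectives.
    Points are (coverage, confidence) tuples (assumed encoding)."""
    return a[0] >= b[0] and a[1] >= b[1] and a != b

def update_external_archive(archive, candidate):
    """Add candidate to the archive of all non-dominated solutions found.
    The candidate is discarded if any archive member dominates it;
    otherwise members it dominates are removed."""
    if any(dominates(member, candidate) for member in archive):
        return archive
    return [m for m in archive if not dominates(candidate, m)] + [candidate]

# example: feed four rules through the archive in turn
archive = []
for point in [(0.6, 0.7), (0.8, 0.5), (0.6, 0.9), (0.5, 0.4)]:
    archive = update_external_archive(archive, point)
```

Unlike the internal SPEA2 archive, this external archive is unbounded, so it records every non-dominated solution encountered during the search.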


Experiments were performed using PAES over a wide range of parameter settings. In particular, archive sizes of up to 50000 were used; since only 50000 rules are evaluated per run, this is essentially the same as using an unlimited archive. In practice, the archive seldom held more than 5000 rules. Best results were obtained with large archive sizes (2000 or more).

The way parameter settings affected performance suggests that different parameters should be used for different problems. In particular, low mutation rates (2%) were best for the cover-type dataset, while much higher mutation rates (30%) were more appropriate when predicting low earners from the adult dataset.

A zipped collection of spreadsheets with more details is provided here.

Genetic local search

Both forms of genetic local search performed better with a fairly large neighbourhood sample size (50 for IM-GLS and 100 for J-GLS), though slightly smaller sample sizes were preferred on the cover-type data (20 for IM-GLS and 50 for J-GLS). In comparison, the number of elite solutions in IM-GLS and the size of the temporary population in J-GLS had considerably less impact on the quality of the results. Spreadsheets with further details of the results are available for both IM-GLS and J-GLS.
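The neighbourhood sample size tuned above bounds how many neighbours a local-search step evaluates. The sketch below illustrates the general idea only; the `neighbours` and `evaluate` hooks are hypothetical problem-specific functions, not the operators used in IM-GLS or J-GLS.

```python
import random

def sampled_local_search_step(rule, neighbours, evaluate, sample_size, rng=random):
    """One local-search step that evaluates only a bounded random sample
    of the neighbourhood, rather than every neighbour.
    `neighbours(rule)` and `evaluate(rule)` are assumed, problem-specific hooks."""
    candidates = neighbours(rule)
    sample = rng.sample(candidates, min(sample_size, len(candidates)))
    best = max(sample, key=evaluate, default=rule)
    # move only if the sampled best strictly improves on the current rule
    return best if evaluate(best) > evaluate(rule) else rule
```

A larger sample size makes each step more reliable but more expensive, which matches the trade-off seen in the tuning results.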


The results of varying the proportion of local search used in the multi-objective GRASP algorithm (MOG) indicate that at least 50% of CPU time should be spent on local search. On the adult dataset, there was little, if any, degradation in performance when this proportion was increased to 95%. However, on the cover-type dataset, there was a significant loss of performance for proportions of local search greater than 80%.

The cover-type results also show a drop in performance when more than one front of solutions was created and optimized by the local search, though this effect was not observed on the adult dataset. Spreadsheets and charts giving more details are in this zip file.

Results with best settings

Charts comparing the algorithms on the hypervolume measure and two forms of the C-measure are given in the paper. Here we provide more details of the fronts obtained, rules produced and dominated area (hypervolume) for each run of the algorithms.

Fronts are provided in comma-delimited text files, saved at five points during each of 60 runs of the algorithm. These list coverage before confidence. A summary file gives the dominated area at each of these points, together with the mean dominated area, standard deviation and 95% confidence interval.
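For two maximized objectives, the dominated area of a front can be computed with a simple sweep. The sketch below assumes a reference point of (0, 0) with coverage and confidence in [0, 1]; the paper's exact reference point is not restated here, and the sample fronts are invented for illustration.

```python
from statistics import mean, stdev

def dominated_area(front, ref=(0.0, 0.0)):
    """Area dominated by a front of (coverage, confidence) points,
    both maximized, measured from the reference point `ref`
    (assumed to be (0, 0) with objectives in [0, 1])."""
    # keep only the non-dominated points of the front
    nd = [p for p in front
          if not any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in front)]
    nd.sort(reverse=True)  # coverage descending; confidence then ascends
    area, prev_conf = 0.0, ref[1]
    for cov, conf in nd:
        # each point contributes a rectangle not covered by points to its right
        area += (cov - ref[0]) * (conf - prev_conf)
        prev_conf = conf
    return area

# summary statistics over several runs, as in the summary file
runs = [dominated_area(f) for f in ([(0.8, 0.5), (0.6, 0.9)],
                                    [(0.7, 0.6)],
                                    [(0.9, 0.4), (0.5, 0.8)])]
ci_half_width = 1.96 * stdev(runs) / len(runs) ** 0.5  # normal-approx 95% CI
```

Dominated points contribute nothing, so filtering them first keeps the rectangle sweep correct even for unsorted, partially dominated fronts.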

The NSGA II results file gives only the quality of the rules produced, as do the results files for IM-GLS and J-GLS. The PAES results also include the non-dominated rules in full, but only for some of the problems. Results for SPEA2 and MOG include the non-dominated rules in full for all problems.

Rule Pruning vs. Careful Operators

It has been brought to our attention that the pruning of over-large rules might be considered disadvantageous for the genetic algorithms, and that genetic operators that ensure the rules satisfy the size constraint may be preferable. We have experimented with the use of such 'careful' genetic operators within NSGA II. Results are provided here. A comparison with the pruning approach indicates little difference in the quality of the results obtained. If anything, results are slightly better with pruning, though more extensive experimentation would be required to confirm this.
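The distinction between the two approaches can be sketched as follows. This is a minimal illustration only, assuming a rule is encoded as a set of attribute-test conditions; neither function is the operator actually used in the experiments.

```python
import random

def prune(rule, max_conditions, rng=random):
    """Pruning approach: if variation produces an over-large rule,
    drop randomly chosen conditions until the size constraint holds.
    A rule is modelled as a set of conditions (assumed encoding)."""
    rule = set(rule)
    while len(rule) > max_conditions:
        rule.discard(rng.choice(sorted(rule)))
    return rule

def careful_crossover(parent_a, parent_b, max_conditions, rng=random):
    """'Careful' operator: build the child directly within the size
    limit, so no repair/pruning step is ever needed."""
    pool = sorted(set(parent_a) | set(parent_b))
    k = min(len(pool), rng.randint(1, max_conditions))
    return set(rng.sample(pool, k))
```

Both routes guarantee feasible rules; they differ only in whether the constraint is enforced by repair after variation or by construction during it.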