FOGA '23: Proceedings of the 17th ACM/SIGEVO Conference on Foundations of Genetic Algorithms

SESSION: Keynote

Analyzing the Fourier Representation of Permutation-Based Combinatorial Optimization Problems

Combinatorial optimization seeks to uncover efficient algorithms for solving complex problem instances. While achieving the ultimate goal of universal optimization remains a challenge, progress in this direction yields valuable insights for the field. A critical initial step involves taxonomizing problems and instances through a common representation. In this presentation, we employ the Fourier transform framework to investigate permutation-based combinatorial optimization problems. Specifically, we examine the Fourier coefficients of various special cases of the quadratic assignment problem, revealing their inherent characteristics. Leveraging this decomposition, we explore the transition of the linear ordering problem from being tractable (P) to becoming NP-hard, shedding light on the intricacies of this transformation. Through this analysis, we advance our understanding of permutation-based combinatorial optimization, paving the way for potential algorithmic breakthroughs.

Bridging Theory and Practice in Evolutionary Computation?

Evolutionary computation methods are successfully applied to solve a broad range of industrial and academic optimization problems. Most of these problems are far too complex to be analyzed analytically. Runtime analysis, a central topic in the theory of evolutionary computation, is therefore typically restricted to structurally simple artificial optimization tasks. In this presentation, I will discuss various ways in which we can nevertheless "bridge the gap" between theory and practice in evolutionary computation.

SESSION: Paper

Rigorous Runtime Analysis of Diversity Optimization with GSEMO on OneMinMax

Evolutionary diversity optimization aims at finding a diverse set of solutions which satisfy some constraint on their fitness. In the context of multi-objective optimization, this constraint can require solutions to be Pareto-optimal. In this paper we study how the GSEMO algorithm with an additional diversity-enhancing heuristic optimizes the diversity of its population on the bi-objective benchmark problem OneMinMax, for which all solutions are Pareto-optimal. We provide a rigorous runtime analysis of the last step of the optimization, when the algorithm starts with a population of second-best diversity, and prove that it finds a population with optimal diversity in expected time O(n^2) when the problem size n is odd. To reach this goal, we analyse the random walk of the population, which reflects the frequency of changes in the population and their outcomes.
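
For reference, the OneMinMax benchmark mentioned above can be sketched as follows (a minimal illustration, not code from the paper): both objectives are maximized, and every bit string is Pareto-optimal because the two objective values always sum to n.

```python
def one_min_max(x):
    """OneMinMax: maximize the number of zeros and the number of ones.

    Every bit string is Pareto-optimal, since the two objective
    values always sum to len(x)."""
    ones = sum(x)
    return (len(x) - ones, ones)

# Example: for n = 5 the Pareto front consists of the objective
# vectors (5, 0), (4, 1), ..., (0, 5).
print(one_min_max([1, 0, 1, 1, 0]))  # (2, 3)
```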

Curing ill-Conditionality via Representation-Agnostic Distance-Driven Perturbations

The objective value of an ill-conditioned function may change significantly with a minor shift of the argument in the search space. Ill-conditioned functions give few or no hints towards better solutions and are therefore usually difficult to optimize with randomized search heuristics. However, problems that emerge in practical applications are likely to be formulated as ill-conditioned functions, as the Euclidean metric is often used to measure distance in the search space. At the same time, it may be possible to use domain-specific knowledge to define a metric on the search space such that the function stops being ill-conditioned. We consider finite search spaces and propose two mutation operators that leverage such a metric to optimize the function more efficiently. The first operator assumes prior knowledge about the distance; the second uses the distance as a black box. Both operators apply an estimation-of-distribution algorithm to find the best mutant according to the defined function, which employs the given metric. For pseudo-Boolean and integer optimization problems, we experimentally show that both mutation operators speed up the search on most of the functions when used within the considered evolutionary algorithms and random local search. Moreover, these operators can be applied in any randomized search heuristic that uses perturbations. However, our mutation operators increase wall-clock time and are therefore helpful in practice when the distance is (much) cheaper to compute than the real objective function.

Finding Antimagic Labelings of Trees by Evolutionary Search

Randomized search heuristics can sometimes be effective verifiers for combinatorial conjectures. In this paper, we demonstrate how a simple evolutionary algorithm can be used to confirm the antimagic tree conjecture for all trees up to order 25. This conjecture, which has been open for over thirty years, states that every tree except K2 has an antimagic labeling: a bijective edge labeling such that the sum of labels assigned to edges incident to a vertex v is unique for all vertices v ∈ V. Moreover, we formally prove that simple evolutionary algorithms are guaranteed to find antimagic labelings in expected polynomial time on trees of any order for certain restricted classes (paths, combs, uniform caterpillars, uniform spiders and perfect binary trees).
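
To illustrate the definition above, a small verifier for antimagic labelings (a hedged sketch using the standard label set {1, ..., m}, not the paper's code):

```python
from collections import defaultdict

def is_antimagic_labeling(edges, labels):
    """Check whether `labels` (a dict mapping each edge to an integer)
    is an antimagic labeling: a bijection onto {1, ..., m} such that
    the sums of labels on edges incident to each vertex are pairwise
    distinct."""
    m = len(edges)
    if sorted(labels[e] for e in edges) != list(range(1, m + 1)):
        return False  # not a bijection onto {1, ..., m}
    vertex_sum = defaultdict(int)
    for (u, v) in edges:
        vertex_sum[u] += labels[(u, v)]
        vertex_sum[v] += labels[(u, v)]
    sums = list(vertex_sum.values())
    return len(sums) == len(set(sums))

# A path on 4 vertices (a tree) with an antimagic labeling:
edges = [(0, 1), (1, 2), (2, 3)]
labels = {(0, 1): 1, (1, 2): 3, (2, 3): 2}
print(is_antimagic_labeling(edges, labels))  # True: vertex sums 1, 4, 5, 2
```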

Using Automated Algorithm Configuration for Parameter Control

Dynamic Algorithm Configuration (DAC) tackles the question of how to automatically learn policies that control parameters of algorithms in a data-driven fashion. This question has received considerable attention from the evolutionary computation community in recent years. A good benchmark collection for gaining structural understanding of the effectiveness and limitations of different solution methods for DAC is therefore strongly desirable. Following recent work on DAC benchmarks with well-understood theoretical properties and ground-truth information, we suggest as a new DAC benchmark the control of the key parameter λ in the (1 + (λ, λ)) Genetic Algorithm for solving OneMax problems. We study how to solve this DAC problem via (static) automated algorithm configuration on the benchmark and propose techniques that significantly improve the performance of the approach. Our approach consistently outperforms the benchmark's default parameter-control policy, derived from previous theoretical work, for sufficiently large problem sizes. We also present new findings on the landscape of parameter-control policies and propose methods to compute stronger baselines for the benchmark via numerical approximations of the true optimal policies.
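
As a point of reference for the baseline mentioned above, theory for the (1 + (λ, λ)) GA on OneMax suggests fitness-dependent policies of the form λ ≈ √(n / (n − f(x))); the sketch below shows this form as an assumption for illustration, not as the benchmark's exact default policy.

```python
import math

def fitness_dependent_lambda(fitness, n):
    """Fitness-dependent choice of lambda of the form
    sqrt(n / (n - f(x))), rounded to an integer >= 1.
    Illustrative assumption, not the benchmark's exact baseline."""
    if fitness >= n:
        return n  # optimum already reached; the value no longer matters
    return max(1, round(math.sqrt(n / (n - fitness))))

# lambda stays small far from the optimum and grows close to it:
print([fitness_dependent_lambda(f, 100) for f in (0, 50, 90, 99)])  # [1, 1, 3, 10]
```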

Weighted Mutation of Connections To Mitigate Search Space Limitations in Cartesian Genetic Programming

This work presents and evaluates a novel modification to existing mutation operators for Cartesian Genetic Programming (CGP). We discuss and highlight a previously unstudied limitation of how CGP explores its search space, caused by certain nodes being inactive for long periods of time. Our new mutation operator is intended to avoid this by associating each node with a dynamically changing weight. When mutating a connection between nodes, these weights are used to bias the probability distribution in favour of inactive nodes. This way, inactive nodes have a higher probability of becoming active again. We incorporate our mutation operator into two variants of CGP and benchmark both versions on four Boolean learning tasks. We analyse the average number of iterations a node remains inactive and show that our modification has the intended effect on node activity. The influence of our modification on the number of iterations until a solution is reached is ambiguous when the same number of nodes is used as in the baseline without our modification. However, our results show that the new mutation operator requires fewer nodes for the same performance, which saves CPU time in each iteration.

First Steps Towards a Runtime Analysis of Neuroevolution

We consider a simple setting in neuroevolution where an evolutionary algorithm optimizes the weights and activation functions of a simple artificial neural network. We then define simple example functions to be learned by the network and conduct rigorous runtime analyses for networks with a single neuron and for a more advanced structure with several neurons and two layers. Our results show that the proposed algorithm is generally efficient on two example problems designed for one neuron and efficient with at least constant probability on the example problem for a two-layer network. In particular, the so-called harmonic mutation operator, which chooses steps of size j with probability proportional to 1/j, turns out to be a good choice for the underlying search space. However, for the case of one neuron, we also identify situations with hard-to-overcome local optima. Experimental investigations of our neuroevolutionary algorithm and a state-of-the-art CMA-ES support the theoretical findings.
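
The harmonic mutation operator mentioned above can be sketched as follows (assuming a bounded step range of size D and a uniformly chosen direction; how steps are applied to the weights is simplified here):

```python
import random

def harmonic_step(D):
    """Sample a step size j in {1, ..., D} with probability
    proportional to 1/j (the harmonic distribution)."""
    weights = [1.0 / j for j in range(1, D + 1)]
    return random.choices(range(1, D + 1), weights=weights, k=1)[0]

def harmonic_mutation(value, D):
    """Move a value by a harmonically distributed step in a uniformly
    chosen direction (illustrative sketch only)."""
    step = harmonic_step(D)
    return value + step if random.random() < 0.5 else value - step
```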

Runtime Analysis of a Co-Evolutionary Algorithm: Overcoming Negative Drift in Maximin-Optimisation

Co-evolutionary algorithms have found several applications in game-theoretic settings and optimisation problems with an adversary, particularly where the strategy space is discrete and exponentially large and where classical game-theoretic methods fail. However, applying co-evolutionary algorithms is difficult because they often display pathologies such as cyclic behaviour and evolutionary forgetting. These challenges have prevented the broad application of co-evolutionary algorithms.

We derive, via rigorous mathematical methods, bounds on the expected time of a simple co-evolutionary algorithm until it discovers a Maximin-solution of the pseudo-Boolean Bilinear problem. Despite the intransitive nature of the problem, which leads to cyclic behaviour of the algorithm, we prove that the algorithm obtains the Maximin-solution in expected time O(n^1.5).

However, we also show that the algorithm quickly forgets the Maximin-solution and moves away from it. This results in a large total regret of Θ(T n^1.5) after T iterations. Finally, we show that using a simple archive solves this problem, reducing the total regret significantly.

Along the way, we present new mathematical tools to compute the expected time for co-evolutionary algorithms to obtain a Maximin-solution. We are confident that these tools can help further advance runtime analysis in both co-evolutionary and evolutionary algorithms.

General Boolean Function Benchmark Suite

Just over a decade ago, the first comprehensive review of the state of benchmarking in Genetic Programming (GP) analyzed the mismatch between the problems used to test the performance of GP systems and real-world problems. Since then, several benchmark suites in major GP problem domains have been proposed, filling some of the major gaps. In that first review, logic synthesis (LS) was classified as one of the major GP problem domains; however, a diverse and accessible benchmark suite for LS is still missing. In this work, we propose a benchmark suite for LS that covers different types of Boolean functions commonly used in the field of GP. We analyze the complexity of the proposed benchmark using popular complexity measures that are commonly used to classify and characterize Boolean functions and digital circuits.

Fixed-Parameter Tractability of the (1 + 1) Evolutionary Algorithm on Random Planted Vertex Covers

We present the first parameterized analysis of a standard (1+1) Evolutionary Algorithm on a distribution of vertex cover problems. We show that if the planted cover is at most logarithmic in size, restarting the (1+1) EA every O(n log n) steps finds a cover at least as small as the planted cover in polynomial time for sufficiently dense random graphs (edge probability p > 0.71). For superlogarithmic planted covers, we prove that the (1+1) EA finds a solution in expected fixed-parameter tractable time.

We complement these theoretical investigations with a number of computational experiments that highlight the interplay between planted cover size, graph density and runtime.
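
The restart scheme described above can be sketched as follows (a hedged illustration, not the paper's implementation; the penalty-based fitness function and the restart constant are assumptions):

```python
import math
import random

def vertex_cover_fitness(x, edges, n):
    """Penalize uncovered edges heavily, then minimize cover size
    (a standard penalty formulation, assumed here)."""
    uncovered = sum(1 for (u, v) in edges if not (x[u] or x[v]))
    return uncovered * (n + 1) + sum(x)

def one_plus_one_ea_with_restarts(edges, n, budget):
    """(1+1) EA with standard bit mutation, restarted from a fresh
    random individual every ~n*log(n) steps (constant factor 1 chosen
    arbitrarily for illustration)."""
    restart_every = int(n * math.log(n)) + 1
    best, best_f = None, float("inf")
    steps = 0
    while steps < budget:
        x = [random.randint(0, 1) for _ in range(n)]
        fx = vertex_cover_fitness(x, edges, n)
        for _ in range(restart_every):
            steps += 1
            y = [b ^ (random.random() < 1.0 / n) for b in x]
            fy = vertex_cover_fitness(y, edges, n)
            if fy <= fx:
                x, fx = y, fy
            if fx < best_f:
                best, best_f = list(x), fx
    return best, best_f
```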

Self-adaptation Can Improve the Noise-tolerance of Evolutionary Algorithms

Real-world optimisation often involves uncertainty. Previous studies proved that evolutionary algorithms (EAs) can be robust to noise when using proper parameter settings, including the mutation rate. However, finding an appropriate mutation rate is challenging if the occurrence of noise (or the noise level) is unknown. Self-adaptation is a parameter control mechanism which adjusts mutation rates by encoding them in the genomes of individuals and evolving them. It has been proven effective in optimising problems with unknown structure and multi-modal problems. Despite this, a rigorous study of self-adaptation in noisy optimisation is missing. This paper mathematically analyses the runtimes of 2-tournament EAs with self-adaptation of two mutation rates, with fixed mutation rates, and with a mutation rate chosen uniformly from two given rates, on LeadingOnes with and without symmetric noise. The results show that self-adaptation achieves the lowest runtime regardless of the presence of symmetric noise. In supplemental experiments, we extend the analyses to other types of noise, i.e., one-bit and bit-wise noise. We also consider another self-adaptation mechanism, which adapts the mutation rate within a given interval. Self-adaptive EAs adapt their mutation rate to the noise level and outperform static EAs in these experiments. Overall, self-adaptation can improve the noise-tolerance of EAs in the noise models studied here.
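
A minimal sketch of the self-adaptation mechanism with two mutation rates (the switching probability and the rate values are placeholders, not the parameters analysed in the paper):

```python
import random

def leading_ones(x):
    """LeadingOnes: number of consecutive 1-bits at the start of x."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def self_adaptive_mutation(individual, rates, switch_prob=0.5):
    """An individual is a pair (bitstring, mutation_rate). The offspring
    first possibly switches its encoded rate to the other of the two
    given rates, then flips each bit with the (new) rate."""
    x, rate = individual
    if random.random() < switch_prob:
        rate = rates[0] if rate == rates[1] else rates[1]
    y = [b ^ (random.random() < rate) for b in x]
    return (y, rate)

def two_tournament_step(population, noisy_fitness, rates):
    """Pick two individuals uniformly at random, keep the one with the
    better (possibly noisy) fitness, and return a mutated copy of it."""
    a, b = random.sample(population, 2)
    winner = a if noisy_fitness(a[0]) >= noisy_fitness(b[0]) else b
    return self_adaptive_mutation(winner, rates)
```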

Convergence Properties of the (μ/μ_I, λ)-ES on the Rastrigin Function

The highly multimodal Rastrigin test function is analyzed by deriving a new aggregated progress rate measure. It is derived as a function of the residual distance to the optimizer by assuming normally distributed positional coordinates around the global optimizer. This assumption is justified for successful ES runs operating with sufficiently slow step-size adaptation. The measure enables the investigation of further convergence properties. For moderately large mutation strengths, a characteristic distance-dependent Rastrigin noise floor is derived. For small mutation strengths, local attraction is analyzed and an escape condition is established. Both mutation strength regimes combined pose a major challenge when optimizing the Rastrigin function, which can be counteracted by increasing the population size. Hence, a population scaling relation for achieving high global convergence rates is derived, which shows good agreement with experimental data.
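
For reference, the Rastrigin test function in its common form with amplitude A = 10 (a standard definition, shown for context rather than taken from the paper):

```python
import math

def rastrigin(x, A=10.0):
    """Rastrigin function: f(x) = A*n + sum(x_i^2 - A*cos(2*pi*x_i)).
    Highly multimodal, with the global minimum f(0) = 0 at the origin."""
    return A * len(x) + sum(xi * xi - A * math.cos(2 * math.pi * xi) for xi in x)

print(rastrigin([0.0, 0.0, 0.0]))  # 0.0 at the global optimizer
```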

Neural Networks as Black-Box Benchmark Functions Optimized for Exploratory Landscape Features

Artificial benchmark functions are commonly used in optimization research because of their ability to rapidly evaluate potential solutions, making them a preferred substitute for real-world problems. However, these benchmark functions have faced criticism for their limited resemblance to real-world problems. In response, recent research has focused on automatically generating new benchmark functions for areas where established test suites are inadequate. These approaches have limitations, such as the difficulty of generating new benchmark functions that exhibit exploratory landscape analysis (ELA) features beyond those of existing benchmarks.

The objective of this work is to develop a method for generating benchmark functions for single-objective continuous optimization with user-specified structural properties. Specifically, we aim to demonstrate a proof of concept for a method that uses an ELA feature vector to specify these properties in advance. To achieve this, we begin by generating a random sample of decision space variables and objective values. We then adjust the objective values using CMA-ES until the corresponding features of our new problem match the predefined ELA features within a specified threshold. By iteratively transforming the landscape in this way, we ensure that the resulting function exhibits the desired properties. To create the final function, we use the resulting point cloud as training data for a simple neural network that produces a function exhibiting the target ELA features. We demonstrate the effectiveness of this approach by replicating the existing functions of the well-known BBOB suite and creating new functions with ELA feature values that are not present in BBOB.
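
A compressed sketch of the iterative adjustment step described above (assuming the `cma` package for CMA-ES; `compute_ela_features` is a hypothetical placeholder for an ELA feature computation such as the one provided by pflacco):

```python
import numpy as np
import cma  # pip install cma

def match_ela_features(X, y_init, target_features, compute_ela_features,
                       sigma0=0.3, threshold=0.05):
    """Adjust the objective values y of a fixed sample X with CMA-ES
    until the ELA features of (X, y) are close to the target feature
    vector. The resulting point cloud can then serve as training data
    for a small neural network realizing the target landscape.

    `compute_ela_features(X, y)` is assumed to return a NumPy vector."""
    def feature_distance(y):
        return float(np.linalg.norm(compute_ela_features(X, np.asarray(y))
                                    - target_features))

    es = cma.CMAEvolutionStrategy(list(y_init), sigma0)
    while not es.stop():
        candidates = es.ask()
        es.tell(candidates, [feature_distance(c) for c in candidates])
        if feature_distance(es.result.xbest) < threshold:
            break
    return np.asarray(es.result.xbest)
```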

First Complexity Results for Evolutionary Knowledge Transfer

The field of evolutionary knowledge transfer (EKT) has recently begun to systematically develop algorithms that exploit a number of related problem instances to accelerate problem solving on difficult optimization tasks. EKT has the potential to have a major impact on evolutionary computation practice, comparable to the role that neural network pretraining has had in machine learning. But the community has only scratched the surface of the theoretical hurdles that the knowledge-transfer workflow raises. We introduce a three-part collect-select-exploit framework for understanding EKT, which we use to highlight the need for better evaluation and benchmarking approaches for transfer. We then present some of the first analytical results for EKT, proving no-free-lunch theorems for transfer and proving what is (to our knowledge) the first asymptotic runtime result for transfer optimization. These results can serve as the basis for future research into the strengths and limitations of knowledge transfer as an optimization paradigm, and they emphasize the need for more comprehensive benchmarks to gauge progress in the field.

Partition Crossover can Linearize Local Optima Lattices of k-bounded Pseudo-Boolean Functions

When Partition Crossover is used to recombine two parents that are local optima, the offspring are all local optima in the smallest hyperplane subspace that contains the two parents. The offspring can also be organized into a non-planar hypercube "lattice." Furthermore, all of the offspring can be evaluated using a simple linear equation. When a child of Partition Crossover is a local optimum in the full search space, the linear equation exactly determines its evaluation. When a child of Partition Crossover can be improved by local search, the linear equation is an upper bound on the evaluation of the associated local optimum when minimizing. This theoretical result holds for all k-bounded pseudo-Boolean optimization problems, including MAX-kSAT and QUBO problems, as well as random and adjacent NK landscapes. These linear equations provide a stronger explanation of why the "Big Valley" distribution of local optima exists. We fully enumerate a sample of NK landscapes to collect frequency information that complements our theoretical results. We also introduce new algorithmic contributions that 1) expand smaller lattices in order to find larger lattices containing additional local optima, and 2) provide an efficient method to find new improving moves in lattices using score vectors.