GECCO '21: Proceedings of the Genetic and Evolutionary Computation Conference Companion


SESSION: Competition: Evolutionary computation in the energy domain: Smart grid applications

Ring cellular encode-decode UMDA: simple is effective

The management of today's smart grids is a challenge due to the stochastic nature of renewable energies, vehicle-to-grid functionalities, the participation of external providers, and the bidding in local and external markets. For this reason, the "Competition on Evolutionary Computation in the Energy Domain: Smart Grid Applications", with a framework to test algorithms for these problems, has been launched each year since 2017. In this paper, a novel algorithm named Ring Cellular Encode-Decode UMDA (RCED-UMDA) is introduced. The experimental results show that RCED-UMDA achieves better average fitness on both tracks of the 2021 edition of this competition than the algorithms from previous competitions.

Cooperative co-evolution strategies with time-dependent grouping for optimization problems in smart grids

The rapid development of smart grids provides end-users with numerous services, such as energy transactions and load management. Meanwhile, it raises new optimization problems worth exploring. To solve these problems, cooperative co-evolution strategies with time-dependent grouping are proposed in this paper. The proposed algorithms are applied to the GECCO 2021 competition on evolutionary computation in the energy domain and compared with five advanced methods.

SESSION: Competition: Optimal camera placement problem (OCP) and the unicost set covering problem (USCP)

Exact and approximate USCP with branch and bound

We propose a parallel solver for the unicost set covering problem based on a branch-and-bound approach. The main contributions of this algorithm lie in its fast parallel execution and its ability to work in an approximate mode. We demonstrate the proposed approach on the problem instances from the GECCO 2021 unicost set covering competition. To tackle the presented problem instances of varying difficulty, we use automatic tuning of the algorithm's parameters. The results show that all the instances can be solved, but performance remains weak on large instances.

SESSION: Competition: Open optimization competition 2021: Competition and benchmarking of sampling-based optimization algorithms

Benchmarking gradient-free optimizers for 3D performance capture in the nevergrad platform

In this paper, we document our work extending the nevergrad platform with a real-world benchmark at the intersection of computer vision and graphics: 3D Performance Capture. Detailed documentation can be found at vcl3d.github.io/nevergrad/

Robust benchmarking for multi-objective optimization

The performance assessment of multi-objective optimization algorithms is a crucial task for investigating their behaviour. However, the quality indicators and statistical techniques selected for comparison studies can have a huge impact on the study results. A quality indicator transforms high-dimensional data (an approximation set) into one-dimensional data (a quality indicator value), with a potential loss of high-dimensional information in the transformation. Comparison approaches typically involve a single quality indicator or an ensemble of quality indicators to address more quality criteria, which are predefined by the user. To provide more robust benchmarking for multi-objective optimization, we extended the DSCTool with three approaches that are ensembles of quality indicators and one novel approach that compares the high-dimensional distributions of the approximation sets and reduces the user's preference in the selection of quality indicators. The approaches are provided as web services for robust ranking and hypothesis testing, including a proper selection of an omnibus statistical test and post-hoc tests if needed.
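
As a concrete illustration of the first step such an assessment relies on, the sketch below computes one common quality indicator, the hypervolume, for a two-objective minimisation problem; the front and reference point are made-up examples, and this is not the DSCTool interface.

    # Minimal sketch: the hypervolume indicator maps a 2D approximation set
    # (minimisation) to a single scalar. Points and reference are invented.
    def hypervolume_2d(front, ref):
        pts = sorted(front)                   # sort by first objective
        nd, best_f2 = [], float("inf")
        for f1, f2 in pts:
            if f2 < best_f2:                  # keep only nondominated points
                nd.append((f1, f2))
                best_f2 = f2
        hv, prev_f2 = 0.0, ref[1]
        for f1, f2 in nd:                     # sweep, adding one box per point
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
        return hv

    front = [(1.0, 4.0), (2.0, 2.5), (3.5, 1.0)]
    print(hypervolume_2d(front, ref=(5.0, 5.0)))   # 10.75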

SESSION: Competition: Bound constrained single objective numerical optimization

SOMA-CLP for competition on bound constrained single objective numerical optimization benchmark: a competition entry on bound constrained single objective numerical optimization at the genetic and evolutionary computation conference (GECCO) 2021

This paper presents an entry for the competition on bound constrained single objective numerical optimization at The Genetic and Evolutionary Computation Conference (GECCO) 2021: a novel algorithm titled Self-organizing Migrating Algorithm with CLustering-aided migration and adaptive perturbation vector control (SOMA-CLP).

SESSION: Competition: Optimization of a simulation model for a capacity and resource planning task for hospitals under special consideration of the COVID-19 pandemic

Hospital simulation model optimisation with a random ReLU expansion surrogate model

The industrial challenge of the GECCO 2021 conference is an expensive optimisation problem, where the parameters of a hospital simulation model need to be tuned to optimality. We show how a surrogate-based optimisation framework, with a random ReLU expansion as the surrogate model, outperforms other methods such as Bayesian optimisation, Hyperopt, and random search on this problem.
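
A minimal sketch of what a random ReLU expansion surrogate can look like (assumed form: random hidden features z = max(0, Wx + b) with a ridge-regularised linear read-out; the authors' implementation may differ):

    # Random ReLU feature expansion + linear least squares as a surrogate.
    # The "simulator" below is a stand-in function, not the hospital model.
    import numpy as np

    rng = np.random.default_rng(5)
    d, m = 6, 200                                 # input dim, random features
    W = rng.normal(size=(m, d))
    b = rng.normal(size=m)

    def features(X):
        return np.maximum(0.0, X @ W.T + b)       # random ReLU expansion

    X = rng.uniform(-1, 1, size=(80, d))          # evaluated simulator inputs
    y = np.sin(X).sum(axis=1)                     # stand-in simulator output

    Z = features(X)
    coef = np.linalg.solve(Z.T @ Z + 1e-3 * np.eye(m), Z.T @ y)

    x_new = rng.uniform(-1, 1, size=(1, d))
    print("surrogate prediction:", features(x_new) @ coef)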

Surrogate-based optimisation for a hospital simulation scenario using pairwise classifiers

This paper presents a surrogate-based approach that uses a relatively simple population-based optimisation algorithm, a basic Differential Evolution (DE) algorithm, and experiments with two complementary approaches to constructing a surrogate. This surrogate-based optimisation uses a predictive model in-line and decides whether to evaluate a candidate individual (using the simulation model) or discard it as part of the optimisation process. The complementary approaches for the design of the surrogate are (1) a traditional regression-based surrogate that approximates the surface of the fitness landscape using a supervised continuous machine learning algorithm (XGBoost), and (2) a pairwise approach that models the surrogate as a binary classification problem for a machine learning algorithm (in this experiment, a Decision Tree binary classifier). Although there is no statistical difference in the performance of the two surrogate approaches, the surface/regression one obtains slightly better performance when the execution is limited to 200 fitness evaluations. In contrast, the pairwise/classification approach obtains the lowest value and a lower mean in executions with 750 fitness evaluations.
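
The pairwise idea can be sketched as follows (hypothetical data and a quadratic stand-in for the simulator; this is not the paper's code): a binary classifier learns "is candidate A better than candidate B?" from already-evaluated points and then screens new DE trial vectors against the current best before the expensive simulation is run.

    # Pairwise-classification surrogate sketch (illustrative only).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.uniform(-5, 5, size=(60, 4))          # evaluated candidates
    y = np.sum(X**2, axis=1)                      # stand-in for simulation cost

    # Pairwise training data: features = concat(A, B), label = f(A) < f(B).
    pairs, labels = [], []
    for i in range(len(X)):
        for j in range(len(X)):
            if i != j:
                pairs.append(np.concatenate([X[i], X[j]]))
                labels.append(int(y[i] < y[j]))
    clf = DecisionTreeClassifier(max_depth=6).fit(pairs, labels)

    best = X[np.argmin(y)]
    trial = rng.uniform(-5, 5, size=4)            # a DE trial vector
    if clf.predict([np.concatenate([trial, best])])[0] == 1:
        print("send trial to the simulation model")
    else:
        print("discard trial without simulating")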

An evolutionary and neighborhood-based algorithm for optimization under low budget requirements

Noisy optimization problems pose several challenges to optimization algorithms under limited computational resources. The algorithm must balance the need to explore the search space against the need to exploit promising regions of this space. In this work we describe an algorithm that combines previous ideas to tackle the GECCO 2021 industrial challenge using an exploration and an exploitation step. The first step combines an evolutionary algorithm with an experimental design, while the second is a neighborhood search embedded within a multi-armed ε-greedy approach. The resulting algorithm is applicable not only to the challenge but also to more general problems with varying conditions.
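
The exploitation step's arm selection can be sketched as follows (a minimal ε-greedy rule over neighborhood operators; the operators and rewards are placeholders, not the paper's):

    # ε-greedy choice among neighborhood operators (illustrative sketch).
    import random

    def eps_greedy_step(arms, stats, eps=0.1):
        pulls_so_far = sum(stats[a][0] for a in arms)
        if random.random() < eps or pulls_so_far == 0:
            return random.choice(arms)            # explore
        # Exploit: arm with the best mean observed improvement so far.
        return max(arms, key=lambda a: stats[a][1] / max(stats[a][0], 1))

    arms = ["swap", "shift", "gaussian_perturb"]  # hypothetical operators
    stats = {a: [0, 0.0] for a in arms}           # pulls, cumulative reward
    for _ in range(100):
        arm = eps_greedy_step(arms, stats)
        reward = random.random()                  # stand-in for fitness gain
        stats[arm][0] += 1
        stats[arm][1] += reward
    print({a: round(s[1] / max(s[0], 1), 3) for a, s in stats.items()})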

Linear regression strategy for differential evolution

A new Differential Evolution algorithm is proposed that incorporates a linear regression strategy to find the best single-objective optimization solution. Furthermore, this algorithm incorporates directional information and classical chromosome conservation strategies. Its main motivation is use in problems for which it is computationally expensive to calculate the fitness differences among solutions.
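
One plausible reading of such a strategy (assumed details for illustration, not the paper's exact operator) is to fit a linear model fitness ≈ w·x + b on the current population and use -w as an estimated improving direction when building a trial vector:

    # Regression-guided move inside a DE-style loop (illustrative sketch).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    pop = rng.uniform(-5, 5, size=(20, 3))        # current population
    fit = np.sum((pop - 1.0) ** 2, axis=1)        # toy fitness (minimise)

    w = LinearRegression().fit(pop, fit).coef_
    direction = -w / (np.linalg.norm(w) + 1e-12)  # estimated descent direction

    i = rng.integers(len(pop))
    trial = pop[i] + 0.5 * direction              # F = 0.5 scale factor
    print("trial vector:", trial)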

SESSION: Hot off the press

Do quality indicators prefer particular multi-objective search algorithms in search-based software engineering?: (hot off the press track at GECCO 2021)

In Search-based Software Engineering (SBSE), researchers and practitioners (SBSE users) using multi-objective search algorithms (MOSAs) often select commonly used MOSAs to solve their search problems. Such a selection is usually not justified, and the main selection criterion is a MOSA's popularity. On the other hand, SBSE users are usually aware of the desired qualities of solutions to their search problem, captured by Quality Indicators (QIs). Consequently, to guide SBSE users in selecting MOSAs for their specific SBSE problems, we study preference relationships between QIs and MOSAs with an empirical evaluation. Given a QI or a quality aspect (e.g., convergence), we suggest a MOSA that is highly likely to produce solutions representing that QI or quality aspect. Based on our experimental results, we provide insights and suggestions for SBSE users to choose a MOSA based on experimental evidence.

This is an extended abstract of the paper [2]: S. Ali, P. Arcaini, and T. Yue, "Do Quality Indicators Prefer Particular Multi-Objective Search Algorithms in Search-Based Software Engineering?", 12th International Symposium on Search-Based Software Engineering (SSBSE 2020).

Runtime analysis via symmetry arguments: (hot-off-the-press track at GECCO 2021)

We use an elementary argument building on group actions to prove that the selection-free steady state genetic algorithm analyzed by Sutton and Witt (GECCO 2019) takes an expected number of Ω(2^n/√n) iterations to find any particular target search point. This bound is valid for all population sizes μ. Our result improves and extends the previous lower bound of Ω(exp(n^{δ/2})), valid for population sizes μ = O(n^{1/2-δ}), 0 < δ < 1/2. This paper for the Hot-off-the-Press track at GECCO 2021 summarizes the work: Benjamin Doerr. Runtime Analysis of Evolutionary Algorithms via Symmetry Arguments. Information Processing Letters, 166:106064, 2021. [5].

Theoretical analyses of multi-objective evolutionary algorithms on multi-modal objectives: (hot-off-the-press track at GECCO 2021)

Previous theory work on multi-objective evolutionary algorithms considers mostly easy problems that are composed of unimodal objectives. This paper takes a first step towards a deeper understanding of how evolutionary algorithms solve multi-modal multi-objective problems. We propose the OneJumpZeroJump problem, a bi-objective problem whose single objectives are isomorphic to the classic jump function benchmark. We prove that the simple evolutionary multi-objective optimizer (SEMO) cannot compute the full Pareto front. In contrast, for all problem sizes n and all jump sizes k ∈ [4..n/2 - 1], the global SEMO (GSEMO) covers the Pareto front in Θ((n-2k)n^k) iterations in expectation. To improve the performance, we combine the GSEMO with two approaches, a heavy-tailed mutation operator and a stagnation detection strategy, that showed advantages in single-objective multi-modal problems. Runtime improvements of asymptotic order at least k^{Ω(k)} are shown for both strategies. Our experiments verify the substantial runtime gains already for moderate problem sizes. Overall, these results show that the ideas recently developed for single-objective evolutionary algorithms can be effectively employed also in multi-objective optimization.
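
For context, the classic jump benchmark that both objectives are isomorphic to is commonly defined as follows (the standard definition from the runtime analysis literature; the bi-objective OneJumpZeroJump construction itself is given in the cited paper):

    \mathrm{Jump}_{n,k}(x) =
      \begin{cases}
        k + \|x\|_1 & \text{if } \|x\|_1 \le n - k \text{ or } \|x\|_1 = n,\\
        n - \|x\|_1 & \text{otherwise,}
      \end{cases}
    \qquad x \in \{0,1\}^n,

so a maximising algorithm must cross a fitness valley of width k to reach the optimum.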

This Hot-off-the-Press paper summarizes "Theoretical Analyses of Multi-Objective Evolutionary Algorithms on Multi-Modal Objectives" by B. Doerr and W. Zheng, which has been accepted for publication in AAAI 2021 [9].

Reducing bias in multi-objective optimization benchmarking

The performance assessment of multi-objective optimization algorithms involves a user-preference-based selection of a single quality indicator used as a performance measure. A single quality indicator maps the approximation set (i.e., high-dimensional data) into a real value (i.e., one-dimensional data). However, it is well known that the selection of the quality indicator can have a huge impact on the benchmarking conclusions. This invites researchers to present results only for quality indicators that favor the desired algorithm, i.e., to perform biased performance assessment. To go beyond this, we propose a novel ranking scheme that reduces the bias in the user-preference selection by comparing the high-dimensional data of approximation sets, and consequently provides more robust statistical results. The selection of a quality indicator is required only in cases where the high-dimensional distributions of the approximation sets differ. Experimental results show that such analyses reduce the number of cases affected by the user-preference selection.

Optimal recombination and adaptive restarts improve GA performance on the asymmetric TSP

We propose a new genetic algorithm (GA) with optimal recombination and adaptive restarts for the asymmetric travelling salesman problem (ATSP). The optimal recombination problem (ORP) is solved in a crossover operator based on a new exact algorithm that solves the ATSP on cubic digraphs. We use an adaptive restart rule based on the Schnabel census estimate. A computational experiment on known benchmark instances shows that the proposed algorithm yields results competitive with those of state-of-the-art algorithms for the ATSP and confirms that the ORP may be used successfully in genetic algorithms.

This abstract for the Hot-off-the-Press track of GECCO 2021 summarizes work that appeared in: A. V. Eremeev and Yu. V. Kovalenko, "A memetic algorithm with optimal recombination for the asymmetric traveling salesman problem", Memetic Computing 12, 23-36 (2020).

Genetic improvement of data for maths functions

Genetic Improvement (GI) can be used to produce better-quality software and to create new functionality.

We show that GI can evolve the PowerPC open source GNU C runtime library square root function into cube root, binary logarithm log2, and reciprocal square root. The GI cbrt is competitive in run-time performance, and our inverted square root x^{-1/2} is far more accurate than the approximation used in the Quake video game. We use CMA-ES to adapt constants in a Newton-Raphson table, originally from glibc's sqrt, for other double-precision mathematics functions. Such automatically customised math libraries might be used for mobile or low-resource systems: IoT, motes, smart dust, and bespoke cyber-physical systems.
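
The mechanism can be sketched as follows (a toy random search stands in for CMA-ES, and the iteration below is the textbook Newton step for cube roots; this is not glibc's actual table or code):

    # Tuning a Newton-Raphson starting constant for cbrt (illustrative).
    import random

    def cbrt_newton(a, c0, steps=4):
        x = c0 * a                                # tunable initial guess
        for _ in range(steps):
            x = (2.0 * x + a / (x * x)) / 3.0     # Newton step for x^3 = a
        return x

    def error(c0, samples):
        return max(abs(cbrt_newton(a, c0) - a ** (1.0 / 3.0)) for a in samples)

    samples = [0.5, 1.0, 2.0, 5.0, 8.0]           # positive inputs only
    best = min((random.uniform(0.1, 1.0) for _ in range(2000)),
               key=lambda c0: error(c0, samples))
    print("tuned constant:", round(best, 4), "max error:", error(best, samples))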

Evolutionary Computing (EC) can be used to adapt not only source code but also data, such as numerical constants, and could enable a new way to conduct software data maintenance. This is an exciting opportunity for the GECCO and optimisation communities.

Achieving weight coverage for an autonomous driving system with search-based test generation (HOP track at GECCO 2021)

Autonomous Driving Systems (ADSs) are complex critical systems that must be thoroughly tested. Still, assessing the strength of tests for ADSs is an open and complex problem. Weight Coverage is a test criterion targeting ADSs which are based on a weighted cost function. It measures how much each weight, related to different aspects of the ADS's decision process, is involved in the decisions taken in a test scenario. All weights/aspects should be involved for a strong test suite. Although weight coverage can measure the quality of a test suite, it does not provide clear guidance for test generation. This work proposes weight coverage-driven search-based test generation for ADSs. It describes and compares three designs of the search process: a single-objective search aiming at generating a test covering a single weight; a multi-objective search where each objective targets a different weight; and a single-objective search where the fitness function is an aggregate function representing the coverage over multiple weights. Experiments using an ADS system provided by our industry partner show the validity of the method and provide insights into the benefits of each search design. This Hot-off-the-Press paper summarises the paper [2]: T. Laurent, P. Arcaini, F. Ishikawa and A. Ventresque, "Achieving Weight Coverage for an Autonomous Driving System with Search-based Test Generation", 25th International Conference on Engineering of Complex Computer Systems (ICECCS 2020).

Genetic improvement of routing in delay tolerant networks

Routing plays a fundamental role in network applications, but it is especially challenging in Delay Tolerant Networks (DTNs). These are a kind of mobile ad hoc network made of, e.g., (possibly unmanned) vehicles and humans where, despite a lack of continuous connectivity, data must be transmitted while the network conditions change due to the nodes' mobility. In these contexts, routing is NP-hard and is usually solved by heuristic "store and forward" replication-based approaches, which however produce relatively low delivery probabilities. Here, we genetically improve two routing protocols widely adopted in DTNs, namely Epidemic and PRoPHET, in an attempt to optimize their delivery probability. First, we dissect them into their fundamental components, i.e., functionalities such as checking whether a node can transfer data, or sending messages to all connections. Then, we apply Genetic Improvement (GI) to manipulate these components as terminal nodes of evolving trees. We apply this methodology, in silico, to six test cases of urban networks made of hundreds of nodes, and find that GI produces consistent gains in delivery probability in four cases.

Interactive parameter tuning of bi-objective optimisation algorithms using the empirical attainment function

We propose a visual approach to eliciting preferences from a Decision Maker (DM) in the context of comparing the stochastic outcomes of two alternative designs or parameter configurations of an optimization algorithm for bi-objective problems. Our proposal is based on visualizing the differences between the empirical attainment functions (EAFs) of the two alternative algorithmic configurations, and then asking the DM to choose their preferred side of the differences. Information about the regions preferred by the DM is translated into a weighted hypervolume indicator that assigns higher quality values to approximation fronts that produce EAF differences preferred by the DM. This indicator may be used to guide an automatic algorithm configuration method, such as irace, to search for parameter values that perform better in the objective-space regions preferred by the DM. Experiments on the well-known bi-objective quadratic assignment problem and a real-world production planning problem arising in the manufacturing industry show the benefits of the proposal.

This manuscript for the Hot-off-the-Press track at GECCO 2021 is based on the paper "Incorporating Decision-Maker's Preferences into the Automatic Configuration of Bi-Objective Optimisation Algorithms", published in the European Journal of Operational Research [6].

The influence of uncertainties on optimization of vaccinations on a network of animal movements

This summary presents the results reported in the article Krzysztof Michalak and Mario Giacobini, "The influence of uncertainties on optimization of vaccinations on a network of animal movements", which studies the influence of uncertainties on the effectiveness of vaccination schemes obtained using evolutionary algorithms and vaccination strategies. In this work, the uncertainties are represented as unknown disease cases and as changes introduced to the network of contacts by a rewiring mechanism. The experiments show that evolutionary algorithms outperform vaccination strategies when the provided information is accurate, that is, when most disease cases are known and the intensity of rewiring is low. With higher levels of uncertainty, the strategies produce better results. The results presented in the article motivate further work in several areas: modelling and prediction of changes in the contact network, development of computational methods for estimating the number of initial disease cases, and hybridization of evolutionary optimizers with vaccination strategies and other knowledge-based approaches.

Multi-objective parameter-less population pyramid in solving the real-world and theoretical problems

Many real-world problems are notoriously multi-objective and NP-hard. Hence, there is a constant striving for optimizers capable of solving such problems effectively. In this paper, we examine the Multi-Objective Parameter-less Population Pyramid (MO-P3). MO-P3 is based on the Parameter-less Population Pyramid (P3), which was dedicated to solving single-objective problems. P3 employs linkage learning to decompose the problem and uses this information during its run. P3 maintains many different linkage information sets, which is key to effectively solving problems of an overlapping nature, i.e., problems whose variables form a large and complicated network of dependencies rather than additively separable blocks. MO-P3 inherits the features of its predecessor and employs both linkage learning and linkage diversity maintenance to effectively solve hard multi-objective problems, including both well-known test problems and NP-hard real-world problems.

On sampling error in evolutionary algorithms

The initial population in evolutionary algorithms (EAs) should form a representative sample of all possible solutions (the search space). While large populations accurately approximate the distribution of possible solutions, small populations tend to incorporate a sampling error. A low sampling error at initialization is necessary (but not sufficient) for a reliable search, since it reduces the overall random variation in a random sample. For this reason, we have recently presented a model to determine a minimum initial population size such that the sampling error is lower than a threshold, given a confidence level. Our model allows practitioners of, for example, genetic programming (GP) and other EA variants to estimate a reasonable initial population size.
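
The flavor of calculation such a model supports can be illustrated with a textbook normal-approximation bound (an illustration only, not the authors' model): the smallest sample size n such that the observed frequency of a binary solution property deviates from its true frequency p by at most err, at a given confidence level.

    # Illustrative sample-size bound for a binary property (not the paper's model).
    import math

    def min_population_size(p=0.5, err=0.05, confidence=0.95):
        z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}[confidence]
        return math.ceil(z**2 * p * (1 - p) / err**2)

    print(min_population_size())                  # 385 samples for +/-5% at 95%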

Improving assertion oracles with evolutionary computation

Assertion oracles are executable boolean expressions placed inside a software program that verify the correctness of test executions. A perfect assertion oracle passes (returns true) for all correct executions and fails (returns false) for all incorrect executions. Because designing perfect assertion oracles is difficult, assertions often fail to distinguish between correct and incorrect executions. In other words, they are prone to false positives and false negatives.

GAssert is the first technique to automatically improve assertion oracles by reducing false positives and false negatives. Given an assertion oracle and a set of correct and incorrect program states, GAssert employs a novel co-evolutionary algorithm that explores the space of possible assertions to identify one with fewer false positives and false negatives. Our evaluation on 34 Java methods shows that GAssert effectively improves assertion oracles.

Analysis of evolutionary algorithms on fitness function with time-linkage property (hot-off-the-press track at GECCO 2021)

In real-world applications, many optimization problems have the time-linkage property, that is, the objective function value relies on the current solution as well as on historical solutions. Although rigorous theoretical analysis of evolutionary algorithms has developed rapidly over the last two decades, it remains an open problem to theoretically understand the behaviors of evolutionary algorithms on time-linkage problems. This paper takes the first step towards rigorous analyses of evolutionary algorithms for time-linkage functions. Based on the basic OneMax function, we propose a time-linkage function in which the first bit value of the previous time step is integrated into the objective, but with a different preference from the current first bit. We prove that with probability 1 - o(1), randomized local search and the (1 + 1) EA cannot find the optimum, and with probability 1 - o(1), the (μ + 1) EA is able to reach the optimum.

This paper for the Hot-off-the-Press track at GECCO 2021 summarizes the work "Analysis of Evolutionary Algorithms on Fitness Function with Time-linkage Property" by W. Zheng, H. Chen, and X. Yao, which has been accepted for publication in the IEEE Transactions on Evolutionary Computation 2021 [19].

SESSION: Late-breaking abstracts

An improved predictor of daily stock index based on a genetic filter

In this study, genetic algorithm-based filter feature selection was applied to data on the rates of change of various economic indices used worldwide to predict the rate of change of the Korea Composite Stock Price Index (KOSPI). The fitness function combines the results of information gain, F-tests, and correlation coefficients. Data for the past 12 years (2007 to 2018) were divided into sections by yearly time intervals, and feature selection was applied to the data in each section. The calculation time required by the genetic filter was approximately 75% lower than that required by a previous genetic wrapper, and the mean absolute error of the genetic filter was approximately 26% lower than that of the genetic wrapper. These results verify that the genetic filter exhibits better computational performance and requires less time to find the optimal solution. Furthermore, a new combination of features related to the KOSPI was obtained.
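
A filter-style fitness of this kind can be sketched as follows (hypothetical weighting and synthetic data; the paper's exact combination is not reproduced). The scores are precomputed once, so evaluating a chromosome requires no model training:

    # Filter fitness combining F-test, mutual information, and correlation.
    import numpy as np
    from sklearn.feature_selection import f_regression, mutual_info_regression

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 10))                # index rate-of-change features
    y = 0.8 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.3, size=200)

    f_scores, _ = f_regression(X, y)
    mi_scores = mutual_info_regression(X, y, random_state=0)
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])

    def norm(v):
        return (v - v.min()) / (v.max() - v.min() + 1e-12)

    score = norm(f_scores) + norm(mi_scores) + norm(corr)

    def fitness(mask):                            # mask: binary chromosome
        idx = np.flatnonzero(mask)
        return score[idx].mean() if idx.size else 0.0

    print(fitness(np.array([1, 0, 0, 1, 0, 0, 0, 0, 0, 0])))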

Algorithm selection using transfer learning

Per-instance algorithm selection has been shown to achieve state-of-the-art performance in solving Travelling Salesman Problems (TSPs). By selecting optimization algorithms for each TSP instance, significant time savings have been achieved. In this work, we highlight how recent algorithm selection techniques apply to service composition, which is posed as a TSP. However, the service composition environment is highly dynamic, which poses unique challenges for algorithm selection. Chief among those is the availability of training data for all algorithms on unseen tasks, which is infeasible to obtain. To address this problem, we propose the use of transfer learning techniques to improve classification accuracy in dynamic settings such as service composition.

A software library for archiving nondominated points

The need to efficiently maintain nondominated points in an unbounded archive arises in many exact and heuristic approaches to multiobjective combinatorial optimization problems. In this work, we present a C++ library, nondLib, which provides several operations for extracting nondominated points from an archive for any number of dimensions. In addition, we discuss the practical impact of the number of nondominated points in the archive on the run-time of these implementations.
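
The core archiving operation can be illustrated in a few lines (plain Python for exposition; this is not nondLib's actual C++ interface):

    # Maintain an unbounded archive of mutually nondominated points (minimisation).
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def insert(archive, p):
        if any(dominates(q, p) or q == p for q in archive):
            return archive                        # p is dominated or a duplicate
        return [q for q in archive if not dominates(p, q)] + [p]

    archive = []
    for p in [(3, 4), (2, 5), (1, 6), (2, 3), (5, 1)]:
        archive = insert(archive, p)
    print(sorted(archive))                        # [(1, 6), (2, 3), (5, 1)]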

An interactive tool for enhancing hospital capacity predictions using an epidemiological model

Hospital managers have limited resources, and they have to plan how best to allocate these resources based on predicted demand, which is often based on linear or exponential models. In this paper, we propose an interactive tool that produces a forecast of bed occupancy based on an epidemiological model. We optimise this model to fit recently observed data, and interactivity is conferred through a controllable parameter of the model, such that users can readily investigate hypothetical scenarios for planning purposes. This study was designed with the Welsh National Health Service and was born out of their practical need to accurately model hospital occupancy during the ongoing COVID-19 pandemic.

A new hybrid evolutionary algorithm for dial-a-ride problems

This work aims at planning vehicle routes and schedules for Dial-a-Ride Problems. We propose a new hybrid evolutionary algorithm that combines an evolutionary scheme with a Tabu Search method. New controlled mutation operators and an original crossover for minimizing the redundancy of solutions are investigated. A comparison between different hybridized methods is carried out on benchmark instances from the literature. Computational experiments show the effectiveness of our method in comparison with current state-of-the-art techniques.

Generative design of microfluidic channel geometry using evolutionary approach

In this paper, we propose an evolutionary approach for the generative design of microfluidic channel geometry. Sets of candidate solutions for the geometry of single-cell analysis devices can be used to simplify the decision-making process in micro-device design. The algorithmic core is based on continuous optimization of the coordinates of a set of polygons. The proposed approach is validated experimentally with a fabricated microfluidic device. The experiments confirm the correctness and effectiveness of the proposed methods.

Rapid prototyping of evolution-driven biclustering methods in Julia

Biclustering is a technique for detecting meaningful patterns in tabular data. It is also one of the fields in which evolutionary algorithms have risen to the very top in terms of speed and accuracy. In this short paper we summarize the results of porting one of the leading evolutionary biclustering methods, EBIC, to Julia, an emerging high-performance programming language. This is probably the first biclustering package developed in this programming language. The main finding of this study is that flexibility combined with high performance makes Julia an appealing platform for the development and validation of new biclustering methods.

k-Pareto optimality for many-objective genetic optimization

Pareto dominance-based evolutionary many-objective optimization methods are known to suffer from deteriorating search ability. We propose to redefine the calculation of Pareto dominance: instead of assigning solutions to non-dominated fronts, we rank them according to the number of dominating solutions or the probability of being dominated. Through experimental results on a 0/1 knapsack problem, we demonstrate the advantages of this probabilistic approach: 1) it increases the hypervolume for both multi- and many-objective optimization problems; 2) in the case of many-objective optimization, it results in better-fitted solutions compared to NSGA-II and NSGA-III.
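
The count-based ranking can be sketched as follows (toy data; the probabilistic variant from the paper is not reproduced):

    # Rank solutions by how many other solutions dominate them (minimisation).
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def domination_counts(points):
        return [sum(dominates(q, p) for q in points) for p in points]

    pts = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
    for p, c in sorted(zip(pts, domination_counts(pts)), key=lambda t: t[1]):
        print(p, "dominated by", c, "solutions")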

Winner prediction for real-time strategy games through feature selection based on a genetic wrapper

We propose a method that predicts the game winner using a long short-term memory (LSTM) model trained on match data from a real-time strategy game (Clash Royale). Feature selection based on a genetic wrapper is utilized to identify an excellent feature subset. Subsequently, the performance of a model trained using the data of this subset is compared with that of an existing model. The model trained using data including all features before feature selection exhibited a winner prediction accuracy of about 60%. Conversely, the model trained using the optimal subset data identified through genetic wrapper feature selection exhibited a winner prediction accuracy of about 75%. Based on this comparison, the proposed method increased the winner prediction accuracy by about 15% compared to the existing method. In the excellent chromosomes explored through genetic wrapper feature selection, the spatial feature was always selected, while the temporal feature was always excluded.

POSTER SESSION: Posters: Ant colony optimization and swarm intelligence

Novelty particle swarm optimisation for truss optimisation problems

Bilevel optimisation has been successfully applied to truss optimisation to consider topology and sizing in the upper and lower levels, respectively. This study proposes novelty particle swarm optimisation for the upper level to discover new designs by maximising novelty. Our experimental investigations show that our approach outperforms current state-of-the-art methods and obtains multiple high-quality solutions.

Partial-ACO as a GA mutation operator applied to TSP instances

A recent modification to Ant Colony Optimisation (ACO) known as Partial-ACO can successfully solve Travelling Salesman Problems (TSPs) by making partial modifications to solutions. The approach also dispenses with a pheromone matrix, instead using the population to build pheromone levels on edges, enabling scaling to large problems. Consequently, being population-based, the approach can also be used within a Genetic Algorithm as a mutation operator. Results demonstrate significant improvements when using Partial-ACO as a mutation operator with a range of crossover operators.

On detecting the novelties in metaphor-based algorithms

Following recent criticisms of metaphor-based algorithms, several studies have reported that the core ideas of some nature-inspired algorithms are basically the same, so such methods do not bring any novelty at all. The majority of these papers dealt with the problem at the conceptual level. The aim of this study is to show how similar the operations of these algorithms are at the operational level, where searching for the best solutions in the genotype space is affected by the exploration/exploitation components of the search process. In line with this, a new method is proposed for comparing the similarity of algorithms at both levels.

Evolved response thresholds generalize across problem instances for a deterministic-response multiagent system

In this work, we use a multiobjective genetic algorithm to evolve agent response thresholds for a decentralized swarm and demonstrate that swarms with evolved thresholds outperform swarms with thresholds set using other methods. In addition, we provide evidence that the effectiveness of evolved thresholds is due in part to the evolutionary process being able to find, not just good distributions of thresholds for a given task across all agents, but also good combinations of thresholds over all tasks for individual agents. Finally, we show that thresholds evolved for some problem instances can effectively generalize to other problem instances with very different task demands.

Ant colony optimization for energy-efficient train operations

Traffic perturbations in railway systems may give rise to conflicts, which cause delays with respect to the timetable. Dealing with them requires solving the real-time Rail Traffic Management Problem (rtRTMP). A subproblem of the rtRTMP is the real-time Energy Consumption Minimization Problem (rtECMP), which defines the speed profiles and timing of multiple trains in a given network and time horizon. It takes as input the train routing and precedences computed by an rtRTMP solver, and its objective is to minimize the weighted sum of train energy consumption and total delay. In this paper, we propose an Ant Colony Optimization algorithm for the rtECMP and test it on the French Pierrefitte-Gonesse control area with dense mixed traffic. The results show that, in 30 seconds, a remarkable exploration of the search space is performed before convergence.

Learning assignment order in an ant colony optimiser for the university course timetabling problem

Previous studies have employed Ant Colony Optimisation to solve the University Course Timetabling task, which requires the order of lecture assignments to be defined for its construction graph. Various heuristic and random ordering techniques have been proposed in the literature, but uncertainty remains regarding the best approach. We investigate the effect that permuting the assignment order has on the quality of the timetables produced. As part of this, we develop a novel MAX-MIN Ant System including dynamic constraint handling and partial function evaluations. We also explore algorithm variants with and without Local Search and employ a form of transfer learning to identify appropriate permutations. We find that, between smaller problems in the International Timetabling Competition 2007 benchmark, timetabling performance can be improved using such an approach. However, this performance gain is lost when attempting to transfer to considerably larger problems, indicating that similar structures are required when using a 'learnt' permutation approach in such a framework.

Ant swarm algorithm for self-organizing complex system

This paper proposes a methodology to analyze complex systems' self-organizing characteristics using a double-layer ant swarm algorithm. The basic components comprising a complex system are defined as interacting nodes, and each node hosts multiple ants, whose motions represent possible traces of the corresponding node. A representative group-state for the entire complex system is formed by randomly selecting one ant from each node. The methodology further constructs a fitness function to guide the random ant-group's movement towards reduced entropy generation at the system level. Simultaneously, ants within a single node move by pheromone-based coordination. The proposed methodology is used to analyze the Abelian Sandpile Model (ASM). Two types of coordinating algorithms, stochastic greedy and chaotic, are constructed to evaluate the methodology's robustness. The results demonstrate that both coordinating algorithms successfully capture ASM's collective characteristics, such as average sandpile height and recursive rates for different heights, although the two algorithms represent different searching dynamics.

POSTER SESSION: Posters: Complex systems (artificial life, artificial immune systems, generative and developmental systems, evolutionary robotics, evolvable hardware)

Predicting soft robot's locomotion fitness

Organisms with different body morphologies and movement dynamics have distinct abilities to move through the environment. Despite this truism, there is a lack of general principles that predict which shapes and dynamics make organisms more fit to move. Studying a minimal yet embodied soft robot model under the influence of gravity, we find three features that predict robot locomotion fitness: (1) a larger body is better; (2) two-point contact with the ground is better than one-point contact; (3) out-of-phase oscillating body parts increase locomotion fitness. These design principles can guide the selection rules of evolutionary algorithms to obtain robots with higher locomotion fitness.

On the use of feature-maps for improved quality-diversity meta-evolution

Quality-Diversity (QD) algorithms evolve a behaviourally diverse archive of high-performing solutions. In QD meta-evolution, one evolves a population of QD algorithms by modifying algorithmic components (e.g., the behaviour space) to optimise an archive-level objective, the meta-fitness. This paper investigates which feature-map is best for defining the behaviour space for an 8-joint robot arm. Meta-evolution with non-linear feature-maps yields a 15-fold meta-fitness improvement over linear feature-maps. On a damage recovery test, archives evolved with non-linear feature-maps outperform traditional MAP-Elites variants.

Promoting reproductive isolation through diversity in on-line collective robotics

We present a behavioral diversity selection scheme that favors reproductive isolation to promote the learning of multiple tasks in online embodied evolutionary robotics (EER). The scheme estimates the behavior of the controllers without the need to access the agents' experience, thus respecting the online, distributed properties of EER. Reproductive isolation is assessed through coalescence trees, and task specialization is tested in a concurrent foraging setting.

Younger is better: a simple and efficient selection strategy for MAP-Elites

What makes a quality-diversity (QD) algorithm effective? This question is increasingly studied as such algorithms gain in popularity. Recent work on Novelty Search linked its efficiency to the evolution of a population under a dynamic selection pressure defined by the novelty metric, which allows the population to converge to highly evolvable individuals; spending the evolutionary budget on these individuals makes it possible to navigate the behavior space. MAP-Elites, another popular QD algorithm, does not use an explicit population, instead sampling the parent population directly from the behavioral archive at each generation. This sampling can be uniform, or biased according to criteria such as curiosity, which favors individuals that previously generated successful offspring. In this article, we show that such improved selection schemes are efficient because they create a dynamic pseudo-population that mimics the desirable qualities of a Novelty Search population. We do this by proposing another, simpler selection scheme, youth, which simply focuses the evolutionary budget on such a dynamic pseudo-population, and by showing that it exhibits behavior and performance similar to curiosity without explicitly selecting for previously successful parents. This has important implications for the design of future QD algorithms taking the best of both archive-based and population-based methods.
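
A youth-biased parent selection can be sketched as follows (the archive layout and tournament bias are assumptions for illustration, not the paper's exact code):

    # "Younger is better": bias MAP-Elites parent selection to recent elites.
    import random

    def select_parent(archive, k=8):
        cells = random.sample(list(archive.values()), min(k, len(archive)))
        return max(cells, key=lambda e: e["birth"])   # youngest of k sampled

    archive = {}                                      # behaviour cell -> elite
    for cell in range(20):                            # toy fill
        archive[cell] = {"genome": [random.random()],
                         "fitness": random.random(),
                         "birth": random.randint(0, 9)}
    print(select_parent(archive))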

Ad hoc teaming through evolution

Cooperative Co-evolutionary Algorithms (CCEAs) effectively train policies in multiagent systems with a single, statically defined team. However, many real-world problems, such as search and rescue, require agents to operate in multiple teams. When the structure of the team changes, these policies show reduced performance, as they were trained to cooperate with only one team. In this work, we solve the cooperation problem by training agents to fill the needs of an arbitrary team, thereby gaining the ability to support a large variety of teams. We introduce Ad hoc Teaming Through Evolution (ATTE), which evolves a limited number of policy types using fitness aggregation across multiple teams. ATTE leverages agent types to reduce the dimensionality of the interaction search space, while fitness aggregation across teams selects for more adaptive policies. In a simulated multi-robot exploration task, ATTE learns policies that are effective in a variety of teaming schemes, improving the performance of a CCEA by a factor of up to five.

The impact of different tasks on evolved robot morphologies

A well-established fact in biology is that environmental conditions have a paramount impact on the evolved life forms. In this paper, we investigate this in an evolutionary robot system where morphologies and controllers evolve together. We evolve robots for two tasks independently and simultaneously and compare the outcomes. The results show that the robots evolved for multiple tasks simultaneously developed new morphologies that were not present in the robots evolved for the single tasks independently.

Comparing lifetime learning methods for morphologically evolving robots

The joint evolution of morphologies and controllers of robots leads to a problem: Even if the parents have well-matching bodies and brains, the stochastic recombination can break this match and cause a body-brain mismatch in their offspring. This can be mitigated by having newborn robots perform a learning process that optimizes their inherited brain quickly after birth. An adequate learning method should work on all possible robot morphologies and be efficient. In this paper we apply Bayesian Optimization and Differential Evolution as learning algorithms and compare them on a test suite of different robot bodies.

Heterogeneous agent coordination via adaptive quality diversity and specialization

In many real-world multiagent systems, agents must learn diverse tasks and coordinate with other agents. This paper introduces a method to allow heterogeneous agents to specialize and only learn complementary divergent behaviors needed for coordination in a shared environment. We use a hierarchical decomposition of diversity search and fitness optimization to allow agents to speciate and learn diverse temporally extended actions. Within an agent population, diversity in niches is favored. Agents within a niche compete for optimizing the higher level coordination task. Experimental results in a multiagent rover exploration task demonstrate the diversity of acquired agent behavior that promotes coordination.

Reinforcement learning with rare significant events: direct policy search vs. gradient policy search

This paper shows that the CMA-ES direct policy search method fares significantly better than PPO gradient policy search on a reinforcement learning task where significant events are rare.

Automatic exploration of the property space of reservoirs

Reservoir Computing is an efficient implementation of a recurrent neural network for temporal/sequential data processing. However, in terms of matching reservoir dynamics to tasks, the precise balance of reservoir properties (kernel rank, generalization rank, memory capacity, size) will vary. To provide guidance for the generation of reservoirs, we use NSGA-II and MAP-Elites to explore the balance between those properties. We further provide three generation strategies for reservoirs: (a) optimization of the properties of the random generator, (b) direct optimization of a general-purpose reservoir, and (c) a combination of both approaches. We show that each approach can generate reservoirs with different ranges of characteristics, making them appropriate for different categories of tasks.

Examining forms of inductive bias towards 'simplicity' in genetic algorithms to enhance evolvability of boolean functions

Recent work has looked at the evolvability of biological networks by representing them as graphs or functions, and studying the conditions under which certain structures are feasibly reachable. One key factor in enabling evolvability appears to be some form of prior constraint or inductive bias on the evolutionary landscape, which typically represents selectivity for low-complexity structures. Here we examine two different approaches for incorporating this kind of inductive bias in genetic algorithms, and propose a simulation framework allowing us to compare them and to relate the notion of a function class to that of a function's algorithmic complexity.

Designing fitness functions for odour source localisation

Locating odour sources is a hard task that has been addressed with a large variety of AI methods to produce search strategies with different levels of efficiency and robustness. However, it is still not clear how to evaluate those strategies. Simply evaluating the robot's ability to reach the goal may produce deceptive fitness values, favouring poor strategies that do not generalise. Conversely, including prior knowledge may bias the learning process. This work studies the impact of evaluation functions with various degrees of prior knowledge on evolving search strategies. The baseline is set by performing multiple evaluations of each strategy with a function that only evaluates task efficiency. A function was found that produces strategies with performance equivalent to the baseline's, whilst performing a single evaluation.

How to evolve a neuron

A natural neuron can function as a nonlinear down-sampler, or in the extreme, as a coincidence detector. I will describe artificial life experiments in which agents capable of general computation evolve an artificial reaction network capable of performing coincidence detection. They also develop spike-timing-dependent plasticity, thought to be an important mechanism for learning.

Environmental impact on evolving language diversity

There has been a significant amount of research on computational modeling of language evolution to understand the origins and evolution of communication [3, 11, 14, 15]. However, there has been relatively little computational modeling of environmental factors influencing the evolution of linguistic diversity and thus the emergence and merging of dialects [1]. Using evolutionary agent-based simulation [3], this study investigates environmental factors influencing the emergence of linguistic diversity in an evolving agent-based language simulation. We used iterative agent-based naming-game [15] simulations to evaluate the impact that resources, population, and environment size have on evolving language diversity [10]. A specific aim was to investigate thresholds (tipping points) in factors that cause significant changes to linguistic diversity in populations.

Impact of energy efficiency on the morphology and behaviour of evolved robots

Most evolutionary robotics studies focus on evolving some targeted behavior without considering energy usage. In this paper, we extend our simulator with a battery model to take energy consumption into account in a system where robot morphologies and controllers evolve simultaneously. The results show that including energy consumption in the fitness in a multi-objective fashion reduces the average size of the robot bodies while also reducing their speed. However, robots generated without size reduction can achieve speeds comparable to robots from the baseline set.

Illuminating the space of beatable lode runner levels produced by various generative adversarial networks

Generative Adversarial Networks (GANs) are capable of generating convincing imitations of elements from a training set, but the distribution of elements in the training set affects the difficulty of properly training the GAN and the quality of the outputs it produces. This paper looks at six different GANs trained on different subsets of data from the game Lode Runner. The quality-diversity algorithm MAP-Elites was used to explore the set of quality levels each GAN could produce, where quality was defined as being beatable and having the longest possible solution path. Interestingly, a GAN trained on only 20 levels generated the largest set of diverse beatable levels, while a GAN trained on 150 levels generated the smallest set, thus challenging the notion that more data is always better when training GANs.

Growing simulated robots with environmental feedback: an eco-evo-devo approach

Robots still lack the ability to adapt to new environments. Biological systems, however, adapt to new environments with ease, perhaps because they can react to environmental input during a growth phase with changes not only in behaviour but also in morphology. Yet within the field of robotics, environment-based development of morphology is an under-researched area. In this paper we use an evolutionary algorithm to evolve neural cellular automata capable of inducing environment-based developmental plasticity in robots. We use the kinetic energy of each cell and its neighbours as an input to our network, the output of which determines the position of new cell growth. We evolve our neural cellular automata first in three individual environments and then for performance across multiple environments. We show that the networks that use environmental feedback outperform those that do not, and that by introducing environmental feedback during development, more adaptive and better-performing robots become possible.

Pathogen dose based natural killer cell algorithm for classification

The negative selection algorithm (NSA), the most representative classification algorithm among immune heuristic algorithms, has been successfully applied to many classification problems. However, NSA has two obvious problems. First, it establishes specific antibodies for each antigen based on the mechanism of specific antigen-antibody matching, which leads to an excessive number of redundant detectors. Second, the detection stage needs to calculate the matching degree of an antigen with all the detectors, so detection efficiency is low. This paper proposes a new non-specific natural killer cell algorithm (NKA) based on pathogen dose. NKA draws on the mechanism by which NK cells construct phenotype detectors based on pathogen dose. First, NKA defines the dose and phenotype detectors and optimizes them based on the memory evolution mechanism of phenotypes; it then builds a k-d tree over the optimized phenotypes, so that pathogens only need to match the dose with the nearest phenotype detector. Experimental results show that NKA not only achieves better performance with fewer detectors, but also has higher efficiency in the training and detection phases, compared with three NSA-based algorithms.
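
The nearest-detector lookup can be sketched as follows (toy data; detector construction and the dose semantics follow the abstract only loosely):

    # k-d tree lookup: a pathogen is matched only against its nearest
    # phenotype detector instead of all detectors (illustrative sketch).
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(3)
    detectors = rng.uniform(size=(50, 4))         # optimized phenotype detectors
    doses = rng.uniform(0.2, 1.0, size=50)        # one dose per detector

    tree = cKDTree(detectors)                     # built once after training

    def classify(pathogen, dose_threshold=0.5, radius=0.4):
        dist, idx = tree.query(pathogen)          # nearest detector only
        return ("anomalous" if dist < radius and doses[idx] >= dose_threshold
                else "normal")

    print(classify(rng.uniform(size=4)))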

POSTER SESSION: Posters: Evolutionary combinatorial optimization and metaheuristics

Optimizing a GPU-accelerated genetic algorithm for the vehicle routing problem

The capacitated vehicle routing problem (CVRP) is an NP-hard optimization problem with many applications. Genetic algorithms (GAs) are often used to solve CVRPs but require many parameters and operators to tune, and incorrect settings can result in poor solutions. In this work, a design of experiments (DOE) approach is used to determine the best settings for the GA parameters. The GA runs entirely on an NVIDIA RTX 3090 GPU. The GPU execution of a 200-node benchmark shows a speedup by a factor of 1700 compared to an octa-core i7 CPU with 64 GB RAM. The tuned GA achieved a solution for a 400-node benchmark that is 72% better than that of an arbitrarily tuned GA after only 263 generations. New best-known values for several benchmarks are also obtained. A comparison of the algorithm's performance with different hardware and tuning sets is also reported.

Linear representation of categorical values

We propose a binary representation of categorical values using a linear map. This linear representation preserves the neighborhood structure of the categorical values: in the context of evolutionary algorithms, it means that every categorical value can be reached in a single mutation. The linear representation is embedded into standard metaheuristics, applied to the problem of Sudoku puzzles, and compared to the more traditional direct binary encoding. It shows promising results in fixed-budget experiments and empirical cumulative distribution functions on high-dimensional instances, and also in fixed-target experiments on small-dimensional instances.

Effective recombination operators for the family of vehicle routing problems

We propose two new or modified recombination operators for the family of vehicle routing problems with time windows: the Capacitated Vehicle Routing Problem (CVRPTW), the Pickup and Delivery Problem (PDPTW), and the Team Orienteering Problem (TOPTW). Starting from a fitness landscape analysis, we propose a new Order-based recombination operator, extend the Selective Route Exchange Crossover (SREX) operator, and adapt the state-of-the-art Edge Assembly Crossover (EAX) operator to the PDPTW and TOPTW. The proposed recombination operators are competitive with EAX, and the best results may be achieved by combining different operators. Furthermore, a memetic algorithm using the proposed operators provides results competitive with state-of-the-art dedicated methods.

Introducing a hash function for the travelling salesman problem for differentiating solutions

Solution comparison is a process used in different classes of evolutionary algorithms, either to choose the better solution or to differentiate a pair of solutions. While there is no doubt that the fitness function allows determining the better solution, its use to distinguish between solutions is questionable, especially for combinatorial optimisation problems. This short paper focuses on the Travelling Salesman Problem (TSP). 39 instances of the TSPLIB are chosen, and two solution samples are generated for each instance. A collision analysis of the fitness function on the TSP is presented, followed by the introduction of an efficient hash function with almost zero collisions.
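
One way to obtain such near-collision-free differentiation, shown purely for illustration (the paper's actual hash function may differ), is a Zobrist-style XOR over a tour's undirected edges, which is invariant to rotation of the cyclic tour:

    # Zobrist-style hash of a TSP tour over its edge set (illustrative).
    import random

    random.seed(42)
    edge_keys = {}

    def edge_key(a, b):
        e = (min(a, b), max(a, b))                # undirected edge
        if e not in edge_keys:
            edge_keys[e] = random.getrandbits(64) # lazy 64-bit random key
        return edge_keys[e]

    def tour_hash(tour):
        h = 0
        for a, b in zip(tour, tour[1:] + tour[:1]):   # closed-tour edges
            h ^= edge_key(a, b)
        return h

    t1 = list(range(52))                          # a berlin52-sized tour
    t2 = t1[10:] + t1[:10]                        # same cycle, rotated
    print(tour_hash(t1) == tour_hash(t2))         # True: identical edge sets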

Automated configuration of parallel machine dispatching rules by machine learning

Finding optimized machine schedules is a very important task that arises in many areas of industrial manufacturing. In practice, dispatching rules are often used to create schedules within short run times. This paper investigates a method for the automated specification of parameters for weighted dispatching rules. We define this problem as a machine learning task and propose a novel set of features to characterize problem instances. Experimental results with a practical dispatching rule show that our approach can obtain high-quality solutions in short run times.

Selecting between evolutionary and classical algorithms for the CVRP using machine learning: optimization of vehicle routing problems

Solutions for NP-hard problems are often obtained using heuristics that yield results relatively quickly, at some cost to the objective. Many different heuristics are usually available for the same problem type, and the solution quality of a heuristic may depend on characteristics of the instance being solved. This paper explores the use of machine learning to predict the best heuristic for solving Capacitated Vehicle Routing Problems (CVRPs). A set of 23 features related to the CVRP was identified from the literature. A large set of CVRP instances was generated across the feature space and solved using four heuristics, including a genetic algorithm and a novel self-organizing map. A neural network was trained to predict the best-performing heuristic for a given problem instance. The model correctly selected the best heuristic for 79% of the CVRP test instances, while the single best heuristic was dominant for only 50% of the test instances.
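
A minimal sketch of the selection step, assuming scikit-learn; the feature matrix, labels, and network size below are placeholders, not the paper's actual data or architecture:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 23))    # placeholder: 23 features per CVRP instance
    y = rng.integers(0, 4, size=500)  # placeholder: index of the best of 4 heuristics

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
    clf.fit(X_tr, y_tr)
    print("selection accuracy:", clf.score(X_te, y_te))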

The optimal filtering set problem with application to surrogate evaluation in genetic programming

Surrogate evaluation is common in population-based evolutionary algorithms where exact fitness calculation may be extremely time consuming. We consider a Genetic Program (GP) that evolves scheduling rules, which have to be evaluated on a training set of instances of a scheduling problem, and propose exploiting a small set of small instances, called a filter, so that evaluating a rule on the filter estimates the actual evaluation of the rule on the training set. The calculation of filters is modelled as an optimal subset problem and solved by a genetic algorithm. As a case study, we consider the problem of scheduling jobs on a machine with time-varying capacity and show that the combination of the surrogate model with the GP, termed SM-GP, outperforms the original GP.
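
The abstract does not give the filter objective explicitly; one plausible reading, sketched here as an assumption, is to score a candidate subset by how faithfully rule evaluations on the subset track evaluations on the full training set:

    import numpy as np

    def filter_fidelity(subset, E):
        # E[r, i] = evaluation of scheduling rule r on training instance i,
        # precomputed for a sample of rules.
        full = E.mean(axis=1)            # evaluation on the whole training set
        est = E[:, subset].mean(axis=1)  # surrogate evaluation on the filter
        return np.corrcoef(full, est)[0, 1]

    E = np.random.rand(30, 100)          # 30 sample rules, 100 instances
    print(filter_fidelity([3, 17, 42], E))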

Optimisation algorithms for parallel machine scheduling problems with setup times

Parallel machine scheduling is a problem of high practical relevance for the manufacturing industry. In this paper, we address a variant in which an unweighted combination of earliness, tardiness and setup times, aggregated in a single objective function, is minimised. We compare an Evolutionary Algorithm (EA) approach with a variant of local search implementing a probabilistic Best Response Dynamic algorithm (p-BRD) inspired by game-theoretic considerations. Our p-BRD algorithm achieved promising results, outperforming the EA on a series of test sets.

Stochastic local search for efficient hybrid feature selection

There is a need to study not only accuracy but also computational cost in machine learning. Focusing on both accuracy and computational cost of feature selection, we develop and test stochastic local search (SLS) heuristics for hybrid feature selection.

A grouping genetic algorithm for the unrelated parallel-machine scheduling problem

A large number of NP-hard grouping problems have been addressed with the Grouping Genetic Algorithm (GGA). Unrelated parallel-machine scheduling with makespan minimization, also known as R||Cmax, is an NP-hard combinatorial optimization problem that belongs to the NP-hard grouping problems family. This paper presents a GGA with variation operators specifically designed for the problem R||Cmax. The performance of the proposed GGA is assessed by solving 1400 test instances of the problem R||Cmax taken from the specialized literature. The experimental results suggest that the GGA is able to find high-quality solutions, outperforming the best state-of-the-art GA.

Error function learning with interpretable compositional networks for constraint-based local search

In Constraint Programming, some Constraint-Based Local Search algorithms exploit error functions, a finer representation of constraints than the usual one. However, this makes problem modeling significantly harder, since providing a set of error functions is not always easy. Here, we propose a method to automatically learn an error function corresponding to a constraint. Our method learns error functions using a variant of neural networks we name Interpretable Compositional Networks, allowing us to get interpretable results, unlike regular artificial neural networks. Experiments show that our method can learn, over small-dimensional or incomplete spaces, functions that scale to high-dimensional spaces.

A hybrid local search framework for the dynamic capacitated arc routing problem

The static capacitated arc routing problem (CARP) is a challenging combinatorial problem, where vehicles need to be scheduled efficiently for serving a set of tasks with minimal travelling costs. Dynamic CARP (DCARP) considers the occurrence of dynamic events during the service process, e.g. traffic congestion, which reduce the quality of the currently applied schedule. Existing research mainly focused on scenarios with large changes but neglected the time limitations of the rescheduling process. In this paper, we investigate DCARP scenarios with small dynamic changes in which events only affect the properties of a few edges. We propose an efficient hybrid local search framework (HyLS) to reschedule the service plan in a short time for a DCARP instance. HyLS maintains an archive that enables it to cover more search areas than single local search algorithms, while its local search mechanisms enable it to find better solutions than meta-heuristic re-optimisation from scratch when restricted to a tight time budget. Our experiments demonstrate HyLS' effectiveness compared to existing local search strategies and meta-heuristic re-optimisation from scratch.

Continuous encoding for community detection in complex networks

In this paper, a novel continuous encoding method is proposed for a multiobjective evolutionary algorithm to solve the community detection problem in complex networks. Each edge in the considered network is associated with a continuous component of an individual's genotype. Through non-linear operations, each continuous-valued genotype is transformed into a solution to the community detection problem, i.e. a partition of the network nodes. This encoding method is embedded within the algorithmic framework of the multiobjective genetic algorithm for networks (MOGA-Net). Experimental results on synthetic and real-world networks demonstrate that the developed method improves the performance of MOGA-Net significantly.
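
A sketch of what such a decoding could look like (the paper's non-linear operations are not specified in the abstract; the threshold step below is an illustrative assumption): keep an edge if its continuous gene exceeds a threshold and read the communities off the connected components.

    import numpy as np
    import networkx as nx

    def decode(genotype, edges, n_nodes, threshold=0.5):
        # One continuous gene per edge; a step non-linearity keeps an edge
        # iff its gene exceeds the threshold, and the communities are the
        # connected components of the kept edges.
        g = nx.Graph()
        g.add_nodes_from(range(n_nodes))
        g.add_edges_from(e for e, v in zip(edges, genotype) if v > threshold)
        return list(nx.connected_components(g))

    edges = [(0, 1), (1, 2), (2, 3)]
    print(decode(np.array([0.9, 0.2, 0.8]), edges, 4))  # [{0, 1}, {2, 3}]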

POSTER SESSION: Posters: Evolutionary machine learning

Detecting anomalies in spacecraft telemetry using evolutionary thresholding and LSTMs

Detecting anomalies in telemetry data captured on-board satellites is a pivotal step towards their safe operation. The data-driven algorithms for this task are often heavily parameterized, and the incorrect hyperparameters can deteriorate their performance. We tackle this issue and introduce a genetic algorithm for evolving a dynamic thresholding approach that follows a long short-term memory network in an unsupervised anomaly detection system. Our experiments show that the genetic algorithm improves the abilities of a detector operating on multi-channel satellite telemetry.

Improved evolution of generative adversarial networks

Generative adversarial networks (GANs) have achieved notable results in producing realistic samples. However, the training of GANs has issues that affect the stability and convergence of the algorithm. One line of research tackles these issues using Evolutionary Algorithms to drive the training and evolution of GANs, such as COEGAN. In this work, we propose COEGAN-v2, an extension of COEGAN that allows the use of spectral normalization and upsampling layers through the variation operators. Additionally, we use the loss functions of RaSGAN both in training and as fitness. Results show that COEGAN-v2 is more efficient and achieves better outcome quality when compared to the original COEGAN and to regular GANs.

Sparsity-based evolutionary multi-objective feature selection for multi-label classification

Feature selection (FS) is typically an essential pre-processing step for many machine learning tasks. However, most existing FS approaches focus on single-label classification, whereas multi-label classification (MLC) is an emerging topic where each instance can be assigned more than one class label. Sparsity-based FS is an efficient and effective method that can be applied to MLC. However, existing sparsity-based approaches based on gradient descent can get trapped at local optima, and cannot simultaneously optimise the number of selected features and the classification performance, two objectives that are often in conflict, making FS a multi-objective problem. Evolutionary multi-objective optimisation (EMO) can be applied to FS due to its potential global search ability, yet EMO-based algorithms have not been utilised in sparsity-based approaches for FS in MLC. This paper proposes a novel sparsity-based MLC FS method based on the Multi-objective Evolutionary Algorithm with Decomposition (MOEA/D). The experimental results indicate an improvement in MLC performance in comparison to existing multi-objective FS algorithms.

Scatter search for high-dimensional feature selection using feature grouping

In feature selection tasks, finding the optimal subset of features is unfeasible due to the growth of the search space with dimensionality. To reduce the complexity of the space, the feature grouping approach generates subsets of correlated features. In this context, evolutionary algorithms have proven to achieve competitive solutions. In this work, we propose a novel Scatter Search (SS) strategy that uses feature grouping to generate a population of diverse and high-quality solutions. Solutions are evolved by applying random mechanisms in combination with the feature group structure to maintain the diversity and quality of the solutions during the search. We test the proposed strategy on high-dimensional data from biomedical domains and compare its performance against the first adaptation of SS to the feature selection problem. Results show that our proposal is able to find smaller subsets of features while keeping a similar predictive power of the classifier models. Finally, a case study on melanoma skin cancer is analysed using the proposed strategy.

Meta-learning for symbolic hyperparameter defaults

Hyperparameter optimization in machine learning (ML) deals with the problem of empirically learning an optimal algorithm configuration from data, usually formulated as a black-box optimization problem. In this work, we propose a zero-shot method to meta-learn symbolic default hyperparameter configurations that are expressed in terms of the properties of the dataset. This enables a much faster, but still data-dependent, configuration of the ML algorithm, compared to standard hyperparameter optimization approaches. In the past, symbolic and static default values have usually been obtained as hand-crafted heuristics. We propose an approach of learning such symbolic configurations as formulas of dataset properties from a large set of prior evaluations on multiple datasets by optimizing over a grammar of expressions using an evolutionary algorithm. We evaluate our method on surrogate empirical performance models as well as on real data across 6 ML algorithms on more than 100 datasets and demonstrate that our method indeed finds viable symbolic defaults.

Evo-RL: evolutionary-driven reinforcement learning

In this work, we propose a novel approach for reinforcement learning driven by evolutionary computation. Our algorithm, dubbed Evolutionary-Driven Reinforcement Learning (Evo-RL), embeds the reinforcement learning algorithm in an evolutionary cycle, where we distinctly differentiate between purely evolvable (instinctive) behaviour and purely learnable behaviour. Furthermore, we propose that this distinction is decided by the evolutionary process, allowing Evo-RL to adapt to different environments. In addition, Evo-RL facilitates learning in environments with reward-less states, which makes it better suited for real-world problems with incomplete information. To show that Evo-RL leads to state-of-the-art performance, we present the performance of different state-of-the-art reinforcement learning algorithms when operating within Evo-RL and compare it with the case where these same algorithms are executed independently. Results show that reinforcement learning algorithms embedded within our Evo-RL approach significantly outperform the stand-alone versions of the same RL algorithms on OpenAI Gym control problems with reward-less states, constrained by the same computational budget.

Understanding evolutionary induction of decision trees: a multi-tree repository approach

This paper concerns the evolutionary induction of decision trees (DTs). Noting that many DTs or their parts reappear during the evolution, a multi-tree concept is introduced. A multi-tree integrates all trees with the same test in the root. The differences between trees (below the root) are saved as variants, while identical DT parts are shared. All multi-trees are stored in an external repository that keeps the entire search history. We show that such a repository not only prevents duplicated calculations, but also enables a better understanding of evolutionary DT induction: the road to finding the best individual can easily be retrieved, exploitation vs. exploration can be analysed, and similarities between individuals can be measured.

Growth and harvest induce essential dynamics in neural networks

Training neural networks with faster gradient methods brings them to the edge of stability, proximity to which improves their generalization capability. However, it is not clear how to stably approach the edge. We propose a new activation function to model inner processes inside neurons with single-species population dynamics. The function induces essential dynamics in neural networks with a growth and harvest rate to improve their generalization capability.

Permutation-based optimization using a generative adversarial network

This paper presents a new method to deal with permutation-based problems using a Generative Adversarial Network (GAN). GANs are used in many fields as models that mimic the distribution of the training data. However, dealing with permutation spaces in GANs has not yet been investigated in detail. To this end, we propose using GANs to solve permutation-based optimization problems within an Estimation of Distribution Algorithm. We use a random-key method to translate the generated data from continuous values to permutations. The experiments investigate the quality of the solutions provided by the proposed method in two ways: first, we evaluate the solutions obtained directly from the GAN; second, we use a local search method to improve their quality. Finally, we perform a set of experiments on instances of the well-known TSP. The results obtained are promising and encourage extending the work to other classes of problems.
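
The random-key decoding mentioned above is standard and fits in one line: sort the continuous genotype and read the permutation off the ranks.

    import numpy as np

    def random_key_decode(x):
        # Continuous vector -> permutation: position i is the index of the
        # i-th smallest key.
        return np.argsort(x)

    print(random_key_decode(np.array([0.7, 0.1, 0.9, 0.4])))  # [1 3 0 2]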

EvolMusic: towards musical adversarial examples for black-box attacks on speech-to-text

Automatic Speech Recognition (ASR) has undergone substantial improvements since the incorporation of deep learning. However, the vulnerability of neural networks to imperceptible adversarial perturbations exposes ASR-based devices to potentially serious threats. So far, imperceptibility of audio adversarial examples has been associated with small, or inaudible perturbations. In this paper, we expand the domain of viable audio adversarial examples to include audible, but inconspicuous adversarial perturbations. We present EvolMusic, the first targeted adversarial attack based on musical note-sequences. Our musical perturbations are generated via an adaptive evolutionary approach in a black-box setting. We evaluate our attack against DeepSpeech v0.9.1 using the Fluent Speech Commands dataset.

Explainability and performance of anticipatory learning classifier systems in non-deterministic environments

In the field of Reinforcement Learning, models based on neural networks perform very well, but explaining their decisions is challenging. Instead of seeking to open these "black boxes" to meet the increasing demand for explainability, another approach is to use rule-based machine learning models that are explainable by design, such as Anticipatory Learning Classifier Systems (ALCS). ALCS are able to simultaneously develop a complete representation of their environment and a decision policy based on this representation to solve their learning tasks. This paper focuses on the ability of ALCS to deal with non-deterministic environments used in reinforcement learning problems, while discussing their explainability. Directions for future research are highlighted to improve both the performance and the explainability of ALCS to meet the needs of critical real-world applications.

Multi-objective genetic programming for symbolic regression with the adaptive weighted splines representation

Genetic Programming (GP) for symbolic regression often generates over-complex models, which overfit the training data and generalize poorly to unseen data. One recent work investigated controlling model complexity with a new GP representation called Adaptive Weighted Splines (AWS), a semi-structured representation that can control model complexity explicitly. This work extends that approach by incorporating a new parsimony pressure objective to further control model complexity. Experimental results demonstrate that the new multi-objective GP method consistently obtains superior fronts and produces better-generalizing models compared to single-objective GP with both the tree-based and AWS representations, as well as a multi-objective tree-based GP method with parsimony pressure.

An evolutionary approach to interpretable learning

Machine Learning (ML) interpretability is a growing field of computational research whose goal is to shine a light on black-box predictive models. We present an evolutionary framework to improve upon existing post-hoc interpretability metrics by quantifying feature synergy, i.e. the strength of feature interactions in high-dimensional prediction problems. In two problem instances from bioinformatics and climate science, we validate our results against existing domain research, showing that feature synergy is a valuable metric for post-hoc interpretability.

Misclassification detection based on conditional VAE for rule evolution in learning classifier system

This paper addresses the problem that a Learning Classifier System (LCS) is hard to guarantee to generate the "correct" output (i.e., the action in LCS) as the dimensionality of the data increases, which results in "incorrect" outputs, and proposes a method that can detect such incorrect outputs. Concretely, this paper proposes Misclassification Detection based on Conditional Variational Auto-Encoder (MD/C), which detects and rejects the incorrect outputs of the LCS through a comparison between the original data and the data restored by a CVAE (Conditional Variational Auto-Encoder) conditioned on the output of the LCS. The results of a ten-class classification problem using handwritten digits show that MD/C properly rejects the incorrect outputs of the LCS and achieves a 99.0% correct rate.

Adopting lexicase selection for Michigan-style learning classifier systems with continuous-valued inputs

In this work, two variants of lexicase selection are examined for their performance improvement potential in the context of XCS variants allowing for continuous-valued inputs. We furthermore propose a niche-specific mode of operation for lexicase selection when adopted in XCS, implemented by a dedicated classifier experience storage. To evaluate the impact of lexicase selection on XCS' classification and regression capabilities, our proposed modified variants are tested across various continuous-valued single-step tasks, including typical toy problems as well as real-world datasets from the agricultural domain and regression problems.
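
For reference, plain lexicase selection, the baseline these variants build on, filters the candidate pool case by case in a random order:

    import random

    def lexicase_select(pop, errors):
        # errors[i][t]: error of individual i on training case t.
        cases = list(range(len(errors[0])))
        random.shuffle(cases)
        candidates = list(range(len(pop)))
        for t in cases:
            best = min(errors[i][t] for i in candidates)
            candidates = [i for i in candidates if errors[i][t] == best]
            if len(candidates) == 1:
                break
        return pop[random.choice(candidates)]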

Evolving local interpretable model-agnostic explanations for deep neural networks in image classification

A severe drawback of deep convolutional neural networks (deep CNNs) is their poor interpretability. To address this drawback, this paper proposes, for the first time, a novel genetic algorithm-based method to automatically evolve local interpretable explanations that can assist users in deciding whether to trust the predictions of deep CNNs. In the experiments, the results show that the evolved explanations can explain the predictions of deep CNNs on images by successfully capturing meaningful interpretable features.

Adaptive multi-fitness learning for robust coordination

Many long term robot exploration domains have sparse fitness functions that make it hard for agents to learn and adapt. This work introduces Adaptive Multi-Fitness Learning (A-MFL), which augments the structure of Multi-Fitness Learning (MFL) [7] by injecting new behaviors into the agents as the environment changes. A-MFL not only improves system performance in dynamic environments, but also avoids undesirable, unforeseen side-effects of new behaviors. On a multi-robot coordination problem, A-MFL provides up to 90% improvement over MFL and 100% over a one-step evolutionary approach.

POSTER SESSION: Posters: Evolutionary multiobjective optimization

MOMPA: a high performance multi-objective optimizer based on marine predator algorithm

We propose a novel and effective multi-objective marine predator algorithm (MOMPA) to solve multi-objective optimization (MOO) problems. MOMPA incorporates the non-dominated sorting approach and the reference point strategy to select elite individuals and ensure the diversity of the Pareto-optimal solution sets. A Gaussian perturbation mechanism is also leveraged to further improve the population diversity and global search ability of MOMPA. The performance of MOMPA is evaluated and comprehensively compared on benchmark functions. The results show that MOMPA is very competitive.

The effect of offspring population size on NSGA-II: a preliminary study

Non-Dominated Sorting Genetic Algorithm (NSGA-II) is one of the most popular Multi-Objective Evolutionary Algorithms (MOEA) and has been applied to a large range of problems.

Previous studies have shown that parameter tuning can improve NSGA-II performance. However, the tuning of the offspring population size, which guides the exploration-exploitation trade-off in NSGA-II, has been overlooked so far. Previous work has generally used the population size as the default offspring population size for NSGA-II.

We therefore investigate the impact of offspring population size on the performance of NSGA-II. We carry out an empirical study by comparing the effectiveness of three configurations vs. the default NSGA-II configuration on six optimization problems based on four Pareto front quality indicators and statistical tests.

Our findings show that the performance of NSGA-II can be improved by reducing the offspring population size and in turn increasing the number of generations. This leads to similar or statistically significant better results than those obtained by using the default NSGA-II configuration in 92% of the experiments performed.
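
In pymoo (0.6 or later), for example, the reduced-offspring configuration amounts to shrinking n_offsprings while keeping the evaluation budget fixed, which buys more generations; this sketch is illustrative, not the authors' exact setup:

    from pymoo.algorithms.moo.nsga2 import NSGA2
    from pymoo.optimize import minimize
    from pymoo.problems import get_problem

    problem = get_problem("zdt1")
    # Default NSGA-II: offspring size equals the population size (100).
    # Here: 10 offspring per generation, i.e. 10x more generations for
    # the same budget of 20,000 evaluations.
    algorithm = NSGA2(pop_size=100, n_offsprings=10)
    res = minimize(problem, algorithm, ("n_eval", 20000), seed=1, verbose=False)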

Dynamic adaptation of decomposition vector set size for MOEA/D

The Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) is a popular algorithm for solving Multi-Objective Problems (MOPs). The main characteristic of MOEA/D is to use a set of weight vectors to break the MOP into a set of single-objective subproblems. It is well known that the performance of MOEA/D varies greatly depending on the number of weight vectors. However, the appropriate value for this hyper-parameter is likely to vary depending on the problem, as well as on the stage of the search. In this study, we propose a robust MOEA/D variant that evaluates the progress of the search, and deletes or creates weight vectors as necessary to improve the optimization or to avoid search stagnation. The performance of the proposed algorithm is evaluated on the DTLZ and ZDT benchmarks. We observe that the proposed method, without needing to explicitly choose the number of weight vectors, is equivalent to MOEA/D with fine-tuned vectors and superior to MOEA/D with poorly tuned vectors.

Multi-criteria differential evolution: treating multitask optimization as multi-criteria optimization

Evolutionary multitask optimization (EMTO) has shown great potential and attracted increasing attention for solving multitask optimization problems (MTOPs). However, most existing EMTOs still treat the internal tasks merely as different related problems, rather than as component problems of the entire MTOP, which is inefficient. Therefore, this paper proposes to treat the entire MTOP as a multi-criteria optimization problem (MCOP), so as to solve the MTOP more efficiently. Specifically, the fitness function of each task in the MTOP is treated as an evaluation criterion in the corresponding MCOP. During the evolutionary process, a round-robin multi-criteria strategy is adopted to better utilize the multiple criteria. This way, different evaluation criteria can guide the environmental selection and population evolution in different generations or stages, so as to find the optimal solutions for the criteria of the different tasks. Based on the above, a multi-criteria differential evolution algorithm is developed for solving MTOPs. Experiments on widely used MTOP benchmarks and comparisons with state-of-the-art algorithms verify the effectiveness and efficiency of the proposed algorithm. Treating MTOPs as MCOPs is therefore a promising direction for solving MTOPs.

An approximate MIP-DoM calculation for multi-objective optimization using affinity propagation clustering algorithm

Dominance move (DoM) is a quality indicator that compares two solution sets in a Pareto-optimal sense. The main issue with DoM is its computational expense. A recent paper proposed a mixed-integer programming (MIP) approach for computing DoM whose computational complexity is linear in the number of objectives and polynomial in the number of solutions. Even so, in practical situations the MIP-DoM calculation on some problems may take many hours. This paper presents an approximation method that tackles the problem with a cluster-based, divide-and-conquer strategy. Experiments show that the cluster-based algorithm is computationally much faster and incurs only a small percentage error relative to the original DoM value.

Generating multi-objective bilevel optimization problems with multiple non-cooperative followers

This paper presents a test problem generator for multi-objective bilevel optimization problems with multiple non-cooperative followers. In this type of problem, the leader and its followers can have multiple conflicting objectives, with interactions between the leader and each of the followers. The test problem generator can be used to instantiate test problems with user-controlled features such as the number of followers, convergence, and interaction complexity.

Labeling-oriented non-dominated sorting is Θ(MN³)

Non-dominated sorting is a common routine used in many evolutionary multiobjective algorithms to rank solutions based on the Pareto-dominance relation. Unlike many other problems appearing in evolutionary computation, this problem has a simple formal definition, a number of quite efficient algorithms to solve it, and is relatively well understood.

Unfortunately, a number of recent papers that propose new, and supposedly efficient, algorithms for non-dominated sorting, feature inaccurate analysis, leading to overly optimistic claims. In this paper we prove that a recent algorithm, called Labeling-Oriented Non-Dominated Sorting, or LONSA, has the worst-case running time of Θ(MN³), where N is the number of points and M is the number of objectives, which is much greater than the quadratic upper bound the authors claim. Our proof holds for all M ≥ 4 and essentially reduces LONSA to another algorithm, Deductive Sort, for which the hard test has been constructed before.

A niching framework based on fitness proportionate sharing for multi-objective genetic algorithm (MOGA-FPS)

In this paper, we propose a new niching framework based on Fitness Proportionate Sharing (FPS) to improve diversity preservation in multi-objective optimization. The traditional sharing approach in the standard Multi-Objective Genetic Algorithm (MOGA) is replaced with the proposed niching framework, and the adapted MOGA is named MOGA-FPS. We also propose an algorithm which dynamically finds the most suitable niche radius. Experimental results show that MOGA-FPS significantly improves MOGA's performance and maintains a well-spread distribution of the optimal solution set for bi-objective test functions, compared with NSGA-II and MOGAS (a Multi-Objective Genetic Algorithm using a new fitness sharing function).
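
For context, the traditional sharing rule being replaced divides raw fitness by a niche count; a compact version of that classic scheme (parameters illustrative):

    import numpy as np

    def shared_fitness(F, X, sigma, alpha=1.0):
        # Classic fitness sharing: sh(d) = 1 - (d / sigma)**alpha for
        # d < sigma, else 0; each fitness is divided by its niche count.
        D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
        sh = np.where(D < sigma, 1.0 - (D / sigma) ** alpha, 0.0)
        return F / sh.sum(axis=1)

    X = np.random.rand(20, 2)   # decision vectors
    F = np.random.rand(20)      # raw fitness values
    print(shared_fitness(F, X, sigma=0.3))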

Landmark-based multi-objective route planning for large-scale road net

In this paper, a landmark-based multi-objective route planning algorithm (LB-MORPA) is proposed for two objectives: minimising the driving distance and maximising the number of compatible passengers. LB-MORPA incorporates the idea of a bi-directional random walk into the generation of the initial population, designs a new crossover and mutation operator for the evolutionary process, and produces a non-dominated solution set to provide drivers with multiple route options. Comparison experiments with other classical multi-objective evolutionary algorithms show that LB-MORPA performs well.

One step preference elicitation in multi-objective Bayesian optimization

We consider a multi-objective optimization problem with objective functions that are expensive to evaluate. The decision maker (DM) has unknown preferences, and so the standard approach is to generate an approximation of the Pareto front and let the DM choose from the generated non-dominated designs. However, especially for expensive-to-evaluate problems where the number of designs that can be evaluated is very limited, the true best solution according to the DM's unknown preferences is unlikely to be among the small set of non-dominated solutions found, even if these solutions are truly Pareto optimal. We address this issue by using a multi-objective Bayesian optimization (BO) algorithm (see, e.g., [5]) and allowing the DM to select a preferred solution from a predicted continuous Pareto front just once before the end of the algorithm, rather than selecting a solution after it ends. This allows the algorithm to learn the DM's preferences and make a final attempt to identify a more preferred solution. We demonstrate the idea using ParEGO [2], and show empirically that the solutions found are significantly better in terms of the true DM preferences than if the DM simply picked a solution at the end.

Two comprehensive performance metrics for overcoming the deficiencies of IGD and HV

To overcome some deficiencies of the inverted generational distance (IGD) and hypervolume (HV), two comprehensive metrics are proposed in this paper: the hypercube distance (HCD), a metric based on hypercubes, and the angle-based distance (AD), which calculates the cosine values of the angles between solutions. Neither metric requires Pareto front information, and both have low computational complexity.
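
As a rough sketch of the AD ingredient (the full indicator definition is not given in the abstract), the cosine of the angle between two objective vectors is simply their normalized inner product:

    import numpy as np

    def pairwise_cosines(A, B):
        # Cosine of the angle between every objective vector in A and in B.
        A = A / np.linalg.norm(A, axis=1, keepdims=True)
        B = B / np.linalg.norm(B, axis=1, keepdims=True)
        return A @ B.T

    A = np.array([[1.0, 0.0], [0.5, 0.5]])
    B = np.array([[0.0, 1.0]])
    print(pairwise_cosines(A, B))  # [[0.], [0.70710678...]]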

POSTER SESSION: Posters: Evolutionary numerical optimization

Estimation of von Mises-Fisher distribution algorithm, with application to support vector classification

The most common method in the Evolutionary Algorithm community to handle constraints is to use penalties, the simplest being the death penalty, which rejects solutions that violate constraints. However, its inefficiency in search spaces with small feasible regions spurred research into adaptive penalties and other competing methods. A major criticism of these approaches is that they require the user to fine-tune parameters or design problem-dependent operators. We propose to do away with penalty functions for problems over Euclidean space when the constraint is an equality concerning the Euclidean distance. This paper describes an evolutionary algorithm on the unit hypersphere that represents the population with the von Mises-Fisher probability distribution from the field of directional statistics. We demonstrate its utility by solving the support vector classification problem for a few datasets.
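
A compact sketch of such an estimation-of-distribution loop on the unit hypersphere, assuming SciPy 1.11+ for the von Mises-Fisher sampler; the paper's actual update rules (e.g. how the concentration kappa is adapted) may differ:

    import numpy as np
    from scipy.stats import vonmises_fisher  # requires SciPy >= 1.11

    def vmf_eda(f, dim, pop=50, elite=10, kappa=50.0, gens=100, seed=0):
        rng = np.random.default_rng(seed)
        mu = rng.normal(size=dim)
        mu /= np.linalg.norm(mu)
        for _ in range(gens):
            X = vonmises_fisher(mu, kappa).rvs(pop, random_state=rng)
            best = X[np.argsort([f(x) for x in X])[:elite]]  # minimization
            mu = best.mean(axis=0)
            mu /= np.linalg.norm(mu)  # re-project the mean onto the sphere
        return mu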

Reinforcement learning for dynamic optimization problems

Many real-world applications require optimization in dynamic environments, where the challenge is to find optima of a time-dependent objective function while tracking them over time. Many evolutionary approaches have been developed to solve Dynamic Optimization Problems (DOPs). However, there is still a need for more efficient methods. Recently, an interesting new trend in dealing with optimization in dynamic environments has emerged: developing Reinforcement Learning (RL) algorithms that are expected to give new breath to the DOPs community. In this paper, a new Q-learning RL algorithm is developed to deal with DOPs, based on newly defined states and actions that are mainly inspired by Evolutionary Dynamic Optimization (EDO), aiming at an appropriate exploitation of the strengths of both RL and EDO techniques. The proposed RL model has been assessed using the modified Moving Peaks Benchmark (mMPB). Very competitive results and good performance have been achieved compared with other dynamic optimization algorithms.

An empirical study of cooperative frequency in distributed cooperative co-evolution

Cooperative co-evolution (CC) is an effective technique for optimizing high-dimensional problems. The scalability and efficiency of CC can be further improved by combining it with distributed computing, i.e., distributed CC (dCC). How to balance cooperation and communication among sub-problems is an important issue in dCC. In this paper, we construct a series of large-scale benchmark functions and conduct experimental studies to investigate the effect of different cooperative frequencies in dCC. Experiments suggest that the frequency of cooperation does not affect performance on fully separable and partially separable problems, unless frequent coordination among sub-populations consumes too much of the limited computing resources, whereas overlapping problems need a moderate cooperative frequency to achieve better performance.

Disease outbreaks: tuning predictive machine learning

Climate change is expected to exacerbate diarrhoea outbreaks in developing nations, where diarrhoea is a leading cause of morbidity and mortality [3]. Predictive models able to capture complex relationships between climate factors and diarrhoea may be effective for diarrhoea outbreak control. Various supervised Machine Learning (ML) algorithms and Deep Learning (DL) methods have been used to develop predictive models for various diseases [7]. Despite their advances in a range of healthcare applications, overall task performance still largely depends on the available training data and on parameter settings, which is a significant challenge for most predictive machine learning methods [9]. This study investigates the impact of Relevance Estimation and Value Calibration (REVAC) [4], an evolutionary parameter optimization method, on the predictive task performance of various ML and DL methods applied to a range of real-world and synthetic data-sets (diarrhoea- and climate-based) for daily diarrhoea outbreak prediction in a regional case study (South African provinces). Preliminary results indicate that REVAC is better suited to the DL models regardless of the data-set used for making predictions.

Setup of fuzzy hybrid particle swarms: a heuristic approach

This paper presents a framework for systematically investigating and designing fuzzy rulesets for Adaptive Fuzzy Particle Swarm Optimization (AFPSO) algorithms. Training is achieved through Gaussian Processes (GP) supported by Gradient Boosted Regression Trees (GBRT). The meta-objective was defined by ranks on various benchmark functions. Validation benchmarks were also performed on the GECCO '20 bound-constrained optimization competition. The resulting variants, particularly those controlling hybridization with Quantum Particle Swarm Optimization (QPSO), surpassed classical Particle Swarm Optimization (PSO), the Genetic Algorithm (GA) and Differential Evolution (DE) on the training functions. Some level of generalization was also observed on most of the validation set.

CMA-ES with coordinate selection for high-dimensional and ill-conditioned functions

Algorithms for black-box optimization need to consider numerous properties of objective functions in advance. The covariance matrix adaptation evolution strategy (CMA-ES) is known as one of the state-of-the-art algorithms for black-box optimization. Despite its achievements, the CMA-ES fails to minimize objective functions that are high-dimensional and ill-conditioned, such as a 100,000-dimensional Ellipsoid function. This is a serious obstacle to applying the CMA-ES to recent high-dimensional machine learning models. We confirm that the "single" step-size shared by all coordinates is one of the hindrances to the adaptation of the variance-covariance matrix. To solve this, we propose a CMA-ES with coordinate selection. Coordinate selection enables us to vectorize the step-size and adapt each component of the vector to the scale of the selected coordinates. Furthermore, coordinate selection based on estimated curvature reduces the condition number when updating variables in the selected coordinate space. Our method is simple enough to apply easily to most variations of the CMA-ES: one only executes the conventional algorithm in the selected coordinate space. The experimental results show that our method applied to the CMA-ES, the sep-CMA-ES and the VD-CMA outperforms the conventional variations of the CMA-ES in terms of function evaluations and objective value when optimizing high-dimensional and ill-conditioned functions.

Bridging kriging believer and expected improvement using bump hunting for expensive black-box optimization

For several real-world optimization problems, the evaluation of response functions may be expensive, computationally or otherwise. The number of design evaluations one can afford for such problems is therefore severely limited. Surrogate models are commonly used to guide the search for such computationally expensive optimization problems (CEOPs). The surrogate models, built using a limited number of true evaluations, are used to identify the next infill/sampling location. Expected improvement (EI) is a well-known infill criterion which balances exploration and exploitation by accounting for both the mean and the uncertainty of the current model. However, recent studies have shown that, somewhat counter-intuitively, a greedy ("believer") strategy can compete well with EI in solving CEOPs. In this study, we examine the relative performance of the two infill methods across a range of problems, and identify the factors influencing their performance. Based on the empirical analysis, we further propose an algorithm incorporating the strengths of the two strategies. The numerical experiments demonstrate that the proposed algorithm achieves competitive performance across a range of problems with diverse characteristics, making it a strong candidate for solving black-box CEOPs.

Automated feature detection of black-box continuous search-landscapes using neural image recognition

Extracting search-landscape features of high-dimensional Black-Box Optimization (BBO) problems with a minimal number of queries is a grand challenge. Knowledge of such features for a given BBO problem instance offers valuable information in light of the Algorithm Selection and/or Configuration problems. In practice, solving this challenge can reduce the query complexity of BBO when capitalizing on algorithm portfolios. In this study we target the automated recognition of BBO benchmark functions. We propose an identification framework for d-dimensional continuous BBO test-suites, provided as input a set of N search-points, sampled at random, together with their query values. We address it as a supervised multi-class image recognition problem, by introducing the concept of Landscape Images and applying the basic LeNet5 Neural Network to classify them. The solution's core lies in the encapsulation of the BBO functions' data as Landscape Images and the application of neural image recognition to learn their features. The efficacy of our approach is numerically validated on the noiseless COCO/BBOB test-functions, which are demonstrated to be correctly classified at high precision rates (≈90%) when N is in the order of d. This successful learning is another step toward automated feature detection of BBO problems.

A complementarity analysis of the COCO benchmark problems and artificially generated problems

When designing a benchmark problem set, it is important to create a set of benchmark problems that are a good generalization of the set of all possible problems. One possible way of easing this difficult task is by using artificially generated problems. In this paper, one such single-objective continuous problem generation approach is analyzed and compared with the COCO benchmark problem set, a well-known problem set for benchmarking numerical optimization algorithms. Using Exploratory Landscape Analysis and Singular Value Decomposition, we show that such representations allow us to further explore the relations between the problems by applying visualization and correlation analysis techniques, with the goal of decreasing the bias in benchmark problem assessment.

POSTER SESSION: Posters: Genetic algorithms

Three population co-evolution for generating mechanics of endless runner games

Procedural content generation (PCG) is increasingly used to generate many aspects of a variety of games. However, using PCG to generate game mechanics is rarely attempted. Past approaches include using cooperative coevolution to generate the rules and the environment of a game, which produced interesting results. Our approach extends this idea by using coevolution with three populations, also generating the evaluating player. We used this approach to generate endless runner games, for which it was able to generate novel mechanics.

Quantum genetic selection: using a quantum computer to select individuals in genetic algorithms

This paper proposes an innovative selection operator based on concepts from quantum mechanics. In particular, a quantum state is used to embody genetic individuals and their fitness values, and a quantum algorithm known as amplitude amplification is used to modify this state so as to create a quantum superposition in which the probability of measuring an individual is related to its quality. The main peculiarity of this approach is the non-zero probability of selecting individuals not belonging to the current population, so as to create new genetic material and reduce the likelihood that genetic evolution converges to local optima. The suitability of the proposed operator is demonstrated in an experimental session where a comparison with well-known selection methods is carried out on a set of benchmark problems.

Fitness value curves prediction in the evolutionary process of genetic algorithms

The evolutionary process of a Genetic Algorithm (GA) depends on several factors, among them the initialization parameters [4]. One way to inspect the evolutionary process of a GA is to analyze the maximum, average, and minimum fitness of each generation. This article focuses on using a machine learning model to predict the maximum, average, and minimum fitness values during the evolutionary process of a GA, using only knowledge of its initialization parameters. To accomplish this goal, a Random Forest model was trained with data from different GA executions for a given problem. The predictions achieved very promising performance, meeting the challenge of predicting the evolutionary process of a GA with low error rates. This approach opens up several opportunities for advances in the area, contributes to the investigation and improvement of GAs, and demonstrates the importance of monitoring and storing the information generated during their evolutionary process.

A genetic algorithm approach to compute mixed strategy solutions for general Stackelberg games

Stackelberg games have found a role in a number of applications, including modeling market competition, identifying traffic equilibria, developing practical security applications, and many others. While a number of solution approaches have been developed for these games in a variety of contexts using mathematical optimization, analytical analysis or heuristic-based solutions, the literature has been quite sparse on the use of Genetic Algorithm (GA) based techniques. In this paper, we develop a GA-based solution to compute a high-quality mixed strategy for the leader to commit to in a General Stackelberg Game (GSG). Our experiments show that the GA solution developed here indeed scales well and provides reasonably good solution quality in terms of the average reward obtained.

ALF: a fitness-based artificial life form for evolving large-scale neural networks

Topology and Weight Evolving Artificial Neural Network (TWEANN) is a concept to find the topology and weights of Artificial Neural Networks (ANNs) using genetic algorithms. However, a well-known downside is that TWEANN algorithms often evolve inefficient large ANNs for large-scale problems and require long runtimes.

To address this issue, we propose a new TWEANN algorithm called Artificial Life Form (ALF) with the following technical advancements: (1) speciation via structural and semantic similarity to form better candidate solutions, (2) dynamic adaptation of the observed candidate solutions for better convergence properties, and (3) integration of solution quality into genetic reproduction to increase the probability of optimization success. Experiments on large-scale problems confirm that these approaches allow effective solving of these problems and lead to efficient evolved ANNs.

Optimization of multi-objective mixed-integer problems with a model-based evolutionary algorithm in a black-box setting

Mixed-integer optimization, which focuses on problems where discrete and continuous variables exist simultaneously, is a well-known and challenging area for search algorithms. Mixed-integer optimization problems are especially difficult in a black-box setting where no structural problem information is available a priori. In this paper we bring the strengths of the recently proposed Genetic Algorithm for Model-Based mixed-Integer opTimization (GAMBIT) to the multi-objective (MO) domain, and determine whether the promising performance of GAMBIT is maintained. We introduce various mechanisms designed specifically for MO optimization, resulting in MO-GAMBIT. We compare performance (in terms of the number of evaluations used) and runtime with alternative techniques, particularly linear scalarization and a selection of alternative MO algorithms. To this end, we introduce a set of objective functions which vary in composition in terms of discrete and continuous variables, as well as in the strength of dependencies between variables. Our results show that MO-GAMBIT can substantially outperform the alternative MO algorithms, thereby providing a promising new approach for multi-objective mixed-integer optimization in a black-box setting.

A benchmark generator of tree decomposition Mk landscapes

We introduce the CliqueTreeMk algorithm to construct tree-decomposition (TD) Mk landscapes and to compute their global optimum efficiently. TD Mk landscapes are well suited to serve as benchmark functions for black-box genetic algorithms that are not given a priori the structural problem information specified by the tree structure and the associated codomain fitness values. Specifically, for certain types of codomains the use of linkage learning might prove necessary to solve these types of fitness functions.
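
The Mk-landscape form itself is just a sum of k-bounded subfunctions over cliques; a generic evaluator (an illustration of the function class, not CliqueTreeMk itself) looks as follows:

    def mk_fitness(x, cliques, tables):
        # x: bit tuple; cliques: variable-index tuples from the tree
        # decomposition; tables: per-clique codomain values, keyed by the
        # clique's bit pattern.
        return sum(table[tuple(x[i] for i in clique)]
                   for clique, table in zip(cliques, tables))

    # Toy instance: two overlapping cliques on three variables.
    cliques = [(0, 1), (1, 2)]
    tables = [{(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 1.0}] * 2
    print(mk_fitness((1, 1, 1), cliques, tables))  # 2.0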

It's the journey not the destination: building genetic algorithms practitioners can trust

This poster paper presents two recommendations for algorithm developers as best practices in the context of engineering design. Genetic algorithms are well suited to the multi-objective optimisation problems that are common in engineering. However, the use of genetic algorithms in industry remains low. To understand why, 23 participants with varying degrees of expertise in the domain of engineering design took part in a three-part mixed-methods survey. The open-ended questions in the survey were analysed using reflexive thematic analysis. A common theme among participants was a lack of trust in the results of genetic algorithms, as well as in the process by which the algorithms reach a result. Based on this, the following recommendations are made: better communication between developers and engineers, and visualising algorithm behaviour.

POSTER SESSION: Posters: General evolutionary computation and hybrids

A crossover that matches diverse parents together in evolutionary algorithms

Crossover and mutation are the two main operators that lead to new solutions in evolutionary approaches. In this article, a new method of performing the crossover phase is presented. The problem of choice is evolutionary decision tree construction. The method aims to find individuals that complement each other; we say such individuals are diversely specialized. We propose a way of calculating a so-called complementary fitness. In several empirical experiments, we evaluate the efficacy of the proposed method in four variants and compare it to a fitness-rank-based approach. One variant emerges clearly as the best approach, whereas the remaining ones fall below the baseline.

A multimethod approach to multimodal function optimization

Multimodal functions play a central role in artificial intelligence. In this paper we attempt to address limitations in existing research on multimodal function optimization by developing a novel multi-method memetic algorithm (MMA). We empirically test MMA on synthetic and natural combinatorial optimization problems, including feature selection. Our initial experiments suggest that MMA preserves diversity well and consistently finds good solutions.

Elo-based similar-strength opponent sampling for multiobjective competitive coevolution

Multiobjective evolution and competitive coevolution are sub-fields of evolutionary computation which each significantly complicate the concept of fitness. Together, these make multiobjective competitive coevolution very difficult to work with by combining multiple objective values with non-absolute fitness, preventing the use of many established techniques for improving performance in multiobjective or competitive coevolutionary algorithms. Nonetheless, multiobjective scenarios arise frequently in competitive coevolution, such as whenever coevolving agents must consider costs for their actions. This paper proposes a new evaluation method of pairing opponents with similar skill levels in each objective, so that evaluations more efficiently distinguish the performance of similar individuals. This is enabled through the use of per-objective Elo ratings as a surrogate fitness function that prevents bias against individuals assigned stronger opponents. Ratings can further be assigned for asymmetric, non-zero-sum objectives such as cost, allowing individuals to be paired with opponents that incidentally challenge those asymmetric objectives. Mixed results are presented, showing significant benefits from pairing similar opponents, but finding that the use of Elo rating instead of raw fitness harms evolution. A novel statistical test for comparing multiobjective coevolutionary algorithms is also introduced.
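
The per-objective ratings follow the usual Elo update, applied once per objective after each evaluation; a sketch with illustrative parameters:

    def elo_update(r_a, r_b, score_a, k=32.0):
        # score_a: 1 for a win, 0.5 for a draw, 0 for a loss (per objective).
        expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
        delta = k * (score_a - expected_a)
        return r_a + delta, r_b - delta

    print(elo_update(1200.0, 1000.0, 1.0))  # small gain: the win was expected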

OPTION: optimization algorithm benchmarking ontology

Many platforms for benchmarking optimization algorithms offer users the possibility of sharing their experimental data with the purpose of promoting reproducible and reusable research. However, different platforms use different data models and formats, which drastically inhibits identification of relevant data sets, their interpretation, and their interoperability. Consequently, a semantically rich, ontology-based, machine-readable data model is highly desired.

We report in this paper on the development of such an ontology, which we name OPTION (OPTImization algorithm benchmarking ONtology). Our ontology provides the vocabulary needed for semantic annotation of the core entities involved in the benchmarking process, such as algorithms, problems, and evaluation measures. It also provides means for automated data integration, improved interoperability, powerful querying capabilities and reasoning, thereby enriching the value of the benchmark data. We demonstrate the utility of OPTION by annotating and querying a corpus of benchmark performance data from the BBOB workshop, a use case which can easily be extended to cover other benchmarking data collections.

Learning multiple defaults for machine learning algorithms

Modern machine learning methods depend highly on their hyper-parameter configurations for optimal performance. A widely used approach to selecting a configuration is to use the default settings, often proposed along with the publication of a new algorithm. Those default values are usually chosen in an ad-hoc manner to work on a wide variety of datasets. Automatic hyperparameter configuration algorithms that select an optimal configuration per dataset have been proposed; this principled approach usually improves performance, but adds algorithmic complexity and computational costs to the training procedure. Despite its importance, tuning is therefore often skipped in applications because of the additional run time, complexity, and experimental design questions, and the learner is instead applied with its defaults. As an alternative, we propose and study using a set of complementary default values, learned from a large database of prior empirical results. Selecting an appropriate configuration on a new dataset then requires only a simple, efficient, and embarrassingly parallel search over this set. To demonstrate the effectiveness and efficiency of the approach, we compare learned sets of configurations to random search and Bayesian optimization. We show that sets of defaults can improve performance while being easy to deploy in comparison to more complex methods.
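
One plausible reading of the set-construction step (the paper's exact procedure may differ) is a greedy cover over the database of prior results: repeatedly add the configuration that most improves the best-in-set performance averaged over datasets.

    import numpy as np

    def greedy_default_set(perf, k):
        # perf[c, d]: performance of configuration c on dataset d (higher is better).
        chosen = []
        for _ in range(k):
            covered = (perf[chosen].max(axis=0) if chosen
                       else np.full(perf.shape[1], -np.inf))
            gains = np.maximum(perf, covered).mean(axis=1)  # value of adding each config
            gains[chosen] = -np.inf                         # never re-pick
            chosen.append(int(gains.argmax()))
        return chosen

    perf = np.random.rand(200, 40)       # 200 candidate configs, 40 datasets
    print(greedy_default_set(perf, 5))   # indices of 5 complementary defaults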

The factory must grow: automation in Factorio

The video game Factorio has a myriad of gameplay scenarios which are analogous to real-world problems, and is a useful simulator for developing solutions for them. In this paper we define the logistic transport belt problem, we provide an interface to allow optimizers in any programming language to interact with Factorio, and we provide an initial benchmark. Results are presented for three different meta-heuristic techniques to optimize solutions to this novel problem.

Leveraging benchmarking data for informed one-shot dynamic algorithm selection

A key challenge in applying evolutionary algorithms in practice is the selection of an algorithm instance that best suits the problem at hand. What complicates this decision further is that different algorithms may be best suited for different stages of the optimization process. Dynamic algorithm selection and configuration are therefore well-researched topics in evolutionary computation. Two different settings are classically considered: hyper-heuristics and parameter control studies typically assume a setting in which the algorithm needs to be chosen and adjusted during the run, without prior information, whereas other approaches such as hyper-parameter tuning and automated algorithm configuration assume the possibility of evaluating different configurations before making a final recommendation. Practical applications of evolutionary algorithms often fall in a middle ground between these two settings, where one needs to decide upon the algorithm instance before the run (the "one-shot" setting), but where (possibly lots of) data is available on which an informed decision can be based.

We analyze in this work how such prior performance data can be used to infer informed dynamic algorithm selection schemes for the solution of pseudo-Boolean optimization problems. Our specific use-case considers a family of genetic algorithms.

Empirical study of correlations in the fitness landscapes of combinatorial optimization problems

One of the most common problem-solving heuristics is reasoning by analogy. For a given problem, a solver can be viewed as a strategic walk on its fitness landscape; thus, if a solver works for one problem, it is anticipated that it will be effective on problems of the same category whose fitness landscapes share essential structural similarity. However, due to the black-box nature of real-world scenarios, inferring such similarity is far from trivial. To bridge this gap, this paper proposes two alternative approaches to empirically investigate the potential existence of structural similarity among different fitness landscapes. Specifically, we select three classic combinatorial optimization problems to constitute the benchmark set. We apply a local optima network construction routine to build a coarse-grained model of the fitness landscapes of the different problems at various dimensions. Thereafter, we apply a graph embedding method to empirically investigate potential correlations among the resulting local optima networks. Our empirical results provide encouraging evidence of the existence of similarity, not only for a given problem at various dimensions but also across different problems.

POSTER SESSION: Posters: Genetic programming

Genetic programming with a new representation and a new mutation operator for image classification

Due to the high dimensionality and variations of image data, it is challenging to develop an image classification method that is able to capture useful information from images and then conduct classification effectively. This paper proposes a new GP approach to image classification, which can perform feature extraction, feature construction, and classification simultaneously. The new approach can extract and construct multiple informative features to effectively handle image variations. Furthermore, a new mutation operator is developed to dynamically adjust the size of the evolved GP programs. The experimental results show that the proposed approach achieves performance significantly better than, or similar to, that of the baseline methods on two datasets.

Empirical analysis of variance for genetic programming based symbolic regression

Genetic programming (GP) based symbolic regression is a stochastic, high-variance algorithm. Its sensitivity to changes in training data is a drawback for practical applications.

In this work, we analyze empirically the variance of GP models on the PennML benchmarks. We measure the spread of model predictions when models are trained on slightly perturbed data. We compare the spread of models from two GP variants as well as linear, polynomial and random forest regression models.

The results show that the spread of models from GP with local optimization is significantly higher than that of all other algorithms. As a side effect of our analysis, we provide evidence that the PennML benchmark contains two groups of instances (Friedman and real-world problem instances) on which GP performs significantly differently.

Fitness first and fatherless crossover

When programs are very large and fitness evaluation is fast (e.g. due to evaluating only a tiny fraction of the program), it can be efficient to evaluate fitness before creating the program, as only individuals with children need be created. The saving can be > 50%, and even in highly converged populations it saves exp(-2) ≈ 13.5%.

In GP crossover one child is created from two parents, but the root-donating parent (mum) contributes far more than the other (dad). The subtree from the father is usually small and can be extracted from each dad and saved before crossover. This gives single-parent crossover, which, when combined with fitness first, further reduces the number of crossovers. Even in the worst case the reduction is exp(-1) ≈ 37%.

With large trees, even in populations of similar fitness, eliminating bachelors and spinsters is feasible and can reduce both runtime and memory consumption. Storage in an N-threaded implementation for a population of M is about 0.63M + N, compared to the usual M + 2N, in practice saving ≥ 17%. We achieve 692 billion GP operations per second (692 giga GPops) on an Intel i7-9800X 16-thread 3.8GHz desktop (CPU bandwidth 85 GByte/second).
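
The quoted fractions follow from uniform parent selection: an individual avoids being picked as mum in M independent draws with probability (1 - 1/M)^M ≈ exp(-1), and avoids both parent roles with probability ≈ exp(-2). A minimal simulation (not the paper's code) reproduces the figures:

    import random

    M = 100_000  # population size; any large value shows the limiting behaviour
    mums = {random.randrange(M) for _ in range(M)}  # root-donating parents
    dads = {random.randrange(M) for _ in range(M)}  # subtree-donating parents

    print(1 - len(mums) / M)         # never a mum        ~ exp(-1) ~ 0.368
    print(1 - len(mums | dads) / M)  # never a parent     ~ exp(-2) ~ 0.135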

"Re-ID BUFF": an enhanced similarity measurement based on genetic programming for person re-identification

Person re-identification (re-ID) is fundamental to successful cross-camera tracking. Its overall process is divided into two stages: feature extraction and matching. In the matching stage, the simple Euclidean measure has limited scoring ability because it does not take vector directions into account. The majority of recent contributions focus on using the neighborhood information of the query image, which makes the calculation process more complex. In addition, these similarity measurements are predefined rather than learned. In this paper, we propose a re-ID similarity measurement based on Genetic Programming (GP). The similarity measurement formula is automatically evolved and constructed through optimization using a function set, terminal set, and fitness function targeting re-ID. For the GP training process, we propose a feature triplet dataset, analogous to the triplet loss, built from existing re-ID datasets. We conduct extensive experiments on three benchmark datasets, comparing features of different quality and various measurements. The proposed method performs better than the Euclidean measure and achieves a general improvement in mAP and rank-1 when combined with other metrics. Therefore, our method can be used as a gain "buff" to enhance the score of other methods on their original basis.

Linear-dependent multi-interpretation neuro-encoded expression programming

Neuro-Encoded Expression Programming (NEEP) implements continuous coding for discrete solutions through recurrent neural networks (RNNs) and thereby smooths the sharpness of the discrete coding. However, the insertion model that generates linear coding in NEEP breaks the coherence of the RNNs' linear coding, because the resulting symbols tend to be cluttered when RNNs learn the incoherent sequence relationships. Meanwhile, the redundancy phenomenon, in which different RNNs generate the same code, means that many solutions with the same performance exist in the search space, which decreases search efficiency. To address these problems, linear-dependent multi-interpretation NEEP (LM-NEEP) is proposed in this research. LM-NEEP tackles the incoherence problem by employing a linear-dependence strategy, and a multi-interpretation strategy is adopted to deal with the redundancy problem in the search space. The capability of LM-NEEP is evaluated on several symbolic regression problems. The experimental results show that LM-NEEP significantly outperforms NEEP and some classical genetic programming methods.

Principled quality diversity for ensemble classifiers using MAP-Elites

For many supervised learning tasks, ensemble classifiers - which make predictions by combining multiple simple models - outperform single model classifiers. While genetic programming can be used to evolve populations of simple classifiers, it tends to produce populations of highly similar models. In this work we propose Neuro MAP-Elites (NME) as a method for evolving populations of high performing models which produce diverse predictions, making them suitable for constructing ensembles.

Improving estimation of distribution genetic programming with novelty initialization

Estimation of distribution genetic programming (EDA-GP) replaces the standard variation operations of genetic programming (GP) by learning and sampling from a probabilistic model. Unfortunately, many EDA-GP approaches suffer from a rapidly decreasing population diversity, which often leads to premature convergence. Novelty search, an approach that searches for novel solutions to cover sparse areas of the search space, can counteract this by generating diverse initial populations. In this work, we propose novelty initialization and test the new method on a generalization of the royal tree problem, comparing its performance to ramped half-and-half (RHH) using a recent EDA-GP approach. We find that novelty initialization provides higher diversity than RHH and that the EDA-GP also achieves a better average fitness when using novelty initialization.
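
A minimal sketch of the idea behind a novelty-driven initializer (hypothetical representation: bitstrings stand in for GP trees and Hamming distance for a behavioural distance; the paper's method operates on programs):

    import random

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def novelty_init(pop_size, candidates_per_slot=20, length=32):
        population = []
        for _ in range(pop_size):
            candidates = [[random.randint(0, 1) for _ in range(length)]
                          for _ in range(candidates_per_slot)]
            if not population:
                population.append(candidates[0])
                continue
            # Keep the candidate whose nearest already-selected neighbour
            # is farthest away, i.e. the most novel candidate.
            best = max(candidates,
                       key=lambda c: min(hamming(c, p) for p in population))
            population.append(best)
        return population

    print(len(novelty_init(10)))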

GLEAM: genetic learning by extraction and absorption of modules

Efforts to evolve modularity in genetic programming are as old as the field itself. While many techniques have been proposed to evolve programs that make use of simultaneously-evolved modules, the general problem of evolving large-scale, modular software still stands as a central challenge to the field. In this paper, we present GLEAM, a mechanism of achieving modularity, whereby an evolving program uses a local library of modules. The program can refer to the modules during execution and the ones not used by the program are eventually replaced by code segments extracted from the program itself. Modules can also be absorbed by the program when their references in the program get expanded to the full code segments. We show that this simple mechanism improves the success rate on a number of program synthesis problems.

Improving the generalisation of genetic programming models with evaluation time and asynchronous parallel computing

In genetic programming (GP), controlling complexity often means reducing the size of evolved expressions; however, previous studies show that size reduction may not prevent model overfitting. Therefore, in this study, we use the evaluation time --- the computational time required to evaluate a GP model on data --- as the estimate of model complexity. The evaluation time depends not only on the size of the evolved expressions but also on their composition, making it a more nuanced measure of model complexity than size alone. To constrain complexity using this measure, we employ an explicit control technique and a method that creates a race condition between evaluations. Our experiments use a hybridisation of GP and multiple linear regression (MLRGP) that discovers useful features to boost training performance. The improved training increases the chances of overfitting and thus facilitates a study of how managing complexity with evaluation time can address it. MLRGP also allows us to observe the relationship between evaluation time and the number of features in a model. The results show that constraining the evaluation time of MLRGP leads to better generalisation than both plain MLRGP and MLRGP with an effective bloat control.
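
A minimal sketch of explicit evaluation-time control (the additive penalty form and coefficient are hypothetical; the paper also studies an asynchronous race between evaluations rather than a penalty term):

    import time

    def timed_fitness(model, X, y, time_coeff=100.0):
        # Mean squared error plus a penalty on the measured evaluation time,
        # so slower (more complex) models are selected against.
        start = time.perf_counter()
        error = sum((model(row) - target) ** 2 for row, target in zip(X, y))
        elapsed = time.perf_counter() - start
        return error / len(X) + time_coeff * elapsed  # lower is better

    X = [[1.0], [2.0], [3.0]]
    y = [2.0, 4.0, 6.0]
    print(timed_fitness(lambda row: 2.0 * row[0], X, y))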

Neurally guided transfer learning for genetic programming

A key challenge in GP is how to learn from the past, so that the successful synthesis of simple programs can feed into more challenging unsolved problems. In this work we present a transfer learning (TL) mechanism for GP which extracts fragments from programs it synthesises, then employs deep neural networks to identify new problems in which to deploy them, boosting performance. This end-to-end system requires no human identification of which programs to use as donors for TL; the system needs only IO examples.

Adversarial bandit gene expression programming for symbolic regression

Gene expression programming (GEP) is a commonly used approach in symbolic regression (SR). However, GEP often falls into premature convergence and may only reach a local optimum. To address the premature convergence problem, we propose a novel algorithm based on an adversarial bandit technique, named AB-GEP. AB-GEP segments the mathematical space into many subspaces. It leverages a new selection method, AvgExp3, to enhance population jumps between segmented subspaces while maintaining population diversity. AvgExp3 dynamically estimates the value of a subspace from rewards generated during the AB-GEP run, without any assumption about the distribution of subspace rewards, enabling AB-GEP to choose appropriate subspaces that contain correct results. The experimental evaluation shows that the proposed AB-GEP method can maintain population diversity and obtains better results than canonical GEP methods.
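
For orientation, the sketch below shows standard Exp3, the adversarial bandit algorithm this family of methods builds on; the paper's AvgExp3 modifies the reward estimate, and the reward function here is a hypothetical stand-in for progress made inside a chosen subspace:

    import math, random

    def exp3(num_arms, pull, rounds=1000, gamma=0.1):
        # Standard Exp3; `pull(arm)` must return a reward in [0, 1].
        weights = [1.0] * num_arms
        for _ in range(rounds):
            total = sum(weights)
            probs = [(1 - gamma) * w / total + gamma / num_arms
                     for w in weights]
            arm = random.choices(range(num_arms), weights=probs)[0]
            reward = pull(arm)
            # Importance-weighted update keeps the reward estimate unbiased.
            weights[arm] *= math.exp(gamma * reward / (probs[arm] * num_arms))
            m = max(weights)                     # rescale to avoid overflow
            weights = [w / m for w in weights]
        return weights

    print(exp3(3, lambda arm: (0.2, 0.8, 0.5)[arm]))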

POSTER SESSION: Posters: Neuroevolution

Evolving reservoir weights in the frequency domain

Reservoir Computing models are a class of recurrent neural networks that have enjoyed recent attention, in particular their main family, Echo State Networks (ESNs). These models have a large number of hidden-hidden weights (in the so-called reservoir) forming a recurrent topology. The reservoir is randomly connected and its weights are fixed during learning: only the readout parameters (from reservoir to output neurons) are trained, while the reservoir weights are frozen after initialization. Since the reservoir structure is fixed during learning, only its initialization process has an impact on the model's performance.

In this work, we introduce an evolutionary method for adjusting the non-null reservoir weights. Moreover, the evolutionary process runs in the frequency space corresponding to a Fourier transformation of the weights. We combine an evolutionary search in the Fourier space with supervised learning of the readout weights. The resulting algorithm, called EvoESN (Evolutionary ESN), obtains promising results on two well-known chaotic time-series problems.

Evolving transformer architecture for neural machine translation

Transformer models have achieved great success on neural machine translation tasks in recent years. However, the hyper-parameters of the transformer are often designed manually by experts, and its layers are regularly stacked together without exploring potentially promising ordering patterns. In this paper, we propose a transformer architecture design algorithm based on a genetic algorithm, which can automatically find proper layer ordering patterns and hyper-parameters for the task at hand. The experimental results show that the models designed by the proposed algorithm outperform the vanilla transformer on a widely used machine translation benchmark, which reveals that the performance of the transformer architecture can be improved by adjusting the layer ordering pattern and hyper-parameters with the proposed algorithm.

Growth and evolution of deep neural networks from gene regulatory networks

A simple population-based Evolutionary Algorithm (EA) was used to evolve convolutional neural networks for solving an image classification problem (CIFAR10). Each member of the population was defined by a genome. This work proposes the construction of a genome based closely on the natural world. The genes within such a genome regulate each other's expression and hence build a gene regulatory network (GRN). In the proposed approach, the genome contains no information from the problem space and could in principle be applied to any application. The genome behaves as an evolved program that grows multi-cellular organisms through a developmental process from an initial single cell. The cellular structure is an intermediate phenotype which is then mapped to its final form, in this case a convolutional neural network. The proposed GRN approach was able to evolve successful networks whose level of performance is comparable to a LeNet5 implementation.

A NEAT-based multiclass classification method with class binarization

Multiclass classification is a fundamental and challenging task in machine learning. Class binarization is a popular method to achieve multiclass classification by converting multiclass classification to multiple binary classifications. NeuroEvolution, such as NeuroEvolution of Augmenting Topologies (NEAT), is broadly used to generate Artificial Neural Networks by applying evolutionary algorithms. In this paper, we propose a new method, ECOC-NEAT, which applies Error-Correcting Output Codes (ECOC) to improve the multiclass classification of NEAT. The experimental results illustrate that ECOC-NEAT with a considerable number of binary classifiers is highly likely to perform well. ECOC-NEAT also shows significant benefits in a flexible number of binary classifiers and strong robustness against errors.
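
A minimal sketch of ECOC decoding (hypothetical 4-class, 6-bit codebook; in ECOC-NEAT each code column would be handled by one evolved binary classifier):

    CODEBOOK = {
        "A": (0, 0, 1, 1, 0, 1),
        "B": (1, 0, 0, 0, 1, 1),
        "C": (0, 1, 1, 0, 1, 0),
        "D": (1, 1, 0, 1, 0, 0),
    }

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def ecoc_predict(binary_outputs):
        # One 0/1 output per binary classifier; the class whose codeword is
        # closest in Hamming distance wins, so a few errors are corrected.
        return min(CODEBOOK, key=lambda c: hamming(CODEBOOK[c], binary_outputs))

    print(ecoc_predict((1, 0, 0, 1, 1, 1)))  # -> "B" despite one flipped bit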

On the exploitation of neuroevolutionary information

The final outcome of a neuroevolutionary process is commonly the best structure found during the search, while a good amount of residual information, from which valuable knowledge could be extracted, is usually discarded. We propose an approach that extracts this information from neuroevolutionary runs and uses it to build a Bayesian network-based metamodel that can positively impact future neural architecture searches. The metamodel is learned from the best solutions found in previous GAN structural searches and is used to improve subsequent neuroevolutionary searches.

Modeling the evolution of retina neural network

Vital to primary visual processing, retinal circuitry shows many similar structures across a very broad array of species, both vertebrate and non-vertebrate, especially functional components such as lateral inhibition. This surprisingly conservative pattern raises the question of how evolution leads to it, and whether there are alternatives that can also provide helpful preprocessing. Here we design a method using a genetic algorithm that, with many degrees of freedom, leads to architectures whose functions are similar to the biological retina, as well as to effective alternatives that differ in structure and function. We compare this model to natural evolution and discuss how our framework can inspire neuroevolution.

A coevolutionary approach to deep multi-agent reinforcement learning

Traditionally, Deep Artificial Neural Networks (DNNs) are trained through gradient descent. Recent research shows that Deep Neuroevolution (DNE) is also capable of evolving multi-million-parameter DNNs, which has proved particularly useful in the field of Reinforcement Learning (RL). This is mainly due to its excellent scalability and simplicity compared to traditional MDP-based RL methods. So far, however, DNE has only been applied to complex single-agent problems. As evolutionary methods are a natural choice for multi-agent problems, the question arises whether DNE can also be applied in a complex multi-agent setting. In this paper, we describe and validate a new approach based on coevolution. To validate our approach, we benchmark two deep coevolutionary algorithms on a range of multi-agent Atari games and compare our results against those of Ape-X DQN. Our results show that these deep coevolutionary algorithms (1) can be successfully trained to play various games, (2) outperform Ape-X DQN on some of them, and therefore (3) show that coevolution can be a viable approach to solving complex multi-agent decision-making problems.

Evolving neuronal plasticity rules using cartesian genetic programming

We formulate the search for phenomenological models of synaptic plasticity as an optimization problem. We employ Cartesian genetic programming to evolve biologically plausible, human-interpretable plasticity rules that allow a given network to successfully solve tasks from specific task families. While our evolving-to-learn approach can be applied to various learning paradigms, here we illustrate its power by evolving plasticity rules that allow a network to efficiently determine the first principal component of its input distribution. We demonstrate that the evolved rules perform competitively with known hand-designed solutions. We explore how the statistical properties of the datasets used during the evolutionary search influence the form of the plasticity rules, and we discover new rules that are adapted to the structure of the corresponding datasets.

A transfer learning based evolutionary deep learning framework to evolve convolutional neural networks

The manual design of CNNs has become exceptionally complex due to increasingly sophisticated CNN architectures. Thankfully, more and more researchers endeavour to mitigate the difficulty of manual design by devising automated processes, but the computational cost of these automatic methods is extremely high due to the huge search space. In this paper, an evolutionary deep learning framework based on transfer learning is proposed to reduce the computational cost while keeping classification performance at a competitive level. The main idea is to evolve a CNN block on smaller datasets and then increase the capacity of the evolved block to handle larger datasets. The proposed method obtains good CNNs in less than 40 GPU-hours and achieves a promising error rate of 3.46% on the CIFAR-10 dataset.

Neuroevolution of a recurrent neural network for spatial and working memory in a simulated robotic environment

We evolved weights in a recurrent neural network (RNN) to replicate the behavior and neural activity observed in rats during a spatial and working memory task. The rat was simulated using a robot simulator navigating a virtual maze. After evolving the weights from the sensory inputs to the RNN, within the RNN, and from the RNN to the robot's motors, the robot successfully navigated the space to reach the four reward arms with minimal repeats before the timeout. Our findings suggest that it is the RNN dynamics that are key to performance, and that performance is not dependent on any single sensory type, which suggests that neurons in the RNN perform mixed selectivity and conjunctive coding. The RNN activity resembles the spatial information and trajectory-dependent coding observed in the hippocampus. The evolved RNN exhibits navigation skills, spatial memory, and working memory.

POSTER SESSION: Posters: Real world applications

Selecting miners within blockchain-based systems using evolutionary algorithms for energy optimisation

In this paper, we represent the problem of selecting miners within a blockchain-based system as a subset selection problem. We formulate the problem of minimising blockchain energy consumption as an optimisation problem with two conflicting objectives: energy consumption and trust. The proposed model is compared across different algorithms to demonstrate its performance.

Resource planning for hospitals under special consideration of the COVID-19 pandemic: optimization and sensitivity analysis

Pandemics pose a serious challenge to health-care institutions. To support the resource planning of health authorities from the Cologne region, BaBSim.Hospital, a tool for capacity planning based on discrete event simulation, was created. The predictive quality of the simulation is determined by 29 parameters with reasonable default values obtained in discussions with medical professionals. We aim to investigate and optimize these parameters to improve BaBSim.Hospital using a surrogate-based optimization approach and an in-depth sensitivity analysis.

Risk aware optimization of water sensor placement

This paper addresses the identification of optimal "sensing spots" within a network for monitoring the spread of "effects" triggered by "events". Many real-world problems fit into this general framework; we focus on the early detection of contamination events in Water Distribution Networks (WDN). We model sensor placement as a bi-objective optimization problem, aiming at minimizing the mean and the standard deviation of the detection time over a set of different simulated contamination events, and solve it using NSGA-II. A problem-specific data structure is proposed, enabling a deeper analysis of the empirical convergence of the population.
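
A minimal sketch of the two objectives (the data layout is hypothetical: events[e][n] is the time at which simulated event e reaches node n):

    import random
    import statistics

    random.seed(1)
    events = [[random.uniform(0.0, 24.0) for _ in range(50)]
              for _ in range(200)]  # 200 simulated contamination events

    def objectives(sensor_nodes):
        # Detection time of an event = earliest arrival at any chosen sensor.
        times = [min(event[s] for s in sensor_nodes) for event in events]
        return statistics.mean(times), statistics.stdev(times)  # both minimised

    print(objectives([3, 17, 42]))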

Distributed evolutionary design of HIFU treatment plans

High-Intensity Focused Ultrasound (HIFU) is a modern and still evolving technique used to treat a variety of solid malignant cells in a well-defined volume. HIFU offers a noninvasive and non-ionising approach, in contrast to more conventional cancer treatments, which can result in a multitude of complications after treatment.

In recent years, a realistic thermal model using patient-specific materials was introduced and an evolutionary strategy was employed to design HIFU treatment plans. However, the execution times of such a model were too prohibitive for routine use. In this poster, we present the latest results of experiments carried out on a further optimized fitness model using a distributed evolutionary algorithm. The experiments showed up to a 6-fold decrease in evaluation time.

An optimal oil skimmer assignment based on a genetic algorithm with minimal mobilized locations

Oil spill cleanups in the ocean often require oil skimmers to be mobilized from all the reserved locations, which is not efficient. In this study, optimization was performed to minimize the number of mobilized locations in the arrangement from an existing study. For this purpose, a simulation was run to validate the solutions produced by the genetic algorithm on scenarios similar to actual situations. The scenarios are based on 19 of the largest oil spills that have occurred in South Korea, and the results are compared with the existing work-time minimization strategy. By utilizing the mobilization-point minimization strategy, the number of areas required for a given work time was reduced by approximately 12.38% on average, and work time by approximately 7.08% on average.

Diagnosing autonomous vehicle driving criteria with an adversarial evolutionary algorithm

We repurposed an adversarial evolutionary algorithm, Gremlin, from finding driving scenarios in which a model of an autonomous vehicle drove poorly to troubleshooting driving-quality evaluation criteria. We evaluated the driving performance of a "perfect driver" robot in a virtual town environment using the same fitness criteria intended for a deep learner (DL) trained driver. We found that the fitness evaluation criteria handled turns poorly, and we used Gremlin to iteratively improve those criteria. We are confident that the same criteria can now be applied to the DL-based models as originally intended, and that this approach can serve as a general means of troubleshooting autonomous vehicle driving criteria.

Optimizing the parameters of a physical exercise dose-response model: an algorithmic comparison

The purpose of this research was to compare the robustness and performance of a local and a global optimization algorithm applied to the problem of fitting the parameters of a non-linear dose-response model used in the field of exercise physiology. Traditionally, the parameters of dose-response models in exercise physiology have been fitted with non-linear least squares procedures in combination with local optimization algorithms, which have demonstrated limitations in their ability to converge on a globally optimal solution. This research proposes the use of an evolutionary computation based algorithm as an alternative method for fitting a non-linear dose-response model. The results of our comparison over 1000 experimental runs demonstrate the superior performance of the evolutionary computation based algorithm, which consistently achieved a better model fit and holdout evaluation performance than the local search algorithm. This initial research suggests that global evolutionary computation based optimization algorithms are a fast and more robust alternative to local optimization algorithms when fitting the parameters of non-linear dose-response models.

Dealing with a problematic roundabout by optimizing a traffic light system through evolutionary computation

The Padre Anchieta roundabout (Tenerife) takes on a substantial amount of traffic which, at peak times, often causes significant traffic jams. With the aim of alleviating traffic congestion, this paper analyses the possibility of installing a set of traffic lights at the roundabout. The durations of the traffic light phases are optimized through a genetic algorithm. SUMO, an open-source traffic simulator, is used to evaluate the traffic of the area using real traffic data. Seven different scenarios are evaluated for the roundabout, three of them without traffic lights and the other four with them. In addition, the number of pedestrians and the particular locations of the traffic lights are varied in each of the aforementioned scenarios, where applicable. To determine which parameters of the evolutionary algorithm provide the best results, a preliminary parameter-setting study based on a statistical comparison procedure was performed; in particular, the crossover operator and the population size were analyzed. From the results obtained in the experimental assessment, we conclude that the use of an optimized traffic light system would not improve traffic flow at the roundabout. Another important conclusion is that the larger the number of pedestrians, the slower the traffic flow.

Weighted ensemble of gross error detection methods based on particle swarm optimization

Gross errors, a kind of non-random error caused by process disturbances or leaks, can make reconciled estimates very inaccurate and even infeasible. Detecting gross errors thus prevents financial loss from incorrect accounting and also identifies potential environmental consequences of leaking. In this study, we develop an ensemble of gross error detection (GED) methods to improve the effectiveness of gross error identification on measurement data. We propose a weighted combination of the outputs of all constituent GED methods and compare the combined result to a threshold to decide on the presence of a gross error. We generate a set of measurements with and without gross errors and then minimize the GED error rate of the proposed ensemble on this set with respect to the combining weights and the threshold. The Particle Swarm Optimization method is used to solve this optimization problem. Experiments conducted on a simulated system show that our ensemble outperforms all constituent GED methods as well as two existing ensemble methods.
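
A minimal sketch of the weighted combination step (all scores are hypothetical; in the paper, PSO tunes the weights and the threshold by minimising the GED error rate on generated measurement sets):

    def gross_error_flagged(method_scores, weights, threshold):
        # Each constituent GED method reports a score in [0, 1]; their
        # weighted average is compared against the tuned threshold.
        combined = (sum(w * s for w, s in zip(weights, method_scores))
                    / sum(weights))
        return combined >= threshold

    print(gross_error_flagged([0.9, 0.4, 0.7],
                              weights=[0.5, 0.2, 0.3],
                              threshold=0.6))  # -> True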

Wastewater systems planned maintenance scheduling using multi-objective optimisation

Water utilities (WU) in the UK are responsible for providing sewerage disposal for customers across the country. Blockages within sewer networks can be disruptive for customers, damaging to the local environment, and costly to rectify. Effective scheduling of preventative maintenance (PM) is therefore important for WU to prevent blockages, reduce costs, and protect the environment. In this paper, we describe a novel multi-objective optimisation methodology for the scheduling of PM, applied to a case study in Swansea, Wales. The results of real-world trials demonstrate that solutions generated by the proposed method achieve a 13.8% increase in jobs completed compared to the standard approach used in November 2019.

Evolving potential field parameters for deploying UAV-based two-hop wireless mesh networks

We attack the problem of dynamic UAV-based ad-hoc mesh network deployment to find and connect people in an area of interest. Genetic algorithms tune the parameters of the potential fields that control UAV movement in order to optimize people coverage and network longevity. We extend earlier work that assumed one-hop communication between UAVs to the more realistic two-hop case and find a significant increase in coverage and network longevity. Experimental results show that enabling two-hop communications between UAVs improves performance by between 4% and 32.19% on average.
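
A minimal 2-D sketch of a potential-field update of the kind being tuned (the gains and step size are hypothetical; in the paper a genetic algorithm evolves such parameters):

    def potential_step(uav, peers, target, k_att=1.0, k_rep=4.0, step=0.1):
        # Attraction towards the target...
        fx = k_att * (target[0] - uav[0])
        fy = k_att * (target[1] - uav[1])
        # ...plus repulsion from every other UAV, fading with distance.
        for px, py in peers:
            dx, dy = uav[0] - px, uav[1] - py
            d2 = dx * dx + dy * dy or 1e-9  # avoid division by zero
            fx += k_rep * dx / d2
            fy += k_rep * dy / d2
        return uav[0] + step * fx, uav[1] + step * fy

    print(potential_step((0.0, 0.0), peers=[(1.0, 0.0)], target=(5.0, 5.0)))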

ARCH-Elites: quality-diversity for urban design

This paper introduces ARCH-Elites, a MAP-Elites implementation that can reconfigure large-scale urban layouts at real-world locations via a pre-trained surrogate model instead of costly simulations. In a series of experiments, we generate novel urban designs for two real-world locations in Boston, Massachusetts. Combining the exploration of a possibility space with real-time performance evaluation creates a powerful new paradigm for architectural generative design that can extract and articulate design intelligence.

Optimising the introduction of connected and autonomous vehicles in a public transport system using macro-level mobility simulations and evolutionary algorithms

The past five years have seen a rapid development of plans and test pilots aimed at introducing connected and autonomous vehicles (CAVs) in public transport systems around the world. Using a real-world scenario from the Leeds Metropolitan Area as a case study, we demonstrate an effective way to combine macro-level mobility simulations based on open data (i.e., geographic information system information and transit timetables) with evolutionary optimisation techniques to discover realistic optimised integration routes for CAVs. The macro-level mobility simulations are used to assess the quality (i.e., fitness) of a potential CAV route by quantifying geographic accessibility improvements using an extended version of Dijkstra's algorithm on an abstract multi-modal transport network.
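
A minimal sketch of the core of such an accessibility computation (plain Dijkstra over a time-weighted graph with a toy network; the paper's extended version additionally handles multi-modal, timetabled transit edges):

    import heapq

    def travel_times(graph, source):
        # graph[node] -> iterable of (neighbour, minutes) edges.
        dist = {source: 0.0}
        queue = [(0.0, source)]
        while queue:
            d, node = heapq.heappop(queue)
            if d > dist.get(node, float("inf")):
                continue  # stale queue entry
            for nbr, minutes in graph[node]:
                nd = d + minutes
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    heapq.heappush(queue, (nd, nbr))
        return dist  # accessibility = how much is reachable per time budget

    network = {"A": [("B", 5.0)], "B": [("C", 7.0)], "C": []}
    print(travel_times(network, "A"))  # {'A': 0.0, 'B': 5.0, 'C': 12.0}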

Structural damage identification under non-linear EOV effects using genetic programming

The methods for long-term condition monitoring of structures can generally be divided into two main categories, i.e. output-only and input-output methods. The former seek to detect anomalies in the structural modal data caused by damage without knowing the cause. Input-output normalisation methods have been used for removing Environmental and Operational Variation (EOV) effects; some examples include multiple linear regression [1], artificial neural networks [4] and support vector regression [2].

Towards higher order fairness functionals for smooth path planning

Smoothness of mobile robot and vehicle navigation has become relevant for ensuring safe and comfortable riding. The robotics community has been able to render smooth trajectories in mobile robots by using non-linear optimization approaches and well-known fairness metrics that consider the curvature variations along the path. In this paper, we introduce the possibility of computing smooth paths from observed mobile robot trajectories using higher-order non-linear fairness functionals. Our approach can potentially enable the generation of simple and computationally efficient path-planning smoothing for navigation in mobile robots.

Novelty search for evolving interesting character mechanics for a two-player video game

Computational generation of video game content, often referred to as procedural content generation (PCG), holds much promise for generating character mechanics. Character mechanics refers to how characters are allowed to move and behave in a computer game, as opposed to aesthetics such as graphics and audio. In this paper we study how to generate character mechanics automatically by means of novelty search. Our results show that some of the auto-generated characters are perceived by human subjects as more interesting than built-in game characters.

Optimising pheromone communication in a UAV swarm

Communication in multi-robot systems is a key factor, especially when aiming for real-world applications. In this article we address the optimisation of communications in a swarm of unmanned aerial vehicles for surveillance applications. More precisely, a genetic algorithm is introduced to optimise the exchange of the pheromone maps used in the CACOC (Chaotic Ant Colony Optimisation for Coverage) mobility model, which enhances the vehicles' routes in order to achieve unpredictable trajectories as well as maximise area coverage.

A new pathway to approximate energy expenditure and recovery of an athlete

This work proposes to use evolutionary computation as a pathway to allow a new perspective on the modeling of energy expenditure and recovery of an individual athlete during exercise.

We revisit a theoretical concept called the "three component hydraulic model" which is designed to simulate metabolic systems during exercise and which is able to address recently highlighted shortcomings of currently applied performance models. This hydraulic model has not been entirely validated on individual athletes because it depends on physiological measures that cannot be acquired in the required precision or quantity.

This paper introduces a generalized interpretation and formalization of the three component hydraulic model that removes its ties to concrete metabolic measures and allows evolutionary computation to be used to fit its parameters to an individual athlete.

Unit-aware multi-objective genetic programming for the prediction of the stokes flow around a sphere

In this article we apply a unit-aware Genetic Programming (GP) approach to solve a problem from the area of fluid-dynamics: The Stokes flow around a sphere. We formulate 6 test instances with different complexities and explore the capabilities of single- and multi-objective GP variants to solve this problem with physically correct units of measurement. The study is a starting point to investigate the amount of information necessary to solve fluid-dynamics-related problems, and whether the inclusion of physical dimensions is advantageous or not for such optimization tasks. From the simple flow presented in this study we aim to extend this research to more complex flows with multiple spheres and finite Reynolds numbers.

POSTER SESSION: Posters: Search-based software engineering

Neurogenetic programming framework for explainable reinforcement learning

Automatic programming, the task of generating computer programs compliant with a specification without a human developer, is usually tackled either via genetic programming methods based on mutation and recombination of programs, or via neural language models. We propose a novel method that combines both approaches using a concept of a virtual neuro-genetic programmer, or scrum team. We demonstrate its ability to provide performant and explainable solutions for various OpenAI Gym tasks, as well as inject expert knowledge into the otherwise data-driven search for solutions.

Using knowledge of human-generated code to bias the search in program synthesis with grammatical evolution

Recent studies show that program synthesis with GE produces code that has different structure compared to human-generated code, e.g., loops and conditions are hardly used. In this article, we extract knowledge from human-generated code to guide evolutionary search. We use a large code-corpus that was mined from the open software repository service GitHub and measure software metrics and properties describing the code-base. We use this knowledge to guide the search by incorporating a new selection scheme. Our new selection scheme favors programs that are structurally similar to the programs in the GitHub code-base. We find noticeable evidence that software metrics can help in guiding evolutionary search.

POSTER SESSION: Posters: Theory

On the effectiveness of restarting local search

Premature convergence can be detrimental to the performance of search methods, which is why many search algorithms include restart strategies to deal with it. While it is common to perturb the incumbent solution with diversification steps of various sizes with the hope that the search method will find a new basin of attraction leading to a better local optimum, it is usually unclear whether this strategy is effective. To establish a connection between restart effectiveness and properties of a problem, we introduce a new property of fitness landscapes termed Neighbours with Similar Fitness. We conjecture that this property is true for many PLS-complete problems, and we argue that the effectiveness of a restart strategy depends on this property.

Affine OneMax

A new class of test functions for black box optimization is introduced. Affine OneMax (AOM) functions are defined as compositions of OneMax and invertible affine maps on bit vectors. The black box complexity of the class is upper bounded by a polynomial of large degree in the dimension. Tunable complexity is achieved by expressing invertible linear maps as finite products of transvections. Finally, experimental results are given to illustrate the performance of search algorithms on AOM functions.
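
A minimal sketch of an AOM function built as described (standard GF(2) arithmetic; the dimension and number of transvection factors are illustrative choices, not the paper's settings):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 16

    # Invertible map over GF(2) as a product of transvections: each factor is
    # the identity with one extra off-diagonal 1 and is self-inverse mod 2.
    A = np.eye(n, dtype=np.uint8)
    for _ in range(64):
        i, j = rng.choice(n, size=2, replace=False)
        T = np.eye(n, dtype=np.uint8)
        T[i, j] = 1
        A = (A @ T) % 2
    b = rng.integers(0, 2, size=n, dtype=np.uint8)

    def aom(x):
        # OneMax applied after the invertible affine map x -> Ax + b.
        return int(((A @ x + b) % 2).sum())

    print(aom(rng.integers(0, 2, size=n, dtype=np.uint8)))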

Time complexity analysis of the deductive sort in the best case

Non-dominated sorting is one of the important steps in multiobjective evolutionary algorithms (MOEAs) based on the Pareto dominance concept. Though non-dominated sorting can be performed in polynomial time, it remains an asymptotic bottleneck in many of these MOEAs. Here we show that Deductive Sort, an algorithm from the paper "Deductive Sort and Climbing Sort: New Methods for Non-Dominated Sorting" by McClymont et al., has a best-case time complexity of [EQUATION].

TUTORIAL SESSION: Tutorials: Introductory tutorials

Benchmarking: state-of-the-art and beyond

Benchmarking is a compulsory task for assessing the performance of a (new) optimization algorithm. While this appears to be a mainly technical task, there are surprisingly many methodological problems that arise when benchmarking algorithms. Over the past decade, there has been a great effort towards improving the benchmarking methodology for (gradient-free) optimization. It started with continuous optimization problems and was then extended to multi-objective and mixed-integer problems.

In this tutorial, we will present and discuss these key methodological ideas, emphasizing the importance of quantitative measurement, the use of problem instances, and choosing the testbed well so as not to bias results towards overly easy problems. We will particularly review the advantages of presenting data using the empirical cumulative distribution of runtimes, a tool that everyone assessing the performance of an algorithm should know.

We will then review how this methodology is implemented within the COCO software and show how COCO can and should be used to benchmark an algorithm and write a scientific paper.

This tutorial is intended for young researchers starting in the field who need benchmarking for their research, as well as for researchers who wish to get up to date with the latest methodological developments in benchmarking.

Recent advances in particle swarm optimization analysis and understanding 2021

A gentle introduction to theory (for non-theoreticians)

Runtime analysis of evolutionary algorithms: basic introduction

Evolution of neural networks

Genetic programming: a tutorial introduction

Replicability and reproducibility in evolutionary optimization

Representations for evolutionary algorithms

Introductory mathematical programming for EC

Learning classifier systems: from principles to modern systems

Hyper-heuristics tutorial

Model-based evolutionary algorithms

Theoretical foundations of evolutionary computation for beginners and veterans

TUTORIAL SESSION: Tutorials: Advanced tutorials

CMA-ES and advanced adaptation mechanisms

Benchmarking multiobjective optimizers 2.0

Advanced Learning Classifier Systems

Constraint-handling techniques used with evolutionary algorithms

Quality-diversity optimisation

Evolutionary multi- and many-objective optimization: methodologies, applications and demonstration

Statistical analyses for meta-heuristic stochastic optimization algorithms

Genetic improvement: taking real-world source code and improving it using genetic programming

Dynamic multi-objective optimization: introduction, challenges, applications and future directions

Lexicase Selection

Runtime analysis of population-based evolutionary algorithms

Decomposition multi-objective optimisation: current developments and future opportunities

Recent advances in landscape analysis for optimisation and learning

Evolutionary submodular optimisation

Sequential experimentation by evolutionary algorithms

Automated algorithm configuration and design

Coevolutionary computation for adversarial deep learning

TUTORIAL SESSION: Tutorials: Specialized tutorials

Evolutionary art and design: representation, fitness and interaction

Search based software engineering: challenges, opportunities and recent applications

Applications of dynamic parameter control in evolutionary computation

Evolutionary computation and machine learning in cryptology

Push

Towards a green AI: evolutionary solutions for an ecologically viable artificial intelligence

Evolutionary computation for feature selection and feature construction

Evolutionary computation and evolutionary deep learning for image analysis, signal processing and pattern recognition

WORKSHOP SESSION: Workshop: Analysing algorithmic behaviour of optimisation heuristics

Quantifying the impact of boundary constraint handling methods on differential evolution

Constraint handling is one of the most influential aspects of applying metaheuristics to real-world applications and can hamper the search progress if treated improperly. In this work, we focus on a particular case - box constraints - for which many boundary constraint handling methods (BCHMs) have been proposed. We argue for the necessity of studying the impact of BCHMs on metaheuristics' performance and behavior, which has received seemingly little attention in the field. We quantify such impacts through systematic benchmarking, investigating 28 major variants of Differential Evolution (DE) taken from the modular DE framework (by combining different mutation and crossover operators) and 13 commonly applied BCHMs, resulting in 28 × 13 = 364 algorithm instances after pairing DE variants with BCHMs. After executing the algorithm instances on the well-known BBOB/COCO problem set, we analyze the best-reached objective function value (a performance measure) and the percentage of repaired solutions (a behavioral measure) using statistical ranking methods for each combination of mutation, crossover, and BBOB function group. Our results clearly show that the choice of BCHM substantially affects both the empirical performance and the number of generated infeasible solutions, which allows us to provide general guidelines for selecting an appropriate BCHM for a given scenario.
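
Three of the simplest BCHMs, sketched for a single coordinate (illustrative only; the study benchmarks 13 methods inside the modular DE framework):

    import random

    def clip(x, lo, hi):
        # Saturate at the violated bound.
        return min(max(x, lo), hi)

    def reflect(x, lo, hi):
        # Mirror the overshoot back into the box.
        while x < lo or x > hi:
            x = 2 * lo - x if x < lo else 2 * hi - x
        return x

    def reinitialize(x, lo, hi):
        # Resample uniformly if the value left the box.
        return x if lo <= x <= hi else random.uniform(lo, hi)

    for bchm in (clip, reflect, reinitialize):
        print(bchm.__name__, bchm(1.7, -1.0, 1.0))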

On the genotype compression and expansion for evolutionary algorithms in the continuous domain

This paper investigates the influence of genotype size on the performance of evolutionary algorithms. We consider genotype compression (where the genotype is smaller than the phenotype) and expansion (where the genotype is larger than the phenotype) and define different strategies to reconstruct the original variables of the phenotype from both compressed and expanded genotypes. We test our approach with several evolutionary algorithms over three sets of optimization problems: COCO benchmark functions, modeling of Physical Unclonable Functions, and neural network weight optimization. Our results show that genotype expansion works significantly better than compression and, in many scenarios, outperforms the original genotype encoding. This could be attributed to the change in the genotype-phenotype mapping introduced by the expansion methods: the modification beneficially transforms the domain landscape and eases traversal of the search space.
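
A minimal sketch of one possible expansion decoding (group averaging is shown as one plausible reconstruction strategy; the paper defines and compares several):

    import numpy as np

    def decode_expanded(genotype, phenotype_length):
        # Phenotype variable i is the mean of its group of genotype genes.
        groups = np.array_split(np.asarray(genotype), phenotype_length)
        return np.array([group.mean() for group in groups])

    genotype = np.random.uniform(-1.0, 1.0, size=40)  # 4x expanded genotype
    print(decode_expanded(genotype, 10))              # 10-variable phenotype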

Design of large-scale metaheuristic component studies

Metaheuristics employ a variety of different components using a wide array of operators to execute their search. This determines their intensification, diversification and all other behavioural features, and is thus critical for success on different optimisation problems. Choosing the right components with the right operators remains a difficult task. In this paper we propose a design of experiments for extensive component studies. We demonstrate the applicability of this design by exploring the differences in operator-specific performance in two closely related metaheuristic frameworks---the well-known (μ + λ)-Evolution Strategy and the strongly metaphor-focussed Invasive Weed Optimisation---where operators show varying degrees of similarity in different components. This experiment shows that similarity of operators does not comprehensively account for similarity in performance: even presumably small changes to an operator can influence the algorithmic behaviour more than using a completely different operator in another component, and these influences remain strong even when the operators are employed in different combinations. This emphasises the need for a more detailed analysis of the specific effects of components and their respective operators on the search process.

Benchmark generator for TD Mk landscapes

We introduce a publicly available benchmark generator for Tree Decomposition (TD) Mk Landscapes. TD Mk Landscapes were introduced by Whitley et al. to remove unnecessary restrictions of Adjacent NK Landscapes while still allowing the global optimum to be computed in polynomial time. This makes TD Mk Landscapes more flexible while remaining as convenient as Adjacent NK Landscapes. Together, these properties make them very suitable for benchmarking black-box algorithms. Whitley et al., however, introduced a construction algorithm that only constructs Adjacent NK Landscapes. Recently, Thierens et al. introduced CliqueTreeMk, an algorithm to construct any TD Mk Landscape and find its optimum. In this work, we describe CliqueTreeMk in more detail, implement it for public use, and show results for LT-GOMEA on an example TD Mk Landscape problem. The results show that deceptive trap problems with higher overlap do not necessarily decrease the performance and effectiveness of LT-GOMEA.

Emergence of structural bias in differential evolution

Heuristic optimisation algorithms are in high demand due to the overwhelming number of complex optimisation problems that need to be solved. The complexity of these problems is well beyond the boundaries of applicability of exact optimisation algorithms, and they therefore require modern heuristics to find feasible solutions quickly. These heuristics and their effects are almost always evaluated and explained on particular problem instances. Previous work has shown that many such algorithms exhibit structural bias, either by being attracted to certain regions of the search space or by consistently avoiding them, on a special test function designed to ensure uniform 'exploration' of the domain. In this paper, we analyse the emergence of such structural bias for Differential Evolution (DE) configurations and, specifically, the effect of different mutation, crossover and correction strategies. We also analyse the emergence of structural bias during the run time of each algorithm. We conclude with recommendations on which configurations should be avoided in order to run DE unbiased.

Is there anisotropy in structural bias?

Structural Bias (SB) is an important type of algorithmic deficiency within iterative optimisation heuristics. However, methods for detecting structural bias have not yet fully matured, and recent studies have uncovered many interesting questions. One of these is the question of how structural bias can be related to anisotropy. Intuitively, an algorithm that is not isotropic would be considered structurally biased. However, there have been cases where algorithms appear to only show SB in some dimensions. As such, we investigate whether these algorithms actually exhibit anisotropy, and how this impacts the detection of SB. We find that anisotropy is very rare, and even in cases where it is present, there are clear tests for SB which do not rely on any assumptions of isotropy, so we can safely expand the suite of SB tests to encompass these kinds of deficiencies not found by the original tests.

We propose several additional testing procedures for SB detection and aim to motivate further research into the creation of a robust portfolio of tests. This is crucial since no single test will be able to work effectively with all types of SB we identify.

WORKSHOP SESSION: Workshop: Black box optimization benchmarking

DMS and MultiGLODS: black-box optimization benchmarking of two direct search methods on the bbob-biobj test suite

Direct Multisearch (DMS) and MultiGLODS are two derivative-free solvers for approximating the entire set of Pareto-optimal solutions of a multiobjective (blackbox) problem. They both follow the search/poll step approach of direct search methods, employ Pareto dominance to avoid aggregating objectives, and have theoretical limit guarantees. Although the original publications already compare the two algorithms empirically with a variety of multiobjective solvers, an analysis on their scaling behavior with dimension was missing. Here, we run the publicly available implementations on the bbob-biobj test suite of the COCO platform and by investigating their performances in more detail, observe (i) a small defect in the default initialization of DMS, (ii) for both algorithms a decrease in relative performance to other algorithms of the original studies (even matching the performance of random search for MultiGLODS in higher dimension), and (iii) consequently, an under-performance to previously untested stochastic solvers from the evolutionary computation field, especially when the dimension is higher.

Benchmarking SHADE algorithm enhanced with model based optimization on the BBOB noiseless testbed

In this paper we evaluate the SHADE-LM algorithm on the BBOB noiseless testbed. The algorithm hybridizes SHADE with model-based optimization. The hybridization is transparent to both optimizers: SHADE has access to the samples proposed by the model-based optimizer, and quadratic models are fitted to the current population. The paper compares this extended version with the performance of the version of SHADE by Tanabe and Fukunaga.

WORKSHOP SESSION: Workshop: Decomposition techniques in evolutionary optimization

An abstract interface for large-scale continuous optimization decomposition methods

Decomposition methods are valuable approaches for supporting the development of divide-and-conquer metaheuristics. When the problem structure is unknown, as in black-box problems, this structure can be inferred through several decomposition mechanisms. In the context of continuous optimization, the most efficient metaheuristics for dealing with a large number of decision variables involve decomposition methods. However, choosing a suitable decomposition method is not a trivial task, since each strategy requires an appropriate set of parameters. In this context, this paper proposes a C++ library called Continuous Optimization Problem Decomposition (COPD) that provides the most recent decomposition methods, interfaces for new methods, and integration with solvers. Furthermore, the proposed library can aid related studies, since decomposition for continuous optimization problems can easily be applied with different methods. The experimental results demonstrate high grouping accuracy for most methods on large-scale problems.

The bee-benders hybrid algorithm with application to transmission expansion planning

This paper introduces a novel hybrid optimisation algorithm that combines elements of both metaheuristic search and integer programming. This new matheuristic combines elements of Benders decomposition and the Bees Algorithm, to create the Bee-Benders Hybrid Algorithm (BBHA) which retains many of the advantages of both methods. Specifically, it is designed to be easily parallelisable, to produce good solutions quickly while still retaining a guarantee of optimality when run for a sufficiently long time. The algorithm is tested using a transmission network expansion and energy storage planning model, a challenging and very large scale mixed integer linear programming problem. The BBHA is shown to be a highly effective hybrid matheuristic algorithm for this challenging combinatorial optimisation problem that performs at least as well as either Benders decomposition or the Bees Algorithm on their own, and significantly improves upon the individual approaches in many instances. While the paper demonstrates the effectiveness on an electricity network planning problem, the algorithm could be readily applied to any mixed integer linear program, and is expected to work particularly well whenever this has a structure that is amenable to Benders decomposition.

Population-based coordinate descent algorithm with majority voting

Many real-world optimization problems are expensive: the costly computation of fitness values or objective-function gradients may cause various optimization algorithms to fail to solve them quickly. Owing to their low computation and memory requirements, Coordinate Descent (CD) search methods are suitable for such problems. Despite the efficiency of CD methods, however, searching a large-scale space with just one candidate solution limits an algorithm's exploration capability. In this paper, a novel population-based version of the CD algorithm, called Population-Based Coordinate Descent (PBCD), is proposed; it tackles such problems using the collective intelligence and collaboration of a population. It proceeds in three phases: locating the region of interest, folding the search space, and communication among population members with majority voting to find more promising regions of the search space. As it shrinks the search space swiftly, it needs only a low computational budget to find the optimal value per coordinate and, ultimately, overall. To investigate its performance, we benchmarked it on the CEC-2017 test suite, consisting of 29 problems with dimensions of 30, 50, and 100.

WORKSHOP SESSION: Workshop: Evolutionary algorithms and HPC

An efficient fault-tolerant communication algorithm for population-based metaheuristics

Parallel and distributed computing systems have seen rapid growth in the number of processing cores as progress on single-core performance has stagnated. The larger the system, the greater the challenge for application scalability and system stability. Aiming to address both challenges in the context of distributed metaheuristic optimization algorithms, we propose a scalable and fault-tolerant peer-to-peer communication algorithm tailored to population-based metaheuristics. In the algorithm, message exchanges are carried out asynchronously by multiple background threads, and the algorithm's minimal overhead can be entirely hidden by overlapping communication with computation. Results from controlled benchmarks corroborate the efficiency of the algorithm and also hint that thread oversubscription can further improve scalability, thanks to the high degree of idleness of communication operations. The proposed algorithm contributes to important yet insufficiently explored performance aspects of distributed metaheuristics.

Improving the scalability of distributed neuroevolution using modular congruence class generated innovation numbers

The asynchronous master-worker model is a classic method for distributing evolutionary algorithms, as it decouples population size from the number of available processors while being naturally load balanced. While easy to implement, it suffers from an unavoidable choke point: the master process, which must process all results and generate tasks for workers. This work investigates a method for improving the performance of distributed neuroevolution algorithms, which commonly use such a model, by offloading costly crossover and mutation operations to the worker processes. To accomplish this, a novel modular congruence class based strategy for generating unique innovation numbers was developed, which requires no additional communication overhead. Experimental results designed to stress test the master process were generated using the Evolutionary eXploration of Augmenting Memory Models (EXAMM) neuroevolution algorithm, after preliminary results revealed a bottleneck that prevented scalability past 432 cores in a high-performance computing environment. The results show a statistically significant improvement in throughput (genome evaluations per second) and scalability past 864 cores using this offloading method. Further, the methodology is generic and could be applied to any neuroevolution algorithm that utilizes NEAT-inspired innovation numbers.
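
The congruence-class idea reduces to a few lines: worker r of W workers only ever emits innovation numbers congruent to r modulo W, so global uniqueness holds with no communication at all. A hypothetical Python illustration (not EXAMM's implementation):

    class InnovationCounter:
        # Worker `rank` of `n_workers` only emits numbers congruent to
        # rank (mod n_workers), so two workers can never collide.
        def __init__(self, rank, n_workers):
            self.next_id = rank        # first number in this congruence class
            self.step = n_workers

        def new_innovation(self):
            n = self.next_id
            self.next_id += self.step  # stay inside the class rank + k*n_workers
            return n

    # Two workers in a 4-worker setup generate disjoint number streams.
    w0, w3 = InnovationCounter(0, 4), InnovationCounter(3, 4)
    print([w0.new_innovation() for _ in range(3)])  # [0, 4, 8]
    print([w3.new_innovation() for _ in range(3)])  # [3, 7, 11]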

Generating combinations on the GPU and its application to the k-subset sum

Efficiently representing and generating combinations allows the seamless visualization, sampling, and evaluation of combinatorial architectures. In this paper, motivated by the ubiquity of resource allocation problems, we address the subset sum problem by (1) using gradient-free optimization over a number-based representation of the combinatorial search space and (2) generating combinations in minimal-change order through parallel reductions on the GPU.

Our computational experiments, covering a relevant set of problem instances and gradient-free optimization algorithms, show that (1) combinations can be generated efficiently on the GPU, with quasi-linear complexity, (2) instances of the subset sum problem can be tackled within a reasonable number of function evaluations, and (3) Particle Swarm Optimization with Fitness Euclidean Ratio converges faster than the other algorithms.

Since the search space of number-based representations is one-dimensional and amenable to parallelization schemes (e.g., on GPUs), we believe our work opens the door to tackling further combinatorial problems.
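
The number-based representation is, in essence, the combinatorial number system: every integer in [0, C(n,k)) decodes to a distinct k-combination, which is what makes the search space one-dimensional. A CPU-side Python sketch of the decoding (the paper performs the equivalent generation on the GPU, in minimal-change order):

    from math import comb

    def unrank_combination(rank, n, k):
        # Map an integer rank in [0, C(n, k)) to a unique k-subset of
        # {0, ..., n-1} via the combinatorial number system.
        subset, x = [], rank
        for i in range(k, 0, -1):
            # Largest c with C(c, i) <= x gives the next (descending) element.
            c = i - 1
            while comb(c + 1, i) <= x:
                c += 1
            subset.append(c)
            x -= comb(c, i)
        return subset[::-1]

    # Every integer in [0, C(5, 3)) decodes to a distinct 3-subset of {0..4}.
    print([unrank_combination(r, 5, 3) for r in range(comb(5, 3))])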

X-Aevol: GPU implementation of an evolutionary experimentation simulator

X-Aevol is the GPU port of the Aevol model, a bio-inspired genetic algorithm designed to study the evolution of micro-organisms and its effects on their genome structure. This model is used for in-silico experimental evolution, which requires computing populations of thousands of individuals over tens of millions of generations. As the model is extended with new features and experiments are conducted with larger populations, computational time becomes prohibitive.

X-Aevol is a response to this need for more computational power. It was designed to leverage the massive parallelization capabilities of GPUs. As Aevol exposes an irregular and dynamic computational pattern, adapting it to massively parallel architectures was not a straightforward process. In this paper, we present how we adapted the underlying Aevol algorithms to GPU architectures. We implement the new algorithms in CUDA and test them on a representative benchmark of Aevol workloads. Finally, we present a performance evaluation on NVIDIA Tesla V100 and A100 GPUs: we reach a speed-up of 1,000 over sequential execution on a CPU, with a further gain of up to 50% from the newer Ampere micro-architecture compared with Volta.

WORKSHOP SESSION: Workshop: Evolutionary algorithms for problems with uncertainty

Maximising hypervolume and minimising ϵ-indicators using Bayesian optimisation over sets

Bayesian optimisation methods have been widely used to solve problems with computationally expensive objective functions. In the multi-objective case, these methods have been successfully applied to maximise the expected hypervolume improvement of individual solutions. However, the hypervolume and other unary quality indicators, such as the multiplicative ϵ-indicator, measure the quality of an approximation set, and the overall goal is to find the set with the best indicator value. Unfortunately, the literature on Bayesian optimisation over sets is scarce. This work uses a recent set-based kernel in Gaussian processes and applies it to maximise hypervolume and minimise ϵ-indicators in Bayesian optimisation over sets. The results on benchmark problems show that maximising hypervolume using Bayesian optimisation over sets gives performance similar to that of non-set-based methods. The performance of the ϵ-indicator in Bayesian optimisation over sets needs to be investigated further. The set-based method is computationally more expensive than the non-set-based ones, but the overall time may still be negligible in practice compared to the expensive objective functions.
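
For intuition, one simple way to obtain a positive semi-definite kernel between approximation sets is to average pairwise kernel values between their members (the inner product of the sets' mean embeddings). The Python sketch below shows this generic construction; the specific set-based kernel used in the paper may differ in detail:

    import numpy as np

    def rbf(x, y, gamma=1.0):
        return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

    def set_kernel(A, B, gamma=1.0):
        # Mean of pairwise RBF values between the points of two
        # approximation sets: a simple, valid kernel over sets.
        return np.mean([[rbf(a, b, gamma) for b in B] for a in A])

    # Two small approximation sets in a two-dimensional objective space.
    A = [(0.1, 0.9), (0.5, 0.5)]
    B = [(0.2, 0.8), (0.9, 0.1)]
    print(set_kernel(A, B))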

RARE: evolutionary feature engineering for rare-variant bin discovery

Features with rare states, such as rare genetic variants, pose a significant challenge for both statistical and machine learning analyses due to limited detection power and uncertainty surrounding the nature of their role (e.g., additive, heterogeneous, or epistatic) in predicting outcomes such as disease phenotype. Rare variant 'bins' (RVBs) hold the potential to increase association detection power. However, previously proposed binning approaches relied on prior-knowledge assumptions rather than data-driven techniques and ignored the potential for multivariate interactions. We present the Relevant Association Rare-variant-bin Evolver (RARE), the first evolutionary algorithm for automatically constructing and evaluating RVBs with either univariate or epistatic associations. We evaluate RARE's ability to correctly bin simulated rare-variant associations over a variety of algorithmic and dataset scenarios. Specifically, we examine (1) the ability to detect RVBs of univariate effect (with or without noise), (2) the use of fixed vs. adaptable bin sizes, (3) the use of expert knowledge to initialize bins, and (4) the ability to detect RVBs interacting with a separate common variant. We present preliminary results demonstrating the feasibility, efficacy, and limitations of this proposed rare-variant feature engineering algorithm.

A new acquisition function for robust Bayesian optimization of unconstrained problems

A new acquisition function is proposed for solving robust optimization problems via Bayesian Optimization. The proposed acquisition function reflects the need for the robust rather than the nominal optimum and is based on the intuition of utilizing the higher moments of the improvement. The efficacy of Bayesian Optimization based on this acquisition function is demonstrated on four test problems, each affected by three different levels of noise. Our findings suggest the promise of the proposed acquisition function, as it yields a better robust optimal value of the function in 6 of 12 test scenarios when compared with the baseline.

WORKSHOP SESSION: Workshop: Evolutionary computation and decision making

A divide and conquer approach for web services location allocation problem

The appropriate choice of locations for the deployment of web services is of significant importance. Placing a web service closer to user centers minimizes response time but may increase deployment cost, and placement becomes even more challenging when multiple web services are involved. In this paper, we address the problem of placing multiple web services with the aim of simultaneously minimizing the conflicting objectives of total deployment cost and network latency. We solve the location allocation problem for each web service independently and combine the resulting solutions using a novel merge algorithm. We demonstrate through extensive experiments and simulations that the proposed approach is not only computationally efficient but also produces good-quality solutions. Further, the proposed merge algorithm is generic and can easily be adapted to tackle any bi-objective optimization problem that decomposes into non-overlapping sub-problems.
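
When sub-problems are independent and the objectives add up, merging can proceed by combining sub-solutions pairwise, summing their objectives, and keeping only the non-dominated combinations. The Python sketch below illustrates this principle under those assumptions; the paper's merge algorithm is more refined:

    def merge_fronts(front_a, front_b):
        # Combine every pair of sub-solutions, sum the two objectives
        # (e.g., deployment cost and latency), keep the non-dominated sums.
        combined = [(ca + cb, la + lb) for (ca, la) in front_a
                                       for (cb, lb) in front_b]
        dominates = lambda q, p: q[0] <= p[0] and q[1] <= p[1] and q != p
        return sorted(p for p in combined
                      if not any(dominates(q, p) for q in combined))

    front_a = [(1, 9), (3, 4), (6, 1)]   # (cost, latency) front of service A
    front_b = [(2, 5), (4, 2)]           # (cost, latency) front of service B
    print(merge_fronts(front_a, front_b))  # [(3, 14), (5, 9), (7, 6), (10, 3)]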

Model learning with personalized interpretability estimation (ML-PIE)

High-stakes applications require AI-generated models to be interpretable. Current algorithms for the synthesis of potentially interpretable models rely on objectives or regularization terms that represent interpretability only coarsely (e.g., model size) and are not designed for a specific user. Yet, interpretability is intrinsically subjective. In this paper, we propose an approach for the synthesis of models that are tailored to the user by enabling the user to steer the model synthesis process according to her or his preferences. We use a bi-objective evolutionary algorithm to synthesize models with trade-offs between accuracy and a user-specific notion of interpretability. The latter is estimated by a neural network that is trained concurrently with the evolution using feedback from the user, collected via uncertainty-based active learning. To maximize usability, the user is only asked to tell, given two models at a time, which one is less complex. With experiments on two real-world datasets involving 61 participants, we find that our approach is capable of learning estimations of interpretability that can be very different for different users. Moreover, the users tend to prefer models found using the proposed approach over models found using non-personalized interpretability indices.

WORKSHOP SESSION: Workshop: Evolutionary computation for the automated design of algorithms

Towards large scale automated algorithm design by integrating modular benchmarking frameworks

We present a first proof-of-concept use-case that demonstrates the efficiency of interfacing the algorithm framework ParadisEO with the automated algorithm configuration tool irace and the experimental platform IOHprofiler. By combining these three tools, we obtain a powerful benchmarking environment that allows us to systematically analyze large classes of algorithms on complex benchmark problems. Key advantages of our pipeline are fast evaluation times, the possibility to generate rich data sets to support the analysis of the algorithms, and a standardized interface that can be used to benchmark very broad classes of sampling-based optimization heuristics.

In addition to enabling systematic algorithm configuration studies, our approach paves the way for assessing the contribution of new ideas in interplay with already existing operators: a promising avenue for our research domain, which at present may focus too strongly on comparing entire algorithm instances.

Tuning as a means of assessing the benefits of new ideas in interplay with existing algorithmic modules

Introducing new algorithmic ideas is a key part of the continuous improvement of existing optimization algorithms. However, when introducing a new component into an existing algorithm, assessing its potential benefits is a challenging task. Often, the component is added to a default implementation of the underlying algorithm and compared against a limited set of other variants. Such an assessment ignores any potential interplay with other algorithmic ideas that share the same base algorithm, which is critical to understanding the exact contributions being made. We explore a more extensive procedure that uses hyperparameter tuning as a means of assessing the benefits of new algorithmic components. This allows for a more robust analysis, not only by focusing on the impact on performance but also by investigating how that performance is achieved. We implement our suggestion in the context of the Modular CMA-ES framework, which was redesigned and extended to include some new modules and several new options for existing modules, mostly focused on the step-size adaptation method. Our analysis highlights the differences between these new modules and identifies the situations in which they have the largest contribution.

Automated design of accurate and robust image classifiers with brain programming

Fostering the mechanized design of artificial vision requires a delicate balance between high-level analytical methods and the metaheuristic discovery of near-optimal functions for complex visual problems. Evolutionary computation and swarm intelligence have produced strategies that automatically design meaningful deep convolutional neural network architectures to create better image classifiers. However, these architectures have not surpassed hand-crafted models and have mostly been applied to dated problems with datasets of icon images. Moreover, recent concerns have arisen about the vulnerability of deep convolutional neural networks to adversarial attacks, in which modifications to the input image can manipulate the output and render the networks untrustworthy. Brain programming is a hyper-heuristic that works at a higher level of abstraction to automatically develop artificial visual cortex algorithms for a problem domain such as image classification. This work's primary goal is to employ brain programming to design artificial visual cortices that produce accurate and robust image classifiers on two problems. We analyze the final models designed by brain programming under two adversarial attacks intended to fool them. In both experiments, brain programming constructed artificial brain models capable of competing with hand-crafted deep convolutional neural networks, with no change in their predictions when an adversarial attack was present.

A selection hyperheuristic guided by Thompson sampling for numerical optimization

Selection hyper-heuristics have been increasingly and successfully applied to numerical and discrete optimization problems. This paper proposes HHTS, a hyper-heuristic (HH) based on the Thompson Sampling (TS) mechanism that selects combinations of low-level heuristics to provide solutions for various continuous single-objective optimization benchmarks. Thompson Sampling is modeled here as a Beta-Bernoulli sampler, where an increase or decrease of diversity among population individuals measures success or failure during the search. In the experiments, HHTS (a generic evolutionary algorithm generated by TS) is compared with five well-known evolutionary algorithms. Results indicate that, despite requiring less computational effort, HHTS performs similarly to or better than the other algorithms on most instances, and in 50% of the cases it achieves the global optimum.
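
The Beta-Bernoulli mechanism admits a compact sketch: each low-level heuristic keeps a Beta posterior over its success probability, the heuristic with the highest posterior sample is applied next, and the posterior is updated according to whether population diversity increased. Class and method names below are illustrative, not the paper's:

    import random

    class ThompsonSelector:
        # Beta-Bernoulli Thompson sampling over low-level heuristics.
        def __init__(self, n_heuristics):
            self.wins = [0] * n_heuristics
            self.losses = [0] * n_heuristics

        def select(self):
            # Sample each Beta(wins+1, losses+1) posterior; play the argmax.
            samples = [random.betavariate(w + 1, l + 1)
                       for w, l in zip(self.wins, self.losses)]
            return max(range(len(samples)), key=samples.__getitem__)

        def update(self, h, diversity_increased):
            # Success/failure = did population diversity increase after h?
            if diversity_increased:
                self.wins[h] += 1
            else:
                self.losses[h] += 1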

Which hyperparameters to optimise?: an investigation of evolutionary hyperparameter optimisation in graph neural network for molecular property prediction

Most GNNs for molecular property prediction are based on the idea of learning representations for the nodes by aggregating the information of their neighbour nodes in graph layers. These representations can then be passed to subsequent task-specific layers to deal with individual downstream tasks. For real-world molecular problems, hyperparameter optimisation (HPO) for those layers is vital. In this research, we focus on the impact of selecting two types of GNN hyperparameters, those belonging to graph layers and those of task-specific layers, on the performance of GNNs for molecular property prediction. In our experiments, we employed a state-of-the-art evolutionary algorithm (CMA-ES) for HPO. The results reveal that optimising either type of hyperparameter separately can improve GNN performance, but that optimising both types simultaneously leads to the predominant improvements.

WORKSHOP SESSION: Workshop: Evolutionary computation for permutation problems

Towards the landscape rotation as a perturbation strategy on the quadratic assignment problem

Recent work in combinatorial optimisation has demonstrated that neighbouring solutions of a local optimum may belong to more favourable attraction basins. In this sense, the perturbation strategy plays a critical role in local search based algorithms, kicking the search into more prominent areas of the space. In this paper, we investigate landscape rotation as a perturbation strategy to redirect the search of a stuck algorithm. This technique rearranges the mapping of solutions to objective values without altering important properties of the problem's landscape, such as the number and quality of optima. In particular, we investigate two rotation-based perturbation strategies: (i) a profoundness rotation method and (ii) a broadness rotation method. These methods are applied within the stochastic hill-climbing heuristic and tested and compared against other algorithm variants on different instances of the quadratic assignment problem. The experiments reveal that landscape rotation is an efficient perturbation strategy for shifting the search in a controlled way. Nevertheless, an empirical investigation of the landscape rotation demonstrates that it must be manipulated cautiously in the permutation space, since a small rotation does not necessarily mean a small disturbance in the fitness landscape.

On the symmetry of the quadratic assignment problem through elementary landscape decomposition

When designing meta-heuristic strategies to optimize the quadratic assignment problem (QAP), it is important to take into account the specific characteristics of the instance to be solved. One of the characteristics that has been pointed out as having the potential to affect the performance of optimization algorithms is the symmetry of the distance and flow matrices that form the QAP.

In this paper, we further investigate the impact of the symmetry of the QAP on the performance of meta-heuristic algorithms, focusing on local search based methods. The analysis is carried out using the elementary landscape decomposition (ELD) of the problem under the swap neighborhood. First, we study the number of local optima and the relative contribution of the elementary components on a benchmark composed of different types of instances. Second, we propose a specific local search algorithm based on the ELD in order to experimentally validate the effects of the symmetry. The analysis carried out shows that the symmetry of the QAP is a relevant feature that influences both the characteristics of the elementary components and the performance of local search based algorithms.

Generating instances with performance differences for more than just two algorithms

In recent years, Evolutionary Algorithms (EAs) have frequently been adopted to evolve instances of optimization problems that pose difficulties for one algorithm while being rather easy for a competitor, and vice versa. Typically, this is achieved by either minimizing or maximizing the performance difference or ratio, which serves as the fitness function. Repeating this process is useful for gaining insight into the strengths and weaknesses of certain algorithms, and for building a set of instances with strong performance differences as a foundation for automatic per-instance algorithm selection or configuration.

We contribute to this branch of research by proposing fitness functions to evolve instances that show large performance differences for more than just two algorithms simultaneously. As a proof of principle, we evolve instances of the multi-component Traveling Thief Problem (TTP) for three incomplete TTP solvers. Our results indicate that our strategies are promising, though, unsurprisingly, their success strongly relies on the algorithms' performance complementarity.
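
One plausible shape for such a fitness function, sketched below in Python as an assumption rather than the paper's exact formulation, is the smallest gap between any two of the sorted performances; maximizing it pushes all algorithms apart simultaneously instead of just one pair:

    def min_pairwise_gap(perfs):
        # Fitness of an instance given the performances of k algorithms on
        # it: the smallest gap between any two sorted performance values.
        s = sorted(perfs)
        return min(b - a for a, b in zip(s, s[1:]))

    # Scores of three hypothetical TTP solvers on one instance:
    # the fitness is bounded by the closest pair (1390 - 1350 = 40).
    print(min_pairwise_gap([1200.0, 1350.0, 1390.0]))  # 40.0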

Exploratory analysis of the Monte Carlo tree search for solving the linear ordering problem

Monte-Carlo Tree Search (MCTS) has delivered great results in two-player game playing, and its success has made it a popular subject of study in various use cases. Recently, many works have applied MCTS, and especially its neural variant, as an end-to-end approach for solving Combinatorial Optimization Problems. However, its efficiency in solving regular combinatorial problems has yet to be studied.

In this paper, we investigate the capability of the Monte-Carlo Tree Search algorithm to optimize permutation-based problems, making use of problem-specific knowledge. In particular, we focus on the well-known Linear Ordering Problem (LOP), taking advantage of the easy computation of the expected and upper-bound fitness of partial permutations. Moreover, we introduce a Multi-Objective Optimization approach to deal with the exploration-exploitation dilemma during the tree search. The conducted experiments show that MCTS obtains better results than classical constructive algorithms, though its performance does not yet match the state of the art. Based on its ability to guide structured searches, its scalability, convergence, and search space coverage, MCTS could open new research trends in the optimization area.

Hybrid linkage learning for permutation optimization with Gene-pool optimal mixing evolutionary algorithms

Linkage learning techniques are employed to discover dependencies between problem variables. This knowledge can then be leveraged in an Evolutionary Algorithm (EA) to improve the optimization process. Of particular interest is the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) family, which has been shown to exploit linkage effectively. Recently, Empirical Linkage Learning (ELL) techniques were proposed for binary-encoded problems. While these techniques are computationally expensive, they have the benefit of never reporting spurious dependencies (false linkages), i.e., marking two independent variables as dependent. However, previous research shows that despite this property, for some problems it is more suitable to employ the more commonly used Statistical-based Linkage Learning (SLL) techniques. Therefore, we propose to use both ELL and SLL in the form of Hybrid Linkage Learning (HLL). We also propose, for the first time, a variant of ELL for permutation problems. Using a wide range of problems and different GOMEA variants, we find that for permutation problems, too, ELL is more advantageous in some cases while SLL is more advantageous in others. However, we also find that employing the proposed HLL leads to results that are better than or equal to those obtained with SLL for all the considered problems.

An empirical evaluation of permutation-based policies for stochastic RCPSP

While optimization methods used for deterministic scheduling, such as Linear Programming, Constraint Programming, or Evolutionary Algorithms, can be very successful at optimizing scheduling problems, the resulting schedules may not always be feasible and applicable at execution time in domains with uncertainty. In this paper, we focus on the Stochastic Resource-Constrained Project Scheduling Problem (SRCPSP) and propose several adaptive scheduling policies that use task priorities as input to a schedule generation scheme (SGS) at execution time. In particular, we focus on the use of Genetic Programming (GP) to evolve robust heuristics that can assign priority levels to scheduling tasks online. The benefit of this approach is twofold. First, it enables the adaptation of the SGS permutation input during execution. Second, because the evolved heuristic uses task features rather than domain features, it has the advantage of being applicable to completely unseen domains, potentially of higher dimension and complexity. Experiments on domains with stochastic durations show that GP-evolved heuristics yield better makespans than fixed permutations derived from optimal schedules. They also demonstrate that updating the permutation is key to getting the full benefit of SGS policies.
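
To fix ideas, a minimal serial SGS consumes a priority vector by repeatedly starting the highest-priority eligible task at the earliest time that respects precedence and resource constraints. The Python sketch below assumes a single renewable resource and a unit time grid; all names are illustrative:

    def serial_sgs(durations, preds, demand, capacity, priority):
        # Repeatedly pick the highest-priority task whose predecessors are
        # scheduled; start it as early as precedence and capacity allow.
        n = len(durations)
        start, usage = {}, {}        # task -> start time; time -> used capacity
        order = sorted(range(n), key=lambda j: -priority[j])
        done = set()
        while len(done) < n:
            j = next(t for t in order if t not in done and preds[t] <= done)
            t0 = max([start[p] + durations[p] for p in preds[j]], default=0)
            while any(usage.get(t, 0) + demand[j] > capacity
                      for t in range(t0, t0 + durations[j])):
                t0 += 1              # shift right until capacity fits
            for t in range(t0, t0 + durations[j]):
                usage[t] = usage.get(t, 0) + demand[j]
            start[j] = t0
            done.add(j)
        return start

    # Task 2 needs task 0; unit capacity serializes tasks 0 and 1.
    print(serial_sgs([2, 2, 1], [set(), set(), {0}], [1, 1, 1], 1, [3, 2, 1]))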

Solving job shop scheduling problems without using a bias for good solutions

The most basic concept of (meta-)heuristic optimization is to prefer better solutions over worse ones. Algorithms utilizing Frequency Fitness Assignment (FFA) break with this idea and instead move towards solutions whose objective value has been encountered less often so far. We investigate whether this approach can solve the classical Job Shop Scheduling Problem (JSSP) by plugging FFA into the (1+1)-EA, i.e., the most basic local search. As representation, we use permutations with repetitions. Within the budget chosen in our experiments, the resulting (1+1)-FEA obtains better solutions on average on the Fisher-Thompson, Lawrence, Applegate-Cook, Storer-Wu-Vaccari, and Yamada-Nakano benchmark sets, while performing worse on the larger Taillard and Demirkol-Mehta-Uzsoy benchmarks. We find that while the simple local search with FFA does not outperform the pure algorithm overall, it can deliver surprisingly good results, especially since it is not directly biased towards searching for them.
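
The FFA plug-in changes only the acceptance rule of the (1+1)-EA: a frequency table counts how often each objective value has been encountered, and the offspring is accepted when its value has been seen at most as often as the parent's. A runnable toy sketch follows (on bit strings for brevity; the paper uses permutations with repetitions for the JSSP):

    import random
    from collections import defaultdict

    def one_plus_one_fea(init, mutate, f, budget):
        # (1+1)-FEA: local search steered by Frequency Fitness Assignment.
        H = defaultdict(int)             # objective value -> encounter count
        x = init()
        fx = f(x)
        for _ in range(budget):
            y = mutate(x)
            fy = f(y)
            H[fx] += 1
            H[fy] += 1
            if H[fy] <= H[fx]:           # move towards the rarer value
                x, fx = y, fy
        return x, fx

    n = 20
    init = lambda: [random.randint(0, 1) for _ in range(n)]
    def flip_one(x):
        y = x[:]; y[random.randrange(n)] ^= 1; return y
    print(one_plus_one_fea(init, flip_one, sum, 2000))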

WORKSHOP SESSION: Workshop: Evolutionary data mining and optimization over graphs

Graph-aware evolutionary algorithms for influence maximization

In many contexts, social networks are nowadays the main medium of information transmission and the main way opinions and actions are influenced; generic advertisements, for instance, are far less powerful than suggestions from our contacts. This process, however, hugely depends on the influence of the people who disseminate these suggestions, so modern marketing often involves paying targeted users, or influencers, to advertise products or ideas. Finding the set of nodes in a social network that leads to the highest information spread (the so-called Influence Maximization (IM) problem) is therefore a pressing question, and as such it has recently attracted great research interest. In particular, several approaches based on Evolutionary Algorithms (EAs) have been proposed, although they are known to scale poorly with graph size. In this paper, we tackle this limitation in two ways. First, we use approximate fitness functions to speed up the EA. Second, we include various graph-aware mechanisms in the EA, such as smart initialization, custom mutations, and node filtering, to facilitate convergence. Our experiments show that the proposed modifications obtain a relevant runtime gain and also improve, in some cases, the spread results.
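
As a flavour of those graph-aware mechanisms, the sketch below shows a degree-biased smart initialization: seed sets of an initial individual are drawn from a pool of high-degree nodes instead of uniformly from the whole graph. The function name and pool heuristic are assumptions for illustration:

    import random

    def smart_init(adj, k, pool_factor=2):
        # Draw the k seeds from the highest-degree nodes rather than
        # uniformly from all nodes of the graph.
        by_degree = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
        pool = by_degree[:max(k * pool_factor, k)]
        return set(random.sample(pool, k))

    # Tiny undirected social graph as an adjacency dict.
    adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3, 4],
           3: [0, 2], 4: [2, 5], 5: [4]}
    print(smart_init(adj, 2))   # a 2-node seed set biased towards the hubs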

Focusing on the hybrid quantum computing - Tabu search algorithm: new results on the Asymmetric Salesman Problem

Quantum Computing is an emerging paradigm that is gathering considerable popularity in the scientific and technological community. Widely conceived as the next frontier of computation, Quantum Computing is still at the dawn of its development, and current solving systems suffer from significant limitations in terms of performance and capabilities. Researchers and practitioners have devised interesting approaches to overcome these barriers, with quantum-classical hybrid algorithms being among the most frequently used solving schemes. The main goal of this paper is to extend the results and findings of the recently proposed hybrid Quantum Computing - Tabu Search Algorithm for partitioning problems. To do so, we focus our research on adapting this method to the Asymmetric Traveling Salesman Problem. We employ six well-known instances from TSPLIB to assess the performance of the Quantum Computing - Tabu Search Algorithm in comparison with QBSolv. As an additional contribution, this work also constitutes the first solving of the Asymmetric Traveling Salesman Problem with a Quantum Computing based method. Aiming to boost the whole community's research in QC, we have released the project's repository as open source code for further application and improvement.

WORKSHOP SESSION: Workshop: Evolutionary reinforcement learning

Novelty and MCTS

Novelty search has become a popular technique in different fields such as evolutionary computing, classical AI planning, and deep reinforcement learning. Searching for novelty instead of, or in addition to, directly maximizing the search objective aims at avoiding dead ends and local minima, and at improving exploration overall. We propose and test the integration of novelty into Monte Carlo Tree Search (MCTS), a state-of-the-art framework for online RL planning, by linearly combining value estimates with novelty scores during the selection phase of MCTS. Three different novelty measures are adapted from the literature, integrated into MCTS, and tested on four different board games. The initial results are promising and point towards the potential of novelty as "online generalization for uncertainty" in more challenging search settings.
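
The linear combination during selection can be sketched as a UCT score plus a weighted novelty term; the weight `beta` and the `novelty` attribute stand in for one of the adapted measures and are assumptions for illustration:

    import math
    from types import SimpleNamespace as Node

    def select_child(node, c=1.4, beta=0.5):
        # UCT selection with a linearly combined novelty bonus.
        def score(ch):
            exploit = ch.value / max(ch.visits, 1)
            explore = c * math.sqrt(math.log(node.visits + 1) / max(ch.visits, 1))
            return exploit + explore + beta * ch.novelty
        return max(node.children, key=score)

    root = Node(visits=10, children=[
        Node(visits=5, value=3.0, novelty=0.1, children=[]),
        Node(visits=2, value=1.0, novelty=0.9, children=[]),
    ])
    print(select_child(root))   # the rarely visited, more novel child wins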

Using deep Q-network for selection hyper-heuristics

Hyper-heuristics form an active research field that aims to automatically select (or generate) the best low-level heuristic at each step of the search process. This work investigates a hyper-heuristic with a Deep Q-Network (DQN) selection strategy and compares it with two state-of-the-art approaches, namely the Dynamic MAB and the Fitness-Rate-Rank MAB. Experiments conducted on two domains from the HyFlex framework showed that the DQN approach outperformed the others on the Vehicle Routing Problem and was competitive on the Traveling Salesman Problem. This indicates that DQN is a robust selection strategy that is less sensitive to the domain than the MAB-based approaches.

Coordinate ascent MORE with adaptive entropy control for population-based regret minimization

Model-based Relative Entropy Policy Search (MORE) is a population-based stochastic search algorithm with desirable properties such as a well-defined policy search objective (it optimizes the expected return) and exact closed-form information-theoretic update rules. This is in contrast with existing population-based methods, often referred to as evolution strategies, such as CMA-ES. While these methods work very well in practice, their updates of the search distribution are often based on heuristics, and they do not optimize the expected return of the population but instead implicitly optimize the return of elite samples, which may yield a poor expected return and unreliable or risky solutions. We show that the MORE algorithm can be improved with distinct updates based on coordinate ascent on the mean and covariance of the search distribution, which considerably improves the convergence speed while maintaining the exact closed-form updates. In this way, we match the elite-sample performance of CMA-ES while also considerably improving the performance of the sample average. We evaluate the new algorithm on simulated robotic tasks and compare it with the state-of-the-art CMA-ES.

On the challenges of jointly optimising robot morphology and control using a hierarchical optimisation scheme

We investigate a hierarchical scheme for the joint optimisation of robot bodies and controllers in a complex morphological space. An evolutionary algorithm optimises body-plans, while a separate learning algorithm is applied to each body generated in order to learn a controller. We investigate the interaction of these processes using first a weak and then a strong learning method. Results show that the weak learner leads to more body-plan diversity, but that both learners cause premature convergence of body-plans to local optima. We conclude with suggestions as to how the framework might be adapted to address these issues in future work.

Using reinforcement learning for tuning genetic algorithms

Genetic algorithms (GAs) are a subclass of evolutionary algorithms often used to solve difficult combinatorial or non-linear problems. However, most GAs have to be configured for a particular problem type, and even then, performance depends on many hyperparameters and reproduction operators. In this paper, a reinforcement learning (RL) approach is designed to adaptively set the parameters of a GA used for solving a Capacitated Vehicle Routing Problem (CVRP). An RL agent interacts with the GA environment by taking actions that affect the parameters governing its evolution, starting from a given initial point. The results obtained by this RL-GA procedure are then compared with those obtained using static parameter values. On a set of benchmark problems, the solutions obtained by the RL-GA are better (up to 11% improvement) than those of the static-parameter approach. Examination of the results shows that the RL-GA maintains high diversity in the population pool, especially as the iterations accrue. Computational runs are traced to show how the RL agent learns from population diversity and solution improvements over time, leading to near-optimal solutions.

Evolutionary reinforcement learning for sparse rewards

Temporal logic (TL) is an expressive way of specifying complex goals in reinforcement learning (RL), which facilitates the design of reward functions. However, the combination of these two techniques is prone to generating sparse rewards, which can hinder the learning process. Evolutionary algorithms (EAs) hold promise for tackling this problem by encouraging the diversification of policies through exploration in the parameter space. In this paper, we present GEATL, the first hybrid on-policy evolutionary-based algorithm that combines the advantages of gradient learning in deep RL with the exploration ability of evolutionary algorithms, in order to solve the sparse reward problem pertaining to TL specifications. We test our approach in a delayed-reward scenario. Unlike previous baselines combining RL and TL, we show that GEATL is able to tackle complex TL specifications even in sparse-reward settings.

WORKSHOP SESSION: Workshop: Evolutionary computation software systems

AI programmer: autonomously creating software programs using genetic algorithms

In this paper, we present AI Programmer, a machine learning (ML) system that can automatically generate full software programs while requiring only minimal human guidance. At its core, AI Programmer uses a genetic algorithm (GA) coupled with a tightly constrained programming language that minimizes the overhead of its ML search space. Part of AI Programmer's novelty stems from (i) its unique system design, including an embedded, hand-crafted interpreter for efficiency and security, and (ii) its augmentation of the classic GA to include instruction-gene randomization bindings and programming-language-specific genome construction and elimination techniques. We provide a detailed examination of AI Programmer's system design, several examples detailing how the system works, and experimental data demonstrating its software generation capabilities and performance using only mainstream CPUs.

Paradiseo: from a modular framework for evolutionary computation to the automated design of metaheuristics: 22 years of Paradiseo

The success of metaheuristic optimization methods has led to the development of a large variety of algorithm paradigms. However, no algorithm clearly dominates all its competitors on all problems. Instead, the underlying variety of landscapes of optimization problems calls for a variety of algorithms to solve them efficiently. It is thus of prime importance to have access to mature and flexible software frameworks that allow for an efficient exploration of the algorithm design space. Such frameworks should be flexible enough to accommodate any kind of metaheuristic and open enough to connect with higher-level optimization, monitoring, and evaluation software. This article summarizes the features of the Paradiseo framework, a comprehensive free C++ framework targeting the development of modular metaheuristics. Paradiseo provides a highly modular architecture, a large set of components, speed of execution, and automated algorithm design features, which are key to modern approaches to metaheuristics development.

Component-based design of multi-objective evolutionary algorithms using the Tigon optimization library

Multi-objective optimization problems involve several conflicting objectives that have to be optimized simultaneously. Generating a complete Pareto-optimal front (POF) can be computationally expensive or even infeasible, so there has been enormous interest in multi-objective evolutionary algorithms (MOEAs), which are known to generate good approximations of the POF. MOEAs can be difficult to implement, and even for experienced optimization experts this can be a very time-consuming task. For this reason, several optimization libraries exist in the literature, providing off-the-shelf access to the most popular MOEAs. Some optimization libraries also provide a framework for designing MOEAs. However, existing frameworks can be too stringent and do not provide sufficient flexibility for the design of more sophisticated MOEAs. To address this, a recently proposed optimization library, known as Tigon, features a component-based framework for the design of MOEAs with a focus on flexibility and re-usability. This paper demonstrates the generality of this new framework by showing how to implement different types of MOEAs, covering several paradigms in evolutionary computation. The work in this paper serves as a guide for researchers, and others alike, to build their own MOEAs using the Tigon optimization library.

EBIC.JL: an efficient implementation of evolutionary biclustering algorithm in Julia

Biclustering is a data mining technique that searches for local patterns in numeric tabular data, with its main application in bioinformatics. The technique has shown promise in multiple areas, including the development of biomarkers for cancer, disease subtype identification, and gene-drug interactions, among others. In this paper we introduce EBIC.JL, an implementation of one of the most accurate biclustering algorithms in Julia, a modern, highly parallelizable programming language for data science. We show that the new version maintains accuracy comparable to its predecessor EBIC while converging faster on the majority of problems. We hope that this open-source software in a high-level programming language will foster research in this promising field of bioinformatics and expedite the development of new biclustering methods for big data.

WORKSHOP SESSION: Workshop: Industrial applications of metaheuristics

A multi-objective genetic algorithm for jacket optimization

Jackets are massive steel towers supporting offshore installations such as oil platforms and wind turbines. Due to the high costs of material, construction, and installation, there is an interest in optimizing such jacket designs. This is an example of the broader problem of structural design optimization.

In this paper, we describe underlying concepts related to the problem of jacket design as well as previous research on jacket design optimization. Motivated by the complexity of the problem, including the multiple objectives typically involved, we develop a novel multi-objective genetic algorithm, NSGA-J, based on the prominent Non-Dominated Sorting Genetic Algorithm (NSGA-II) and tailored to jacket design optimization. Experimentally, we study a cloud-based implementation of NSGA-J and present our results and experiences. The paper ends with a discussion of lessons learned and sketches opportunities for future research. We hope to inspire future work on complex applications of structural design optimization, including jacket design optimization.

Using grammatical evolution for modelling energy consumption on a computer numerical control machine

Discrete manufacturing is known to be a high consumer of energy and much work has been done in continuous improvement and energy saving methods addressing this issue. Computer Numerical Control (CNC) machines, commonly used in the manufacturing of metal parts, are highly energy-demanding because of many required sub-systems, such as cooling, lubrication, logical interfaces and electric motors. For this reason, there is a large body of work focusing on modelling the energy needs of this class of machine.

This paper applies Grammatical Evolution (GE) to develop auto-regressive models of the energy consumption of a CNC machine. Empirical data from three 24-hour work shifts comprising three different types of products are used as inputs. We also introduce an autocorrelation-informed approach for the grammar, which benefits from a prior analysis of the training data to better capture periodic or close-to-periodic behaviour. Finally, we compare the real and predicted energy profiles using an existing analysis tool capable of extracting production-related information, such as total and average kW consumption, the number of parts produced, and the breakdown of production and idle hours. Results show that GE yields accurate and explainable models for the analysed scenario.

A heuristic approach to feasibility verification for truck loading

This paper introduces an industrial decision problem emerging from the daily operations of a company that provides its clients in Europe and Latin America with machine rentals. The company faces several challenges in distributing these machines using a fleet of trucks. The trucks, which have a non-convex loading surface, are loaded with multiple irregularly shaped machines to be picked up from or delivered to customer or depot locations. Within the algorithmic framework utilized by the company, for each route generated, one must verify whether it is possible to load the corresponding machines onto the truck without violating certain restrictions.

The aim of this paper is to develop a competitive and automated algorithm capable of efficiently classifying load plans in a very restrictive scenario. For this purpose, we extend a well-known heuristic algorithm from the two-dimensional (2D) packing literature and compare its performance with two other methods: the original 2D packing method and the method currently employed by the company in practice. The computational experiments are performed on a set of candidate load plans provided by the company. We observe that the newly introduced adaptation outperforms the other methods and correctly classifies 90% of the provided load plans.

Trustworthy AI for process automation on a Chylla-Haase polymerization reactor

In this paper, genetic programming reinforcement learning (GPRL) is utilized to generate human-interpretable control policies for a Chylla-Haase polymerization reactor. Such continuously stirred tank reactors (CSTRs) with jacket cooling are widely used in the chemical industry for the production of fine chemicals, pigments, polymers, and medical products. Despite appearing rather simple, controlling CSTRs in real-world applications is quite a challenging problem. GPRL utilizes existing data from the reactor and fully automatically generates a set of optimized, simple control strategies, so-called policies, from which the domain expert can choose. These policies are white-box models of low complexity, which makes them easy to validate and implement in the target control system, e.g., SIMATIC PCS 7. Despite its low complexity, the automatically generated policy yields high performance in terms of reactor temperature control deviation, which we empirically evaluate on the original reactor template.

Multi tree operators for genetic programming to identify optimal energy flow controllers

Genetic programming is known to find nearly optimal solutions to quite complex problems. So far, the focus has mostly been on solution candidates that hold just one symbolic regression tree. For complex problems, like controlling the energy flows of a building to minimize its energy costs, this is often not sufficient. This work therefore presents a solution candidate implementation in HeuristicLab in which candidates hold multiple symbolic regression trees. New crossover and mutation operators were also implemented, as the existing ones cannot handle multiple trees in one solution candidate. The first type of operator applies crossover or mutation to all trees in the solution candidate, whereas the second applies it to only one of the trees. We find that applying the mutator to only one of the trees significantly reduces the training duration, while applying the crossover to one tree instead of all requires longer training but can also achieve better results.
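
The two operator types reduce to a small difference in how a variation operator is dispatched over the trees of an individual; a hedged Python sketch (names are illustrative, not HeuristicLab's API):

    import random

    def mutate_one(trees, mutate_tree):
        # Apply the mutation operator to one randomly chosen tree only
        # (the variant reported to significantly cut training duration).
        i = random.randrange(len(trees))
        trees[i] = mutate_tree(trees[i])
        return trees

    def mutate_all(trees, mutate_tree):
        # Apply the mutation operator to every tree of the individual.
        return [mutate_tree(t) for t in trees]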

Simulation-based scheduling of a large-scale industrial formulation plant using a heuristics-assisted genetic algorithm

Research has brought forth several promising approaches, e.g. Mixed-integer linear programming [8] or Constraint Programming [13], to represent industrial batch production plants and to optimize their production schedules. Still, after decades of research in the field, scheduling in industry is done (semi-)manually in almost all cases. The main reasons for this, besides the intrinsic combinatorial complexity, are that model development and maintenance require expert knowledge. This work tackles these challenges with a simulation-based optimization approach in which a Genetic Algorithm provides high-level encodings of schedules, and an industrial-strength discrete-event simulator provides a detailed model of the plant and serves as the fitness evaluator. To enable this approach to solve problems of industrial complexity, scheduling heuristics are used to reduce the search space to a reasonable size that still contains the important degrees of freedom, so that optimal or at least high-quality solutions are still obtained. The case study considered here is an industrial two-stage formulation plant, which is modeled in detail, down to the level of shift models of the operators and mass balances from source to sink, ensuring that the computed schedules are directly applicable at the real-world plant.

Addressing the multiplicity of solutions in optical lens design as a niching evolutionary algorithms computational challenge

Optimal Lens Design constitutes a fundamental, long-standing real-world optimization challenge. A potentially large number of optima, a rich variety of critical points, and a solid understanding of certain optimal designs for simple problem instances together provide the motivation to address it as a niching challenge. This study applies the established Niching-CMA-ES heuristic to this design problem (the 6-dimensional Cooke triplet) in a simulation-based fashion. Employing Niching-CMA-ES 'out-of-the-box' proves successful, yet it performs best when assisted by a local searcher that accurately drives the search into optima. The obtained search points are corroborated against concrete knowledge of this problem instance and validated by gradient and Hessian calculations. We extensively report on this computational campaign, which overall resulted in (i) the location of 19 of the 21 known minima within a single run, (ii) the discovery of 540 new optima, minima similar in shape to the 21 theoretical solutions but some with better (heretofore unknown) merit function values, and (iii) the identification of numerous infeasibility pockets throughout the domain (also heretofore unknown). We conclude that the niching mechanism is well-suited to this problem domain and hypothesize on the apparent multidimensional structures formed by the newly attained solutions.

Advanced mine optimisation under uncertainty using evolution

In this paper, we investigate the impact of uncertainty in advanced mine optimisation. We consider Maptek's software system Evolution, which optimizes extraction sequences based on evolutionary computation techniques, and quantify the uncertainty of the obtained solutions with respect to the ore deposit using predictions obtained by ensembles of neural networks. Furthermore, we investigate the impact of staging on the optimized solutions and discuss a wide range of components for this large-scale stochastic optimisation problem that allow us to mitigate the uncertainty in the deposit while maintaining high profitability.

Determining a consistent experimental setup for benchmarking and optimizing databases

The evaluation of the performance of an IT system is a fundamental operation in its benchmarking and optimization. However, despite the general consensus on the importance of this task, little guidance is usually provided to practitioners who need to benchmark their IT systems. In particular, many works in the area of database optimization do not provide an adequate amount of information on the setup used in their experiments and analyses. In this work we report an experimental procedure that, through a sequence of experiments, analyzes the impact of various choices in the design of a database benchmark, leading to the identification of an experimental setup that balances the consistency of the results with the time needed to obtain them. We show that the minimal experimental setup we obtain is also representative of heavier scenarios, which makes it possible for the results of optimization tasks to scale.

Multi-objective evolutionary product bundling: a case study

Product bundling is a strategy used by marketing decision-makers to combine items or services for targeted sales in today's competitive business environment. Targeted sales can take various forms, like increasing the likelihood of a purchase, promoting some products among a specific customer segment, or improving user experience. In this study, we propose an evolutionary product bundle generation strategy based on the NSGA-II algorithm. The proposed approach is designed as a multi-objective optimization procedure whose objectives are expressed in terms of desired bundle feature distributions. The designed genetic algorithm is flexible and allows decision-makers to specify objectives such as price, season, item similarity, and association with bundle size constraints. In the experiments, we show that the evolutionary approach enables us to generate Pareto solutions that improve upon the initial population.

WORKSHOP SESSION: Workshop: International workshop on learning classifier systems

A genetic fuzzy system for interpretable and parsimonious reinforcement learning policies

Reinforcement learning (RL) is experiencing a resurgence in research interest, and Learning Classifier Systems (LCSs) have been applied to it for many years. However, traditional Michigan approaches tend to evolve large rule bases that are difficult to interpret or to scale to domains beyond standard mazes. A Pittsburgh Genetic Fuzzy System (dubbed Fuzzy MoCoCo) is proposed that utilises both multiobjective and cooperative coevolutionary mechanisms to evolve fuzzy rule-based policies for RL environments. Multiobjectivity in the system concerns policy performance vs. complexity. The continuous-state RL environment Mountain Car is used as a test bed for the proposed system. Results show the system is able to effectively explore the trade-off between policy performance and complexity, and to learn interpretable, high-performing policies that use as few rules as possible.

An experimental comparison of explore/exploit strategies for the learning classifier system XCS

When determining the actions to execute, reinforcement learners are constantly faced with the decision of either exploiting existing knowledge or exploring new options, risking short-term costs but potentially improving performance in the long run. This paper describes and experimentally evaluates four existing explore/exploit strategies for the learning classifier system XCS. The evaluation takes place on three well-known learning problems: two multiplexers and one maze environment. An automated parameter optimization is conducted, showing that different environments require different parametrizations of the strategies. Further, our results indicate that none of the strategies is superior to the others. It turns out that multi-step problems with scarce rewards are challenging for the selected strategies, highlighting the need to develop more reliable explore/exploit strategies for such environments.

An overview of LCS research from 2020 to 2021

The International Workshop on Learning Classifier Systems (IWLCS) is an annual workshop at the GECCO conference where new concepts and results regarding learning classifier systems (LCSs) are presented and discussed. One recurring part of the workshop agenda is a presentation that reviews and summarizes the advances made in the field over the last year; this is intended to provide an easy entry point to the most recent progress and achievements. The 2020 presentation was accompanied by a survey workshop paper, a practice which we hereby continue. We give an overview of all the LCS-related publications from 11 March 2020 to 10 March 2021. The 46 publications we review are grouped into seven overall topics: Formal theoretic advances, contributions to LCS-based multi-agent reinforcement learning, approaches to setting and adapting LCS hyperparameters, new LCS architectures and adaptations, LCS implementations, improvements to existing LCSs and applications of LCSs.

WORKSHOP SESSION: Workshop: Landscape-aware heuristic search

Understanding parameter spaces using local optima networks: a case study on particle swarm optimization

A major challenge in utilizing a metaheuristic is finding optimal or near-optimal parameters for a given problem instance. It is well known that the best-performing control parameters are often problem dependent, with poorly chosen parameters even leading to algorithm failure. What is not obvious is how strongly the complexity of the parameter landscape itself is coupled with the underlying objective function the metaheuristic is attempting to solve. In this paper, local optima networks (LONs) are utilized to visualize and analyze the parameter landscapes of particle swarm optimization (PSO) over an array of objective functions. It was found that the structure of the parameter landscape is affected by the underlying objective function, in some cases to a considerable degree across multiple metrics. Furthermore, despite PSO's parameter landscape having a relatively simple macro structure, the LONs demonstrate a considerable amount of complexity at the micro level, making parameter tuning harder for PSO than initially thought. Apart from the PSO-specific findings, this paper also provides a formalism of parameter landscapes and demonstrates that LONs can be an effective tool for the analysis and visualization of the parameter landscapes of metaheuristics.

Investigating the landscape of a hybrid local search approach for a timetabling problem

Curriculum-Based Course Timetabling is an NP-hard problem that can be efficiently solved by metaheuristics. The International Timetabling Competition (ITC) 2007 was won by a hybrid local search (HLS) combining Hill Climbing, Great Deluge, and Simulated Annealing. HLS remains one of the best local search algorithms for this problem. In this paper, we investigate the search landscape of 21 instances to analyze the behavior of the HLS components. We also propose a new distance metric that aims to be more robust and less influenced by symmetry. Experiments show that the HLS and the embedded simulated annealing have the same general behavior, but HLS is more robust. This analysis strongly suggests that the HLS components and/or parameter values should be automatically configured to further improve performance.

Towards population-based fitness landscape analysis using local optima networks

A fitness landscape describes the interaction of a search domain, a cost function on designs drawn from the domain, and a neighbourhood function defining the adjacency of designs (induced by the optimisation method used). Fitness landscapes can be represented in a compact form as Local Optima Networks (LONs). Although research has been conducted on LONs in continuous domains, the majority of work has focused on combinatorial landscapes. LONs are often used to understand the landscape encountered by population-based search heuristics, but are usually constructed via point-based search. This paper proposes the first construction of LONs by a population-based algorithm, applied to continuous optimisation problems. We construct LONs for three benchmark functions with well-known global structure using the widely used Nelder-Mead downhill simplex algorithm, and contrast these with the LONs from a point-based approach. We also investigate the sensitivity of the LON visualisation to the downhill simplex algorithm's hyperparameters, by varying the initial step size of the simplex and the step size for connectivity of optima. Our results suggest that large initial simplex sizes fragment the landscape structure and exclude some local optima from the fitness landscape.

Dissipative polynomials

Limited precision floating point computer implementations of large polynomial arithmetic expressions are nonlinear and dissipative. They are not reversible (irreversible, lack conservation), lose information, and so are robust to perturbations (anti-fragile) and resilient to fluctuations. This gives a largely stable locally flat evolutionary neutral fitness search landscape. Thus even with a large number of test cases, both large and small changes deep within software typically have no effect and are invisible externally. Shallow mutations are easier to detect but their RMS error need not be simple.

Analysing the loss landscape features of generative adversarial networks

Generative adversarial networks (GANs) have recently gained popularity in artificial intelligence research due to their superior generation, enhancement and style transfer of content compared to other generative models. Introduced in 2014, GANs have been used in the fields of computer vision, natural language processing, medical applications, and cyber security, with the number of use cases rapidly growing. GANs are, however, difficult to train in practice due to their inherent high dimensionality, and the complexity associated with the adversarial learning task. Loss landscape analysis can aid in unravelling reasons for difficulty in training GANs as the analysis creates a topology of the search space. The vanilla and deep convolutional GAN architectures are examined in this study to gain a deeper understanding of their loss landscapes during training. The GAN loss landscape features are visualised through the use of loss gradient clouds (LGCs). The LGC analysis showcases the importance of volatility in the training of GANs, as a range of gradient magnitudes allows more exploration in finding an appropriate middle ground in balancing the loss objectives of the GAN.

Dynamic landscape analysis for open-ended stacking

Fitness landscape analysis (FLA) is a useful tool in the domain of (meta-)heuristic optimization but depends on explicitly knowing what fitness value is assigned to each solution. Dynamic optimization problems often do not provide their fitness landscape in such an explicit form, but by employing problem-specific knowledge, information about the problem itself and its current state can still be obtained. In this paper, a type of gray-box analysis of states of the open-ended stacking problem in two variations is presented. The current states obtained by monitoring the problem and algorithm during optimization are described via statistical measures similar to FLA measures. From this the distribution of possible states (the state landscape) and the transitions between problem states are analyzed. Visualization of the empirically obtained results reveals insights into algorithm-problem dynamics.

WORKSHOP SESSION: Workshop: Neuroevolution at work

Prediction of personalized blood glucose levels in type 1 diabetic patients using a neuroevolution approach

Diabetes mellitus is a lifelong disease in which either the pancreas fails to produce insulin or the amount produced is insufficient to control blood sugar levels. A way to tackle this malfunction is to devise an artificial pancreas endowed with a personalized control algorithm able to regulate the insulin dosage. A crucial step in realizing such a device is to effectively forecast future glucose levels starting from past glucose values, knowledge of food intake, and of the basal and injected insulin. The increasing availability of medical diabetes data sets is providing unprecedented opportunities to identify correlations inside these data, harnessing innovative investigation methods such as deep learning.

As an alternative to the deep learning methods successfully used as forecasting models, we exploit a neuroevolution algorithm to model and predict future personalized blood glucose levels. The discovered subjective regression model can represent the control algorithm of an artificial pancreas. This model is assessed through experiments performed on a real-world database containing data of six patients suffering from Type 1 diabetes. To further evaluate the effectiveness of the predictions derived from the proposed approach, the results are also compared against those obtained by other state-of-the-art recently proposed methods.

Evolving neural selection with adaptive regularization

Over-parameterization is one of the inherent characteristics of modern deep neural networks, which can often be overcome by leveraging regularization methods, such as Dropout [31]. Usually, these methods are applied globally and all the input cases are treated equally. However, given the natural variation of the input space for real-world tasks such as image recognition and natural language understanding, it is unlikely that a fixed regularization pattern will have the same effectiveness for all the input cases. In this work, we demonstrate a method in which the selection of neurons in deep neural networks evolves, adapting to the difficulty of prediction. We propose the Adaptive Neural Selection (ANS) framework, which evolves to weigh neurons in a layer to form network variants that are suitable to handle different input cases. Experimental results show that the proposed method can significantly improve the performance of commonly-used neural network architectures on standard image recognition benchmarks. Ablation studies also validate the effectiveness and contribution of each component in the proposed framework.

Pareto-optimal progressive neural architecture search

Neural Architecture Search (NAS) is the process of automating architecture engineering, searching for the best deep learning configuration. One of the main NAS approaches proposed in the literature, Progressive Neural Architecture Search (PNAS), searches for architectures with a sequential model-based optimization strategy: it defines a common recursive structure to generate the networks, whose number of building blocks rises through iterations. However, NAS algorithms are generally designed for an ideal setting, without considering the needs and technical constraints imposed by practical applications. In this paper, we propose a new architecture search method named Pareto-Optimal Progressive Neural Architecture Search (POPNAS) that combines the benefits of PNAS with a time-accuracy Pareto optimization. POPNAS adds a time predictor to the existing approach to carry out a joint prediction of time and accuracy for each candidate neural network, searching through the Pareto front. This allows us to reach a trade-off between accuracy and training time, identifying neural network architectures with competitive accuracy at a drastically reduced training time.

Neuroevolution of recurrent neural networks for time series forecasting of coal-fired power plant operating parameters

This work presents how the Evolutionary eXploration of Augmenting Memory Models (EXAMM) neuroevolution algorithm is incorporated into Microbeam Technologies' condition-based monitoring power plant optimization software, using a workflow that integrates coal-fired power plant data collection, evolved RNN predictions, and predictions of analytic performance indices. To the authors' knowledge, this is the first use of a neuroevolution strategy to evolve recurrent neural networks (RNNs) for forecasting power plant parameters where the evolved networks have been incorporated into production software used at a coal-fired power plant. A preliminary exploration of the plant's performance shows that after incorporating this software, the amount of revenue lost due to power plant derates and outages decreased by $7.3 million, a savings of 42%, and efficiency increased under medium- and low-load conditions. A further investigation of the effect of training sequence length and time series data normalization methods on evolving and training RNNs for this system is given, providing practical results useful for real-world time series forecasting. It is shown that dividing long time series sequences into shortened training sequences can dramatically speed up training, and that different normalization methods (min-max vs. z-score) can yield statistically significant differences in results, depending on the data set.

On the effects of pruning on evolved neural controllers for soft robots

Artificial neural networks (ANNs) are commonly used for controlling robotic agents. For robots with many sensors and actuators, ANNs can be very complex, with many neurons and connections. Removal of neurons or connections, i.e., pruning, may be desirable because (a) it reduces the complexity of the ANN, making its operation more energy efficient, and (b) it might improve the generalization ability of the ANN. Whether these goals can actually be achieved in practice is however still not well known. On the other hand, it is widely recognized that pruning in biological neural networks plays a fundamental role in the development of brains and their ability to learn. In this work, we consider the case of Voxel-based Soft Robots, a kind of robots where sensors and actuators are distributed over the body and that can be controlled with ANNs optimized by means of neuroevolution. We experimentally characterize the effect of different forms of pruning on the effectiveness of neuroevolution, also in terms of generalization ability of the evolved ANNs. We find that, with some forms of pruning, a large portion of the connections can be pruned without strongly affecting robot capabilities. We also observe sporadic improvements in generalization ability.

Behavior-based neuroevolutionary training in reinforcement learning

In addition to their undisputed success in solving classical optimization problems, neuroevolutionary and population-based algorithms have become an alternative to standard reinforcement learning methods. However, evolutionary methods often lack the sample efficiency of standard value-based methods that leverage gathered state and value experience. If reinforcement learning for real-world problems with significant resource cost is considered, sample efficiency is essential. The enhancement of evolutionary algorithms with experience exploiting methods is thus desired and promises valuable insights. This work presents a hybrid algorithm that combines topology-changing neuroevolutionary optimization with value-based reinforcement learning. We illustrate how the behavior of policies can be used to create distance and loss functions, which benefit from stored experiences and calculated state values. They allow us to model behavior and perform a directed search in the behavior space by gradient-free evolutionary algorithms and surrogate-based optimization. For this purpose, we consolidate different methods to generate and optimize agent policies, creating a diverse population. We exemplify the performance of our algorithm on standard benchmarks and a purpose-built real-world problem. Our results indicate that combining methods can enhance the sample efficiency and learning speed for evolutionary approaches.

Hybrid encodings for neuroevolution of convolutional neural networks: a case study

The neuroevolution of convolutional neural networks (CNNs) has yielded highly competitive results in the field of visual recognition in recent years. Some of the most recent advances in this field concern the design of neural encodings to represent these highly complex deep learning structures. Hybrid encodings have shown potential for distributing the representation of these networks into different sub-structures and thus improving the search. In this paper, we propose a compact hybrid encoding, which is used in an evolutionary framework called Deep Genetic Algorithm (DeepGA). We assess the performance of our simple representation against a hybrid encoding based on DenseBlocks, to evaluate how certain encodings might bias the search towards larger CNNs, in both single- and multi-objective scenarios. Our case study is the classification of lung conditions in chest X-ray images.

WORKSHOP SESSION: Workshop: Parallel and distributed evolutionary inspired methods

A partially asynchronous global parallel genetic algorithm

Population-based meta-heuristics such as Genetic Algorithms (GAs) are ideal for exploiting multiple processor cores. With parallel architectures now standard, computationally intensive methods need to harness them to best effect. A synchronous globally parallel GA creates and evaluates population members in parallel at each generation, resulting in considerable processor time spent waiting for threads. An asynchronous approach, whereby parallel threads continue evolution without waiting, addresses this issue but can result in memory conflicts. This paper introduces an asynchronous global GA model for shared-memory CPUs without memory conflicts. Experiments demonstrate performance gains of 1.35- to 12-fold, dependent on problem and population sizes. However, an asynchronous model leads to non-uniform evolution, reducing accuracy. Consequently, this paper demonstrates that combining synchronous and asynchronous methods into a partially asynchronous model retains a speed advantage whilst improving solution accuracy.
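
For illustration only, the Python sketch below shows the synchronous globally parallel pattern whose per-generation waiting motivates the paper's asynchronous model; it is a generic sketch, not the authors' implementation, and fitness is a stand-in objective.

    from concurrent.futures import ProcessPoolExecutor
    import random

    def fitness(genome):
        # Stand-in objective: sphere function (minimisation)
        return sum(g * g for g in genome)

    def evaluate_generation(population):
        """Synchronous global parallelism: members are evaluated in
        parallel, but the generation blocks until every result is in."""
        with ProcessPoolExecutor() as pool:
            return list(pool.map(fitness, population))

    if __name__ == "__main__":
        pop = [[random.uniform(-5, 5) for _ in range(10)] for _ in range(64)]
        scores = evaluate_generation(pop)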

An operation to promote diversity in evolutionary algorithms in a dynamic hybrid island model

There is currently a considerable variety of Evolutionary Algorithms (EAs), and some have become popular owing to their performance. EAs can be implemented in different ways, such as in the Island Model (IM). However, despite the good performance of some EAs and the possibilities of varying their implementations, they can converge to a local optimum, mainly because of the loss of diversity in the population. This work proposes an operation for a dynamic hybrid IM (D-IM), aiming to promote population diversity when the population converges to a certain portion of the search space. The D-IM thus reacts to possible local convergence of its population, in addition to adjusting the topology according to the EAs in the islands. The results demonstrate that the proposed operation can improve the efficiency of the D-IM search process and be competitive for solving bound-constrained optimization problems.

Solving QUBO with GPU parallel MOPSO

The Quadratic Unconstrained Binary Optimization problem (QUBO) is an NP-hard optimization problem. Many other combinatorial optimization problems can be reduced to QUBO; hence, if we can solve QUBO, we can also solve many other problems. The paper proposes a novel method to solve QUBO by reducing it to the Bi-objective Bound-constrained Continuous Optimization problem (BBCO) and then solving the BBCO with Multi-Objective Particle Swarm Optimization (MOPSO). Using 45 benchmark problem instances, the paper also shows that a GPU parallel implementation of the proposed method on the CUDA architecture finds solutions with low relative errors for the benchmark instances with at most 100 variables (76% of the benchmark instances) and runs up to 202 times faster than the corresponding multi-threaded CPU program on four physical cores.
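
As context (a sketch of the problem form only, not the paper's BBCO reduction or GPU MOPSO), a QUBO instance asks for a binary vector x minimising x^T Q x; the brute-force loop below is feasible only for tiny n, which is what makes heuristic solvers attractive.

    import numpy as np

    def qubo_value(Q, x):
        """QUBO objective: x^T Q x for a binary vector x."""
        return float(x @ Q @ x)

    rng = np.random.default_rng(0)
    n = 3
    Q = rng.integers(-5, 6, size=(n, n)).astype(float)

    # Exhaustive search over all 2^n binary vectors (tiny n only)
    assignments = (np.array([(i >> b) & 1 for b in range(n)], dtype=float)
                   for i in range(2 ** n))
    best_x = min(assignments, key=lambda x: qubo_value(Q, x))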

Conduit: a C++ library for best-effort high performance computing

Developing software to effectively take advantage of growth in parallel and distributed processing capacity poses significant challenges. Traditional programming techniques allow a user to assume that execution, message passing, and memory are always kept synchronized. However, maintaining this consistency becomes increasingly costly at scale. One proposed strategy is "best-effort computing", which relaxes synchronization and hardware reliability requirements, accepting nondeterminism in exchange for efficiency. Although many programming languages and frameworks aim to facilitate software development for high performance applications, existing tools do not directly provide a prepackaged best-effort interface. The Conduit C++ Library aims to provide such an interface for convenient implementation of software that uses best-effort inter-thread and inter-process communication. Here, we describe the motivation, objectives, design, and implementation of the library. Benchmarks on a communication-intensive graph coloring problem and a compute-intensive digital evolution simulation show that Conduit's best-effort model can improve scaling efficiency and solution quality, particularly in a distributed, multi-node context.

Island model in ActoDatA: an actor-based implementation of a classical distributed evolutionary computation paradigm

In this paper, we make a first assessment of the performance of ActoDatA, a novel actor-based software library for distributed data analysis and machine learning in Java that we have recently developed. To do so we have implemented an evolutionary machine learning application based on a distributed island model. The model implementation is compared to an equivalent implementation in ECJ, a popular general-purpose evolutionary computation library that provides support for distributed computing.

The testbed used for comparing the two distributed versions was an application of sub-machine-code Genetic Programming to the design of efficient low-resolution binary image classifiers. The results we obtained show that the ActoDatA implementation is more efficient than the corresponding ECJ implementation.

WORKSHOP SESSION: Workshop: Real-world applications of continuous and mixed-integer optimization

House price prediction using clustering and genetic programming along with conducting a comparative study

One of the most important tasks in machine learning is prediction. Data scientists use different regression methods to find the most appropriate and accurate model for each type of dataset. This study proposes a method to improve accuracy in regression and prediction. In common approaches, different models are applied to the whole dataset to find the model with the highest accuracy. In our proposed approach, we first cluster the data using methods such as K-means, DBSCAN, and agglomerative hierarchical clustering. Then, for each clustering method and each generated cluster, we apply various regression models, including linear and polynomial regression, SVR, neural networks, and symbolic regression, in order to find the most accurate model and to study the potential of genetic programming in improving prediction accuracy. The model is thus a combination of clustering and regression. After clustering, the number of samples in each cluster is smaller than in the whole dataset, and this reduction in sample size can cost accuracy. On the other hand, grouping similar samples together enhances accuracy and decreases the computational cost. As a case study, we used real estate data with 20 features to improve house price estimation; however, the approach is applicable to other large datasets.
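
A minimal sketch of the cluster-then-regress idea, assuming scikit-learn and synthetic data in place of the real-estate dataset; the paper additionally compares several clustering methods and regressors, which are omitted here.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(42)
    X = rng.normal(size=(200, 5))                      # stand-in features
    y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=200)

    # 1) cluster the samples, 2) fit one regression model per cluster
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    models = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
              for c in np.unique(km.labels_)}

    # A new sample is routed to its cluster's model for prediction
    x_new = rng.normal(size=(1, 5))
    y_hat = models[int(km.predict(x_new)[0])].predict(x_new)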

A matheuristic approach for finding effective base locations and team configurations for north west air ambulance

North West Air Ambulance (NWAA) provides a helicopter emergency medical service to the Northwest of England. Their three healthcare teams provide this service from two bases with three helicopters. NWAA faces research questions about the impact of the air ambulance base locations and the healthcare teams' composition on their service. This paper addresses those questions by modelling their operations as a location-related decision problem. We then develop a matheuristic approach to solve the model and generate many realistic instances from historical data to validate the robustness of our proposed approach. With the help of our experimental results, we examine the effect of adjusting air ambulance base locations and team configurations on the service quality measures to answer the research questions.

Project portfolio selection with defense capability options

This paper proposes a novel formulation of the project portfolio selection and scheduling problem inspired by the Future Defense Force Design process in the context of the Australian Defence Force capability development. The core objective of the problem is to maximize the total capability portfolio value attained by the selection and scheduling of a set of capability projects, grouped in various subsets referred to as capability options, while adhering to budgetary, scheduling, and operational constraints. To provide initial solutions to the proposed model, a custom heuristic is developed and used to seed an initial population for a Genetic Algorithm.

Black-box adversarial attacks using evolution strategies

In the last decade, deep neural networks have proven to be very powerful in computer vision tasks, starting a revolution in the computer vision and machine learning fields. However, deep neural networks are usually not robust to perturbations of the input data. In fact, several studies have shown that slightly changing the content of an image can cause a dramatic decrease in the accuracy of the attacked neural network. Several methods for generating adversarial samples make use of gradients, which are usually not available to an attacker in real-world scenarios. In contrast, another class of adversarial attacks, called black-box adversarial attacks, has emerged; these do not use gradient information and are thus more suitable for real-world attack scenarios. In this work, we compare three well-known evolution strategies on the generation of black-box adversarial attacks for image classification tasks. While our results show that the attacked neural networks can, in most cases, be easily fooled by all the algorithms under comparison, they also show that some black-box optimization algorithms may be better in "harder" setups, both in terms of attack success rate and efficiency (i.e., number of queries).
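​
To make the black-box setting concrete, here is a hedged (1+1)-style evolution strategy sketch in Python: it queries only a scoring function (no gradients) and keeps a perturbation when the score improves. It is a generic illustration, not one of the three evolution strategies compared in the paper, and score is a hypothetical stand-in for model queries.

    import numpy as np

    def es_attack(image, score, sigma=0.05, budget=200, eps=0.1, seed=0):
        """(1+1)-ES-style black-box attack: mutate the perturbation and
        keep it only if the queried score improves (maximisation)."""
        rng = np.random.default_rng(seed)
        delta = np.zeros_like(image)
        best = score(image)
        for _ in range(budget):
            cand = np.clip(delta + rng.normal(scale=sigma, size=image.shape),
                           -eps, eps)                  # keep perturbation small
            val = score(np.clip(image + cand, 0.0, 1.0))
            if val > best:
                best, delta = val, cand
        return np.clip(image + delta, 0.0, 1.0)

    # Toy demo: the "score" rewards moving away from a reference template
    img = np.zeros((8, 8))
    ref = np.full((8, 8), 0.5)
    adv = es_attack(img, lambda x: float(np.abs(x - ref).mean()))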

Evolutionary algorithms in high-dimensional radio access network optimization

This article describes the results of a project on modeling and optimizing a Radio Access Network. We propose a solution for controlling a large number of antennas under engineering constraints and in a large search space. To estimate performance, a virtual environment has been developed that allows changing the parameters of Radio Access antennas to control the coverage and signal quality for all User Equipments. To optimize the Radio Access Network, we analyzed DE, CMA-ES, MOS, self-adaptive surrogate CMA-ES, lq-CMA-ES, BIPOP CMA-ES, sep-CMA-ES, lm-CMA-ES, HMO-CMA-ES, JADE, and PSO, adapted to the constraints of the task. To reduce the dimension, graph clustering methods (Spectral Clustering, Label Propagation, Markov Clustering) are compared for dividing the network into groups. The experiments illustrate the efficiency of optimizing a large Radio Access Network with the cluster approach.

WORKSHOP SESSION: Workshop: Surrogate-assisted evolutionary optimisation

Preferential Bayesian optimisation with skew gaussian processes

Preferential Bayesian optimisation (PBO) deals with optimisation problems where the objective function can only be accessed via preference judgments, such as "this is better than that" between two candidate solutions (like in A/B tests). The state-of-the-art approach to PBO uses a Gaussian process to model the preference function and a Bernoulli likelihood to model the observed pair-wise comparisons. Laplace's method is then employed to compute posterior inferences and, in particular, to build an appropriate acquisition function. In this paper, we prove that the true posterior distribution of the preference function is a Skew Gaussian Process (SkewGP), with highly skewed pairwise marginals and, thus, show that Laplace's method usually provides a very poor approximation. We then derive an efficient method to compute the exact SkewGP posterior and use it as surrogate model for PBO employing standard acquisition functions (Upper-Credible-Bound, etc.). We illustrate the benefits of our exact PBO-SkewGP in a variety of experiments, by showing that it consistently outperforms PBO based on Laplace's approximation both in terms of convergence speed and computational time. We also show that our framework can be extended to deal with mixed preferential-categorical BO, where binary judgments (valid or non-valid) together with preference judgments are available.

Black-box mixed-variable optimisation using a surrogate model that satisfies integer constraints

A challenging problem in both engineering and computer science is that of minimising a function for which we have no mathematical formulation available, that is expensive to evaluate, and that contains continuous and integer variables, for example in automatic algorithm configuration. Surrogate-based algorithms are very suitable for this type of problem, but most existing techniques are designed with only continuous or only discrete variables in mind. Mixed-Variable ReLU-based Surrogate Modelling (MVRSM) is a surrogate-based algorithm that uses a linear combination of rectified linear units, defined in such a way that (local) optima satisfy the integer constraints. Unlike other methods, it also has a constant run-time per iteration. This method outperforms the state of the art on several synthetic benchmarks with up to 238 continuous and integer variables, and achieves competitive performance on two real-life benchmarks: XG-Boost hyperparameter tuning and Electrostatic Precipitator optimisation.
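
A minimal sketch of the surrogate form used by MVRSM, a linear combination of rectified linear units, fitted here by least squares on synthetic data; the paper's specific placement of the units, which makes local optima satisfy the integer constraints, is not reproduced.

    import numpy as np

    def relu_features(X, A, b):
        """Evaluate k ReLU basis functions max(0, a_k . x + b_k)."""
        return np.maximum(0.0, X @ A.T + b)

    rng = np.random.default_rng(1)
    d, k = 4, 50                               # 4 variables, 50 ReLU units
    A = rng.normal(size=(k, d))
    b = rng.normal(size=k)

    # Fit surrogate weights to observed samples by least squares
    X = rng.integers(0, 10, size=(80, d)).astype(float)   # integer sample points
    y = np.sin(X).sum(axis=1)                             # stand-in objective
    w, *_ = np.linalg.lstsq(relu_features(X, A, b), y, rcond=None)

    def surrogate(x):
        """Cheap prediction used in place of the expensive objective."""
        return float(relu_features(np.atleast_2d(x), A, b) @ w)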

How Bayesian should Bayesian optimisation be?

Bayesian optimisation (BO) uses probabilistic surrogate models - usually Gaussian processes (GPs) - for the optimisation of expensive black-box functions. At each BO iteration, the GP hyperparameters are fit to previously-evaluated data by maximising the marginal likelihood. However, this fails to account for uncertainty in the hyperparameters themselves, leading to overconfident model predictions. This uncertainty can be accounted for by taking the Bayesian approach of marginalising out the model hyperparameters. We investigate whether a fully-Bayesian treatment of the Gaussian process hyperparameters in BO (FBBO) leads to improved optimisation performance. Since an analytic approach is intractable, we compare FBBO using three approximate inference schemes to the maximum likelihood approach, using the Expected Improvement (EI) and Upper Confidence Bound (UCB) acquisition functions paired with ARD and isotropic Matérn kernels, across 15 well-known benchmark problems for 4 observational noise settings. FBBO using EI with an ARD kernel leads to the best performance in the noise-free setting, with much less difference between combinations of BO components when the noise is increased. FBBO leads to over-exploration with UCB, but is not detrimental with EI. Therefore, we recommend FBBO using EI with an ARD kernel as the default choice for BO.

A two-phase surrogate approach for high-dimensional constrained discrete multi-objective optimization

This paper presents a two-phase surrogate approach for high-dimensional constrained discrete multi-objective optimization. In Phase I, the algorithm searches for a feasible point using surrogates for the constraints and objectives. In Phase I iterations, the algorithm identifies the infeasible points that are nondominated according to three criteria: number of constraint violations, maximum constraint violation, and sum of squares of constraint violations. Moreover, the function evaluation point is chosen from a large number of trial points in the neighborhood of a current nondominated point according to the predicted values of the above criteria. In Phase II, the algorithm searches for Pareto optimal solutions using surrogates for the objectives and constraints. In Phase II iterations, the function evaluation point is chosen from trial points that are predicted to be feasible and nondominated in the neighborhood of a current nondominated point using distance criteria in the objective and decision spaces. The algorithm is implemented using RBF surrogates and tested on the Mazda benchmark problem that has 222 discrete variables, 54 constraints and 2 objectives. The proposed method found feasible points much more quickly and obtained much better sets of nondominated objective vectors than an NSGA-II implementation given a budget of only 3330 simulations.
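
For illustration, the three Phase I criteria named above can be computed as in the hedged Python sketch below (assuming constraints in the form g_i(x) <= 0); the surrogate modelling and trial-point machinery of the algorithm are omitted.

    import numpy as np

    def violation_criteria(g):
        """Phase I criteria for an infeasible point with constraint
        values g (feasible iff all g_i <= 0): number of violations,
        maximum violation, and sum of squared violations."""
        v = np.maximum(np.asarray(g, dtype=float), 0.0)
        return np.array([np.count_nonzero(v), v.max(), np.sum(v ** 2)])

    def nondominated_indices(constraint_values):
        """Indices of points not dominated on the three criteria (minimised)."""
        crit = np.array([violation_criteria(g) for g in constraint_values])
        return [i for i, c in enumerate(crit)
                if not any(np.all(o <= c) and np.any(o < c)
                           for j, o in enumerate(crit) if j != i)]

    idx = nondominated_indices([[0.5, -1.0], [0.2, 0.1], [0.0, 0.6]])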

Augmenting high-dimensional nonlinear optimization with conditional GANs

Many mathematical optimization algorithms fail to sufficiently explore the solution space of high-dimensional nonlinear optimization problems due to the curse of dimensionality. This paper proposes generative models as a complement to optimization algorithms to improve performance in problems with high dimensionality. To demonstrate this method, a conditional generative adversarial network (C-GAN) is used to augment the solutions produced by a genetic algorithm (GA) for a 311-dimensional nonconvex multi-objective mixed-integer nonlinear optimization. The C-GAN, composed of two networks with three fully connected hidden layers, is trained on solutions generated by GA, and then given sets of desired labels (i.e., objective function values), generates complementary solutions corresponding to those labels. Six experiments are conducted to evaluate the capabilities of the proposed method. The generated complementary solutions are compared to the original solutions in terms of optimality and diversity. The generative model generates solutions with objective functions up to 79% better, and with hypervolumes up to 58% higher, than the original solutions. These findings show that a C-GAN with even a simple training approach and architecture can, with a much shorter runtime, highly improve the diversity and optimality of solutions found by an optimization algorithm for a high-dimensional nonlinear optimization problem. [GitHub repository: https://github.com/PouyaREZ/GAN_GA]

WORKSHOP SESSION: Workshop: Genetic and evolutionary computation in defense, security, and risk management

Multi-objective evolutionary algorithms for distributed tactical control of heterogeneous agents

Controlling large numbers of heterogeneous agents in dynamic environments has a number of civilian and defense applications, but presents challenges in cooperation among agents and decision making in uncertain environments. This paper investigates a new problem representation using genetic-algorithm-tuned potential fields and a new multi-objective problem formulation that evolves distributed control for large numbers of cooperating and competing heterogeneous agents in dynamic environments. Using a real-time-strategy-game-like simulation as a test bed, results show that the proposed approach scales to controlling large numbers and different types of agents. Our representation uses influence maps to choose a target and a set of potential fields to control the maneuverability of agents in real time. We formulated this problem as a multi-objective optimization problem and used an evolutionary multi-objective optimization technique to maximize two conflicting objectives in simulation skirmishes. Results indicate that our evolutionary-algorithm-based representation produces good cooperative behavior and generalizes well across groups composed of several different types of agents.

Deceiving neural source code classifiers: finding adversarial examples with grammatical evolution

This work presents an evolutionary approach for assessing the robustness of a system trained to detect software vulnerabilities. By applying a Grammatical Evolution genetic algorithm, and using the output of the system being assessed as the fitness function, we show how we can easily change the classification decision (i.e., vulnerable or not vulnerable) for a given instance by simply injecting evolved features that in no way affect the functionality of the program. Additionally, by means of the same technique, that is, by simply modifying the program instances, we show how we can significantly decrease the accuracy of the whole system on the dataset used for the test phase.

Finally we remark that these methods can be easily customized for applications in different domains and also how the underlying ideas can be exploited for different purposes, such as the exploration of the behaviour of a generic neural system.

Competitive coevolution for defense and security: Elo-based similar-strength opponent sampling

Competitive coevolution is an important technique for fields such as defense and security which inherently involve adversarial games. One advantage that the field of computer security has in particular is that games in this space are often naturally able to be simulated at a high fidelity by interacting with the involved software or hardware directly. However, such high-fidelity evaluations are typically slow, so it is especially important in these cases to get as much useful information out of as few evaluations as possible. This paper proposes a new competitive coevolutionary evaluation method of Similar-Strength Opponent Sampling, which selects opponent pairings of similar skill levels so that evaluations can more efficiently distinguish the performances of similar individuals. This is enabled through the use of Elo ratings as a surrogate fitness function that prevents bias against individuals assigned stronger opponents. Care is taken to ensure that this technique is applicable to complex games where there is no explicit winner or loser, allowing ratings to be based on relative fitness. Mixed results are presented, showing that significant benefits are gained from pairing similar-strength opponents, but finding that the use of Elo rating instead of raw fitness harms evolution for intransitive games.
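
A minimal sketch of the two ingredients the method combines, standard Elo rating updates and pairing by similar rating; this is an illustration under simplifying assumptions (explicit win/draw/loss outcomes), whereas the paper also covers games scored by relative fitness.

    import random

    def elo_update(r_a, r_b, score_a, k=32.0):
        """Standard Elo update; score_a is 1.0 (win), 0.5 (draw), 0.0 (loss)."""
        expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
        delta = k * (score_a - expected_a)
        return r_a + delta, r_b - delta

    def similar_strength_pairs(ratings):
        """Pair each individual with the closest-rated available opponent."""
        order = sorted(ratings, key=ratings.get)
        return list(zip(order[::2], order[1::2]))

    rng = random.Random(0)
    ratings = {i: 1500.0 + rng.gauss(0, 100) for i in range(8)}
    for a, b in similar_strength_pairs(ratings):
        outcome = rng.choice([0.0, 0.5, 1.0])          # stand-in evaluation
        ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], outcome)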

Simulating a logistics enterprise using an asymmetrical wargame simulation with soar reinforcement learning and coevolutionary algorithms

We demonstrate an innovative framework (CoEvSoarRL) that leverages machine learning algorithms to optimize and simulate a resilient and agile logistics enterprise, improving readiness and sustainment and reducing operational risk. CoEvSoarRL is an asymmetrical wargame simulation that leverages reinforcement learning and coevolutionary algorithms to improve the functions of a total logistics enterprise value chain. We address two key challenges: (1) the need for holistic prediction, optimization, and wargame simulation to improve total logistics enterprise readiness; (2) the uncertainty and lack of data, which require large-scale systematic what-if scenarios and analysis of alternatives to simulate potential new and unknown situations. CoEvSoarRL learns a model of the logistics enterprise environment from historical data with Soar reinforcement learning. The Soar model is then used to evaluate new decisions and operating conditions. We simulate the logistics enterprise vulnerability (risk) and evolve new, more difficult operating conditions (tests); meanwhile, we coevolve better logistics enterprise decisions (solutions) to counter the tests. We present proof-of-concept results from a US Marine Corps maintenance and supply chain data set.

WORKSHOP SESSION: Workshop: Swarm intelligence algorithms: Foundations, perspectives and challenges

Self-organizing migrating algorithm with clustering-aided migration and adaptive perturbation vector control

The paper proposes the Self-organizing Migrating Algorithm with CLustering-aided migration and adaptive Perturbation vector control (SOMA-CLP). SOMA-CLP is the next iteration of the SOMA-CL algorithm, further enhanced by linear adaptation of the prt control parameter used to generate the perturbation vector. The latest CEC 2021 benchmark set for single-objective bound-constrained optimization was used to measure the performance of the improved variant. The results of SOMA-CLP were compared and tested for statistical significance against four other SOMA variants.

HCS-BBD: an effective population-based approach for multi-level thresholding

Thresholding is one of the most common techniques for image segmentation where an image is partitioned into several parts based on its histogram of pixel intensities. Conventional algorithms work efficiently for bi-level thresholding where an image is divided into fore- and background, but their efficiency drastically declines for the more complex case of multi-level thresholding due to the exhaustive search that is employed. To address this problem, in this paper we consider multi-level thresholding as an optimisation problem and propose a novel population-based algorithm, HCS-BBD, which is based on cuckoo search (CS) and biogeography-based optimisation (BBO). To this end, HCS-BBD integrates a heterogeneous cuckoo search strategy with a biogeography-based discovery operator. Our findings in comparison to state-of-the-art and recent population-based algorithms on different images convincingly demonstrate HCS-BBD's excellent capability in finding optimal threshold values.
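
To show the optimisation view of multi-level thresholding taken here (a hedged Python sketch, not HCS-BBD itself), the Otsu-style between-class variance below is one common fitness function over a vector of threshold values; the paper's exact criterion may differ.

    import numpy as np

    def between_class_variance(hist, thresholds):
        """Otsu-style fitness: thresholds split the grey-level histogram
        into classes; maximise the weighted spread of class means."""
        p = hist / hist.sum()
        levels = np.arange(len(hist))
        mu_total = float((p * levels).sum())
        edges = [0, *sorted(int(t) for t in thresholds), len(hist)]
        score = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            w = p[lo:hi].sum()
            if w > 0:
                mu = float((p[lo:hi] * levels[lo:hi]).sum() / w)
                score += w * (mu - mu_total) ** 2
        return float(score)

    hist = np.random.default_rng(0).integers(1, 100, size=256).astype(float)
    fitness = between_class_variance(hist, thresholds=[85, 170])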

A population-based automatic clustering algorithm for image segmentation

Clustering is one of the prominent approaches for image segmentation. Conventional algorithms such as k-means, while extensively used for image segmentation, suffer from problems such as sensitivity to initialisation and getting stuck in local optima. To overcome these, population-based metaheuristic algorithms can be employed. This paper proposes a novel clustering algorithm for image segmentation based on the human mental search (HMS) algorithm, a powerful population-based algorithm to tackle optimisation problems. One of the advantages of our proposed algorithm is that it does not require any information about the number of clusters. To verify the effectiveness of our proposed algorithm, we present a set of experiments based on objective function evaluation and image segmentation criteria to show that our proposed algorithm outperforms existing approaches.

A differential particle scheme and its application to PID parameter tuning of an inverted pendulum

Gradient-free stochastic optimization algorithms are widely used for finding suitable parameter configurations across independent runs. Attaining low variability of convergence performance across independent runs is crucial to allow generalization over distinct problem domains. This paper investigates the performance of a differential particle system in stabilizing a nonlinear inverted pendulum under diverse and challenging initial conditions. Compared to relevant algorithms in the literature, our experiments show the feasibility of achieving lower convergence variability in stabilizing a nonlinear pendulum over independent runs and initial conditions within a reasonable computational load.

Explaining SOMA: the relation of stochastic perturbation to population diversity and parameter space coverage

The Self-Organizing Migrating Algorithm (SOMA) is enjoying a renewed interest of the research community, following recent achievements in various application areas and renowned performance competitions. In this paper, we focus on the importance and effect of the perturbation operator in SOMA as the perturbation is one of the fundamental inner principles of SOMA. In this in-depth study, we present data, visualizations, and analysis of the effect of the perturbation on the population, its diversity and average movement patterns. We provide evidence that there is a direct relation between the perturbation intensity (set by control parameter prt) and the rate of diversity loss. The perturbation setting further affects the exploratory ability of the algorithm, as is demonstrated here by analysing the parameter space coverage of the population. We aim to provide insight and explanation of the impact of perturbation in SOMA for future researchers and practitioners.
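
For readers unfamiliar with the operator under study, the sketch below shows the standard SOMA perturbation mechanism: each coordinate joins a migration step toward the leader with probability prt. This is a generic Python illustration of the textbook operator, not the paper's experimental code.

    import numpy as np

    def prt_vector(dim, prt, rng):
        """Perturbation vector: coordinate j moves iff rand_j < prt."""
        v = (rng.random(dim) < prt).astype(float)
        if not v.any():                 # common safeguard: move on >= 1 axis
            v[rng.integers(dim)] = 1.0
        return v

    rng = np.random.default_rng(3)
    individual = rng.uniform(-5, 5, size=10)
    leader = rng.uniform(-5, 5, size=10)
    step = 0.11                          # one step on the path to the leader
    new_position = individual + step * (leader - individual) \
        * prt_vector(10, prt=0.3, rng=rng)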

WORKSHOP SESSION: Workshop: Visualisation methods in genetic and evolutionary computation

Visualizing fitnesses and constraint violations in single-objective optimization

In this study, we investigate the visualization of search behavior in single-objective optimization, where the objective function is composed of distinct components, either explicitly (as terms of the objective function or as components of a hybrid function) or implicitly (as constraints). We propose a visualization method for constrained single-objective optimization in which the constraint violations and the term-by-term values of the polynomial objective function are separately computed by RadViz and plotted in 3D. The proposed method is superior to two-dimensional RadViz visualization in that it shows the decrease of the fitness and constraint violations over time in the benchmark problem and can display them separately. Similarly, for the hybrid function, a benchmark problem consisting of multiple terms with different objective subfunctions, the difference in the timing of the decrease in fitness for each term is visualized by RadViz.
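
As background for the method (a hedged sketch of the standard 2D RadViz mapping only; the 3D, constraint-aware variant proposed here is not reproduced), RadViz places one anchor per dimension on the unit circle and maps each normalised sample to the weighted mean of the anchor positions.

    import numpy as np

    def radviz(X):
        """Standard RadViz: anchors on the unit circle, one per dimension;
        each normalised sample maps to the mean of the anchors weighted
        by its coordinate values."""
        d = X.shape[1]
        theta = 2.0 * np.pi * np.arange(d) / d
        anchors = np.stack([np.cos(theta), np.sin(theta)], axis=1)
        Xn = (X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-12)
        weights = Xn.sum(axis=1, keepdims=True) + 1e-12
        return (Xn @ anchors) / weights

    points_2d = radviz(np.random.default_rng(0).uniform(size=(100, 5)))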

Many-objective population visualisation with geons

This paper explores the use of geometrical ions (called geons) to represent solutions in the approximated Pareto front generated by multi- and many-objective optimisers. The construction of geon-based objects (GBOs) for solutions to a 3- and a 5-objective problem is outlined, and the visualisation is embedded in a tool that has been tested with expert users. The findings suggest that our approach is promising, with all users successfully engaging with the given tasks and 4 out of 6 managing to complete some of the tasks they were assigned. Results indicate that the use of geometry, rather than colour as is often used to convey properties of Pareto front approximations, is a useful way of embedding multi-objective data.

WORKSHOP SESSION: Workshop: Student workshop

CLAHC - custom late acceptance hill climbing: first results on TSP

The Late Acceptance Hill Climbing heuristic is a Hill Climbing algorithm that uses a record of the history of objective values of previously encountered solutions to decide whether to accept a new solution. The literature has shown that, because of this history, Late Acceptance Hill Climbing is generally better at avoiding getting stuck in local optima. In this paper, we propose and investigate a simple yet effective modification to Late Acceptance Hill Climbing that changes how values in the history are replaced. In our tests on the Traveling Salesman Problem, we analyze the behavior of the proposed approach for different history sizes. We also show that this simple change allows the heuristic to find better solutions than the original algorithm on most of the instances tested.
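
A minimal runnable sketch of the baseline Late Acceptance Hill Climbing acceptance rule (on a toy continuous objective rather than the TSP); the line marked below, where the history slot is overwritten, is exactly the replacement rule that the proposed CLAHC variant modifies, in a way not reproduced here.

    import random

    def lahc(init, cost, neighbour, history_len=50, iters=10000, seed=0):
        """Late Acceptance Hill Climbing: accept a candidate if it beats
        the cost recorded history_len iterations ago, or the current cost."""
        rng = random.Random(seed)
        current, current_cost = init, cost(init)
        history = [current_cost] * history_len
        for i in range(iters):
            cand = neighbour(current, rng)
            cand_cost = cost(cand)
            slot = i % history_len
            if cand_cost <= history[slot] or cand_cost <= current_cost:
                current, current_cost = cand, cand_cost
            history[slot] = current_cost   # <- the replacement rule CLAHC changes
        return current, current_cost

    best, best_cost = lahc(
        0.0,
        cost=lambda x: (x - 3.0) ** 2,
        neighbour=lambda x, rng: x + rng.uniform(-0.5, 0.5),
    )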

Negative learning ant colony optimization for the minimum positive influence dominating set problem

Recent research has shown that adding negative learning to ant colony optimization, in addition to the traditional positive learning mechanism, may improve the algorithms' performance significantly. In this paper we consider the application of this novel ant colony optimization variant to an NP-hard combinatorial optimization problem known as the minimum positive influence dominating set problem. This problem has applications especially in the context of social networks. Our results show, first, that the negative learning variant significantly improves over the standard ant colony optimization variant. Second, the obtained results show that our algorithm outperforms all competitors from the literature.

A parallel genetic algorithm to speed up the resolution of the algorithm selection problem

Deciding which optimisation technique to use for solving a particular optimisation problem is an arduous task that has been faced in the field of optimisation for decades. This problem is known as the Algorithm Selection Problem (ASP). The optimisation techniques considered in previous works have mainly been approaches that can be executed rapidly. However, considering more sophisticated optimisation approaches for solving the ASP, such as Evolutionary Algorithms, drastically increases the computational cost. We are interested in solving the ASP by considering different configurations of a Genetic Algorithm (GA) applied to the well-known 0/1 Knapsack Problem (KNP). This involves executing a significant number of GA configurations, in order to evaluate their performance when applied to a wide range of KNP instances with different features, which is a computationally expensive task. Therefore, the main aim of the current work is to provide, as a first step towards solving the ASP, an efficient parallel GA, which is able to attain competitive results, in terms of the optimal objective value, in a short amount of time. Computational results show that our approach scales efficiently and considerably reduces the average elapsed time for solving KNP instances.

Automated parameter choice with exploratory landscape analysis and machine learning

The performance of an evolutionary algorithm on a particular optimization problem highly depends on the choice of the algorithm's parameters. Fitness landscape analysis can help learn important characteristics of the optimization problem that may influence the optimal parameter values. Using this knowledge for parameter choice can be an important step in designing fitness-aware algorithms.

In this paper, we present an approach to automatic parameter choice that uses exploratory landscape analysis and machine learning. Using a neural network trained on a dataset of optimal parameter choices for particular landscape features of the W-model problem, we choose parameters for the (1 + (λ, λ)) genetic algorithm. The experimental results suggest that the neural network is capable of providing good parameter choices based on the landscape features computed by the flacco package.

The lower bounds on the runtime of the (1 + (λ, λ)) GA on the minimum spanning tree problem

Plenty of inspiring runtime analysis results have recently been obtained for the (1 + (λ, λ)) genetic algorithm (GA) on different benchmark functions. These results showed the efficiency of this GA, but we still do not have much understanding of its behavior on real-world problems. To shed some light on this, we analyze the (1 + (λ, λ)) GA on the minimum spanning tree problem. We prove a lower bound of Ω(m²/λ) fitness evaluations on its runtime, which shows that the considered GA with constant values of the parameter λ does not significantly outperform the simple (1 + 1) EA on this problem.

Concurrent neural tree and data preprocessing AutoML for image classification

Deep Neural Networks (DNNs) are a widely used solution for a variety of machine learning problems. However, it is often necessary to invest a significant amount of a data scientist's time to pre-process input data, test different neural network architectures, and tune hyperparameters for optimal performance. Automated machine learning (AutoML) methods automatically search the architecture and hyperparameter space for optimal neural networks. However, current state-of-the-art (SOTA) methods do not include traditional methods for manipulating input data as part of the algorithmic search space. We adapt the Evolutionary Multi-objective Algorithm Design Engine (EMADE), a multi-objective evolutionary search framework for traditional machine learning methods, to perform neural architecture search. We also integrate EMADE's signal processing and image processing primitives. These primitives allow EMADE to manipulate input data before ingestion into the simultaneously evolved DNN. We show that including these methods as part of the search space shows potential to improve performance on the CIFAR-10 image classification benchmark dataset.

Population network structure impacts genetic algorithm optimisation performance

A genetic algorithm (GA) is a search method that optimises a population of solutions by simulating natural evolution. Good solutions reproduce together to create better candidates. The standard GA assumes that any two solutions can mate. However, in nature and in social contexts, social networks can condition the likelihood that two individuals mate. The impact of population network structure on GA performance is unknown. Here we introduce the Networked Genetic Algorithm (NGA) to evaluate how various random and scale-free population networks influence the optimisation performance of GAs on benchmark functions. We show evidence of significant variations in the performance of the NGA as the network varies. In addition, we find that the best-performing population networks, characterised by intermediate density and low average shortest path length, significantly outperform the standard complete-network GA. These results may constitute a starting point for network tuning and network control: seeing the network structure of the population as a parameter that can be tuned to improve the performance of evolutionary algorithms, and offering more realistic modelling of social learning.
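
A minimal sketch of the adjacency-restricted mating idea, under illustrative assumptions (a ring network, fitness-biased mate choice); the NGA's actual selection scheme and the random and scale-free networks studied in the paper are not reproduced here.

    import random

    def network_mating_pairs(fitness, edges):
        """Restrict mating to network neighbours: each individual picks
        its fittest neighbour as a mate (a complete network recovers the
        standard 'anyone can mate' GA)."""
        neighbours = {}
        for a, b in edges:
            neighbours.setdefault(a, set()).add(b)
            neighbours.setdefault(b, set()).add(a)
        return [(i, max(nbrs, key=lambda j: fitness[j]))
                for i, nbrs in sorted(neighbours.items())]

    rng = random.Random(0)
    population = [[rng.random() for _ in range(5)] for _ in range(6)]
    fitness = [sum(ind) for ind in population]
    ring = [(i, (i + 1) % 6) for i in range(6)]       # toy population network
    pairs = network_mating_pairs(fitness, ring)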