GECCO '22: Proceedings of the Genetic and Evolutionary Computation Conference Companion

Full Citation in the ACM Digital Library

SESSION: Competition entry: Competition on evolutionary computation in the energy domain: Risk-based energy scheduling

Applying ring cellular encode-decode UMDA to risk-based energy scheduling

One of the challenges to making cities and human settlements inclusive, safe, resilient, and sustainable is to manage energy efficiently. Thus, developing algorithms to address smart grid management-related problems is still an open research area. In this context, the "Competition on Evolutionary Computation in the Energy Domain", launched in 2017, proposes the risk-based energy scheduling problem in its 2022 edition. In this paper, the RCED-UMDA algorithm is applied to solve the risk-based energy scheduling problem. Experimental results reveal that RCED-UMDA performs better than the top three algorithms of each track of the 2021 competition edition.

SESSION: Competition entry: The AbstractSwarm multi-agent logistics competition

AbstractSwarm multi-agent logistics competition entry: QPlus

The AbstractSwarm Framework [1] was developed to study multi-agent simulation for optimizing logistics scenarios, with a special focus on (but not restricted to) hospital logistics. This paper presents the basics of a solution to the accompanying competition [2] and the included benchmark problems. To achieve this, Q-Learning [5, 7, 8] is used to determine scores for possible actions an agent can take depending on the state of the AbstractSwarm Graph representing the environment. Every action receives multiple such scores, which are in turn aggregated, hence the name "QPlus".
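Since the abstract only sketches the aggregation idea, the following minimal Python sketch illustrates one plausible reading of it: a tabular Q-learner keeps one Q-table per scoring criterion and sums the per-criterion scores into a single action value. All names and the plain-sum aggregation are illustrative assumptions, not the competition entry's actual code.

```python
import random
from collections import defaultdict

class QPlusAgent:
    """Illustrative tabular Q-learner that aggregates several per-criterion
    Q-scores into one action value (the 'plus' in QPlus)."""

    def __init__(self, criteria, alpha=0.1, gamma=0.9, epsilon=0.1):
        # One Q-table per scoring criterion (e.g., waiting time, distance).
        self.q = {c: defaultdict(float) for c in criteria}
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def score(self, state, action):
        # Aggregate the per-criterion scores; a plain sum is the simplest choice.
        return sum(q[(state, action)] for q in self.q.values())

    def act(self, state, actions):
        # Epsilon-greedy choice over the aggregated score.
        if random.random() < self.epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: self.score(state, a))

    def update(self, state, action, rewards, next_state, next_actions):
        # Standard Q-learning update, applied independently per criterion.
        for c, r in rewards.items():
            best_next = max((self.q[c][(next_state, a)] for a in next_actions),
                            default=0.0)
            td = r + self.gamma * best_next - self.q[c][(state, action)]
            self.q[c][(state, action)] += self.alpha * td
```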

SESSION: Competition entry: Competition on real parameter single objective bound constrained optimization

NL-SOMA-CLP for real parameter single objective bound constrained optimization

This paper presents a competition entry for the Real Parameter Single Objective Bound Constrained Optimization competition at the Genetic and Evolutionary Computation Conference (GECCO) 2022. A new variant of the Self-organizing Migrating Algorithm with CLustering-aided migration and adaptive Perturbation vector control (SOMA-CLP) is proposed, titled Non-Linear Population Size Reduction and Nearest Neighbor Leader Selection SOMA-CLP (NL-SOMA-CLP). Overall, NL-SOMA-CLP improves SOMA-CLP by using two techniques: a) a non-linear automatic reduction of the population size and b) a nearest-neighbor leader selection strategy.

Opposite learning and multi-migrating strategy-based self-organizing migrating algorithm with the convergence monitoring mechanism

This paper introduces a novel competition entry for Real Parameter Single Objective Bound Constrained Optimization at the GECCO 2022 conference, namely the Opposite learning and Multi-migrating strategy-based Self-Organizing Migrating Algorithm with Convergence monitoring mechanism (OMC_SOMA). Specifically, OMC_SOMA introduces three techniques to improve the basic SOMA in the aspects of population initialization, convergence monitoring, and migrating strategies. Preliminary experiments on the 12 test functions reveal the good performance of our proposed method.

Zeroth-order covariance matrix adaptation evolution strategy for single objective bound constrained numerical optimization competition

This paper presents a competition entry for the competition on bound constrained single objective numerical optimization at the Genetic and Evolutionary Computation Conference (GECCO) 2022 [2]: a hybrid algorithm named Zeroth-Order Covariance Matrix Adaptation Evolution Strategy (ZO-CMAES).

SESSION: Competition entry: Open optimization competition 2022: Better benchmarking of sampling-based optimization algorithms

Implementing the cumulative difference plot in the IOHanalyzer

The IOHanalyzer is a web-based framework that enables easy visualization and comparison of the quality of stochastic optimization algorithms. IOHanalyzer offers several graphical and statistical tools to analyze the results of such algorithms. In this work, we implement the cumulative difference plot in the IOHanalyzer. The cumulative difference plot [1] is a graphical approach that compares two samples through first-order stochastic dominance. It improves upon other graphical approaches with its ability to distinguish between a small magnitude of difference and high uncertainty.
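As a rough illustration of the underlying comparison (not the IOHanalyzer implementation itself), the sketch below contrasts two samples via their empirical CDFs, assuming NumPy arrays of performance values; a difference curve that keeps one sign everywhere is consistent with first-order stochastic dominance.

```python
import numpy as np

def ecdf(sample, grid):
    # Empirical CDF of `sample` evaluated on each point of `grid`.
    sample = np.sort(sample)
    return np.searchsorted(sample, grid, side="right") / len(sample)

def cumulative_difference(sample_a, sample_b, num_points=200):
    """Difference of the two empirical CDFs on a common grid. If the curve
    stays >= 0 everywhere, sample A first-order stochastically dominates
    sample B (A tends to yield smaller values)."""
    grid = np.linspace(min(sample_a.min(), sample_b.min()),
                       max(sample_a.max(), sample_b.max()), num_points)
    return grid, ecdf(sample_a, grid) - ecdf(sample_b, grid)
```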

SESSION: Hot off the press

Precise runtime analysis for plateau functions: (hot-off-the-press track at GECCO 2022)

To gain a better theoretical understanding of how evolutionary algorithms (EAs) cope with plateaus of constant fitness, we propose the n-dimensional Plateau_k function as a benchmark and analyze how different variants of the (1 + 1) EA optimize it. The Plateau_k function has a plateau of second-best fitness in a ball of radius k around the optimum. As EA, we regard the (1 + 1) EA using an arbitrary unbiased mutation operator. Denoting by α the random number of bits flipped in an application of this operator and assuming that Pr[α = 1] has at least some small sub-constant value, we show that for all constant k ≥ 2, the runtime T follows a distribution close to the geometric one with success probability equal to the probability to flip between 1 and k bits divided by the size of the plateau. Consequently, the expected runtime is the inverse of this number, and thus only depends on the probability to flip between 1 and k bits, but not on other characteristics of the mutation operator. Our result also implies that the optimal mutation rate for standard bit mutation here is approximately k/(en). Our main analysis tool is a combined analysis of the Markov chains on the search point space and on the Hamming level space, an approach that promises to be useful also for other plateau problems.
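In formulas, the stated result reads roughly as follows (a hedged restatement of the abstract, with q denoting the probability to flip between 1 and k bits and |P| the size of the plateau):

```latex
% q := \Pr[1 \le \alpha \le k] is the probability to flip between 1 and k bits,
% and |P| is the size of the plateau.
\[
  \Pr[T > t] \;\approx\; \left(1 - \frac{q}{|P|}\right)^{t},
  \qquad
  \mathrm{E}[T] \;\approx\; \frac{|P|}{q}.
\]
% For standard bit mutation with rate p, maximizing
% q = \sum_{i=1}^{k} \binom{n}{i} p^i (1-p)^{n-i} yields p \approx k/(en),
% matching the stated optimal mutation rate.
```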

Evolutionary generation of metamorphic relations for cyber-physical systems

A problem when testing Cyber-Physical Systems (CPS) is the difficulty of determining whether a particular system output or behaviour is correct or not. Metamorphic testing alleviates such a problem by reasoning on the relations expected to hold among multiple executions of the system under test, which are known as Metamorphic Relations (MRs). However, the development of effective MRs is often challenging and requires the involvement of domain experts. This paper summarizes our recent publication: "Generating Metamorphic Relations for Cyber-Physical Systems with Genetic Programming: An Industrial Case Study", presented at ESEC/FSE 2021. In that publication, we presented GAssertMRs, the first technique to automatically generate MRs for CPS, leveraging genetic programming (GP) to explore the space of candidate solutions. We evaluated GAssertMRs in an industrial case study, where it outperformed other baselines.

Efficient configuration of optimization algorithms

We propose a set of capping methods to speed up the automatic configuration of optimization algorithms. First, we build a performance envelope based on previous executions of known configurations, which defines the minimum required performance for new configurations. Then, we use the performance envelope to evaluate new configurations, stopping poor performers early. We propose different methods to aggregate previous executions into a performance envelope, and evaluate them on several configuration scenarios. The proposed methods produce solutions whose quality matches or exceeds that of configuring without capping, while reducing the effort required to configure optimization algorithms.

This manuscript for the Hot-Off-the-Press track at GECCO 2022 is based on the article "Capping methods for the automatic configuration of optimization algorithms", published in Computers & Operations Research [2].
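A minimal sketch of the capping idea described above, assuming each execution trace is a list of (time, best-cost-so-far) pairs for a minimization problem; the envelope aggregation and the run_step callback are illustrative simplifications, not the published methods.

```python
def build_envelope(training_traces, aggregate=max):
    """Aggregate (time, best-cost-so-far) traces of known configurations
    into an envelope: the cost a new configuration must not exceed at each
    checkpoint. `aggregate=max` gives the most permissive envelope."""
    checkpoints = sorted({t for trace in training_traces for t, _ in trace})
    envelope = []
    for t in checkpoints:
        # Best cost each trace had reached by time t.
        costs = [min(c for s, c in trace if s <= t)
                 for trace in training_traces if any(s <= t for s, _ in trace)]
        envelope.append((t, aggregate(costs)))
    return envelope

def run_with_capping(run_step, envelope):
    """Run a new configuration, stopping early once it falls above the
    envelope. `run_step(until=t)` advances the run to time t and returns
    the best cost found so far (an assumed callback interface)."""
    best = float("inf")
    for t, bound in envelope:
        best = min(best, run_step(until=t))
        if best > bound:
            return best, True   # capped: poor performer, stopped early
    return best, False
```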

Highlights of semantics in multi-objective genetic programming

Semantics is a growing area of research in Genetic Programming (GP) and refers to the behavioural output of a GP individual when executed. This research expands upon the current understanding of semantics by proposing a new approach, Semantic-based Distance as an additional criteriOn (SDO), in the thus-far under-researched area of semantics in Multi-objective GP (MOGP). Our work includes an expansive analysis of GP in terms of performance and diversity metrics, using two additional semantic-based approaches, namely Semantic Similarity-based Crossover (SCC) and Semantic-based Crowding Distance (SCD). Each approach is integrated into two evolutionary multi-objective (EMO) frameworks, the Non-dominated Sorting Genetic Algorithm II (NSGA-II) and the Strength Pareto Evolutionary Algorithm 2 (SPEA2), and the three semantic approaches are rigorously compared against the canonical forms of NSGA-II and SPEA2. Using highly unbalanced binary classification datasets, we demonstrate that the newly proposed SDO approach consistently generates more non-dominated solutions, with better diversity and improved hypervolume results.

This Hot-off-the-Press paper summarises "Semantics in Multi-objective Genetic Programming" by Edgar Galván, Leonardo Trujillo and Fergal Stapleton, published in Applied Soft Computing, 2022 [9], https://doi.org/10.1016/j.asoc.2021.108143.

Measuring the ability of lexicase selection to find obscure pathways to optimality

This Hot-off-the-Press paper summarizes our recently published work, "An Exploration of Exploration: Measuring the Ability of Lexicase Selection to Find Obscure Pathways to Optimality," published as a chapter in Genetic Programming Theory and Practice XVIII [3]. In evolutionary search, selection schemes drive populations through a problem's search space, often trading off exploitation with exploration. Indeed, problem-solving success depends on how a selection scheme balances search space exploitation with exploration. In [3], we introduce an "exploration diagnostic" that measures a selection scheme's ability to explore different pathways in a search space. We use our exploration diagnostic to investigate the exploratory capacity of lexicase selection and several of its variants: epsilon lexicase, down-sampled lexicase, cohort lexicase, and novelty lexicase. We verify that lexicase selection out-explores tournament selection, and we demonstrate that lexicase selection's ability to explore a search space is sensitive to the ratio between population size and the number of test cases used for evaluating candidate solutions. We find that relaxing lexicase selection's elitism with epsilon lexicase can further improve search space exploration. Additionally, we find that both down-sampled and cohort lexicase---two methods of applying random subsampling to test cases---substantially degrade lexicase's exploratory capacity; however, cohort partitioning better preserves exploration than down-sampling. Finally, we find evidence that the addition of novelty-based test cases can degrade lexicase selection's exploratory capacity.
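For readers unfamiliar with the selection schemes under study, here is a minimal Python sketch of standard lexicase selection and its epsilon variant, in their textbook forms; this is not the exact code used in [3], and the `error` interface is an illustrative assumption.

```python
import random

def lexicase_select(population, test_cases, error):
    """Standard lexicase selection: filter candidates case by case in a
    random order, keeping only those with the best error on each case."""
    candidates = list(population)
    for case in random.sample(test_cases, len(test_cases)):  # shuffled order
        best = min(error(ind, case) for ind in candidates)
        candidates = [ind for ind in candidates if error(ind, case) == best]
        if len(candidates) == 1:
            break
    return random.choice(candidates)

def epsilon_lexicase_select(population, test_cases, error, epsilon):
    """Epsilon lexicase: keep candidates within `epsilon` of the best error
    on each case, relaxing the elitism discussed above."""
    candidates = list(population)
    for case in random.sample(test_cases, len(test_cases)):
        best = min(error(ind, case) for ind in candidates)
        candidates = [ind for ind in candidates
                      if error(ind, case) <= best + epsilon]
        if len(candidates) == 1:
            break
    return random.choice(candidates)
```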

Phylogenetic diversity predicts future success in evolutionary computation

In our recent paper, "What can phylogenetic metrics tell us about useful diversity in evolutionary algorithms?" [3], we explore the relationship between metrics of diversity based on phylogenetic topology and fitness. Our contribution is two-fold. First, we identify a technique for quantifying the relationship between diversity and fitness. Previous efforts to draw hard conclusions about this relationship had been stymied by the fact that these two properties are locked in a tight feedback loop. Second, we use this technique to assess the extent to which phylogenetic diversity leads to high-fitness solutions. We find that phylogenetic diversity is often more informative of future success in evolutionary algorithms than more commonly used diversity metrics, suggesting that it is an underutilized tool in evolutionary computation research.

Tag-based module regulation for genetic programming

This Hot-off-the-Press paper summarizes our recently published work, "Tag-based regulation of modules in genetic programming improves context-dependent problem solving," published in Genetic Programming and Evolvable Machines [1]. We introduce and experimentally demonstrate tag-based genetic regulation, a genetic programming (GP) technique that allows programs to dynamically adjust which code modules to express. Tags are evolvable labels that provide a flexible naming scheme for referencing code modules. Tag-based regulation extends tag-based naming schemes to allow programs to "promote" and "repress" code modules to alter module execution patterns. We find that tag-based regulation improves problem-solving success on problems where programs must adjust how they respond to current inputs based on prior inputs; indeed, some of these problems could not be solved until regulation was added. We also identify scenarios where the correct response to an input does not change over time, rendering tag-based regulation an unnecessary functionality that can sometimes impede evolution. Broadly, tag-based regulation adds to our repertoire of techniques for evolving more dynamic computer programs and can easily be incorporated into existing tag-enabled GP systems.

Genetic programming convergence

We study both genotypic and phenotypic convergence in GP floating point continuous domain symbolic regression over thousands of generations. Subtree fitness variation across the population is measured and shown in many cases to fall. In an expanding region about the root node, both genetic opcodes and function evaluation values are identical or nearly identical. Bottom-up (leaf-to-root) analysis shows both syntactic and semantic (including entropy) similarity expanding from the outermost nodes. Despite large regions of zero variation, fitness continues to evolve, and near-zero crossover disruption suggests improved GP systems. W. B. Langdon. 2022. Genetic Programming Convergence. Genetic Programming and Evolvable Machines 23, 1, 71--104.

Long-term evolution experiment with genetic programming [hot off the press]

We evolve floating point Sextic polynomial populations of genetic programming binary trees for up to a million generations. We observe continued innovation, but this is limited by tree depth; we suggest that deep expressions are resilient to learning because they disperse information, impeding evolvability and the adaptation of highly nested organisms, and we argue instead for open complexity. Programs with more than 2 000 000 000 instructions (depth 20 000) are created by crossover. To support unbounded long-term evolution experiments (LTEE) in GP, we use incremental fitness evaluation and both SIMD parallel AVX 512-bit instructions and 16 threads to yield performance of up to 1.1 trillion GP operations per second (1.1 tera-GPops) on an Intel Xeon Gold 6136 3.00 GHz CPU server.

A comparison of rule compaction algorithms for Michigan-style learning classifier systems

Learning Classifier Systems (LCSs) excel in data mining tasks; e.g., an LCS optimal model contains patterns that can precisely reveal how features identify classes for the explored problem. However, evolutionary computation is stochastic, which leads to LCSs producing and keeping redundant rules that obscure these patterns. Thus, LCSs employ rule compaction methods to improve models by removing problematic rules. However, former compaction methods often fail to highlight the patterns and reduce accuracy after compaction. A survey of compaction methods is provided to investigate why compaction algorithms fail to achieve the expected performance. This work identifies a new format for an LCS's optimal solution, which refers to a model consisting of all correct unsubsumable rules in a given search space. This optimal solution is accurate on arbitrary noiseless datasets and naturally contains interpretable patterns. Thus, two compaction methods designed to produce such optimal solutions are proposed. Successful compaction is demonstrated on complex and real problems with noiseless datasets, e.g., the 11-bit Majority-On problem, which requires 924 different interacting rules in the optimal solution to be uniquely identified to enable correct visualization of the discovered knowledge. For the first time, the patterns contained in learned models for the large-scale 70-bit Multiplexer problem are visualized successfully.

Targeting requirements violations of autonomous driving systems by dynamic evolutionary search (HOP at GECCO'22)

Autonomous Driving Systems (ADSs) must satisfy multiple requirements. In some cases, satisfying all of them may not be possible due to environmental conditions. Therefore, ADSs usually make trade-offs among the requirements, resulting in one or more requirements violations; the correctness of such trade-offs must be evaluated during testing. We propose a new approach, named EMOOD, that can effectively generate test scenarios exposing as many requirements violation combinations as possible. EMOOD first uses a prioritization technique that sorts all combinations to search for according to their criticality. Then, it iteratively applies a many-objective optimization algorithm to find scenarios exposing these combinations. In each iteration, the targeted combination is determined by a technique giving preference to combinations with higher criticality and likelihood of occurring. The approach is evaluated on an industrial ADS. This Hot-off-the-Press paper summarizes the paper [1]: Y. Luo, X. Zhang, P. Arcaini, Z. Jin, H. Zhao, F. Ishikawa, R. Wu and T. Xie, "Targeting Requirements Violations of Autonomous Driving Systems by Dynamic Evolutionary Search", 36th International Conference on Automated Software Engineering (ASE 2021).

Empirical linkage learning for non-binary discrete search spaces in the optimization of a large-scale real-world problem

Empirical linkage learning (ELL) is a type of problem decomposition technique proposed recently for black-box optimization. ELL techniques are proven never to report false gene dependencies (denoted as false linkage), which is their main advantage. Linkage learning based on local optimization (3LO), the first ELL technique proposed, was shown to bring significant potential for reaching high-quality results. However, 3LO applies only to binary search spaces; this paper examines a version of 3LO for discrete non-binary search spaces (3LO-nb). 3LO-nb applies to any problem using a discrete encoding. Using an NP-hard, large-scale, real-world problem, we show that using 3LO-nb may lead to outstanding results.

Archivers for single- and multi-objective evolutionary optimization algorithms

In this article, we briefly summarize some of the main results of the book "Archiving Strategies for Evolutionary Multi-objective Optimization Algorithms" by O. Schütze and C. Hernández, Springer, 2021. In this book, several archiving strategies are proposed and analyzed that yield certain approximation qualities of the solution sets (mainly the Pareto front and the set of approximate solutions) in the limit, where these approximation qualities can be determined a priori. Further, the benefits of the use of these strategies as external archives to given evolutionary algorithms are demonstrated.

Program synthesis with evolutionary algorithms: Status quo: hot off the press track (GECCO 2022)

Program synthesis techniques offer the potential to allow non-programmers to create computer programs. Approaches based on evolutionary algorithms are known for their performance in program synthesis. Since 2015, the general program synthesis benchmark suite has offered a common pool of benchmark problems, which makes different approaches comparable. To analyze what has been achieved so far, we identified in a recent literature review the main evolutionary program synthesis approaches, determined the difficulty of the common benchmark problems, and discussed current challenges. In this short paper, we present the difficulty ranking of the benchmark problems and summarize the current challenges in program synthesis with evolutionary algorithms.

Modular grammatical evolution for the generation of artificial neural networks: (hot-off-the-press track at GECCO 2022)

This paper proposes a NeuroEvolution algorithm, Modular Grammatical Evolution (MGE), that enables the evolution of both topology and weights of neural networks for more challenging classification benchmarks like MNIST and Letter, with 10 and 26 classes, respectively. The success of MGE is mainly due to (1) restricting the solution space to regular network topologies with a special form of modularity, and (2) improving the search properties of state-of-the-art grammatical evolution (GE) methods by improving mapping locality and representation scalability. We have defined and evaluated five forms of structural constraints and observe that a single-layer modular restriction of the solution space helps in finding smaller and more efficient neural networks faster. Our experimental evaluations on ten well-known classification benchmarks demonstrate that MGE-generated neural networks provide better classification accuracy than other NeuroEvolution methods. Finally, our experimental results indicate that MGE outperforms other GE methods in terms of locality and scalability properties.

This Hot-off-the-Press paper summarizes "Modular Grammatical Evolution for The Generation of Artificial Neural Networks" by K. Soltanian, A. Ebnenasir, and M. Afsharchi [9], accepted for publication in Evolutionary Computation journal of the MIT press.

A verified application of genetic programming: QoS time series modeling and forecasting for web services

In both academia and industry, Web services (WSs) technology (also called Web APIs in recent years) has been a fervid research target for years. In industry, software developers now often build their application systems with diverse WSs on the internet (i.e., these developers practice service-oriented software engineering, SOSE). Moreover, many companies and organizations, including most tech giants, such as Google, Meta, and Amazon, expose their services and business functionalities in the form of RESTful WSs (Web APIs) for external access (e.g., Amazon Web Services). As a focus of research in services computing, many topics and subjects regarding WSs have been identified, investigated, and addressed. A practical research issue that receives broad attention is the understanding (modeling) and prediction of the time-aware (time-varying) dynamic quality of service (QoS) of WSs.

Based on a comprehensive survey and investigation presented in [1]: Y. Syu and C. M. Wang, "QoS Time Series Modeling and Forecasting for Web Services: A Comprehensive Survey", IEEE Transactions on Network and Service Management (TNSM), vol. 18, pp. 926--944, 2021, this abstract paper concisely reports to the GECCO community a justified application of genetic programming (GP) to the modeling and forecasting of WS QoS time series. By introducing this use of GP and the targeted research problem to the community, we aim to encourage and attract people in this field to further revise and improve GP to obtain a better solution to the problem, or to apply other evolutionary techniques to address the problem better.

Choosing the right algorithm with hints from complexity theory: (hot-off-the-press track at GECCO 2022)

Choosing a suitable algorithm from the myriads of different search heuristics is difficult when faced with a novel optimization problem. In this work, we argue that the purely academic question of what could be the best possible algorithm in a certain broad class of black-box optimizers can give fruitful indications in which direction to search for good established optimization heuristics. We demonstrate this approach on the recently proposed DLB benchmark, for which the only known results are O(n^3) runtimes for several classic evolutionary algorithms and an O(n^2 log n) runtime for an estimation-of-distribution algorithm. Our finding that the unary unbiased black-box complexity is only O(n^2) suggests the Metropolis algorithm as an interesting candidate and we prove that it solves the DLB problem in quadratic time. Since we also prove that better runtimes cannot be obtained in the class of unary unbiased algorithms, we shift our attention to algorithms that use the information of more parents to generate new solutions. An artificial algorithm of this type having an O(n log n) runtime leads to the result that the significance-based compact genetic algorithm (sig-cGA) can solve the DLB problem also in time O(n log n).

This paper for the Hot-off-the-Press track at GECCO 2022 summarizes the work Shouda Wang, Weijie Zheng, Benjamin Doerr: Choosing the Right Algorithm With Hints From Complexity Theory. IJCAI 2021: 1697--1703 [11].
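For reference, the Metropolis algorithm at the heart of the result above can be sketched as follows; this is a minimal Python sketch for pseudo-Boolean maximization with single-bit flips, where the constant-alpha acceptance rule is one common formulation and not necessarily the exact variant analyzed in the paper.

```python
import random

def metropolis(f, n, alpha, steps):
    """Metropolis algorithm for maximizing a pseudo-Boolean function f:
    flip one uniformly random bit; always accept improvements, and accept
    a worsening by delta with probability alpha**(-delta), alpha > 1."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for _ in range(steps):
        i = random.randrange(n)
        x[i] ^= 1                 # tentative single-bit flip
        fy = f(x)
        delta = fx - fy           # positive if the move is worsening
        if delta <= 0 or random.random() < alpha ** (-delta):
            fx = fy               # accept the move
        else:
            x[i] ^= 1             # reject: undo the flip
    return x, fx
```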

Generating failing test suites for quantum programs with search (hot off the press track at GECCO 2022)

The inherent complexity of quantum programs, due to features such as superposition and entanglement, makes their testing particularly challenging. To tackle these challenges, we present a search-based approach, called Quantum Search-Based Testing (QuSBT), for automatically generating test suites of a given size that possibly expose failures of the quantum program under test. QuSBT encodes a test suite as a search individual and tries to maximize an objective function that counts the number of failing tests in the test suite. Due to the non-deterministic nature of quantum programs, the approach repeats the execution of each test multiple times and uses suitable statistical tests to assess whether a test passes or fails. QuSBT employs a genetic algorithm to perform the search. Experiments on 30 faulty quantum programs show that QuSBT is statistically better than random search and is able to efficiently generate maximal failing test suites.

This is an extended abstract of the paper [1]: X. Wang, P. Arcaini, T. Yue, and S. Ali "Generating Failing Test Suites for Quantum Programs With Search", 13th International Symposium on Search-Based Software Engineering (SSBSE 2021).
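A minimal sketch of the kind of statistically assessed fitness described above, assuming each program run returns a measurement outcome; the binomial test, the `expected_dist` oracle, and all parameter names are illustrative stand-ins for the statistical tests actually used in QuSBT.

```python
from scipy.stats import binomtest  # one possible per-test statistical check

def failing_count(test_suite, run_program, expected_dist,
                  repeats=100, alpha=0.01):
    """QuSBT-style objective sketch: execute each test many times (quantum
    programs are non-deterministic) and use a statistical test to decide
    pass/fail; the fitness is the number of failing tests in the suite."""
    fails = 0
    for test_input in test_suite:
        outcomes = [run_program(test_input) for _ in range(repeats)]
        # Illustrative check: compare the observed frequency of one expected
        # outcome against its expected probability with a binomial test.
        outcome, p_expected = expected_dist(test_input)
        k = sum(1 for o in outcomes if o == outcome)
        if binomtest(k, repeats, p_expected).pvalue < alpha:
            fails += 1  # observed distribution deviates: the test fails
    return fails
```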

IOHanalyzer: Detailed performance analyses for iterative optimization heuristics: hot-off-the-press track @ GECCO 2022

This paper summarizes our work "IOHanalyzer: Detailed Performance Analyses for Iterative Optimization Heuristics", to appear in ACM Transactions on Evolutionary Learning and Optimization.

IOHanalyzer is a new user-friendly tool for analyzing, comparing, and visualizing performance data of iterative optimization heuristics (IOHs). Key advantages of IOHanalyzer over other performance analysis packages are its highly interactive graphical user interface, which allows users to specify the performance measures, the ranges, and granularity of the displayed data that are most useful for their experiments, and the possibility to analyze not only performance traces, but also the evolution of dynamic parameters of IOHs.

IOHanalyzer can directly process performance data from the main benchmarking platforms, including the COCO platform, Nevergrad, the SOS platform, and IOHexperimenter. An R programming interface is provided for users preferring to have a finer control over the implemented functionalities.

Implemented in R and C, IOHanalyzer is fully open source and available on CRAN and GitHub. Our paper has two reproducibility badges, the one for "Artifacts Available" and the one for "Artifacts Evaluated - Functional v1.1".

Automated configuration of genetic algorithms by tuning for anytime performance: hot-off-the-press track at GECCO 2022

This paper summarizes our work "Automated Configuration of Genetic Algorithms by Tuning for Anytime Performance", to appear in IEEE Transactions on Evolutionary Computation.

Finding the best configuration of algorithms' hyperparameters for a given optimization problem is an important task in evolutionary computation. We compare in our work the results of four different hyperparameter optimization approaches for a family of genetic algorithms on 25 diverse pseudo-Boolean optimization problems. More precisely, we compare previously obtained results from a grid search with those obtained from three automated configuration techniques: iterated racing, mixed-integer parallel efficient global optimization, and mixed-integer evolutionary strategies. Using two different cost metrics, expected running time and the area under the empirical cumulative distribution function curve, we find that in several cases the best configurations with respect to expected running time are obtained when using the area under the empirical cumulative distribution function curve as the cost metric during the configuration process.

Our results suggest that even when interested in expected running time performance, it might be preferable to use anytime performance measures for the configuration task. We also observe that tuning for expected running time is much more sensitive to the budget allocated to the target algorithms.

A first mathematical runtime analysis of the non-dominated sorting genetic algorithm II (NSGA-II): (hot-off-the-press track at GECCO 2022)

The non-dominated sorting genetic algorithm II (NSGA-II) is the most intensively used multi-objective evolutionary algorithm (MOEA) in real-world applications. However, in contrast to several simple MOEAs analyzed also via mathematical means, no such study exists for the NSGA-II so far. In this work, we show that mathematical runtime analyses are feasible also for the NSGA-II. As particular results, we prove that with a population size larger than the Pareto front size by a constant factor, the NSGA-II with two classic mutation operators and three different ways to select the parents satisfies the same asymptotic runtime guarantees as the SEMO and GSEMO algorithms on the basic OneMinMax and LOTZ benchmark functions. However, if the population size is only equal to the size of the Pareto front, then the NSGA-II cannot efficiently compute the full Pareto front (for an exponential number of iterations, the population will always miss a constant fraction of the Pareto front). Our experiments confirm the above findings.

This paper for the Hot-off-the-Press track at GECCO 2022 summarizes the work Weijie Zheng, Yufei Liu, Benjamin Doerr: A First Mathematical Runtime Analysis of the Non-dominated Sorting Genetic Algorithm II (NSGA-II). AAAI 2022, accepted [17].

SESSION: Late-breaking abstract

Evolving a cloud-robust water index with genetic programming

Over the years, remote sensing (RS) experts created many indices to help them study satellite imagery by highlighting characteristics like vegetation, water, or burnt areas, among others. In this work, we study water indices. Although there is a large number of water indices that work perfectly in unclouded imagery, clouds and the shadows they cast are often mistaken for water. This work focuses on automatic feature construction using genetic programming (GP), in an attempt to construct features that are more robust to these issues. To do this, we use a dataset containing pixels from areas where these issues occur to evolve models that learn how to classify those pixels correctly. The results indicate improvements when comparing evolved features with existing indices, but further improvements are required to tackle the other issues found.

CNN structure optimization using differential evolution with individual dependent mechanism

While the Convolutional Neural Network (CNN) is known to be a prominent method, appropriate structures are still needed to achieve high accuracy. Structure optimization is usually handcrafted, and there is a rising need for an automated optimization method. In this research, we introduce an auto-encoder to create new structures and apply Differential Evolution with an Individual-Dependent Mechanism (IDE) to a simple CNN structure optimization problem. Experiments using the proposed framework have been conducted on the CIFAR-10 dataset. The experimental results showed that the proposed method is suitable as a structure optimizer.

A genetic algorithm for classifying metagenomic data

The goal of metagenomic analysis is to extract relevant information concerning the organisms that have left their genetic traces in an environmental sample. Each sample is subject to nucleotide sequencing, and the obtained DNA fragments are decomposed into k-mers---short sequences of k nucleotides. Based on the k-mers found and their occurrence frequencies, it is possible to identify the organisms present in the sample---this allows for further analysis, but requires using large taxonomic datasets. Alternatively, depending on the specific goal of the analysis, the whole sample may be classified directly based on its k-mer profile. However, this is challenging due to the large number of possible k-mers, and choosing the most valuable ones remains an open research problem. In this paper, we propose a new technique that exploits a genetic algorithm for selecting a subset of k-mer features that are used for classification. We report our initial, yet promising, results obtained for the problem of detecting type 2 diabetes from human gut metagenomic samples. We expect that the proposed classification framework will enhance the capabilities of metagenomic analysis, without the need for performing costly taxonomic classification.
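A toy sketch of the kind of GA-based k-mer subset selection described above; the encoding (one boolean per k-mer), the operators, and the externally supplied fitness function are illustrative assumptions rather than the authors' implementation. The fitness function would typically close over the k-mer profile matrix and the labels, e.g. returning a cross-validated classifier accuracy on the selected columns.

```python
import random

def evolve_kmer_mask(n_kmers, fitness, pop_size=50, generations=100,
                     p_mut=0.01):
    """Toy GA over bitmasks: each individual selects a subset of k-mer
    features; `fitness(mask)` evaluates a classifier trained on the
    selected k-mer columns (assumed to be supplied by the caller)."""
    # Sparse random initialization: roughly 5% of k-mers selected.
    pop = [[random.random() < 0.05 for _ in range(n_kmers)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(n_kmers)           # one-point crossover
            child = a[:cut] + b[cut:]
            # Bit-flip mutation with probability p_mut per gene.
            child = [g ^ (random.random() < p_mut) for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```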

Evaluation of inverse selection operators on maximum flow test generation problem

The choice of selection operators in evolutionary algorithms can greatly affect their performance, as it can be hard to balance maintaining population diversity with providing enough selection pressure. Recently, Corus et al. showed that reverse tournaments of fixed size can be a good default choice for steady-state evolutionary algorithms.

In this paper, we aim to further explore this idea by evaluating the (μ + 1) evolutionary algorithm with uniform, k-inverse tournament, and inverse elitist selection on the problem of generating hard tests for the maximum flow problem. Our results show that the performance of the k-inverse tournament is highly dependent on the problem and on the choice of population size and tournament size. In some cases, the k-inverse tournament outperforms the uniform selection operator, but similar performance can be achieved by using uniform selection with an increased population size.
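For concreteness, a minimal sketch of one steady-state (μ + 1) step with k-inverse tournament removal as described above; this is illustrative code under the assumption of a maximization problem, not the authors' implementation.

```python
import random

def inverse_tournament_removal(population, fitness, k):
    """k-inverse tournament: sample k individuals uniformly at random and
    remove the worst of them (assuming fitness is to be maximized)."""
    contestants = random.sample(range(len(population)), k)
    worst = min(contestants, key=lambda i: fitness(population[i]))
    population.pop(worst)

def mu_plus_one_step(population, fitness, mutate, k):
    # One steady-state step: create one offspring, then shrink back to mu
    # individuals via the inverse tournament.
    parent = random.choice(population)
    population.append(mutate(parent))
    inverse_tournament_removal(population, fitness, k)
```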

Biology inspired growth in meta-learning

Unlike biological systems, most current neural network models do not change their structure during learning. In most machine learning problems, it is important to choose an appropriate network structure, because simple networks are likely to under-fit while complex networks are less plastic and more computationally expensive to train. To address this challenge, we introduce a dynamically growing network that starts from a small network and adds layers while training. We demonstrate an acceleration of learning using a simple dynamically growing network compared to baseline models with fixed network structures on a meta-learning task.

Hereditary stratigraphy: genome annotations to enable phylogenetic inference over distributed populations

Phylogenetic analyses can enable insight into evolutionary and ecological dynamics such as selection pressure and frequency-dependent selection in digital evolution systems. Traditionally, digital evolution systems have recorded data for phylogenetic analyses through perfect tracking, where each birth event is recorded in a centralized data structure. This approach, however, does not easily scale to distributed computing environments, where evolutionary individuals may migrate between a large number of disjoint processing elements. To provide for phylogenetic analyses in these environments, we propose an approach that infers phylogenies via heritable genetic annotations rather than tracking them directly. We introduce a "hereditary stratigraphy" algorithm that enables efficient, accurate phylogenetic reconstruction with tunable, explicit trade-offs between annotation memory footprint and reconstruction accuracy. This approach can estimate, for example, the generation of the most recent common ancestor (MRCA) of two genomes within 10% relative error with 95% confidence up to a depth of a trillion generations with genome annotations smaller than a kilobyte. We also simulate inference over known lineages, recovering up to 85.70% of the information contained in the original tree using a 64-bit annotation.
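A toy sketch of the hereditary stratigraphy idea, assuming one random fingerprint per generation and no pruning; real columns apply a retention policy to bound the memory footprint, which is exactly the trade-off the abstract describes. Class and method names are illustrative.

```python
import random

class HereditaryStratigraph:
    """Toy hereditary stratigraphy column: one random fingerprint per
    generation; comparing two columns bounds the MRCA generation."""

    def __init__(self, strata=None):
        # List of (generation, fingerprint) pairs; empty for a founder.
        self.strata = strata or []

    def descend(self):
        # Child inherits the parent's column and appends one new stratum.
        child = HereditaryStratigraph(list(self.strata))
        gen = self.strata[-1][0] + 1 if self.strata else 0
        child.strata.append((gen, random.getrandbits(8)))
        # A real implementation would now prune strata according to a
        # retention policy to trade memory for reconstruction accuracy.
        return child

    def mrca_bound(self, other):
        """Last generation where the fingerprints still agree: an estimate
        (upper bound) of the most recent common ancestor's generation."""
        agree = -1
        for (g1, f1), (g2, f2) in zip(self.strata, other.strata):
            if g1 != g2 or f1 != f2:
                break
            agree = g1
        return agree
```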

Evolutionary algorithms meet classical and deep machine learning for skin detection in color images

Skin detection in color images has become an active research topic due to its numerous practical applications. In this paper, we couple classical and deep machine learning with an evolutionary training set selection algorithm for this task, and build an end-to-end pipeline that can flexibly benefit from such techniques. The experiments indicate that our approach can deliver accurate and fast skin detection that generalizes across different datasets, and that evolutionary training set selection can play a key role in making it feasible to train the models from large training data.

A population-based approach to address real-life transport on-demand problems

This work aims at improving the quality of the service provided to the customers within real-life demand-responsive transportation systems. To this end, a new customer-oriented problem is designed to minimize both total transit time for the transportation service providers and total waiting time for the travellers. To solve the new problem, an evolutionary algorithm is proposed and compared with a hybrid evolutionary one from the literature. Preliminary computational experiments show the effectiveness of our method.

PIACERE project: description and prototype for optimizing infrastructure as code deployment configurations

PIACERE is a European project supported by the European Union's Horizon 2020 research and innovation programme, whose objective is to enhance the productivity of DevOps teams in the operation of Infrastructure as Code (IaC) by offering an integrated DevSecOps framework. Thus, DevOps practitioners can develop IaC as if they were programming a common software application. To achieve this challenging task, one of the core technologies considered within PIACERE is the design and development of optimization metaheuristics, in a module coined the IaC Optimizer Platform (IOP). The main objective of the IOP is to provide DevSecOps teams with the most appropriate deployment configurations that best fit a set of defined constraints. The goal of this technical paper is to describe the preliminary approach followed in PIACERE for carrying out this optimization, and how the IOP fits into the whole PIACERE ecosystem. Additionally, results obtained in a preliminary experimentation are detailed in this study.

Natural gradient evolution strategies for adaptive sampling

We evaluate two (1+1)-natural evolution strategies (NES) turned into adaptive Markov chain Monte Carlo (MCMC) samplers on a test suite of probability distributions. We compare their performance with the AM-family of samplers considered to be the state of the art in adaptive MCMC. Our experiments show that natural gradient based adaptation used in NES further improves adaptive MCMC.

Textonmap optimization for spine segmentation using adaptive differential evolution

With the development of machine learning techniques in the image processing field, research related to semantic segmentation has attracted much attention. Medical image segmentation, in particular, requires highly accurate pixel-by-pixel results. Given that it is difficult to obtain test images with their ground truth in the medical field, we aim to develop a robust method for an environment with few test images. Specifically, we improve TextonBoost by using differential evolution and stochastic hill climbing methods. Experimental results showed that the proposed method outperformed conventional methods in terms of accuracy.

POSTER SESSION: Ant colony optimization and swarm intelligence

Ant colony optimization with shortest distance biased dispatch for visiting constrained multiple traveling salesmen problem

The visiting constrained multiple traveling salesmen problem (VCMTSP) aims to minimize the total traveling cost of all salesmen by taking the accessibility of cities to salesmen into consideration. To solve this challenging problem, this paper devises a shortest distance biased dispatch (SDBD) scheme based on the accessibility of cities and a pheromone diffusion strategy for ant colony optimization (ACO). Specifically, this algorithm maintains a population of ant teams to construct feasible solutions. Each team maintains multiple ants, with each ant responsible for constructing the route of one salesman, so that each team generates a feasible solution. During the solution construction of an ant team, the multiple ants construct the routes of all salesmen city by city in parallel, based on the devised dispatch scheme. To further improve solution quality, the 2-opt local search operation is integrated to optimize the routes of all salesmen. Experiments conducted on several VCMTSP instances generated from the TSPLIB benchmark set demonstrate the effectiveness of the proposed algorithm.

Particle swarm optimization with average-fitness based selection

The search trajectories of particles in Particle Swarm Optimization are influenced by attractions towards personal best positions. These personal best positions are meant to represent promising areas of the search space for further exploration and exploitation. However, the best position that has been visited by a particle may not be the best area for further exploration. Traditional fitness-based selection is suitable for identifying areas of the search space for further exploitation. Average-Fitness Based Selection is introduced as an alternative to identify the best areas for further exploration. Initial experiments demonstrate that this alternate form of selection can improve the performance of Particle Swarm Optimization in a multi-modal search space.
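The abstract does not spell out the update rule, so the sketch below shows only one plausible reading of average-fitness based selection: a personal best is replaced when the running mean of recently sampled fitness values (a crude proxy for the quality of an area of the search space) improves on the mean recorded for the current personal best. Everything here, including the window size and attribute names, is an illustrative assumption.

```python
def update_personal_best(particle, new_position, new_fitness, window=10):
    """Sketch of an average-fitness based pbest update for minimization:
    compare running mean fitness around the new position against the
    running mean recorded for the current personal best."""
    particle.history.append(new_fitness)
    recent = particle.history[-window:]
    recent_avg = sum(recent) / len(recent)
    if recent_avg < particle.pbest_avg:
        particle.pbest = list(new_position)
        particle.pbest_avg = recent_avg
```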

Ant colony optimization for feature selection via a filter-randomized search heuristic

The combination of bio-inspired algorithms with the design of new search heuristics has allowed the development of innovative methods for combinatorial problems where the number of possible solutions is unaffordable. In particular, swarm intelligence methods have for years proven adequate to deal with these issues. Therefore, this paper proposes a new application of the Ant Colony Optimization (ACO) algorithm for Feature Selection (FS). Specifically, the proposed algorithm applies ACO for FS using a filter approach as a heuristic and makes use of a randomized search, resulting in a very innovative algorithm for variable reduction.

Cooperative attack-defense evolution of large-scale agents: a multi-population high-dimensional mean-field game approach

Traditional optimization and control technologies deal with the dynamic interactions between individuals separately; as the number of agents increases, the modeling of cooperative attack-defense problems becomes complex, and the difficulty of solving for the optimal strategy increases significantly. Moreover, to carry out more accurate real-time control of agents, the state variables used to characterize their kinematics are usually high-dimensional. To overcome these challenges, we formulate the cooperative attack-defense evolution of large-scale agents as a multi-population high-dimensional stochastic mean-field game (MPHD-MFG). Numerical methods for MPHD-MFGs are practically non-existent, because the heterogeneity of the multi-population model increases the complexity of sequential games, and grid-based spatial discretization leads to dimension explosion. Thus, we propose a generative adversarial network-based method, in which a coupled alternating neural network composed of multiple generators and multiple discriminators tractably solves MPHD-MFGs. Simulation experiments are carried out for various attack-defense scenarios; the results verify the feasibility and effectiveness of our proposed model and algorithm.

Dropout topology-assisted bidirectional learning particle swarm optimization for neural architecture search

Neural architecture search (NAS) is a high-complexity optimization problem that has emerged in recent years. Solving NAS is challenging for optimization algorithms due to the following two issues. Firstly, besides the network architectures, there are also network hyperparameters that need to be optimized, which makes the search space of NAS complex and large and poses a great challenge to the optimization ability of the algorithm. Secondly, NAS is an expensive optimization problem with high computational cost for evaluating candidates, which poses a great challenge to the search speed and convergence of the algorithm. Therefore, this paper proposes a novel dropout topology-assisted bidirectional learning particle swarm optimization (DBLPSO) algorithm for NAS to tackle these two issues. Firstly, inspired by the dropout technique in deep learning, a sorting-assisted dropout-based neighbor topology is proposed to enhance the population diversity and the optimization ability of PSO. Secondly, a bidirectional learning strategy is proposed to improve search speed and accelerate PSO convergence. In the experiments, the performance of the DBLPSO algorithm is evaluated on 10 tabular benchmarks based on NAS-Bench-201, NATS-Bench, and HPO-Bench, by comparing it with four NAS algorithms that have achieved state-of-the-art results on the NAS tabular benchmarks. The experimental results show that DBLPSO achieves strong performance for NAS and is superior to the compared NAS algorithms.

A chaotic parallel antlion optimization algorithm for feature selection

Feature selection is an important part of data preprocessing. There are currently three types of feature selection methods: filter, wrapper, and embedded, the last being a combination of the former two. However, when the feature dimension is high and the amount of data is large, the filter method may fail to extract a good subset because it ignores interactions between features, and embedded algorithms impose requirements on the learning model. Wrapper algorithms, in turn, face the problem of high computational overhead. The Spark platform, based on in-memory computing, has advantages in handling iterative tasks. Therefore, this paper proposes a chaotic parallel antlion optimization algorithm for feature selection (QPSALO) based on Spark, and conducts experimental comparisons on seven datasets. The results show that the QPSALO algorithm can improve the quality of candidate feature subsets while speeding up execution.

POSTER SESSION: Complex systems (artificial life, artificial immune systems, generative and developmental systems, evolutionary robotics, evolvable hardware)

AutoMoDe-pomodoro: an evolutionary class of modular designs

In this paper, we reintroduce evolutionary algorithms into AutoMoDe, an automatic design approach which optimizes behavioural modules into a probabilistic finite automaton. We evaluate three approaches, with different encodings of the probabilistic finite automaton phenotype, and observe their performance. This work opens modular designs to more advanced evolutionary robotics methods, such as novelty search and embodied evolution.

Selecting continuous life-like cellular automata for halting unpredictability: evolving for abiogenesis

Substantial efforts have been applied to engineer cellular automata (CA) with desired emergent properties, such as supporting gliders. Recent work in continuous CA has generated a wide variety of compelling bioreminiscent patterns, and the expansion of CA research into continuously-valued domains, multiple channels, and higher dimensions complicates their study. In this work, we devise a strategy for evolving CA and CA patterns in two steps, based on the simple idea that CA are likely to be complex and computationally capable if they support patterns that grow indefinitely as well as patterns that vanish completely, and if it is difficult to predict in advance which of the two a pattern will do. The second part of our strategy evolves patterns by selecting for mobility and conservation of mean cell value. We validate our pattern evolution method by re-discovering gliders in 17 of 17 Lenia CA, and also report 4 new evolved CA and 1 randomly evolved CA that support novel evolved glider patterns. The CA reported here share neighborhood kernels with previously described Lenia CA, but exhibit a wider range of typical dynamics than their Lenia counterparts. Code for evolving continuous CA is made available under an MIT License.

Empowered neural cellular automata

Information-theoretic fitness functions are becoming increasingly popular to produce generally useful, task-independent behaviors. One such universal function, dubbed empowerment, measures the amount of control an agent exerts on its environment via its sensorimotor system. Specifically, empowerment attempts to maximize the mutual information between an agent's actions and its received sensor states at a later point in time. Traditionally, empowerment has been applied to a conventional sensorimotor apparatus, such as a robot. Here, we expand the approach to a distributed, multi-agent sensorimotor system embodied by a neural cellular automaton (NCA). We show that the addition of empowerment as a secondary objective in the evolution of NCA to perform the task of morphogenesis, growing and maintaining a pre-specified shape, results in higher fitness compared to evolving for morphogenesis alone. Results suggest there may be a synergistic relationship between morphogenesis and empowerment. That is, indirectly selecting for coordination between neighboring cells over the duration of development is beneficial to the developmental process itself. Such a finding may have applications in developmental biology by providing potential mechanisms of communication between cells during growth from a single cell to a multicellular, target morphology. Source code for the experiments in this paper can be found at: https://github.com/caitlingrasso/empowered-nca.
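In symbols, the empowerment objective described above is commonly written as a channel capacity between actions and future sensor states (notation illustrative; the paper's exact formulation may differ):

```latex
% Empowerment of state s_t: the maximal mutual information (channel
% capacity) between the action A_t and the resulting sensor state S_{t+1}.
\[
  \mathcal{E}(s_t) \;=\; \max_{p(a_t)} \; I\!\left(A_t ;\, S_{t+1} \,\middle|\, s_t\right)
  \;=\; \max_{p(a_t)} \sum_{a_t,\, s_{t+1}} p(a_t)\, p(s_{t+1} \mid s_t, a_t)\,
        \log \frac{p(s_{t+1} \mid s_t, a_t)}{\sum_{a'} p(a')\, p(s_{t+1} \mid s_t, a')}
\]
```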

CoBEA: framework for evolving hardware by direct manipulation of FPGA bitstreams

Evolvable Hardware is a general approach to apply Evolutionary Algorithms to hardware in order to design, improve, or adapt circuits. Approaches that directly manipulate the bitstream of field-programmable gate arrays (FPGAs) had been abandoned due to the lack of well-documented bitstream formats.

Recent advancements in open source FPGA toolchains fundamentally changed the feasibility of direct bitstream manipulation yet again. Unfortunately, contemporary tools are slow and waste valuable time calling external tools.

Therefore, we present an integrated approach that combines bitstream manipulation, low-level communication, and hardware evaluation into a single framework called CoBEA. In addition, the framework allows compaction of the bitstream and direct configuration of the FPGA device without having to program flash memory. Compared to the state of the art, our framework achieves an acceleration of 130 times for FPGA reconfiguration. This allows complex hardware evolution experiments to be performed.

Minimize surprise MAP-elites: a task-independent MAP-elites variant for swarms

Swarm robotics controllers are often automatically generated using methods of evolutionary computation with a task-specific fitness function to guide the optimization process. By contrast, our minimize surprise approach uses a task-independent fitness function to generate diverse behaviors over several independent evolutionary runs. Alternatives are divergent search algorithms rewarding behavioral novelty, such as novelty search, and quality-diversity algorithms generating diverse high-quality solutions, such as MAP-Elites. These approaches usually rely on task-dependent measures. We propose Minimize Surprise MAP-Elites, a task-independent MAP-Elites variant that combines MAP-Elites with our minimize surprise approach. Our first experiments result in high-quality solutions that lead to behavioral diversity across tasks and within tasks.

Evolving robot bodies with a sense of direction

The joint evolution of bodies and brains (morphologies and controllers) is one of the grand challenges of Evolutionary Robotics. Related work is conducted in various morphological spaces, including modular robots and voxel-based artificial organisms, most commonly evolving robots for a good gait, i.e., for walking as far as possible without a target, using an open-loop controller. Here we investigate a practically more relevant task: directed locomotion, i.e., the ability to walk in a target direction. To this end, we use closed-loop controllers based on an extension of the popular CPG-networks and compare them to the traditional open-loop setup, where robots walk 'blindly' but walking in the right direction is rewarded by higher fitness. Our results show that the new system not only leads to better task performance (which was expected), but also to very different morphologies and gaits. In other words, we obtain the surprising result that adding the ability to utilise sensory feedback leads not only to better performance, but also to different 'life forms'.

Diversification techniques and distance measures in evolutionary design of 3D structures

Evolutionary algorithms are among the most successful metaheuristics for hard optimization problems. Nonetheless, there is still much room for improvement of their effectiveness, especially on multimodal problems, where the algorithms are prone to falling into unsatisfactory local optima. One of the solutions to this problem may be to encourage a broader exploration of the solution space. Motivated by this premise, we compare the evolutionary algorithm without niching, with niching, the novelty search, and two-criteria optimization (NSGA-II) where the criteria of fitness and diversity are not aggregated. We investigate these methods in the context of automated design of three-dimensional structures, which is one of the hardest optimization problems, often characterized by a rugged fitness landscape arising from a complex genotype-to-phenotype mapping. In the experiments, we optimize 3D structures towards two different goals, height and velocity, using two genetic encodings and three distance measures: two phenetic ones and a genetic one. We demonstrate how different distance measures and diversity promotion mechanisms influence the fitness of the obtained solutions.

QDax: on the benefits of massive parallelization for quality-diversity

Quality-Diversity (QD) algorithms are a well-known approach to generating large collections of diverse and high-quality policies. However, QD algorithms are also known to be data-inefficient, requiring large amounts of computational resources, and are slow when used in practice for robotics tasks. Policy evaluations are already commonly performed in parallel to speed up QD algorithms, but this has limited capabilities on a single machine, as most physics simulators run on CPUs. With recent advances in simulators that run on accelerators, thousands of evaluations can be performed in parallel on a single GPU/TPU. In this paper, we present QDax, an implementation of MAP-Elites which leverages massive parallelism on accelerators to make QD algorithms more accessible. We show that QD algorithms are ideal candidates for scaling with massive parallelism and can be run at interactive timescales. The increase in parallelism does not significantly affect the performance of QD algorithms, while reducing experiment runtimes by two orders of magnitude, turning days of computation into minutes. These results show that QD can now benefit from hardware acceleration, which has contributed significantly to the rise of deep learning.

De-redundancy in a random boolean network using knockout

A Random Boolean Network (RBN) is a type of regulatory network in which the state of each node is a binary variable. Such networks usually contain complex interactions and redundancy, such as redundant nodes and connections, which often lead to increased computational and analytical complexity. To address this problem, the node knockout method, inspired by gene regulatory networks, is applied in this work to reduce the number of redundant nodes in the RBN. To verify the performance of the method, three control task experiments with different dynamic characteristics are implemented. The results show that the fitness distribution of the RBN after node knockout is similar to that of the original RBN, while the number of nodes is reduced by more than 50%. The proposed method significantly reduces computational complexity while maintaining task completion. Thus, this paper provides an alternative solution for the de-redundancy of RBNs.

A comparative analysis on genome pleiotropy for evolved soft robots

Biological evolution shapes the body and brain of living creatures together over time. By contrast, in evolutionary robotics the co-optimization of these subsystems remains challenging. Conflicting mutations cause dissociation between morphology and control, which leads to premature convergence. Recent works have proposed algorithmic modifications to mitigate the impact of conflicting mutations. However, the importance of genetic design remains underexplored. Current approaches are divided between a single, pleiotropic genetic encoding and two isolated encodings representing morphology and control. This design choice is commonly made ad hoc, causing a lack of consistency for practitioners. To standardize this design, we performed a comparative analysis of these two configurations on a soft robot locomotion task. Additionally, we incorporated two previously unexplored alternatives that drive these configurations to their logical extremes. Our results demonstrate that pleiotropic representations yield superior performance in fitness and robustness against premature convergence. Moreover, we showcase the importance of shared structure in the pleiotropic representation of robot morphology and control for achieving this performance gain. These findings provide valuable insights into genetic encoding design, supplying practitioners with a theoretical foundation for pursuing efficient brain-body co-optimization.

Growth-based morphological development: a natural approach to fitness landscape shaping

In nature, the morphological changes that occur as cognitive development takes place in humans and animals have been shown to facilitate learning. Taking inspiration from nature, morphological development aimed at improving learning has been applied in the literature to different robotics configurations and tasks, with mixed results. In this paper, we consider a growth-based morphological development approach applied to a quadruped walking task and, in addition to determining that it improves the learning characteristics, we show that morphological development is more than just introducing noise as a fitness landscape smoothing technique. Finally, by studying the evolution of the different fitness landscapes, we argue that learning using growth is equivalent to an incremental learning or fitness landscape shaping approach.

Do harsher environments cause selfish or altruistic behavior?

In this study we develop an Agent-based Model (ABM), called Neo-COOP, to investigate the emergence and evolution of altruistic and selfish behaviour in Neolithic-inspired household agents under varying degrees of environmental stress. We conduct scenario experiments in which we track the evolution of the agents' resource-trading preferences across scenarios with varying frequencies of environmental stress and agent types initialized to exhibit differing altruistic or selfish tendencies. Our results suggest that neither extreme selfishness nor extreme altruism is desirable; rather, some middle-ground value is. Additionally, we find that the frequency of environmental stress plays a significant role in the emergence of selfish behaviour amongst the social elite, with higher-frequency environmental stress scenarios resulting in a greater disparity of resource-transfer beliefs between agents of equal social status.

A single neural cellular automaton for body-brain co-evolution

The successful development of a complex multicellular organism took millions of years of evolution. The genome of such an organism guides the development of its body from a single cell, including its control system. Our goal is to imitate this natural process using a single neural cellular automaton (NCA) as a genome for modular robotic agents. In the introduced approach, called Neural Cellular Robot Substrate (NCRS), a single NCA guides the growth of a robot and the cellular activity which controls the robot during deployment. We also introduce three benchmark environments, which test the ability of the approach to grow different robot morphologies. We evolve the NCRS with the covariance matrix adaptation evolution strategy (CMA-ES) and covariance matrix adaptation MAP-Elites (CMA-ME). While the NCRS is able to solve the easier tasks in the benchmark, the success rate drops as task difficulty increases. We discuss directions for future work that may facilitate the use of the NCRS approach in more complex domains.

Using evolutionary game theory to understand scalability in task allocation

Cooperation is an important challenge in multi-agent systems. Decentralised learning of cooperation is difficult because interactions between agents make the environment non-stationary, and the reward structure tempts agents to act selfishly. A centralised solution bypasses these challenges, but may scale poorly with system size. Understanding this trade-off is important, but systematic comparisons have been limited to tasks with fully aligned incentives. We introduce a new task for studying cooperation: agents can solve the task by working together and specialising in different sub-tasks, or by working alone. Using neuroevolution, we empirically investigate scalability by comparing centralised and decentralised approaches. A mathematical model based on the replicator dynamics allows us to further study how the task's social dynamics affect the emergent behaviour. Our results show that the task's unique social features, in particular the challenge of agents' physical coordination, cause both centralised and decentralised approaches to scale poorly. We conclude that mitigating this coordination challenge can improve scalability more than the choice of learning type.
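
For readers unfamiliar with the replicator dynamics, the standard form is dx_i/dt = x_i (f_i(x) - f_bar(x)), where x_i is the frequency of strategy i, f_i its expected payoff, and f_bar the population-average payoff. A minimal numerical sketch (the 2x2 payoff matrix here is an illustrative placeholder, not the paper's task model):

    import numpy as np

    A = np.array([[3.0, 0.0],      # hypothetical payoff matrix:
                  [5.0, 1.0]])     # rows/columns = strategies (e.g. cooperate/defect)

    x = np.array([0.5, 0.5])       # initial strategy frequencies
    dt = 0.01
    for _ in range(10_000):
        f = A @ x                  # expected payoff of each strategy
        x = x + dt * x * (f - x @ f)   # dx_i = x_i * (f_i - average payoff)
    print(x)                       # frequencies near the attractor (here: all-defect)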

Geodesics, non-linearities and the archive of novelty search

The Novelty Search (NS) algorithm was proposed more than a decade ago. However, the mechanisms behind its empirical success are still not well formalized or understood. This short note focuses on the effects of the archive on exploration. Experimental evidence from a few application domains suggests that archive-based NS generally performs better than when novelty is computed solely with respect to the population. An argument often encountered in the literature is that the archive prevents exploration from backtracking or cycling, i.e. from revisiting previously encountered areas in the behavior space. We argue that this is not a complete or accurate explanation, as backtracking---besides often being desirable---can actually be enabled by the archive. Through low-dimensional/analytical examples, we show that a key effect of the archive is that it counterbalances the exploration biases that result, among other factors, from the use of inadequate behavior metrics and the non-linearities of the behavior mapping. Our observations hint that attributing a more active role to the archive in sampling can be beneficial.
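
For context, the novelty score under discussion is usually defined as the mean distance from an individual's behavior descriptor to its k nearest neighbours in the reference set (population only, or population plus archive). A minimal sketch with Euclidean distances and illustrative dimensions:

    import numpy as np

    def novelty(b, reference, k=15):
        # mean distance from descriptor b to its k nearest neighbours
        d = np.linalg.norm(reference - b, axis=1)
        d.sort()
        return d[1:k + 1].mean()   # d[0] == 0 when b itself is in the reference set

    population = np.random.rand(100, 2)            # 2-D behavior descriptors
    archive = np.random.rand(300, 2)
    reference = np.vstack([population, archive])   # archive-based variant
    scores = [novelty(b, reference) for b in population]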

Comparison of evolutionary multi-objective optimization algorithms using imitation game

In this paper, we consider the artificial bird song imitation game as a dynamic two-objective optimization problem and propose it as a benchmark problem that includes a wide variety of transitions. We evaluated the speed of evolution towards the edge of chaos using NSGA-II and SPEA2, two representative evolutionary multi-objective optimization algorithms based on Pareto dominance. As a result, we show that our proposed method is effective and that NSGA-II converges faster than SPEA2. We also show that the solutions tend to converge around the median of the edge-of-chaos and chaos regions, compared with the conventional imitation game using a single-objective evolutionary algorithm.

Effects of encodings and quality-diversity on evolving 2D virtual creatures

How to jointly optimize morphology and controller is a challenging problem in evolutionary robotics. Due to the large search space, both quality-diversity algorithms and different types of encodings have been employed to search the solution space more effectively. Here we compare the Multi-dimensional Archive of Phenotypic Elites (MAP-Elites) with a standard evolutionary algorithm, as well as the effect of using a direct versus an indirect encoding. The results showed that the MAP-Elites algorithm found diverse solutions, yet the encodings accounted for a larger performance discrepancy. This indicates that the representation is at least as important as the optimization method for effectively creating robots.

POSTER SESSION: Evolutionary combinatorial optimization and metaheuristics

Multilevel memetic hypergraph partitioning with greedy recombination

The Hypergraph Partitioning (HGP) problem is a well-studied problem that finds applications in a variety of domains. In several of these, such as VLSI design and database migration planning, the quality of the solution is more of a concern than the running time of the algorithm. In this work, we introduce novel problem-specific recombination and mutation operators and develop a new multilevel memetic algorithm by combining these operators with kKaHyPar-E. The performance of our algorithm is compared with state-of-the-art HGP algorithms on 150 real-life instances taken from benchmark sets used in the literature. The experiments reveal that our algorithm outperforms all others, finding the best solutions in 112, 115, and 125 instances within 2, 4, and 8 hours, respectively.

Neighbours similar fitness and the effectiveness of restarting local search

Premature convergence can be detrimental to the performance of search methods, which is why many search algorithms include restart strategies to deal with it. While it is common to perturb the incumbent solution with diversification steps of various sizes with the hope that the search method will find a new basin of attraction leading to a better local optimum, it is usually not clear whether this strategy is effective. To establish a connection between restart effectiveness and properties of a problem, we introduce a new property of fitness landscapes termed Neighbours with Similar Fitness. We conjecture that this property is true for many PLS (polynomial-time local search)-complete problems, and we demonstrate that the effectiveness of a restart strategy depends on this property.

Cooperative co-evolutionary memetic algorithm for pickup and delivery problem with time windows

Solving rich vehicle routing problems has become an important research avenue due to their plethora of practical applications. Such discrete optimization problems commonly deal with multiple aspects of intelligent transportation systems by mapping them into objectives to be targeted by the optimization algorithm. In this paper, we introduce a cooperative co-evolutionary memetic algorithm for this task. It benefits from the simultaneous evolution of several subpopulations, each corresponding to a single objective, and from migrating the best individuals across these subpopulations to effectively guide the search process. An experimental study performed over widely used benchmark test cases indicates that our algorithm significantly outperforms memetic techniques which tackle each objective separately, as well as those that turn the multi-objective problem into a single-objective one by weighting the optimization criteria.

Multi-workflow scheduling in industrial edge: a genetic algorithm with heuristic strategy

On the industrial Internet, edge computing can effectively solve the problems of latency and data lending by sinking a portion of cloud computing to industrial park edges. However, as the edge servers are heterogeneous, how to effectively allocate the tasks of multiple workflows to appropriate servers so as to reduce execution time and energy consumption is a challenging problem. In this paper, we first model multi-workflow scheduling for industrial edge computing as an optimization problem with the objectives of reducing execution time and energy consumption. We then propose an efficient greedy algorithm that solves this problem by taking into account the competitive resource relationships between tasks. To further improve performance, we combine this heuristic with a genetic algorithm (GA), resulting in a GA with a heuristic strategy. To avoid encoding redundancy, we propose a discrete encoding scheme for the GA. This genetic expression strategy reduces the search space and improves search efficiency. The experimental results validate the effectiveness of the proposed algorithm in finding promising solutions.

Exact and sequential penalty weights in quadratic unconstrained binary optimisation with a digital annealer

Quadratic unconstrained binary optimisation (QUBO) problems are a general class of combinatorial optimisation problems which regained popularity after recent advances in quantum computing. Quantum-inspired technologies like Fujitsu's Digital Annealer (DA), based on simulated annealing, can solve QUBO problems much faster than traditional computers. Penalty methods can convert constrained optimisation problems into QUBO problems. However, existing exact methods of determining penalty weights are limited to specific QUBO problems and require manual analysis by QUBO experts. Empirical methods are general but become computationally prohibitive for large problem sizes and do not guarantee that the penalty weights preserve the feasibility of global optima. We present a simple, efficient, general method, applicable to any QUBO, for determining exact penalty weights. Such weights are simple upper bounds on the smallest penalty weight guaranteeing that the unconstrained global optima coincide with the feasible global optima. These bounds can be iteratively improved by sequential penalty methods, which we also present. Experimental results with the DA on minimum cut, travelling salesman and multi-dimensional knapsack problems show the viability of the novel methodology hybridising exact and sequential methods. This work contributes towards general, automatic and scalable penalty methods in QUBO.
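
To illustrate what such a penalty conversion looks like, here is a minimal sketch, assuming a toy "pick exactly c items" constraint folded into a QUBO via the quadratic penalty lam * (sum_i x_i - c)^2 (the item values and the weight lam = 10 are illustrative; the paper's contribution is precisely how to bound such a lam):

    import itertools
    import numpy as np

    values = np.array([5.0, 4.0, 3.0, 2.0])
    n, lam, c = 4, 10.0, 2               # lam: penalty weight, c: cardinality target
    Q = -np.diag(values)                 # maximise total value == minimise -value

    # fold lam * (sum_i x_i - c)^2 into the QUBO matrix; since x_i^2 == x_i for
    # binaries, the expansion adds lam off-diagonal and lam * (1 - 2c) on the
    # diagonal (the constant lam * c^2 is dropped, as it does not move the argmin)
    Qp = Q + lam * (np.ones((n, n)) - 2 * c * np.eye(n))

    best = min(itertools.product([0, 1], repeat=n),
               key=lambda x: np.asarray(x) @ Qp @ np.asarray(x))
    print(best)   # (1, 1, 0, 0): the two most valuable items, constraint satisfied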

On the fitness landscapes of interdependency models in the travelling thief problem

Since its inception in 2013, the Travelling Thief Problem (TTP) has been widely studied as an example of a problem with multiple interconnected sub-problems. The dependency in this model arises from tying the travelling time of the "thief" to the weight of the knapsack. However, other forms of dependency, as well as combinations of dependencies, deserve investigation, as they are often found in complex real-world problems. Our goal is to study the impact of different forms of dependency in the TTP using a simple local search algorithm. To achieve this, we use Local Optima Networks, a technique for analysing fitness landscapes.
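
For reference, the standard TTP dependency couples the sub-problems through the thief's speed, which decreases linearly with the weight w of the collected items: v(w) = v_max - (w / W) * (v_max - v_min), where W is the knapsack capacity. A small sketch (parameter values illustrative):

    # standard TTP coupling: speed drops linearly with collected weight w
    def speed(w, v_max=1.0, v_min=0.1, W=100.0):
        return v_max - (w / W) * (v_max - v_min)

    def travel_time(distance, w):
        return distance / speed(w)

    # picking up items lengthens every remaining leg of the tour:
    print(travel_time(10.0, 0.0))     # 10.0 with an empty knapsack
    print(travel_time(10.0, 100.0))   # 100.0 with a full knapsack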

Exploiting landscape features for fitness prediction in university timetabling

A small but growing number of papers have shown that landscape metrics can be useful for performance prediction, usually on classic unconstrained problems. In this paper, we consider the Curriculum-Based Course Timetabling problem, a heavily constrained problem known to have very neutral landscapes, and extract over 100 instance and landscape features to construct prediction models. An Iterated Local Search is used to sample the landscape, and the performance of both Simulated Annealing and a Hybrid Local Search algorithm are predicted using linear regression. Using as few as 4 features obtained via feature selection, our simple models are able to accurately predict the final fitness for either approach with an R-squared of approximately 0.95.

A hyper-heuristic approach for the PDPTW

The one-to-one pickup and delivery problem with time-windows (PDPTW) is one of the most important problems in Operations Research (OR). In this problem, a set of goods needs to be transported within given time-windows by a fleet of vehicles. The pickup and delivery problem is among the most challenging combinatorial optimisation problems, as it has many real-world applications. Selection hyper-heuristics that learn heuristic utility during optimisation have been successfully applied to a variety of optimisation problems, including those in OR. In this paper we investigate the application of a sequence-based selection hyper-heuristic to the one-to-one, static and deterministic variant of the PDPTW and compare the results against two well-known approaches: Adaptive Large Neighbourhood Search and the Grouping Genetic Algorithm.

Towards evolutionary self-optimization of large multi-agent systems

The development of IoT technologies raises the question of their self-optimization. For this purpose we propose an approach based on a multi-agent interpretation, modeling discrete space and time with Cellular Automata (CA) and applying a game-theoretical model known as the spatial Prisoner's Dilemma (SPD). A 2D space is occupied by agents belonging to species with certain strategies. Each agent participates in games with its neighbors, and its goal is to maximize its cumulated payoff. Species competing for space apply a local mechanism of evolutionary selection in which the cumulated payoff is treated as fitness. As a result of the competition, more profitable species replace less profitable ones. While agents act locally to maximize their incomes, we study the conditions under which collective behavior emerges, measured by a global average total payoff of which the players are not aware. We show that collective behavior based on reaching a Nash equilibrium (NE) in a fully distributed way can emerge if certain conditions of the game are fulfilled, in particular when an "income sharing mechanism" is introduced.
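
A minimal sketch of the kind of dynamics described here, assuming a toroidal grid, 4-neighbour games with standard PD payoffs, and the common "imitate the best-scoring neighbour" selection rule (all of these specifics are illustrative simplifications):

    import numpy as np

    R, S, T, P = 3.0, 0.0, 5.0, 1.0            # prisoner's dilemma payoffs
    pay = np.array([[R, S], [T, P]])           # pay[my_move, their_move]; 0=C, 1=D

    n = 32
    rng = np.random.default_rng(1)
    grid = rng.integers(0, 2, size=(n, n))     # strategy of each cell
    shifts = ((1, 0), (-1, 0), (0, 1), (0, -1))

    for _ in range(50):
        # cumulated payoff of each cell against its 4 neighbours (wrapped edges)
        score = sum(pay[grid, np.roll(grid, s, axis=(0, 1))] for s in shifts)
        # evolutionary selection: each cell imitates its best-scoring neighbour
        best_s, best_g = score.copy(), grid.copy()
        for s in shifts:
            ns = np.roll(score, s, axis=(0, 1))
            ng = np.roll(grid, s, axis=(0, 1))
            better = ns > best_s
            best_s = np.where(better, ns, best_s)
            best_g = np.where(better, ng, best_g)
        grid = best_g
    print(grid.mean())   # fraction of defectors after 50 generations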

External archive hybrid genetic algorithm for unequal area facility layout problem

Layout problems are known to be complex and are generally NP-hard. For the unequal area facility layout problem (UA-FLP), a subclass of the facility layout problem (FLP), it is likewise difficult to find a satisfactory solution within acceptable computation time. Although much research has been carried out in this area, search efficiency is far from sufficient to handle UA-FLPs with a large number of facilities. To address this, this paper proposes an external archive hybrid genetic algorithm (HGA) which uses a flexible bay structure to represent solutions. In the HGA, a novel area-based greedy initialization strategy produces the initial population and guarantees the feasibility of the initial individuals. Offspring are produced by individuals from the current population and the external archive, in the hope of guiding the search towards promising regions of the search space. Part of the offspring is further improved by a hill-climbing method to enhance the HGA's exploitation ability. Together these components yield an efficient HGA for the UA-FLP. Experimental results demonstrate that the HGA obtains highly competitive results compared to other peer algorithms.

Policy network for solving flexible job shop scheduling problem with setup times and resource constraints

The flexible job shop scheduling problem (FJSP), in which an operation may be completed on any of several machines in a specific order, is one of the most popular scheduling problems and is well known to be NP-hard. In today's industrial manufacturing environment the problem becomes even more complex, as sequence-dependent setup times and resource constraints must be taken into account during the manufacturing process. In this article, a policy network is designed to solve the extended FJSP with the goal of minimizing the total completion time (makespan). Three dispatching rules are proposed to simultaneously select an operation and assign it to a feasible machine at each scheduling point. To describe the manufacturing status at a scheduling stage, six state features are extracted as the input of the network. From the network's output, a probability distribution over the dispatching rules is obtained. After all operations are completed, the policy network is trained using policy gradient, a well-known reinforcement learning (RL) method. Finally, we generate ten benchmark data instances of different scales to compare the performance and efficiency of the policy network with other algorithms. According to the experimental results, the proposed policy network outperforms its competitors.
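
As a rough sketch of the decision step described above (the layer sizes and feature semantics are hypothetical; only the six-feature input and the softmax over three dispatching rules follow the abstract):

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(6, 16)) * 0.1, np.zeros(16)   # 6 state features in
    W2, b2 = rng.normal(size=(16, 3)) * 0.1, np.zeros(3)    # 3 dispatching rules out

    def policy(state):
        h = np.tanh(state @ W1 + b1)
        logits = h @ W2 + b2
        p = np.exp(logits - logits.max())
        return p / p.sum()                 # distribution over the dispatching rules

    state = rng.normal(size=6)             # e.g. machine utilisation, queue lengths
    rule = rng.choice(3, p=policy(state))  # sample one rule at each scheduling point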

POSTER SESSION: Evolutionary machine learning

Entropy-based local fitnesses for evolutionary multiagent systems

Evolutionary multiagent systems have been successfully applied to many real-world problems, including search and rescue and ocean exploration. However, as the number of agents in such problems increases, the evaluation function captures an individual agent's fitness less and less accurately. As a consequence, agents adopt a small set of acceptable behaviors that are neither optimal nor robust to environmental changes or teammate failures. Fitness shaping, intrinsic fitnesses, and multi-fitness learning alleviate some of these concerns but generally require domain knowledge or the functional form of the evaluation function. In this paper, we introduce Entropy-Based Local Fitnesses (EBLFs), which generate diverse behaviors for agents and produce robust team behaviors without requiring environmental knowledge. The key contribution of EBLFs is to inject a dense, entropy-based fitness into the agents' evolution without interfering with the sparse, high-level system evaluation function. Our results show that agents using EBLFs learn new skills in difficult environments with sparse feedback, without requiring domain knowledge. In addition, EBLFs generated new team-level behaviors that were not defined by a human operator but were beneficial to robust team performance.

TGPGAN: towards expression-based generative adversarial networks

Before the advent of Generative Adversarial Networks (GANs), Evolutionary Computation approaches made up the majority of the state of the art for image generation. Since then, adversarial models employing deep convolutional neural networks, which are better suited to GPU computing, have proven more efficient. Nevertheless, motivated by recent successes in GPU-accelerated Genetic Programming (GP) and given the disposition of expression-based solutions towards image evolution, we believe in the prospect of using symbolic expressions, instead of neural networks, as generators for GANs. In this paper, we propose a novel GAN model called TGPGAN, where the traditional convolutional generator network is replaced with a GP approach. The generator iteratively evolves a population of expressions that are then passed to the discriminator module, along with real images, for backpropagation. Our experimental results show that it is possible to achieve results comparable to a typical Deep Convolutional GAN while benefiting from the flexibility enabled by an expression-based genotype. Moreover, this work serves as a proof of concept for the evolution of symbolic expressions within adversarial models.

Multi-fidelity optimization method with asynchronous generalized island model for AutoML

AutoML frameworks seek to find the best configurations of machine learning techniques. A problem that has gained a great deal of popularity in recent years is the combined algorithm selection and hyperparameter optimisation (CASH) problem.

Several bio-inspired optimization techniques have been applied in AutoML, each with its drawbacks and benefits. For instance, methods may get stuck evaluating computationally expensive models, or certain solutions may dominate early on and inhibit the discovery of better ones. We propose to run multiple bio-inspired techniques in parallel in a generalized island model and to combine this with multi-fidelity optimization to speed up the search. We analyze three different island topologies (fully connected, unconnected, and ring) to understand the trade-offs between information sharing and maintaining diversity. With respect to convergence time, the proposed method outperforms Asynchronous Evolutionary Algorithms and Asynchronous Successive Halving techniques.
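
A minimal sketch of the ring variant of such an island model, assuming synchronous migration of each island's best individual to its neighbour (the objective and population shapes are placeholders; the paper's version is asynchronous and multi-fidelity):

    import numpy as np

    n_islands, pop, dim = 4, 20, 8
    rng = np.random.default_rng(0)
    islands = [rng.normal(size=(pop, dim)) for _ in range(n_islands)]

    def fitness(X):                        # stand-in objective (minimise)
        return np.sum(X ** 2, axis=1)

    def migrate_ring(islands):
        # each island sends a copy of its best individual to the next island
        # in the ring, where it replaces the receiver's worst individual
        bests = [isl[np.argmin(fitness(isl))].copy() for isl in islands]
        for i, isl in enumerate(islands):
            isl[np.argmax(fitness(isl))] = bests[(i - 1) % len(islands)]

    migrate_ring(islands)   # interleave with each island's own evolutionary steps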

In an objective comparison based on the OpenML AutoML Benchmark, we also find that the proposed method is competitive with current state-of-the-art AutoML frameworks such as TPOT, AutoWEKA, AutoSklearn, H2O AutoML, GAMA, Asynchronous Successive Halving, and random search.

Interpretable pipelines with evolutionary optimized modules for reinforcement learning tasks with visual inputs

The importance of explainability in AI has become a pressing concern, and several explainable AI (XAI) approaches have recently been proposed. However, most of the available XAI techniques are post-hoc methods, which may be only partially reliable, as they do not exactly reflect the state of the original models. A more direct way to achieve XAI is thus through interpretable (also called glass-box) models. These models have been shown to obtain comparable (and, in some cases, better) performance with respect to black-box models in various tasks, such as classification and reinforcement learning. However, they struggle when working with raw data, especially when the input dimensionality increases and the raw inputs alone do not give valuable insights into the decision-making process. Here, we propose to use end-to-end pipelines composed of multiple interpretable models co-optimized by means of evolutionary algorithms, which allow us to decompose the decision-making process into two parts: computing high-level features from raw data, and reasoning on the extracted high-level features. We test our approach in reinforcement learning environments from the Atari benchmark, where we obtain results comparable to black-box approaches in settings without stochastic frame-skipping, while performance degrades in frame-skipping settings.

MOPINNs: an evolutionary multi-objective approach to physics-informed neural networks

This paper introduces Multi-Objective Physics-Informed Neural Networks (MOPINNs). MOPINNs use an EMO algorithm to find the set of trade-offs between the data and physical losses of PINNs, thereby allowing practitioners to correctly identify which of these trade-offs best represents the solution they want to reach. We discuss how MOPINNs overcome the complexity of weighting the different loss functions; to the best of our knowledge, this is the first work relating multi-objective optimization problems (MOPs) and PINNs via evolutionary algorithms. We provide an exploratory analysis of this technique to determine its feasibility, applying MOPINNs to PDEs of particular interest: the heat, wave, and Burgers equations.

Particle swarm optimisation for sparsity-based feature selection in multi-label classification

Multi-label classification (MLC) is an emerging real-world problem in which each instance is associated with multiple class labels simultaneously. Multi-label classification is challenging due to the complex interactions of features and multiple labels. Sparsity-based feature selection is an effective and efficient approach to selecting relevant features for multi-label classification. However, most (if not all) sparsity-based approaches are gradient-based and thus tend to get stuck at local optima. This paper proposes a new sparsity-based approach built on particle swarm optimisation (PSO), whose global search ability helps avoid local optima. The paper also proposes new sparsity-based fitness functions for PSO which can take feature interactions into account. The experimental results show that PSO enables sparsity-based methods to select highly relevant features, thus improving multi-label classification performance. The proposed sparsity-based methods also achieve the highest number of statistically significant results in comparison with several state-of-the-art and standard sparsity-based benchmarks.
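
For reference, the canonical global-best PSO update that such an approach builds on is v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x), followed by x = x + v. A minimal continuous sketch (the coefficients and the sphere objective are illustrative; a feature-selection variant would additionally threshold or sparsify the positions):

    import numpy as np

    rng = np.random.default_rng(0)
    n, dim = 30, 10
    w, c1, c2 = 0.7, 1.5, 1.5                  # illustrative PSO coefficients

    def fitness(X):                            # stand-in objective (minimise)
        return np.sum(X ** 2, axis=1)

    X = rng.uniform(-1, 1, (n, dim))           # particle positions
    V = np.zeros((n, dim))                     # particle velocities
    pbest, pbest_f = X.copy(), fitness(X)
    gbest = pbest[np.argmin(pbest_f)]

    for _ in range(100):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = X + V
        fx = fitness(X)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = X[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)]
    print(fitness(gbest[None])[0])             # best objective value found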

Balancing teams with quality-diversity for heterogeneous multiagent coordination

Evolutionary optimization is difficult in domains that require heterogeneous agents to coordinate on diverse tasks, as agents often converge to a limited set of "acceptable" behaviors. Quality-Diversity methods alleviate this problem by shifting the focus from optimizing to finding a diverse repertoire of behaviors. However, in multiagent environments with diverse and tightly coupled tasks, exploring the entire space of behaviors is often intractable. Agents must focus on searching the regions of the behavior space that yield good team performance. We extend Behavior Exploration for Heterogeneous Teams (BEHT) [4], a multi-level training framework that allows systematic exploration of the agents' behavior space required to complete diverse tasks as a coordinated team in a dynamic environment. We show that BEHT allows agents to learn diverse synergies, as demonstrated by the diversity of acquired agent behavior in response to the changing environment and agent behaviors.

Cascades of evolutionary support vector machines

Support vector machines (SVMs) have been widely applied to binary classification, but their real-life applications are limited due to the high time and memory complexity of training, coupled with high sensitivity to the hyperparameters of the classifier. To train SVMs on large datasets, numerous techniques have been proposed which select a subset of all the data presented for training. However, it is challenging to determine the appropriate size of such a subset, and a poor choice may lead to sub-optimal performance. In this paper, we propose a new approach to building a cascade of SVMs, each of which is optimized using a memetic algorithm that selects a small subset of the training data and tunes the hyperparameters. The optimization at each level of the cascade is aimed at creating competence regions that altogether cover complementary parts of the input space. Our experiments, performed over 12 synthesized datasets and 24 benchmarks, revealed that our method outperforms other classifiers, including SVMs trained with the whole set as well as with a reduced set selected using other techniques. Furthermore, our cascade identifies the high-confidence regions of the input space, and the results confirm that they are characterized by increased classification accuracy on the test data.

On stochastic evolving algorithms

This pioneering study addresses the question of how to develop algorithms capable of self-improvement. The foundation of this work is the extended model of the human mind, while the information flow between components is inspired by molecular genetics. As a result, the proposed evolving algorithms consist of complex rules and stochastic processing, and the preliminary results reveal enormous potential for future work.

Separating rule discovery and global solution composition in a learning classifier system

While the utilization of digital agents to support crucial decision making is increasing, trust in suggestions made by these agents is hard to achieve. Such trust, however, is essential in order to profit from their application, resulting in a need for explanations of both the decision-making process and the model. For many systems, such as common black-box models, achieving even partial explainability requires complex post-processing, while other systems profit from being, to a reasonable extent, inherently interpretable. We propose a rule-based learning system specifically conceptualised for, and thus especially suited to, these scenarios. Its models are inherently transparent and easily interpretable by design. One key innovation of our system is that the rules' conditions and the composition of rules into a problem's solution are evolved separately. We utilise independent rule fitnesses, which allow users to specifically tailor their model structure to fit the given requirements for explainability.

Evolvable hybrid ensembles for musical genre classification

Classifying songs by musical genre is becoming more challenging because of the millions of songs included in online databases, so reliable and efficient methods that automatically solve this task need to be developed. In this article, the task is accomplished using ensembles of classifiers. The authors' contribution to the development of automatic musical genre recognition is the use of hybrid ensembles formed from deep neural networks and classical classifiers, together with an optimization process applied to the voting of the individual classifiers. Finally, differential evolution algorithms are used to further improve the classification quality. The proposed evolutionary algorithm shows an improvement in comparison with other optimization methods.
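
One plausible reading of "optimizing the voting process" is tuning per-classifier weights in a soft vote. A minimal sketch using SciPy's differential evolution (the validation predictions here are random placeholders standing in for the ensemble members' outputs):

    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(0)
    n_clf, n_samples, n_genres = 4, 200, 5
    # hypothetical class-probability predictions of each classifier on a validation set
    probs = rng.dirichlet(np.ones(n_genres), size=(n_clf, n_samples))
    y_true = rng.integers(0, n_genres, size=n_samples)

    def neg_accuracy(weights):
        mixed = np.tensordot(weights, probs, axes=1)   # weighted soft vote
        return -np.mean(mixed.argmax(axis=1) == y_true)

    result = differential_evolution(neg_accuracy, bounds=[(0, 1)] * n_clf, seed=0)
    print(result.x, -result.fun)                       # tuned weights, accuracy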

Accelerated pattern search with variable solution size for simultaneous instance selection and generation

The search for the optimum in a mixed continuous-combinatorial space is a challenging task, since it requires operators that handle both natures of the search domain. Instance reduction (IR), an important pre-processing technique in data science, is often performed in separate stages, combining instance selection (IS) first and subsequently instance generation (IG). This paper investigates a fast optimisation approach for IR that considers the two stages at once. This approach, namely Accelerated Pattern Search with Variable Solution Size (APS-VSS), is characterised by a variable solution size, an accelerated objective function computation, and a single-point memetic structure designed for IG.

APS-VSS is composed of a global search crossover and three local searches (LS). The global operator prevents premature convergence to local optima, whilst the three LS algorithms optimise the reduced set (RS). Furthermore, by using the k-nearest neighbours algorithm as a base classifier, APS-VSS exploits the search logic of the LS to accelerate objective function computation by orders of magnitude. The experiments show that APS-VSS outperforms existing algorithms using the single-point structure and is statistically as competitive as state-of-the-art IR techniques regarding accuracy and reduction rates, while significantly reducing the runtime.

A distributed particle swarm optimization algorithm for distributed clustering

With the rapid development of the Internet of Things, distributed clustering has emerged as an attractive research topic. Although evolutionary computation (EC) has been introduced as an effective approach for solving complex data clustering problems, few studies use EC for distributed clustering. In this paper, we propose a distributed particle swarm optimization (PSO) algorithm for distributed clustering. The main idea is to exploit the naturally distributed computing character of EC to solve distributed clustering. Since each site only knows part of the data, the performance of distributed clustering algorithms usually deteriorates. To address this challenge, the proposed algorithm consists of two main steps. First, PSO is applied at each local site to perform local data clustering. Second, to combine the local clustering results, the K-means algorithm is used at the global site to generate the global clustering result. Experimental results demonstrate the overall performance superiority of the proposed algorithm over two other algorithms.

Multi-task optimisation for multi-objective feature selection in classification

Many effective evolutionary multi-objective feature selection algorithms have been developed in recent years. However, most of them address feature selection tasks independently, while in real-world applications many feature selection tasks are closely related to each other and share common knowledge. Multi-task optimisation, which aims to address multiple related optimisation tasks simultaneously and share common knowledge across them, can benefit feature selection, yet it is seldom considered for this purpose. In this work, we develop a multi-task multi-objective optimisation algorithm for feature selection in classification, with the aim of capturing and sharing common knowledge across related feature selection tasks. To evaluate its effectiveness, we conduct a set of experiments comparing its performance with that of a single-task multi-objective feature selection algorithm on three sets of related feature selection tasks. With the help of knowledge transfer, our new algorithm significantly improves feature selection performance and is more efficient.

KDE-GAN: enhancing evolutionary GAN with knowledge distillation and transfer learning

Generative Adversarial Networks (GANs) are a powerful class of algorithms for generating artificial data similar to a set of given data. Evolutionary GAN (E-GAN) is a state-of-the-art variant of GAN that combines evolutionary elements such as mutation, fitness evaluation and selection to address the vanishing gradient and mode collapse problems of the classic GAN. This paper further improves E-GAN through knowledge distillation and knowledge transfer. The new method, known as Knowledge Distillation E-GAN (KDE-GAN), incorporates the student-teacher architecture and the fine-tuning of transfer learning to assist the evolutionary GAN training process. Our experiments on benchmark data sets show that KDE-GAN improves both performance and efficiency: knowledge distillation and knowledge transfer accelerate the learning process while reaching better FID (Fréchet Inception Distance) scores.

Scalable evolutionary hierarchical reinforcement learning

This paper investigates a novel method combining Scalable Evolution Strategies (S-ES) and Hierarchical Reinforcement Learning (HRL). S-ES, named for its excellent scalability, was popularised after demonstrating performance comparable to state-of-the-art policy gradient methods. However, S-ES has not been tested in conjunction with HRL methods, which enable temporal abstraction and thus allow agents to tackle more challenging problems. We introduce a novel method merging S-ES and HRL, which creates a highly scalable and compute-efficient algorithm. We demonstrate that the proposed method benefits from S-ES's scalability and indifference to delayed rewards. This results in our main contribution: significantly higher learning speed and competitive performance compared to gradient-based HRL methods across a range of tasks.

A comparative study of GP-based and state-of-the-art classifiers on a synthetic machine learning benchmark

In this paper we compare the performance of genetic programming-based symbolic classifiers on a novel synthetic machine learning benchmark called DIGEN. This framework, and its collection of 40 different classification problems, was designed specifically to differentiate the performance of leading machine learning methods.

Bootstrapped fitness critics with bidirectional temporal difference

Evolutionary algorithms (EAs) are well suited for solving many real-world multiagent coordination problems with global, long-term feedback. However, EAs struggle when the feedback becomes sparse and uninformative. In such cases, a system designer can use Fitness Critics, functional models that estimate the value of an agent's contribution and transform the sparse domain feedback into a dense reward signal. However, existing methods for updating fitness critics do not leverage temporal information about when a reward is received. Ideally, temporal difference (TD) methods could leverage this temporal information to bootstrap Fitness Critics. Yet, due to the structure of Fitness Critics, the direct application of TD methods to coevolutionary algorithms results in Fitness Critics that under-represent rewards received earlier in the episode. This paper introduces Bidirectional Fitness Critics (BFCs), which make use of a novel bidirectional temporal difference method to successfully bootstrap the training of fitness critics with temporal reward information, without under-representing early rewards. The paper demonstrates a significant increase in the converged performance of agents coevolved with BFCs on a multiagent coordination domain.

Cost-sensitive classification tree induction as a bi-level optimization problem

Data imbalance remains a challenging issue in data classification. In the literature, the cost-sensitive approach has been used to deal with this challenge. Despite its interesting results, the manual design of cost matrices is still the main shortcoming of this approach: the data engineer faces great difficulty in defining the misclassification costs, especially in the absence of domain-specific knowledge. Recent works suggest the use of genetic programming as an effective tool to design classification trees with automatically learned costs. Although promising results were obtained, evaluating a classification tree with a single cost matrix is not a wise choice. Indeed, a precise and fair evaluation of tree quality requires trying several misclassification cost matrices. Motivated by this observation, we propose in this paper a bi-level model of the cost-sensitive classification tree induction problem in which the upper level evolves the classification trees, while the cost matrix of each tree is optimized at the lower level. Our bi-level model is solved using an existing co-evolutionary algorithm, and the resulting method is named Bi-COS. Comparative experimental results on several imbalanced benchmark datasets show the merits of Bi-COS with respect to the state of the art.

GA-auto-PU: a genetic algorithm-based automated machine learning system for positive-unlabeled learning

Positive-Unlabeled (PU) learning is a growing field of machine learning that now comprises numerous algorithms; their number is so large that an extensive manual search to select the best algorithm for a given task is impractical. As such, the area of PU learning could benefit from an Automated Machine Learning (Auto-ML) system that selects the best algorithm for a given input dataset from a pre-defined set of candidates. This work proposes such a system: GA-Auto-PU, a Genetic Algorithm-based Auto-ML system that generates PU learning algorithms. Experiments with 20 real-world datasets show that GA-Auto-PU significantly outperformed a state-of-the-art PU learning method.

DEvS: data distillation algorithm based on evolution strategy

The development of machine learning solutions often relies on training with large labeled datasets. This raises challenges in terms of data storage, data privacy protection, and longer model training times. One possible solution to these problems is dataset distillation: the process of creating a smaller dataset while maximizing the preservation of its task-related information. In this paper, a new dataset distillation algorithm called DEvS is proposed, which uses an evolution strategy approach to condense the training samples initially available for an image classification task while minimizing the loss of classification accuracy. Experiments on CIFAR-10 demonstrate the competitiveness of the proposed approach. Moreover, contrary to recent trends, DEvS performs derivative-free image generation and therefore scales better to larger input image sizes.

Chaotic genetic bee colony: combining chaos theory and genetic bee algorithm for feature selection in microarray cancer classification

Evolutionary and swarm algorithms show great effectiveness in feature selection, classification, and other optimization tasks. Prominent examples include the Genetic Algorithm, Particle Swarm Optimization, Artificial Bee Colony, and the Genetic Bee Colony (GBC), the last being a combination of the Genetic Algorithm and Artificial Bee Colony. Our study proposes an improvement over the GBC algorithm that incorporates chaotic behavior, drawn from Chaos Theory, into its foundation. In addition, we modified the Onlooker Bee and Scout Bee phases to improve the proposed model's exploration and exploitation capabilities. Our novel proposal, the Chaotic Genetic Bee Colony (CGBC), showed very competitive performance compared to GBC and other metaheuristics. CGBC achieved the highest classification accuracy and the lowest average number of selected genes, showing the importance of our new proposal.

Binary and multinomial classification through evolutionary symbolic regression

We present three evolutionary symbolic regression-based classification algorithms for binary and multinomial datasets: GPLearnClf, CartesianClf, and ClaSyCo. Tested over 162 datasets and compared to three state-of-the-art machine learning algorithms---XGBoost, LightGBM, and a deep neural network---we find our algorithms to be competitive. Further, we demonstrate how to automatically find the best method for one's dataset through the use of a state-of-the-art hyperparameter optimizer.

Evolving convolutional neural networks for intrusion detection system using hybrid multi-strategy aquila optimizer

As the attack methods of intruders become more complicated and intelligent, the capability of intrusion detection systems (IDS) must improve to match. In recent years, deep learning has been widely used in IDS because of its advantages in processing high-dimensional data, extracting hidden information from data, and handling data imbalance in the network. In addition, swarm intelligence (SI) algorithms, as stochastic optimization methods, have been extensively employed on various optimization problems. In this paper, we propose a hybrid multi-strategy aquila optimizer (HMAO) to obtain the optimal subset of features, extracted by convolutional neural networks, with which to train the IDS classifier. We demonstrate the superiority of HMAO in standard benchmark function experiments and then evaluate the proposed algorithm for IDS on the UNSW-NB15 dataset. The experimental results show that HMAO greatly outperforms the original algorithm in its capacity to find excellent solutions. Furthermore, in the IDS application, the proposed algorithm performs better than several feature engineering methods from state-of-the-art related works in terms of Accuracy, TPR, FPR, and F-score.

EvoJAX: hardware-accelerated neuroevolution

Evolutionary computation has been shown to be a highly effective method for training neural networks, particularly when employed at scale on CPU clusters. Recent work has also showcased its effectiveness on hardware accelerators such as GPUs, but so far such demonstrations have been tailored to very specific tasks, limiting applicability to other domains. We present EvoJAX, a scalable, general-purpose, hardware-accelerated neuroevolution toolkit. Building on top of the JAX library, our toolkit enables neuroevolution algorithms to work with neural networks running in parallel across multiple TPUs/GPUs. EvoJAX achieves very high performance by implementing the evolution algorithm, neural network and task all in NumPy, which is compiled just-in-time to run on accelerators. We provide extensible examples of EvoJAX for a wide range of tasks, including supervised learning, reinforcement learning and generative art. Since EvoJAX can find solutions to most of these tasks within minutes on a single accelerator, compared to hours or days when using CPUs, our toolkit can significantly shorten the iteration cycle of evolutionary computation experiments. EvoJAX is available at https://github.com/google/evojax
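
The underlying pattern (not EvoJAX's actual API) can be sketched as a jitted evolution step whose population evaluation is vectorised with vmap; here a basic OpenAI-style ES on a toy objective, with the task rollout replaced by a placeholder:

    import jax
    import jax.numpy as jnp

    def task_fitness(params):              # stand-in for a neural-network rollout
        return -jnp.sum(params ** 2)

    eval_pop = jax.vmap(task_fitness)      # evaluate the whole population at once

    @jax.jit
    def es_step(theta, key, pop=512, sigma=0.1, lr=0.05):
        eps = jax.random.normal(key, (pop, theta.size))
        fit = eval_pop(theta + sigma * eps)
        fit = (fit - fit.mean()) / (fit.std() + 1e-8)   # normalise fitnesses
        grad = (fit @ eps) / (pop * sigma)              # ES gradient estimate
        return theta + lr * grad

    theta = jnp.zeros(32)
    key = jax.random.PRNGKey(0)
    for _ in range(100):
        key, sub = jax.random.split(key)
        theta = es_step(theta, sub)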

Designing a novel and high performance algorithmic trading model using evolutionary AutoML and technical analysis

Trading point prediction is the task of identifying ideal buy and sell points for stocks to maximize profit. It is a widely studied application of Machine Learning on time series data, due to the abundance of available historical data and the challenges presented by stocks' noisy nature. However, we have found that this application has not drawn significant attention from the automated machine learning (AutoML) community, despite being an ideal fit given the large number of available models and algorithms that can address the problem. In this research, we employ the Evolutionary Multi-objective Algorithm Design Engine (EMADE), a search framework that uses genetic programming to automate model creation and hyperparameter optimization. Previously, EMADE has produced novel algorithms for tabular, time-series, and image-based problems. This research extends EMADE's capabilities to trading algorithms by adding technical indicator (TI) primitives and novel objective functions. We present analyses of objective sets, learners, TI primitives, and their hyperparameters for this problem. To measure the effectiveness of the evolved models, we evaluate profit percentage on historical US stock data. We have found that the models discovered through EMADE's AutoML search techniques produce returns of up to 36.38% on average, more than twice that of the state of the art.
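
As an example of what a TI primitive might look like (this is a generic moving-average crossover signal, not necessarily one of EMADE's actual primitives):

    import numpy as np

    def sma(prices, window):
        # simple moving average: a typical technical-indicator primitive
        return np.convolve(prices, np.ones(window) / window, mode="valid")

    # synthetic random-walk prices standing in for historical stock data
    prices = np.cumsum(np.random.default_rng(0).normal(size=500)) + 100.0
    fast, slow = sma(prices, 10), sma(prices, 50)
    # align lengths, then emit +1 (fast above slow) / -1 (fast below slow)
    signal = np.sign(fast[-len(slow):] - slow)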

Comparing different metaheuristics for model selection in a supervised learning classifier system

In the context of tasks that heavily involve human interaction and expert knowledge, e.g., operator guidance in manufacturing, the possibility of decision verification by the user is a key requirement for inspiring confidence in a system and its predictions. Rule-based Machine Learning offers one way to create such systems, with Learning Classifier Systems being a family of algorithms whose models are by design human-interpretable. Obtaining a rule base that is as compact and accurate as possible is a mandatory prerequisite for increasing comprehensibility, and metaheuristics such as Genetic Algorithms or Particle Swarm Optimization are powerful approaches to reducing the size of large rule bases. In particular, this paper analyzes five population-based metaheuristics and their ability to compose solutions (rule subsets) as part of a newly developed rule-based learning system, the Supervised Rule-based Learning System (SupRB). The experiments suggest that all the metaheuristics can significantly reduce the complexity and filter out obstructive rules, increasing the prediction quality in the process.

Fairness in generative modeling: do it unsupervised!

We design general-purpose algorithms for addressing fairness issues and mode collapse in generative modeling. More precisely, to design fair algorithms for as many sensitive variables as possible, including variables we might not be aware of, we assume no prior knowledge of sensitive variables: our algorithms use unsupervised fairness only, meaning no information related to the sensitive variables is used for our fairness-improving methods. All images of faces (even generated ones) have been removed to mitigate legal risks.

POSTER SESSION: Evolutionary multiobjective optimization

Estimating the quality of initial populations in multi-objective evolutionary algorithms

The initial population is the starting point of every evolutionary algorithm (EA) and is widely understood to be an important factor in the overall performance of the algorithm. In this paper, we explore the possibility of estimating the quality of the initial population in multi-objective evolutionary algorithms (MOEAs) without expensive benchmarking. While there are well-established metrics for measuring the performance of a MOEA, they are often calculated only on the non-dominated solutions and show drawbacks when applied as a measure for the first generation. Here, we discuss the challenges of estimating the quality of the initial population and propose a new metric dedicated to this purpose. Finally, we evaluate the accuracy of our metric on a set of benchmark problems.

Multi-objective counterfactual fairness

When machine learning is used to automate judgments, e.g. in areas like lending or crime prediction, incorrect decisions can lead to adverse effects for the affected individuals. This occurs, for example, if the data used to train these models is based on prior decisions that are unfairly skewed against specific subpopulations. If models are to automate decision-making, they must account for these biases to prevent perpetuating or creating discriminatory practices. Counterfactual fairness audits models with respect to a notion of fairness that asks for equal outcomes between a decision made in the real world and in a counterfactual world where the individual subject to the decision comes from a different protected demographic group. In this work, we propose a method to conduct such audits without access to the underlying causal structure of the data-generating process, by framing the audit as a multi-objective optimization task that can be efficiently solved using a genetic algorithm.

Exploring the decision and objective space of SAT constrained multi-objective problems

Although most real-world problems include many constraints that limit the set of feasible solutions, constrained benchmark problems have been studied significantly less than unconstrained ones. SAT constraints aid the study of constrained multi-objective optimization by allowing any number of equality and inequality constraints, with a controllable feasibility ratio, to be added to state-of-the-art benchmark problems. This paper studies the features of SAT constraints by examining properties such as the distribution, clustering and distance of feasible solutions in the decision and objective spaces when the constraints are added to 0-1 multi-objective knapsack problems and multi-objective MNK-landscapes.

Application of nature inspired algorithms to multi-objective optimization of new generation network problem

The subject of this paper is the application of nature-inspired metaheuristic algorithms to multi-criteria optimization in new generation optical networks. In the considered optical network, criteria related to the structure and topology of the network and the equipment used were taken into account. The network criteria include the length of optical channels, optical fiber attenuation, and dispersion, while the hardware criteria include the cost of transponders and the finite range of frequency slots in the optical spectrum of the fiber. Several nature-inspired metaheuristics were applied to the multi-criteria optical network optimization problem. The proposed algorithm, based on the bee algorithm, was compared with others taken from the literature. Simulations of all algorithms were implemented and carried out on test networks with topologies typical of telecommunication networks. The proposed and improved algorithm obtained good results that encourage further work and research.

A computationally fast but approximate MIP-DoM calculation for multi-objective optimization

Quality indicators play an essential role in multi- and many-objective optimization. Dominance move (DoM) is a binary indicator that compares two solution sets. It measures the minimum total move that the elements of one set P must make so that every element of Q is dominated by at least one element of the moved set P'. As an indicator, it presents outstanding characteristics: it is Pareto compliant, requires no reference point or set, and is robust with respect to dominance-resistant solutions. However, computing DoM is combinatorial in nature. A recent paper proposed a mixed-integer programming model, MIP-DoM, which exhibits computational complexity polynomial in the number of solutions; in practical situations, however, its calculation on some problems may take hours. Using a cluster-based, divide-and-conquer strategy, this paper presents a computationally fast, approximate MIP-DoM to deal with the combinatorial nature of the original calculation. Tests on classical problem sets show that our approach is computationally faster and provides accurate estimates of the exact MIP-DoM.

BOBEA: a bi-objective biclustering evolutionary algorithm for genome-wide association analysis

The behavior of many diseases is still not well understood by researchers. Genome-Wide Association (GWA) analyses have recently become a popular approach to discovering the genetic causes of many complex diseases; they can lead to the discovery of genetic factors potentially involved in disease susceptibility. These studies typically use the most common form of genetic variation between individuals, the Single Nucleotide Polymorphism (SNP). Indeed, many complex diseases have been revealed to be associated with combinations of interacting SNPs, yet identifying such interactions is considered difficult. Therefore, various unsupervised data mining methods have been developed in the literature to identify such variation involved in disease. In this work, a biclustering method is adopted to detect possible associations between SNP markers and disease susceptibility. Biclustering is an unsupervised classification technique that plays an increasingly important role in the study of modern biology. We propose an evolutionary algorithm based on a bi-objective approach for biclustering in Genome-Wide Association analysis. An experimental study conducted on synthetic data to evaluate the performance of the proposed algorithm yields promising results.

A novel evolutionary framework based on a family concept for solving multi-objective bilevel optimization problems

Multi-objective Bilevel Optimization (MOBO) problems are challenging because they include an optimization problem as part of the constraints within a multi-objective problem. Different evolutionary algorithms have been proposed to handle these MOBO problems, but they implement a conservative solution representation that was not designed for bilevel problems. This work presents a novel evolutionary framework based on a suitable and adaptive representation of MOBO problem solutions. The adopted representation is based on the family concept, inspired by the biological classification of species, which helps to group representative solutions and thereby promotes diversity. Moreover, a non-dominated sorting scheme and a density estimator (based on the hypervolume indicator) are adapted to the family concept. In the experiments, the proposed framework shows competitiveness against a state-of-the-art algorithm on challenging constrained MOBO problems.

Towards multi-objective optimization of sustainable insect production chains

Due to the relatively recent history of industrial insect farming, there are still no extensive studies on the optimization of insect production chains. In this context, the present work aims to be a first step towards filling this gap. A tentative set of mathematical models is proposed to take into account three different, conflicting objectives: maximizing economic viability, minimizing environmental impacts, and maximizing societal benefits. The state-of-the-art multi-objective algorithm NSGA-II is used to obtain approximate Pareto fronts of solutions, which are later analyzed to identify suitable trade-offs. While preliminary, the results are encouraging for the computer-assisted design and development of sustainable insect production chains. Future work will consider more extensive models, able to simulate scenarios in different European countries, and will include parts of the chain such as the transportation of goods to and from the production facilities.

On the potential of automated algorithm configuration on multi-modal multi-objective optimization problems

Hardness of Multi-Objective (MO) continuous optimization problems results from an interplay of various problem characteristics, e.g., the degree of multi-modality. We present a benchmark study of classical and diversity-focused optimizers on multi-modal MO problems based on automated algorithm configuration. We show the large effect of the latter and investigate the trade-off between convergence in objective space and diversity in decision space.

Fair feature subset selection using multiobjective genetic algorithm

The feature subset selection problem aims at selecting the relevant subset of features to improve the performance of a Machine Learning (ML) algorithm on training data. Some features in data can be inherently noisy, costly to compute, improperly scaled, or correlated to other features, and they can adversely affect the accuracy, cost, and complexity of the induced algorithm. The goal of traditional feature selection approaches has been to remove such irrelevant features. In recent years, ML has been making a noticeable impact on the decision-making processes of our everyday lives. We want to ensure that these decisions do not reflect biased behavior towards certain groups or individuals based on protected attributes such as age, sex, or race. In this paper, we present a feature subset selection approach that improves both fairness and accuracy objectives and computes Pareto-optimal solutions using the NSGA-II algorithm. We use statistical disparity as a fairness metric and F1-score as a metric for model performance. Our experiments on the most commonly used fairness benchmark datasets with three different machine learning algorithms show that, using the evolutionary algorithm, we can effectively explore the trade-off between fairness and accuracy.
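
A minimal sketch of how the two objectives might be computed for a candidate feature subset, assuming scikit-learn and a binary protected attribute; the function names (`statistical_disparity`, `objectives`) are illustrative, and the train/validation split is omitted for brevity, so this is not the paper's actual evaluation code:

```python
import numpy as np
from sklearn.metrics import f1_score

def statistical_disparity(y_pred, protected):
    # |P(y_hat = 1 | protected = 0) - P(y_hat = 1 | protected = 1)|
    g0, g1 = y_pred[protected == 0], y_pred[protected == 1]
    return abs(g0.mean() - g1.mean())

def objectives(mask, X, y, protected, model):
    # Bi-objective evaluation of a boolean feature mask for NSGA-II:
    # minimise (1 - F1, statistical disparity). `model` is any sklearn
    # classifier; fitting and scoring on the same data is a simplification.
    model.fit(X[:, mask], y)
    y_pred = model.predict(X[:, mask])
    return 1.0 - f1_score(y, y_pred), statistical_disparity(y_pred, protected)
```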

Surrogate models for IoT task allocation optimization

The optimization of Task Allocation is an important and ongoing problem in the field of Internet-of-Things (IoT) networks. A sometimes overlooked aspect is the computational overhead of the employed optimization algorithms, which may require expensive simulations. Such simulations may be too taxing for the limited hardware found in such networks. To alleviate this, this work proposes two distinct surrogate models suited to replace the simulation-based evaluation in task allocation optimization. Experiments show that both models provide results comparable to a simulation-based optimization while requiring far fewer computational resources during the evaluation.

Extending the push and pull search framework with boundary search for constrained multi-objective optimization

Constraints are a reality in many real-world application areas, adding a feasibility requirement to the existing multi-objective challenge. Further, the presence of complex constraints poses a significant challenge to multi-objective evolutionary algorithms. A recently proposed biphasic multi-objective evolutionary framework for constrained multi-objective optimization problems is the Push and Pull Search framework, which benefits from a strong exploration of the constrained landscape while searching for the unconstrained Pareto front during its first phase. The work herein extends the Push and Pull Search framework by expanding landscape information gathering in the push phase, adding a binary search of the feasible and infeasible regions, and creating a suitably diverse population and improved initialization for the push phase.

An analysis on effectiveness of estimated convergence points for enhancement of multi-objective optimization algorithms

Using an estimated convergence point to enhance the search of an evolutionary computation (EC) algorithm is one method for improving EC convergence speed. In this work, we analyze the practicality of estimated convergence point(s) in multi-objective optimization algorithms (MOAs) using various benchmark functions. We apply two estimation methods to seven MOAs and evaluate the effectiveness of the estimated points on 15 multi-objective benchmark functions. We compare and analyze the original algorithms and their adaptations using estimated convergence point(s) with statistical evaluation metrics. The results indicate that the estimated convergence point(s) are effective for MOAs, and this effectiveness is analyzed, investigated, and discussed. We found that using estimated convergence points can effectively enhance the optimization of various MOAs. Their performance depends not only on the fitness landscape but also on other, as yet unknown, factors, which motivates our future work.
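
One published way of estimating a convergence point fits the point closest, in the least-squares sense, to the lines defined by parent-to-offspring moving vectors. The numpy sketch below implements that geometric fit; the function name and the assumption that this is the estimation method used here are illustrative:

```python
import numpy as np

def estimate_convergence_point(parents, offspring):
    # Least-squares point closest to the lines defined by parent -> offspring
    # moving vectors: solve (sum_i M_i) x = sum_i M_i p_i, where M_i projects
    # orthogonally to the i-th unit moving direction.
    dim = parents.shape[1]
    A, b = np.zeros((dim, dim)), np.zeros(dim)
    for p, o in zip(parents, offspring):
        d = (o - p) / np.linalg.norm(o - p)   # unit moving direction
        M = np.eye(dim) - np.outer(d, d)      # projector orthogonal to d
        A += M
        b += M @ p
    return np.linalg.lstsq(A, b, rcond=None)[0]
```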

A constraint cone decomposition evolutionary algorithm with dual populations

Evolutionary algorithms are very popular for constrained multi-objective optimization problems (CMOPs). It is critical for constrained multi-objective evolutionary algorithms (CMOEAs) to cross infeasible regions and acquire the complex feasible fronts caused by constraints. In this paper, a constraint cone decomposition evolutionary algorithm (CCDEA) is proposed for CMOPs. CCDEA decomposes a CMOP into a series of constrained subproblems, and every subproblem is further decomposed into multiple constrained sub-layers. CCDEA maintains a decomposition-based population to preserve the best individual at each sub-layer; it thus makes full use of the information in both feasible and infeasible solutions to cross infeasible regions in a principled way. In addition, a dominance-based population is designed in CCDEA to combine the advantages of decomposition and dominance, strengthening the convergence and distribution of the obtained fronts. Experimental results on DTLZ test instances with three types of constraints indicate that CCDEA achieves better overall performance than three other popular algorithms.

The effect of epigenetic blocking on dynamic multi-objective optimisation problems

Hundreds of Evolutionary Computation approaches have been reported. From an evolutionary perspective they focus on two fundamental mechanisms: cultural inheritance in Swarm Intelligence and genetic inheritance in Evolutionary Algorithms. Contemporary evolutionary biology looks beyond genetic inheritance, proposing an "Extended Evolutionary Synthesis". Many concepts from the Extended Evolutionary Synthesis have been left out of Evolutionary Computation as interest has moved towards specific implementations of the same general mechanisms. One such concept is epigenetic inheritance, which is increasingly considered central to evolutionary thinking. Epigenetic mechanisms allow quick non- or partially-genetic adaptations to environmental changes. Dynamic multi-objective optimisation problems represent similar circumstances to the natural world where fitness can be determined by multiple objectives (traits), and the environment is constantly changing.

This paper determines whether the advantages that epigenetic inheritance provide in the natural world can be replicated in dynamic multi-objective optimisation problems. Specifically, an epigenetic blocking mechanism is applied to a state-of-the-art multi-objective genetic algorithm, MOEA/D-DE, and its performance is compared on three sets of dynamic test functions: FDA, JY, and UDF. The mechanism shows improved performance on 12 of the 16 test problems, providing initial evidence that more algorithms should explore the wealth of epigenetic mechanisms seen in the natural world.

Learning to balance exploration and exploitation in pareto local search for multi-objective combinatorial optimization

As a natural extension of local search, Pareto local search (PLS) is a basic building block in many state-of-the-art metaheuristics for multi-objective combinatorial optimization problems (MCOPs). However, basic PLS suffers from a low convergence rate, since it always fully explores the neighborhood of each unexplored solution, which is unnecessary. Some studies have tried to introduce heuristic design into PLS to balance exploration and exploitation. In this paper, we handle this issue with a learning-based framework. In the framework, PLS applies the firstK strategy: it stops exploring a solution's neighborhood once it has obtained K non-dominated solutions, where K is adaptively controlled by a neural network based on observations collected during the search. Training the neural network is modeled as a reinforcement learning problem; the proposed PLS variant is thus called PLSNN. In the experiments, we compared the performance of PLS, PLSNN, and several PLS variants with heuristic design on the multi-objective unconstrained binary quadratic programming problem (mUBQP). The experimental results show that PLSNN performed significantly better than its counterparts on all test instances.
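
A toy sketch of the firstK cut-off, assuming minimization; the helpers `neighbors` and `evaluate` are illustrative placeholders, and in PLSNN the value of K would come from the learned policy rather than being a fixed argument:

```python
import random

def dominates(a, b):
    # Pareto dominance for minimisation: a is no worse everywhere, better somewhere.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def explore_first_k(sol, k, neighbors, evaluate):
    # Explore sol's neighbourhood but stop after collecting k neighbours that
    # are not dominated by sol (the firstK early cut-off).
    found = []
    nbhd = neighbors(sol)
    random.shuffle(nbhd)
    for n in nbhd:
        if not dominates(evaluate(sol), evaluate(n)):
            found.append(n)
            if len(found) >= k:
                break
    return found
```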

POSTER SESSION: Evolutionary numerical optimization

A hybrid self-adapting multi-swarm algorithm based on PSO and CMA-ES for continuous dynamic optimization

In this study, a new multi-swarm hybrid algorithm is designed for dynamic optimization problems. It is based on two well-known stochastic approaches, Particle Swarm Optimization (PSO) and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), applied with additional modifications that help them find better solutions before the environment changes. Additionally, an adaptation strategy is proposed to regulate the number of active swarms, so that new swarms are brought into existence or redundant swarms are removed to maintain multi-swarm diversity. The Generalized Moving Peaks Benchmark is used to evaluate the performance of the proposed algorithm and to compare it to alternative approaches. The obtained results demonstrate the usefulness of the new algorithm, as it outperformed the alternative approaches.

Dynamic perturbation for population diversity management in differential evolution

The performance of Differential Evolution (DE) is closely related to population diversity, since its mechanism for generating offspring depends wholly on the differences between individuals. This paper presents a simple perturbation technique to maintain population diversity, in which the noise intensity is adjusted dynamically during the search. A modification of the well-known L-SHADE adaptation method is also introduced to manipulate the convergence behaviour of DE. By incorporating these techniques, we develop a new variant of DE called S-LSHADE-DP. Experimental results on the benchmark suite of the CEC '22 competition show that S-LSHADE-DP is highly competitive with current state-of-the-art DE-based algorithms. The implementation of S-LSHADE-DP is available at https://github.com/cuonglvsoict/S-LSHADE-DP.
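The paper adjusts the noise intensity dynamically during the search; the sketch below uses a simple linear decay purely as a stand-in for that schedule (the schedule, function name, and defaults are illustrative assumptions, not S-LSHADE-DP's actual rule):

```python
import numpy as np

def perturb(pop, gen, max_gen, sigma0=0.1, rng=np.random.default_rng()):
    # Add zero-mean Gaussian noise to every individual; the intensity decays
    # linearly over the run, so diversity is injected mostly early on.
    sigma = sigma0 * (1.0 - gen / max_gen)
    return pop + rng.normal(0.0, sigma, size=pop.shape)
```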

Distributed evolution strategies for large-scale optimization

As their underlying models become larger and data-driven, an increasing number of modern real-world applications can be mathematically formulated as large-scale continuous optimization. In this paper, we propose a distributed evolution strategy (DES) for large-scale black-box optimization (specifically with memory-costly function evaluations), running on a mainstream cluster computing platform. To amortize the memory cost, DES utilizes distributed shared memory to support parallel function evaluations. To better fit the scalable computing architecture of interest, DES adopts the well-known island model to distribute a low-rank version of covariance matrix adaptation (CMA), because the quadratic complexity of the standard CMA does not scale well. For a proper trade-off between exploration and exploitation, DES adjusts strategy parameters on the fly at two time-scale levels. Experiments show its efficiency on most of the chosen test functions.

A layered learning estimation of distribution algorithm

Though estimation of distribution algorithms (EDAs) have seen success in problem optimization, most of them suffer from a sharp shrinkage of the covariance, which may lead to premature convergence. To alleviate this issue, this paper proposes a layered learning estimation of distribution algorithm (LLEDA) that maintains multiple probability distribution models. Specifically, LLEDA first separates the population into several layers based on fitness. Then, the mean position of each layer is computed. Subsequently, the estimated mean position of each lower layer randomly learns from that of a randomly selected higher layer, so that the mean positions of lower layers are drawn towards promising areas found in the current population. Finally, the covariance of each layer is estimated based on the newly generated mean position and the individuals in that layer. In this way, multiple high-quality probability models are maintained and used to sample promising and diversified offspring separately. Comparative experiments on a widely used benchmark problem set demonstrate that the proposed LLEDA achieves competitive or even much better performance than several state-of-the-art and representative EDAs.
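
A numpy sketch of the layering idea under stated assumptions: minimization, a random convex step as the "randomly learn from a higher layer" rule, and per-layer covariance computed around the shifted mean. The function name and the learning rule's exact form are illustrative readings of the abstract, not LLEDA's published update:

```python
import numpy as np

def layered_models(pop, fitness, n_layers, rng=np.random.default_rng()):
    # Sort by fitness (ascending, so layer 0 holds the best individuals under
    # minimisation) and split the population into layers.
    layers = np.array_split(pop[np.argsort(fitness)], n_layers)
    means = [layer.mean(axis=0) for layer in layers]
    models = []
    for i, layer in enumerate(layers):
        mean = means[i].copy()
        if i > 0:  # a lower layer's mean learns from a randomly chosen higher layer
            mean += rng.random() * (means[rng.integers(0, i)] - mean)
        diffs = layer - mean  # covariance estimated around the *new* mean
        cov = diffs.T @ diffs / len(layer) + 1e-8 * np.eye(pop.shape[1])
        models.append((mean, cov))
    return models
```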

The effect of mirrored sampling with active CMA and sample reuse in the CMAES-APOP algorithm

CMAES-APOP is a variant of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) that adjusts the population size of CMA-ES to deal with multi-modal functions. In this paper, we investigate two methods to improve the performance of this algorithm. The first is based on mirrored sampling with active covariance matrix adaptation. The second keeps some search points of the previous iteration for the evolution of the current iteration when the population size is large. We show that these techniques can enhance the performance of CMAES-APOP on some benchmark multi-modal functions, especially when used simultaneously.
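
Mirrored sampling itself is simple to state: each Gaussian step is used twice, once with each sign, which derandomizes the sampling. A minimal numpy sketch (the function name is illustrative; the active-CMA update it would feed into is omitted):

```python
import numpy as np

def mirrored_samples(mean, sigma, C, lam, rng=np.random.default_rng()):
    # Draw lam/2 Gaussian steps and mirror each one: x = m + sigma*y and
    # x' = m - sigma*y share the same step direction with opposite sign.
    L = np.linalg.cholesky(C)
    ys = [L @ rng.standard_normal(len(mean)) for _ in range(lam // 2)]
    return [mean + s * sigma * y for y in ys for s in (+1, -1)]
```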

Improving the differential evolution strategy by coupling it with CMA-ES

Differential Evolution Strategy (DES) is a method that combines differential mutation with the search direction adaptation mechanisms used by the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Although earlier research on that algorithm demonstrated its good efficiency, it was still outperformed by the combined and hybrid methods that have won single-objective bound-constrained numerical optimization competitions. This paper reports on research aimed at improving the efficiency of DES in such a way that the optimization process is initially performed by DES and, after it terminates, the result is fine-tuned by CMA-ES, whose expectation vector and covariance matrix are initialized with statistics of the points generated by DES. The hybrid method is evaluated according to the problem definitions and evaluation criteria of the 2022 Special Session and Competition on Single Objective Bound Constrained Numerical Optimization. According to the numerical results, the proposed hybrid method outperforms the standard versions of both DES and CMA-ES. Moreover, the comparison of results on the CEC'2017 benchmark suite shows that the presented method is superior or comparable to other methods whose results for CEC'2017 have been reported by the competing teams.
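
A minimal sketch of the hand-over the abstract describes, assuming plain numpy and that the transfer uses the sample mean and covariance of the points DES produced; the overall-scale heuristic and the function name are illustrative assumptions, not the paper's actual initialization scheme:

```python
import numpy as np

def warm_start_stats(des_points):
    # Statistics of the points produced by DES: the mean becomes CMA-ES's
    # expectation vector; the sample covariance, split into an overall step
    # size and a normalised shape matrix, seeds its covariance adaptation.
    X = np.asarray(des_points)
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    sigma0 = np.sqrt(np.trace(cov) / X.shape[1])  # crude overall scale guess
    return mean, cov / sigma0**2, sigma0
```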

A novel dynamic analysis on multi-scale quantum harmonic oscillator algorithm using double-well function

Based on the double-well function, a novel dynamic analysis is proposed to study the optimization mechanism of the multi-scale quantum harmonic oscillator algorithm. This procedure-oriented method gives new clues for the algorithm's improvement; it can be adapted to study other algorithms' evolution processes and complements the prevailing performance-oriented benchmark testing.

POSTER SESSION: Genetic algorithms

A GA based approach for solving ring design telecommunication network

Improvement in physical network infrastructure is often required to enhance the services provided by telecommunication companies to meet consumer demand. One aspect of enhancement is to improve broadband access, which is based on Gigabit Passive Optical Network (GPON) technology for Fiber-To-The-Home (FTTH) networks. However, designing and deploying FTTH networks is costly due to infrastructure costs such as digging up the road, laying the cables, installing the junction boxes, etc.

This paper, based on a GA approach, focuses on optimizing the cost of designing a ring topology for the GPON FTTH network. Our approach consists of three steps to achieve a near-optimum solution. The first step exploits the similarity between the traveling salesman problem (TSP) and the ring design problem: a GA is used to find a TSP solution for our problem. If the solution is not valid, a second step based on another GA is executed to find a set of valid ring designs. The third step groups the obtained valid solutions and applies a customized GA with a specific crossover for further improvement. The proposed method is tested on different networks to illustrate the effectiveness of our approach for solving ring design problems.

An edge quality aware crossover operator for application to the capacitated vehicle routing problem

Permutation-type problems require each vertex to appear exactly once in a given solution. Applying a Genetic Algorithm (GA) to permutation problems therefore necessitates specialist crossover operators that avoid vertex repetition in offspring solutions whilst preserving parental paths or edges. However, these operators are typically blind to the potential quality of a given edge, relying on natural selection to ensure the quality of parental edges. Natural selection is a relative quality measure, when perhaps consideration should be given to an absolute quality measure for selecting edges. Moreover, it is proposed that the introduction of non-parental edges during crossover can be beneficial if they are selected using an absolute quality measure. Consequently, a novel crossover operator, ER-Q, is proposed that preserves parental edges whilst also exploiting edge quality. Applied to a set of Capacitated Vehicle Routing Problems (CVRP) with up to 150 vehicles, ER-Q significantly improves upon blind crossover operators, with results within just a few percent of the best known.

Neural architecture search using genetic algorithm for facial expression recognition

Facial expression is one of the most powerful, natural, and universal signals through which human beings express emotional states and intentions. The importance of accurate and innovative facial expression recognition (FER) approaches in Artificial Intelligence is thus evident. The current common practice for FER is to design convolutional neural network (CNN) architectures by hand, using human expertise. However, finding a well-performing architecture is often a tedious and error-prone process for deep learning researchers. Neural architecture search (NAS) is an area of growing interest, as demonstrated by the large number of scientific works published in recent years and the impressive results they have achieved. We propose a genetic algorithm approach that uses an ingenious encoding-decoding mechanism to automatically evolve CNNs for FER tasks, attaining high classification accuracy. The experimental results demonstrate that the proposed algorithm achieves the best-known results on the CK+ and FERG datasets, as well as competitive results on the JAFFE dataset.

The pole balancing problem from the viewpoint of system flexibility

Whereas evolutionary computation usually solves problems from scratch, organisms evolve under changing environments and possess flexibility, adapting from being good at one task to being good at a related task. There is abundant evidence of general properties that promote flexibility in nature, such as hierarchy, modularity, exploratory behavior, and degeneracy or neutrality.

Our interest is to understand whether such properties can also be identified in non-biological systems. We thus study whether a controller evolved by a genetic algorithm for one pole balancing task can be adapted to a different pole balancing task, and whether this saves training time compared to evolving a new controller from scratch. Moreover, we investigate how diversity and degeneracy in the controller population affect adaptation efficiency by promoting high-quality solutions that are both structurally and behaviorally diverse, concluding that they can potentially decrease the adaptation cost.

Boosting the convergence of a GA-based wrapper for feature selection problems on high-dimensional data

High-dimensional data often requires techniques such as Feature Selection (FS) to mitigate the curse of dimensionality. One of the most popular approaches to FS is wrapper methods, which are based on a search algorithm and a classification technique; NSGA-II and k-NN are applied in this paper. Since NSGA-II is intended to converge to a small subset of features, the generation of the initial population is crucial to speeding up the search process: the fewer features selected by the individuals of the initial population, the fewer generations are needed to achieve convergence. This work presents a novel technique to reduce the average size (number of selected features) of the individuals forming the initial population. The technique is based on a hyper-parameter, p, which controls the probability of any feature being selected by any individual of the initial population. An analysis of both convergence time and classification accuracy is performed for different values of p, concluding that p can be set to quite low values, markedly accelerating the convergence of the algorithm without affecting the quality of the solutions.
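
The initialization itself amounts to drawing each bit of each individual from a Bernoulli(p) distribution, so the expected individual size is p times the number of features. A small sketch (function name and the empty-individual guard are illustrative additions):

```python
import numpy as np

def sparse_init_population(pop_size, n_features, p, rng=np.random.default_rng()):
    # Each feature is selected independently with probability p, so the
    # expected number of selected features per individual is p * n_features.
    pop = rng.random((pop_size, n_features)) < p
    # Guard against degenerate individuals that select no features at all.
    for ind in pop:
        if not ind.any():
            ind[rng.integers(n_features)] = True
    return pop
```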

Trimming, ordering, and similarity check for DSMGA-II: DSMGA-II-TOS

The dependency structure matrix genetic algorithm II (DSMGA-II) is one of the state-of-the-art model-building genetic algorithms, capable of solving combinatorial optimization problems by exploiting their underlying structures. The linkage model generates a series of trial masks for recombining genes among chromosomes via optimal mixing. This paper proposes three improvements that adaptively adjust the scope, the order, and the receivers of trials. Specifically, the mean of the mask sizes from previous successful recombinations is used to limit the maximum sizes of later trials. Also, successful recombinations prioritize the corresponding mask sizes of subsequent trials. Finally, recombinations are confined to chromosomes that pass the proposed similarity check. An ablation study indicates that each proposed technique is indispensable. Combined with these three improvements, DSMGA-II-2E empirically consumes fewer function evaluations on most of the test problems.

Quantum strategy of population initialization in genetic algorithm

Quantum genetic algorithms are a relatively new field of study that aims to enhance the computational efficiency of the Darwinian optimization process in genetic algorithms with quantum speedup techniques. This paper introduces a strategy for applying the quantum counting algorithm to genetic algorithms, aimed in particular at enhancing the initial population setup at the beginning of optimization. More specifically, our goal is to exploit a quantum algorithm that counts the number of marked items in an unstructured list quadratically faster than classical algorithms in order to detect the presence and number of unsuitable individuals in a stochastically generated initial population, thereby starting optimization with a population of greater potential and improving performance in the later stages. The advantage of our method is examined via a conventional genetic algorithm solving the 0-1 knapsack problem with varying constraint settings, and a comparative analysis of the optimization performance is made accordingly.

Evolutionary constrained multi-task optimization: benchmark problems and preliminary results

Multi-task optimization (MTO) aims to solve multiple tasks simultaneously. However, multi-task evolutionary algorithms (MTEAs) rarely consider problems with constraints, even though most real-world optimization problems are constrained. This study presents a benchmark of constrained multi-task optimization problems (CMTOPs), modified from the CEC2017 competition on evolutionary multi-task optimization and the CEC2017 competition on constrained real-parameter optimization. Moreover, this study attempts to solve CMTOPs by incorporating constraint handling techniques into MTEAs. Experimental results demonstrate the complexity of the proposed benchmark in the context of CMTOPs and reveal the requirements for handling constraints in MTEAs. Our preliminary exploration also reveals prospects for the development of evolutionary algorithms in the area of constrained multi-task optimization. The Matlab source code can be obtained from https://github.com/intLyc/CMTO-Benchmark.

Initialization method of genetic algorithm based on improved clustering algorithm

To improve the search ability, convergence performance, and results of genetic algorithms, we propose a population initialization method based on an improved k-means algorithm. First, initial individuals spread evenly over the decision space are created by the uniform design method, and these individuals are then refined by local search techniques such as step-by-step search. Second, based on the similarity between individuals, several main search regions are aggregated. Within these regions, the individuals forming the final initial population are selected by k-means clustering.
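
A rough sketch of a clustering-based initialization in this spirit, assuming scikit-learn; plain uniform random sampling stands in for the uniform design construction, the local-search refinement step is omitted, and the function name is illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_init(bounds, n_samples, pop_size, rng=np.random.default_rng()):
    # Sample widely within the box bounds, cluster the samples, and seed the
    # GA with the cluster centres so the initial population covers the main
    # regions of the decision space.
    lo, hi = np.asarray(bounds, float).T
    X = lo + rng.random((n_samples, len(lo))) * (hi - lo)
    km = KMeans(n_clusters=pop_size, n_init=10).fit(X)
    return km.cluster_centers_
```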

Evolutionary operation setting for outcome accumulation type evolutionary rule discovery method

Association rule analysis has been widely employed as a basic technique for data mining, and extensive research has applied evolutionary computation techniques to this field. This study presents a method to evaluate the settings of evolutionary operations in an evolutionary rule discovery method characterized by solving the overall problem through the acquisition and accumulation of small results. Since the purpose of population evolution here differs from that of general evolutionary computation methods, which aim at discovering elite individuals, we examined how settings during evolution should be conceived and evaluated by visualizing the progress and efficiency of problem solving. The rule discovery method (GNMiner) is characterized by reflecting acquired information in its evolutionary operations; this study determines the relationship between the settings of the evolutionary operations, the progress of each task execution stage, and the achievement of the final result. By introducing an index that visualizes the efficiency of outcome accumulation, the study obtains knowledge on how to set up evolutionary operations for efficient rule-set discovery. This suggests the possibility of dynamically setting evolutionary operations in outcome accumulation-type evolutionary computation in future studies.

Improving DSMGA-II performance on hierarchical problems by introducing preservative back mixing

Inspired by the optimal mixing in the linkage tree gene-pool optimal mixing evolutionary algorithm, the dependency structure matrix genetic algorithm II (DSMGA-II) is one of the state-of-the-art model-building genetic algorithms. It obtains patterns from successful restricted mixing and uses them during back mixing (BM) to remove spurious linkage information, so that the linkage information actually needed stands out. However, this technique can be misled on hierarchical problems, where the linkages of later layers are mistaken for spurious ones during BM. A diversity preservation scheme that monitors entropy has been developed for DSMGA-II, but it impairs the exploration ability of DSMGA-II on plateaus. To solve this issue, this paper proposes preservative back mixing (PBM), a modified version of BM. Specifically, PBM features (i) adaptive back mixing, which automatically adjusts the power of BM, and (ii) linkage preservation, which preserves the linkages among complementary patterns. Several experiments show that PBM greatly improves DSMGA-II on hierarchical problems, including the hierarchical trap and hierarchical exclusive-or, while the NFE performance on other problems is not compromised by much.

Networks of evolution: modelling and deconstructing genetic algorithms using dynamic networks

Evolutionary computing has benefited several fields such as psychology, economics, and of course, computer science. These algorithms can tackle challenging real-world optimisation problems using selection, crossover, and mutation operators. Despite the rich literature examining evolutionary algorithms, there are unanswered questions concerning their behaviour and the interplay between operators. This work models genetic algorithms as dynamic networks where nodes represent the population and edges describe the inheritance link between individuals. Using the interaction networks as a proxy, we assessed the impact of different parameters, optimisation problems and operators (i.e. selection, crossover, mutation) on the algorithm's behaviour.

Quantum-enhanced selection operators for evolutionary algorithms

Genetic algorithms have unique properties that are useful when applied to black-box optimization. Using selection, crossover, and mutation operators, candidate solutions may be obtained without the need to calculate a gradient. In this work, we study results obtained from using quantum-enhanced operators within the selection mechanism of a genetic algorithm. Our approach frames the selection process as the minimization of a binary quadratic model with which we encode fitness and distance between members of a population, and we leverage a quantum annealing system to sample low-energy solutions for the selection mechanism. We benchmark these quantum-enhanced algorithms against classical algorithms over various black-box objective functions, including the OneMax function and functions from the IOHProfiler library for black-box optimization. We observe a performance gain in the average number of generations to convergence for the quantum-enhanced elitist selection operator in comparison to the classical one on the OneMax function. We also find that the quantum-enhanced selection operator with non-elitist selection outperforms benchmarks on functions with fitness perturbation from the IOHProfiler library. Additionally, we find that in the case of elitist selection, the quantum-enhanced operators outperform classical benchmarks on functions with varying degrees of dummy variables and neutrality.
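
One plausible shape for such a binary quadratic model, sketched in numpy under stated assumptions: fitness and pairwise diversity are to be maximized, and a quadratic penalty enforces selecting exactly k individuals. The weights, signs, and function name are illustrative, not the paper's exact encoding; a quantum or simulated annealer would then sample low-energy bitstrings of this matrix:

```python
import numpy as np

def selection_qubo(fitness, dist, k, lam=1.0, mu=10.0):
    # QUBO over n binary pick-variables: reward fitness on the diagonal,
    # reward pairwise distance off-diagonal, and penalise deviating from
    # exactly k picks via mu * (sum(x) - k)^2 expanded into Q (the constant
    # mu * k^2 is dropped). Energy of a bitstring x is x^T Q x.
    n = len(fitness)
    Q = np.zeros((n, n))
    Q[np.diag_indices(n)] = -np.asarray(fitness, float) + mu * (1 - 2 * k)
    for i in range(n):
        for j in range(i + 1, n):
            Q[i, j] = -lam * dist[i, j] + 2 * mu
    return Q
```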

Black-box adversarial attack via overlapped shapes

Deep neural networks (DNNs) have achieved state-of-the-art performance in many tasks but have shown extreme vulnerability to attacks generated by adversarial examples. Many works assume an attacker has total access to the targeted model. A more realistic assumption is that an attacker has access to the targeted model only by querying some input and observing its predicted class probabilities. In this paper, we propose applying techniques similar to those used within evolutionary art to generate adversarial images.

Adversarial example generation via genetic algorithm: a preliminary result

In recent years, studies have shown that deep neural networks (DNNs) are vulnerable to attacks by slightly perturbed examples. To satisfy lexical, grammatical, and semantic constraints, some works have proposed using black-box population-based optimization algorithms to attack neural networks in natural language processing. However, these are inefficient because they do not consider the characteristics of the text itself, and they are slow to approach the decision boundary. In this paper, we propose a more efficient attention-based genetic algorithm adversarial attack method, called AGA. We use an attention mechanism to pay more attention to the important tokens and utilize a multi-membered strategy to accelerate the search procedure. The results show that our attack achieves a higher success rate with less than 136% of the number of queries of existing methods.

POSTER SESSION: General evolutionary computation and hybrids

Fitness diversification in the service of fitness optimization: a comparison study

Blindly chasing after fitness is not the best strategy for optimization of hard problems, as it usually leads to premature convergence and getting stuck in low-quality local optima. Several techniques such as niching or quality-diversity algorithms have been established that aim to alleviate the selective pressure present in evolutionary algorithms and to allow for greater exploration. Yet another group of methods which can be used for that purpose are fitness diversity methods. In this work we compare the standard single-population evolution against three fitness diversity methods: fitness uniform selection scheme (FUSS), fitness uniform deletion scheme (FUDS), and convection selection (ConvSel). We compare these methods on both mathematical and evolutionary design benchmarks over multiple parametrizations. We find that given the same computation time, fitness diversity methods regularly surpass the performance of the standard single-population evolutionary algorithm.

Independent influence of exploration and exploitation for metaheuristic-based recommendations

Exploration and exploitation (E&E) of a search space are two fundamental processes in many fields of artificial intelligence. Indeed, when the search space is vast, it is important to ensure that many of its regions are examined, so as not to get trapped in a local optimum, but also that promising regions are examined in more depth in order to find good local optima. Influencing both processes is thus necessary. The literature has rarely proposed approaching the recommendation task as a vast search space problem. This paper introduces a metaheuristic-based recommendation approach with two contributions: an E&E influence process that is independent of the influenced algorithm, and new indicators to represent and explain E&E. Performed on a genetic algorithm (GA) and on a reinforcement algorithm (RA), our experiments confirm that (1) the proposed influence process has a positive impact on evaluation criteria and on E&E, which brings better recommendations, and (2) the proposed indicators help represent E&E from new angles.

Dynamic multi-objective ensemble of acquisition functions in batch bayesian optimization

Bayesian optimization (BO) is a typical approach to solving expensive optimization problems. In each iteration of BO, a Gaussian process (GP) model is trained on the previously evaluated solutions; the next candidate solutions for expensive evaluation are then recommended by maximizing a cheaply evaluated acquisition function on the trained surrogate model. The acquisition function plays a crucial role in the optimization process. However, each acquisition function has its own strengths and weaknesses, and no single acquisition function consistently outperforms the others on all kinds of problems. To better leverage the advantages of different acquisition functions, we propose a new method for batch BO. In each iteration, three acquisition functions are dynamically selected from a set, based on their current and historical performance, to form a multi-objective optimization problem (MOP). By optimizing such a MOP with an evolutionary multi-objective algorithm, a set of non-dominated solutions can be obtained. To select batch candidate solutions, we rank these non-dominated solutions into several layers according to their relative performance on the three acquisition functions. The empirical results show that the proposed method is competitive with state-of-the-art methods on different problems.
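
For context, three classic acquisition functions that such an ensemble could draw from, computed from the GP posterior mean and standard deviation under a minimization convention (the specific set the paper uses, and the LCB weight of 2.0, are assumptions):

```python
import numpy as np
from scipy.stats import norm

def acquisitions(mu, sigma, best):
    # mu, sigma: GP posterior mean and std at candidate points; best: the
    # incumbent (lowest observed) objective value. Larger ei/pi is better;
    # smaller lcb is better.
    z = (best - mu) / np.maximum(sigma, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    pi = norm.cdf(z)                                      # probability of improvement
    lcb = mu - 2.0 * sigma                                # lower confidence bound
    return ei, pi, lcb
```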

Dynamic evaluation of decomposition methods for large-scale optimization problems using an island model

Optimization problems with a high number of variables, known as Large-Scale Optimization Problems (LSOPs), tend to be complex to solve. Additional strategies can be applied in Evolutionary Algorithms (EAs) to solve LSOPs. Decomposition Methods (DMs) decompose the problem domain into groups and then solve them separately. This work implements an adaptive hybrid island model based on stigmergy to solve LSOPs using different DMs. The DMs are compared during their execution to identify the most suitable ones for the problem at hand. This study focuses on assessing the DMs' behavior during execution because, in general, works in the literature compare them only on the quality of the final solutions.

Improved data clustering using multi-trial vector-based differential evolution with gaussian crossover

In this paper, an improved version of the Multi-Trial Vector-based Differential Evolution (IMTDE) algorithm is proposed and adapted for clustering data. The purpose is to enhance the balance between the exploration and exploitation mechanisms in MTDE by employing Gaussian crossover and modifying the sub-population distribution between the strategies. Results show that IMTDE is superior to the compared algorithms on most of the 19 datasets used. The code is available at https://github.com/parhamhadikhani/IMTDEClustering.

Reduction of genetic drift in population-based incremental learning via entropy regularization

Population-Based Incremental Learning (PBIL) is a Bernoulli distribution-based evolutionary algorithm for binary black-box optimization. PBIL updates the distribution parameters according to the samples generated from the current distribution and their rankings. In PBIL, when some distribution parameters are continuously updated in random directions, undesirable convergence without sufficient exploration is observed. This behavior, called genetic drift, increases the number of function evaluations and induces convergence to local optima. In particular, a large update strength speeds up the search but leads to genetic drift. The ways of dealing with genetic drift are limited, such as decreasing the update strength, and there is a trade-off between search efficiency and stability. This paper proposes a method to reduce genetic drift in PBIL based on the entropy regularization widely used in reinforcement learning. We introduce entropy regularization into PBIL as a penalty term or constraint. Experimental results on well-known benchmark problems show that the proposed entropy regularization can efficiently suppress genetic drift, decrease the number of function evaluations, and improve stability.
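
A toy sketch of a penalty-style variant of this idea: after the usual PBIL pull towards the best sample, each parameter takes a small step along the gradient of the Bernoulli entropy, log((1-p)/p), which pushes it back towards 0.5. The step size `eps` and the clipping bounds are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def pbil_update(p, best_sample, lr=0.1, eps=0.01):
    # Standard PBIL update towards the best binary sample.
    p = (1 - lr) * p + lr * best_sample
    p = np.clip(p, 1e-3, 1 - 1e-3)       # keep the log well-defined
    p += eps * np.log((1 - p) / p)       # entropy-gradient step resists drift
    return np.clip(p, 1e-3, 1 - 1e-3)
```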

Enhancing MOEA/D with learning: application to routing problems with time windows

Integrating machine learning (ML) techniques into metaheuristics is an efficient approach in single-objective optimization. Indeed, high-quality solutions often contain relevant knowledge that can be used to guide the heuristic towards promising areas. In multi-objective optimization, the quality of solutions is evaluated according to multiple, generally conflicting criteria. Therefore, the ML techniques designed for single-objective optimization cannot be directly adapted to multi-objective optimization. In this paper, we propose to enhance the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) with a clustering-based learning mechanism. More precisely, solutions are grouped according to a metric based on their quality on each criterion, and the knowledge from the solutions of the same group is merged. Experiments are conducted on the multi-objective vehicle routing problem with time windows. The results show that MOEA/D with learning outperforms the original version.

Benchmarking algorithm portfolio construction methods

A portfolio is a set of algorithms, which run concurrently or interchangeably, whose aim is to improve performance by avoiding a bad selection of a single algorithm. Despite its high error tolerance, a carefully constructed portfolio, i.e., the smallest set of complementary algorithms, is expected to perform better than an arbitrarily constructed one. In this paper, we benchmark five algorithm portfolio construction methods, using as benchmark problems the ASLib scenarios, under a cross-validation regime. We examine the performance of each portfolio in terms of its riskiness, i.e., the existence of unsolved problems on the test set, and its robustness, i.e., the existence of an algorithm that solves most instances. The results demonstrate that two of these methods produce portfolios with the lowest risk, albeit with different levels of robustness.

A surrogate model-based genetic algorithm for the optimal policy in cart-pole balancing environments

An agent learning to complete a given mission requires a lot of trial and error. Generally, an agent's performance is evaluated by simulation, so exorbitant simulation costs may be incurred when an agent is trained to solve real-world problems. We propose a surrogate model-based genetic algorithm for reducing the simulation-based evaluation costs of the cart-pole balancing problem. Policy networks and the returns they achieve were collected by training a deep Q-network (DQN), a reinforcement learning method. A surrogate model for the fitness function of the genetic algorithm was built from our heuristic measure and DNN-based models trained on the DQN data. Our surrogate model-based genetic algorithm performed much faster than one with simulation-based evaluation, and it successfully found optimal solutions that achieve the maximum cumulative reward.

Implementing and evaluating parallel evolutionary algorithms in modern GPU computing libraries

In this paper, we describe FFEAT, a library for GPU-based implementations of evolutionary algorithms built on Torch. We discuss the limitations of GPU computing and how they affect implementations of evolutionary algorithms and other population-based heuristics.

Using FFEAT, we implement a number of different types of nature inspired algorithms, including evolutionary algorithms, evolution strategies, and particle swarm optimization. We investigate the performance of such algorithms in a number of benchmarks and with varying algorithm settings. We show that in some cases, we can obtain an order of magnitude speed-up by running the algorithm on a GPU compared to running it on a CPU.

Localized distance and time-based differential evolution for multimodal optimization problems

Multimodal optimization problems (MMOPs) aim to find multiple global optima with high accuracy. Although many state-of-the-art algorithms have been proposed for MMOPs, challenges remain, such as how to avoid local optima and how to refine solution accuracy. To address these, this paper proposes a Localized Distance and Time-based Differential Evolution (LDTDE) for MMOPs, with three contributions. First, a Random and Neighborhood-based Mutation (RNM) strategy is proposed to avoid local optima and refine the accuracy of the found solutions: each individual performs not only a random-based mutation but also a neighborhood-based mutation determined by Euclidean distance. Second, a Locality-based Crowding Selection (LCS) strategy is introduced to accelerate convergence and further approach the global optima. Third, an Adaptive Parameter Control (APC) strategy is proposed to reduce parameter sensitivity; it dynamically calculates appropriate parameters for each individual based on its own state. The performance of LDTDE is tested on CEC'2013 and compared with state-of-the-art multimodal optimization algorithms.
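
A sketch of the neighborhood half of such a mutation, assuming a DE/rand/1-style mutant built only from an individual's Euclidean-nearest neighbors; the neighborhood size, scale factor, and function name are illustrative assumptions, not RNM's published definition:

```python
import numpy as np

def neighborhood_mutation(pop, i, F=0.5, m=6, rng=np.random.default_rng()):
    # Build the mutant only from the m nearest neighbours of individual i
    # (m >= 3 required), keeping search local around each niche.
    d = np.linalg.norm(pop - pop[i], axis=1)
    nbrs = np.argsort(d)[1:m + 1]                 # skip the individual itself
    r1, r2, r3 = pop[rng.choice(nbrs, 3, replace=False)]
    return r1 + F * (r2 - r3)
```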

Using domain knowledge in coevolution and reinforcement learning to simulate a logistics enterprise

We demonstrate a framework (CoEv-Soar-RL) for a logistics enterprise to improve readiness and sustainment and to reduce operational risk. CoEv-Soar-RL uses reinforcement learning and coevolutionary algorithms to improve the functions of a logistics enterprise value chain. We address: (1) holistic prediction, optimization, and simulation for logistics enterprise readiness; and (2) the uncertainty and lack of data, which require large-scale, systematic what-if scenarios to simulate potentially new and unknown situations. In this paper, we perform four experiments to investigate how to integrate prediction and simulation to modify a logistics enterprise's demand models and generate synthetic data. We use general domain knowledge to design simple operators for the coevolutionary search algorithm that provide realistic solutions for the simulation of the logistics enterprise. In addition, to evaluate generated solutions, we learn a surrogate model of the logistics enterprise environment from historical data with Soar reinforcement learning. From our experiments we discover, and verify with subject matter experts, novel realistic solutions for the logistics enterprise. These novel solutions perform better than the historical data and were only found when we included knowledge derived from the historical data in the coevolutionary search.

POSTER SESSION: Genetic programming

Genetic programming with external memory in sequence recall tasks

Partially observable tasks imply that a learning agent has to recall previous states in order to make a decision in the present. Recent research with neural networks has investigated both internal and external memory mechanisms for this purpose, as well as proposing benchmarks to measure their effectiveness. These developments motivate our investigation using genetic programming and an external linked-list memory model. A thorough empirical evaluation using a scalable sequence recall benchmark establishes the underlying strength of the approach. In addition, we assess the impact of decisions made regarding the instruction set and characterize the sensitivity to noise/obfuscation in the definition of the benchmarks. Compared to neural solutions to these benchmarks, GP extends the state-of-the-art to greater task depths than previously possible.

Benchmarking genetic programming in a multi-action reinforcement learning locomotion task

Reinforcement learning (RL) requires an agent to interact with an environment to maximize the cumulative rather than the immediate reward. Recently, there has been significant growth in the availability of scalable RL tasks, e.g., OpenAI Gym. However, most benchmarking studies concentrate on RL solutions based on some form of deep learning. In this work, we benchmark a family of linear genetic programming approaches on the 2-d biped walker problem. The biped walker is an example of an RL environment described in terms of a multi-dimensional, real-valued 24-d input and a 4-d action space. Specific recommendations are made regarding mechanisms that are able to consistently produce solutions, in this case using transfer from periodic restarts.

Bayesian model selection for reducing bloat and overfitting in genetic programming for symbolic regression

When performing symbolic regression using genetic programming, overfitting and bloat can negatively impact generalizability and interpretability of the resulting equations as well as increase computation times. A Bayesian fitness metric is introduced and its impact on bloat and overfitting during population evolution is studied and compared to common alternatives in the literature. The proposed approach was found to be more robust to noise and data sparsity in numerical experiments, guiding evolution to a level of complexity appropriate to the dataset. Further evolution of the population resulted not in overfitting or bloat, but rather in slight simplifications in model form. The ability to identify an equation of complexity appropriate to the scale of noise in the training data was also demonstrated. In general, the Bayesian model selection algorithm was shown to be an effective means of regularization which resulted in less bloat and overfitting when any amount of noise was present in the training data.

The efficacy of a Genetic Programming (GP) [1] solution is often characterized by its (1) fitness, i.e., ability to perform a training task, (2) complexity, and (3) generalizability, i.e., ability to perform its task in an unseen scenario. Bloat is a common phenomenon in GP in which continued training results in significant increases in complexity with minimal improvements in fitness. There are several theories for the prevalence of bloat in GP which postulate possible evolutionary benefits of bloat [2]; however, for most practical purposes bloat is a hindrance rather than a benefit. For example, bloated solutions are less interpretable and more computationally expensive. Overfitting is another common phenomenon in GP and the broader machine learning field. Overfitting occurs when continued training results in better fitness but reduced generalizability.
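
To give the flavour of Bayesian model selection as a bloat-resistant fitness, one simple surrogate is the Bayesian Information Criterion, which trades data fit against parameter count; the paper's actual Bayesian metric is more elaborate, so the sketch below is only an assumed stand-in:

```python
import numpy as np

def bic_fitness(residuals, n_params):
    # BIC-style fitness for a candidate equation: the first term rewards fit,
    # the second explicitly penalises complexity, so bloated individuals lose
    # even when their error is marginally lower. Lower is better.
    n = len(residuals)
    mse = np.mean(np.square(residuals))
    return n * np.log(mse) + n_params * np.log(n)
```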

A preliminary study of prediction interval methods with genetic programming

This article presents an exploratory study on modelling Prediction Intervals (PIs) with two Genetic Programming (GP) methods. A PI is the range of values in which the real target value is expected to fall. It should combine two contrasting properties: being as narrow as possible while including as many data observations as possible. One proposed GP method, called CWC-GP, evolves the lower and upper boundaries of the PI simultaneously, using a single fitness measure that combines the width and the probability coverage of the PI. The other proposed GP method, called LUBE-GP, evolves the boundaries of the PI independently with a multi-objective approach, in which one fitness aims to minimise the width and the other aims to maximise the probability coverage of the PI. Both methods were applied with Direct and Sequential approaches. In the former, the PI is assessed without the crisp prediction of the model; in the latter, the method makes use of the crisp prediction to find the PI boundaries. The proposed methods show good potential for assessing PIs, and the presented preliminary results pave the way for further investigation. The most promising results were observed with the Sequential CWC-GP.
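
For reference, one common formulation of a combined width-and-coverage fitness in the LUBE literature is the coverage-width criterion (CWC), built from the PI coverage probability (PICP) and the normalised average width (PINAW). Whether CWC-GP uses exactly this variant, and the default constants below, are assumptions:

```python
import numpy as np

def cwc(y, lower, upper, mu=0.95, eta=50.0):
    # PICP: fraction of targets inside the interval; PINAW: average width
    # normalised by the target range. The exponential term penalises
    # intervals whose coverage falls below the nominal level mu.
    picp = np.mean((y >= lower) & (y <= upper))
    pinaw = np.mean(upper - lower) / (y.max() - y.min())
    return pinaw * (1.0 + np.exp(-eta * (picp - mu)) * (picp < mu))
```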

Initialisation and grammar design in grammar-guided evolutionary computation

Grammars provide a convenient and powerful mechanism to define the space of possible solutions for a range of problems. While recent work has shed light on the matters of initialisation and grammar design with respect to grammatical evolution (GE), their impact on other methods, such as random search and context-free grammar genetic programming (CFG-GP), is largely unknown. This paper examines GE, random search, and CFG-GP on benchmark problems using different initialisation routines and grammar designs. Results suggest that CFG-GP is less sensitive to initialisation and grammar design than both GE and random search; we also demonstrate that observed cases of poor performance by CFG-GP can be managed through simple adjustment of tuning parameters. We conclude that CFG-GP is a strong base from which to conduct grammar-guided evolutionary search, and that future work should focus on understanding the parameter space of CFG-GP for better application.

Going faster and hence further with lexicase selection

Lexicase selection is a semantic-aware parent selection method that has demonstrated success in multiple research areas, including genetic programming, symbolic regression, and, recently, deep learning. One potential drawback of lexicase selection and its variants is that the selection procedure requires evaluating training cases in a single data stream, making it difficult to handle tasks where the evaluation is computationally heavy or the dataset is large-scale. In this work, we investigate how weighted shuffle methods can be employed to improve the efficiency of lexicase selection. We propose fast lexicase selection, a method that incorporates lexicase selection and weighted shuffle with partial evaluation. Experiments on both classic genetic programming and deep learning tasks demonstrate that the proposed method has superior efficiency, with a significantly reduced number of evaluation steps during selection, while sustaining equivalent performance.
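
For orientation, the baseline procedure being accelerated is standard lexicase selection: shuffle the training cases, then repeatedly keep only the candidates with the best error on the next case. A minimal sketch (the proposed method replaces the uniform shuffle with a weighted one and evaluates cases lazily, which is not shown here):

```python
import random

def lexicase_select(pop, errors):
    # pop: list of individuals; errors[i][c]: error of individual i on case c.
    cases = list(range(len(errors[0])))
    random.shuffle(cases)                      # uniform shuffle in basic lexicase
    candidates = list(range(len(pop)))
    for c in cases:
        best = min(errors[i][c] for i in candidates)
        candidates = [i for i in candidates if errors[i][c] == best]
        if len(candidates) == 1:
            break
    return pop[random.choice(candidates)]
```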

Genetic programming for electric vehicle routing problem with soft time windows

Vehicle routing problems (VRPs), which model transport processes, have been intensively studied. Due to environmental concerns, the electric VRP (EVRP), which uses only electric vehicles, has recently attracted more attention. In many cases, such problems need to be solved in a short time, either due to their complexity or because of their dynamic nature. Routing policies (RPs), simple heuristics that build the solution incrementally, are a suitable choice for solving these problems. However, it is difficult to design efficient RPs manually. Therefore, in this paper, we consider the application of genetic programming (GP) to automatically generate new RPs. For this purpose, three RP variants and several domain-specific terminal nodes are defined to optimise three criteria. The results show that GP is able to automatically design well-performing RPs, and that it finds RPs with good generalisation properties that can effectively solve unseen problems.

Genetic programming for understanding cognitive biases that generate polarization in social networks

Recent studies have applied agent-based models to infer human-interpretable explanations of individual-scale behaviors that generate macro-scale patterns in complex social systems. Genetic programming has proven to be an ideal explainable AI tool for this purpose, as primitives may be expressed in an interpretable fashion and assembled into agent rules. Evolutionary model discovery (EMD) is a tool that combines genetic programming and random forest feature importance analysis to infer individual-scale, human-interpretable explanations from agent-based models. We deploy EMD to investigate the cognitive biases behind the emergence of ideological polarization within a population. An agent-based model is developed to simulate a social network in which agents are able to create or sever links with one another and update an internal ideological stance based on their neighbors' stances. Agent rules govern these actions and are composed of cognitive biases. A set of 7 cognitive biases is included as genetic program primitives in the search for rules that generate hyper-polarization among the population of agents. We find that heterogeneity in cognitive biases is more likely to generate polarized social networks. Highly polarized social networks are likely to emerge when individuals with confirmation bias are exposed to those with either attentional bias, egocentric bias, or cognitive dissonance.

Active learning improves performance on symbolic regression tasks in StackGP

This paper introduces an active learning method for symbolic regression using StackGP. The approach begins with a small number of data points for StackGP to model. To improve the model, the system incrementally adds the data point that maximizes prediction uncertainty, as measured by the model ensemble. Symbolic regression is then re-run with the larger data set. This cycle continues until the system satisfies a termination criterion. The Feynman AI benchmark set of equations is used to examine the ability of the method to find appropriate models using as few data points as possible. The approach successfully rediscovered 72 of the 100 Feynman equations without the use of domain expertise or data translation.
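
A minimal sketch of the loop, assuming a hypothetical fit_ensemble callback that stands in for repeated StackGP runs and returns models exposing a .predict method; uncertainty is measured here as the spread of ensemble predictions:

    import numpy as np

    def active_learning_sr(pool_X, pool_y, fit_ensemble, n_init=5, budget=50):
        # Start from a small random subset of the candidate pool.
        idx = list(np.random.choice(len(pool_X), n_init, replace=False))
        while len(idx) < budget:
            models = fit_ensemble(pool_X[idx], pool_y[idx])
            preds = np.stack([m.predict(pool_X) for m in models])  # (k, n)
            uncertainty = preds.std(axis=0)   # ensemble disagreement
            uncertainty[idx] = -np.inf        # never re-pick chosen points
            idx.append(int(uncertainty.argmax()))
        return idx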

Incorporating sub-programs as knowledge in program synthesis by PushGP and adaptive replacement mutation

Program synthesis aims to build an intelligent agent that composes computer programs to solve problems. Genetic programming (GP) provides an evolutionary solution for the program synthesis task. A typical GP includes random initialization, unguided variation, and fitness-guided selection to search for a solution program. However, several recent studies have shown the importance of using prior knowledge in different components of GP. This study investigates the effectiveness of incorporating sub-programs as "prior knowledge" into the variation process of GP by replacement mutation. We further design an adaptive strategy that allows the automatic selection of sub-programs helpful to the search from an archive containing both helpful and unhelpful ones. With handcrafted sub-program archives, we verify the effectiveness of the Adaptive Replacement Mutation method in terms of success rate. We demonstrate the effectiveness of our approach with transferred archives on two composite problems.
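
A sketch of the two ingredients under simplifying assumptions (programs as flat instruction lists, and a hypothetical multiplicative weight update standing in for the paper's adaptive strategy):

    import random

    class AdaptiveArchive:
        # Archive of sub-programs with success-driven selection weights.
        def __init__(self, subprograms, lr=0.1):
            self.subs = subprograms
            self.weights = [1.0] * len(subprograms)
            self.lr = lr

        def pick(self):
            i = random.choices(range(len(self.subs)), weights=self.weights)[0]
            return i, self.subs[i]

        def feedback(self, i, improved):
            # Reinforce sub-programs whose insertion improved fitness.
            self.weights[i] *= (1 + self.lr) if improved else (1 - self.lr)
            self.weights[i] = max(self.weights[i], 1e-3)

    def replacement_mutation(program, archive):
        # Replace a random segment of the program with an archived sub-program.
        i, sub = archive.pick()
        start = random.randrange(len(program))
        end = random.randrange(start, len(program)) + 1
        child = program[:start] + sub + program[end:]
        return child, i  # caller evaluates child, then calls feedback(i, ...)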

Discovery of implicit relationships from data using linear programming and mixed integer linear programming

Symbolic regression is widely used to discover underlying relationships within observed data in the form of equations. Most studies in the existing literature have focused on explicit equations, i.e., y = f(x). However, in a number of scenarios, such as in complex manifolds and partial differential equations, the relationship among the variables may conform to an implicit equation, i.e., f(x) = 0. In this paper, we are interested in uncovering implicit equations that have the form of a general linear model. In particular, we construct improvised formulations of linear programming and mixed-integer linear programming to uncover implicit equations while avoiding trivial solutions. The strengths and limitations of both approaches are assessed on 23 simulated benchmarks. The results are promising and establish the suitability of the proposed approach for integration with genetic programming frameworks to expedite the search for accurate implicit expressions.
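
To sketch how a linear formulation can avoid the trivial all-zero solution (the paper's exact constraints are not reproduced here), let \phi_j denote candidate basis terms; one can minimise the absolute residuals of f(x) = 0 over the N data points while pinning the coefficient scale:

    \begin{align*}
    \min_{\mathbf{w},\,\boldsymbol{\varepsilon}} \quad
      & \sum_{i=1}^{N} \varepsilon_i \\
    \text{s.t.} \quad
      & -\varepsilon_i \;\le\; \sum_{j=1}^{M} w_j\,\phi_j(\mathbf{x}_i) \;\le\; \varepsilon_i,
        \qquad i = 1,\dots,N, \\
      & \sum_{j=1}^{M} |w_j| = 1, \qquad \varepsilon_i \ge 0.
    \end{align*}

The normalisation constraint is not linear as an equality; it is commonly handled by fixing one coefficient to 1 or by introducing binary variables, which is where the mixed-integer variant comes in and also allows limiting how many terms are active.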

Automatically evolving malice scoring models through utilisation of genetic programming: a cooperative coevolution approach

Quantification of software malignance through the assignment of a malice score based upon a scoring module is a technique that is present throughout the literature. The majority of these modules, however, are synthesised using hand-picked features and manual weighting by a domain specialist. This paper proposes an automated malice scoring model evolved through genetic programming and cooperative coevolution, which automatically produces an ensemble of symbolic regression functions to assign a malice score to an instance of software data. The experimental results on a publicly available data set show that the proposed method significantly outperforms the state-of-the-art malice scoring method, its best model producing an overall balanced accuracy of 95.80% and correctly classifying 94.21% and 97.39% of unseen malignant and benign instances, respectively.

Phenotypic duplication and inversion in cartesian genetic programming applied to boolean function learning

The search performance of Cartesian Genetic Programming (CGP) relies to a large extent on the sole use of genotypic point mutation in combination with extremely large redundant genotypes. In recent years, steps have been taken to extend CGP's variation mechanisms through the introduction of advanced methods for recombination and mutation. One branch of these contributions addresses phenotypic variation in CGP. However, recent studies have demonstrated the limitations of phenotypic recombination in Boolean function learning and highlighted the effectiveness of the mutation-only approach. Therefore, in this work, we further explore phenotypic mutation in CGP through the introduction and evaluation of two phenotypic mutation operators that are inspired by chromosomal rearrangement. Our initial findings show that the proposed methods can significantly improve the search performance of CGP on various single- and multiple-output Boolean function benchmarks.
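
A sketch of the two rearrangement-inspired operators, shown here on a generic sequence of active nodes (the mapping back into a full CGP genotype is omitted):

    import random

    def duplication(nodes):
        # Copy a random segment and reinsert it (gene duplication).
        i = random.randrange(len(nodes))
        j = random.randrange(i, len(nodes)) + 1
        pos = random.randrange(len(nodes) + 1)
        return nodes[:pos] + nodes[i:j] + nodes[pos:]

    def inversion(nodes):
        # Reverse the order of a random segment (chromosomal inversion).
        i = random.randrange(len(nodes))
        j = random.randrange(i, len(nodes)) + 1
        return nodes[:i] + nodes[i:j][::-1] + nodes[j:]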

Compositional genetic programming for symbolic regression

In genetic programming, candidate solutions are compositional structures that can be easily decomposed into constituent parts and assembled from them. This property is extensively used in search operators, but rarely exploited in other stages of evolutionary search. We propose an approach to symbolic regression that augments the search state by maintaining, apart from the population of candidate solutions, a library of subprograms and a library of program contexts, i.e. partial programs that need to be supplemented by a subprogram to form a complete program. This allows us to identify the promising program components and guide search using two mechanisms in parallel: the conventional fitness-based selection pressure, and matching contexts with subprograms using a gradient-based mechanism. In experimental assessment, the approach significantly outperforms the control setups and the conventional GP. Maintaining subprograms and contexts in efficient data structures prevents redundancy and lessens the demand for computational resources, in particular memory.

Failed disruption propagation in integer genetic programming

We inject a random value into the evaluation of highly evolved deep integer GP trees 9 743 720 times and find that 99.7% of test outputs are unchanged, suggesting that the impact of crossover and mutation is dissipated and seldom propagates outside the program. Indeed, only errors near the root node have impact, and disruption falls exponentially with depth, at between e^(-depth/3) and e^(-depth/5) for recursive Fibonacci GP trees, allowing five to seven levels of nesting between the runtime perturbation and an optimal test oracle for it to detect most errors. Information theory explains that this locally flat fitness landscape is due to failed disruption propagation (FDP). Overflow is not important; instead, integer GP, like deep floating-point symbolic-regression GP and software in general, is not fragile, is robust, is not chaotic, and suffers little from Lorenz's butterfly.

Optimizing LLVM pass sequences with shackleton: a linear genetic programming framework

In this paper we explore the novel application of a linear genetic programming framework, Shackleton, to optimizing sequences of LLVM optimization passes. The algorithm underpinning Shackleton is discussed, with an emphasis on the effects of different features unique to the framework when applied to LLVM pass sequences. Combined with analysis of different hyperparameter settings, we report the results on automatically optimizing pass sequences with Shackleton for two software applications at differing complexity levels. Finally, we reflect on the advantages and limitations of our current implementation and lay out a path for further improvements. These improvements aim to surpass hand-crafted solutions with an automatic discovery method for an optimal pass sequence.
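
A sketch of how such a genome can be evaluated, assuming an LLVM IR file input.ll on disk and the opt tool (new pass-manager syntax) on the PATH; Shackleton's actual fitness uses runtime measurements rather than the code-size proxy used here:

    import os
    import random
    import subprocess

    PASSES = ["instcombine", "gvn", "licm", "simplifycfg", "sroa", "dce"]

    def random_genome(length=8):
        # A linear genome is simply an ordered list of pass names.
        return [random.choice(PASSES) for _ in range(length)]

    def fitness(genome, ll_file="input.ll", out="opt_out.bc"):
        # Apply the pass sequence with opt and score by output size.
        subprocess.run(
            ["opt", f"-passes={','.join(genome)}", ll_file, "-o", out],
            check=True)
        return -os.path.getsize(out)  # negate: the GA maximizes fitness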

Using graph neural networks as surrogate models in genetic programming

Surrogate models have been used for decades to speed up evolutionary algorithms; however, most of their uses are tailored to problems with simple individual encodings, like vectors of numbers. In this paper, we evaluate the possibility of using two different types of graph neural networks to predict the quality of a solution in tree-based genetic programming without evaluating the trees. The proposed models are evaluated on a number of benchmarks from symbolic regression and reinforcement learning, and the results show that GNNs can be successfully used as surrogate models for problems with a complex structure.

On the interaction between lexicase selection, modularity and data subsets

Lexicase selection represents a framework for maintaining population diversity and has therefore demonstrated a capacity for discovering solutions to a wide range of tasks. We note, however, that the benchmarking studies used to demonstrate these properties also tend to assume some form of modularity in the representation. With that in mind, an empirical study using five classification datasets is performed in which we compare 1) lexicase selection against a canonical 'breeder' model of selection, 2) with modularity explicitly enabled or disabled. A clear preference is demonstrated for including modularity with both forms of selection. However, the effect is most pronounced under lexicase selection, with significantly simpler solutions also resulting when modularity is enabled.

Espresso to the rescue of genetic programming facing exponential complexity

Fitness computation is a crucial part of any population-based evolutionary strategy. In Genetic Programming (GP), each individual of a population is evaluated by comparing the output it returns when fed some input with the expected output. The input/output mappings form the test cases; they describe the "behaviour" an individual should exhibit when presented with an instance of a problem. However, there exist situations in which the mapping is given as a function an individual should implement, consisting of combinations of several problem variables. This paper addresses the issue of efficiently computing the fitness of an individual evaluated on test sets that are exponentially large in the number of variables. The inspiration comes from digital electronics, and more specifically from the ESPRESSO-MV algorithm, which accepts multi-valued variables. The proposed data structure represents subsets of the solution space, allowing the number of passed test cases to be computed in a single operation rather than by enumerating them all. The heuristics used rely on the Unate Recursive Paradigm (URP); some of the proposed algorithms are new while others come directly from ESPRESSO-MV.
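
The counting trick can be illustrated with binary variables (ESPRESSO-MV generalises this to multi-valued ones): a cube records the set of allowed values per variable, and the number of covered test cases is a product over variables rather than an enumeration of up to 2^n assignments.

    def cube_count(cube):
        # cube: {variable index: set of allowed values}; for binary
        # variables the sets are {0}, {1}, or {0, 1} ("don't care").
        n = 1
        for allowed in cube.values():
            n *= len(allowed)
        return n

    # Example over 3 variables: x0 fixed to 1, x1 free, x2 fixed to 0.
    print(cube_count({0: {1}, 1: {0, 1}, 2: {0}}))  # -> 2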

On the effect of embedding hierarchy within multi-objective optimization for evolving symbolic regression models

Symbolic Regression is sometimes treated as a multi-objective optimization problem in which two objectives (accuracy and complexity) are optimized simultaneously. In this paper, we propose a novel approach, Hierarchical Multi-objective Symbolic Regression (HMS), to investigate the effect of imposing a hierarchy on multiple objectives in Symbolic Regression. HMS works in two levels. In the first level, an initial random population is evolved using a single objective (accuracy); then, when a simple trigger occurs (the current best fitness is five times better than the best fitness of the initial random population), half of the population is promoted to the next level, where a second objective (complexity) is incorporated. This new, smaller population subsequently evolves using a multi-objective fitness function. Various complexity measures are tested, each explicitly defined as an objective alongside performance (accuracy). The validation of HMS is performed on four benchmark Symbolic Regression problems of varying difficulty. The evolved Symbolic Regression models are either competitive with or better than models produced by standard approaches in terms of performance, measured as Root Mean Square Error, and are better in terms of size, effectively scaling down the computational cost.

Environments with local scopes for modules in genetic programming

Scoped environments are used in various programming languages to provide contexts within which a function's code can bind values and perform computations with them, without affecting bindings used by the rest of the program. The PushGP genetic programming system, which has produced state-of-the-art results in application areas, including software synthesis, allows programs to label sequences of instructions as modules. However, those modules can also modify bindings used by the rest of the program. This increases module coupling and might therefore make modular programs less evolvable than they otherwise could be. To rectify this, and to ensure that modules return single values, we implement scoped environments in PushGP, using a method that allows for the dynamic definition of arbitrary modules in a multi-type setting. We demonstrate that the use of scoped environments leads to an increase in the success rates for multiple software synthesis problems.

Evolving parsimonious circuits through shapley value-based genetic programming

Evolutionary analog circuit design is a challenging task due to the large search space incurred by the circuit topology and device values. Applying genetic operators to randomly selected genes may make it difficult to identify which sub-circuit is beneficial to the evolution, and may even destroy useful sub-circuits, potentially causing stagnation of the evolutionary process and bloat in the evolved circuits. In this paper, we propose a tree-based approach called Shapley Circuit Tree that incorporates Shapley values to quantify the contribution of each function node of the circuit tree to the performance of the whole tree, and thereby guide the evolutionary process. Our experiments on three benchmarks show that the proposed approach is able to evolve analog circuits with smaller area while converging faster than existing approaches.
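
The contribution measure is the classical Shapley value: for the set N of function nodes and a performance measure v(.) defined on sub-circuits, node i's contribution is (standard definition; how the paper evaluates v on partial circuits is not reproduced here)

    \[
    \phi_i \;=\; \sum_{S \subseteq N \setminus \{i\}}
      \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}
      \,\bigl(v(S \cup \{i\}) - v(S)\bigr).
    \]

Exact evaluation is exponential in |N|, so in practice Shapley values are usually estimated by sampling random permutations of the nodes and averaging marginal contributions.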

Analyzing optimized constants in genetic programming on a real-world regression problem

Ephemeral random constants are commonly used for symbolic regression with genetic programming. However, due to their random nature, it is difficult for genetic programming to find proper constants. This often leads to large and complex non-interpretable solutions. Therefore, in this short paper, we analyze a method that replaces ephemeral random constants with constant tokens whose values are optimized during evolution with the Sequential Least Squares Programming method. The results achieved on the studied regression problem indicate that replacing ephemeral random constants by optimized constant tokens leads to solutions with a significantly lower error which are also smaller than the solutions generated by a standard GP approach using ephemeral random constants.
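
A minimal sketch of the constant-optimization step, assuming a fixed evolved structure with two constant tokens (the model shape below is hypothetical); the GP system would re-run such a fit whenever the structure changes:

    import numpy as np
    from scipy.optimize import minimize

    # Suppose evolution produced the structure f(x) = c0*x**2 + c1*sin(x)
    # with constant tokens c0, c1 whose values remain to be fitted.
    def model(c, x):
        return c[0] * x**2 + c[1] * np.sin(x)

    def fit_constants(x, y, n_const=2):
        # Tune the constant tokens by minimizing squared error with SLSQP.
        loss = lambda c: float(np.mean((model(c, x) - y) ** 2))
        res = minimize(loss, x0=np.ones(n_const), method="SLSQP")
        return res.x, res.fun

    x = np.linspace(0.0, 4.0, 50)
    y = 1.5 * x**2 - 0.7 * np.sin(x)   # synthetic target
    c_opt, err = fit_constants(x, y)
    print(c_opt)                       # approximately [1.5, -0.7]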

Imbalanced classification with tpg genetic programming: impact of problem imbalance and selection mechanisms

Recent research advances on Tangled Program Graphs (TPGs) have demonstrated that Genetic Programming (GP) can be used to build accurate classifiers. However, this performance has been tested on balanced classification problems, while most real-world classification problems are imbalanced, with both over-represented and rare classes.

This paper explores the effect of imbalanced data on the performance of a TPG classifier and proposes mitigation methods for imbalance-caused performance degradation using adapted GP selection phases. The GP selection phase is characterized by a fitness function and by a comparison operator. We show that adapting the TPG to imbalanced data significantly improves classifier performance. The proposed adaptations to the fitness function make the TPG agent capable of fitting a model even with 10^4 times fewer examples than the majority class, whereas the revised selection phase of the GP process increases the robustness of the method for moderate imbalance ratios.

Denoising autoencoder genetic programming for real-world symbolic regression

Denoising Autoencoder Genetic Programming (DAE-GP) is a novel neural-network-based estimation of distribution genetic programming algorithm that uses denoising autoencoder long short-term memory networks as a probabilistic model to replace the standard recombination and mutation operators of genetic programming (GP). Recent work demonstrated that the DAE-GP outperforms standard GP; however, those results are limited to the generalization of the royal tree problem. In this work, we apply the DAE-GP to real-world symbolic regression. On the Airfoil dataset, given a fixed number of fitness evaluations, we find that the DAE-GP generates significantly better and smaller (in number of nodes) best candidate solutions than standard GP. The results highlight that the DAE-GP may be a good alternative for generating good and interpretable solutions for real-world symbolic regression.

Genetic programming with diverse partner selection for dynamic flexible job shop scheduling

Dynamic flexible job shop scheduling (DFJSS) aims to make machine-assignment and operation-sequencing decisions simultaneously to obtain an effective schedule in dynamic environments. Genetic programming hyper-heuristics (GPHH) have been successfully applied to evolve scheduling heuristics for the DFJSS problem. Parent selection plays an important role in GPHH for generating high-quality offspring. Traditional GPHHs select parents for crossover purely based on fitness (e.g., tournament selection). This might be too greedy to produce good offspring, since the selected parents might have similar structures/behaviours. In this paper, a GPHH method with a new diverse partner selection (DPS) scheme, namely GPDPS, is proposed for DFJSS. Specifically, we first define a new multi-case fitness to characterise the behaviour of each scheduling heuristic for DFJSS. Then, the newly proposed DPS method selects a pair of complementary high-quality parents for crossover to generate offspring. The experimental results show that GPDPS significantly outperforms the baseline GPHH method on most of the DFJSS scenarios, in terms of both test performance and convergence speed.
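
A sketch of one plausible reading of the scheme (the paper's exact complementarity measure is not reproduced here): the first parent wins a fitness tournament, and its partner is the individual that performs best on the cases where the first parent is weakest.

    import numpy as np

    def diverse_partner_selection(case_fitness, k=7):
        # case_fitness: (population, cases) array, lower is better.
        pop = len(case_fitness)
        tourn = np.random.choice(pop, k, replace=False)
        p1 = tourn[case_fitness[tourn].mean(axis=1).argmin()]
        # Cases on which the first parent is comparatively weak.
        weak = case_fitness[p1] >= np.median(case_fitness[p1])
        partner_scores = case_fitness[:, weak].mean(axis=1)
        partner_scores[p1] = np.inf  # the partner must be a different parent
        p2 = int(partner_scores.argmin())
        return int(p1), p2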

Large scale image classification using GPU-based genetic programming

Genetic programming (GP) has been applied to image classification and achieved promising results. However, most GP-based image classification methods are only applied to small-scale image datasets because of their high computational cost. Efficient acceleration technology is needed when extending GP-based image classification methods to large-scale datasets. Considering that fitness evaluation is the most time-consuming phase of the GP evolution process and is highly parallelizable, this paper proposes a CPU-multiprocessing and GPU-parallel approach to perform this process and thus effectively accelerate GP for image classification. Across various experiments, the results show that the highly parallelized approach can significantly accelerate GP-based image classification without performance degradation. The training time of the GP-based image classification method is reduced from several weeks to tens of hours, enabling it to run on large-scale image datasets.
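
A sketch of the CPU-multiprocessing half (the evaluate function below is a hypothetical stand-in for running a GP tree over the training images); the GPU half would instead batch each tree's image operations into device kernels:

    from multiprocessing import Pool

    def evaluate(individual):
        # Placeholder: execute the GP tree on the (large) image training
        # set and return its classification accuracy as the fitness.
        return individual.accuracy_on_train()

    def evaluate_population(population, workers=8):
        # Fitness evaluation is independent across individuals, so it can
        # be farmed out to a pool of worker processes.
        with Pool(workers) as pool:
            return pool.map(evaluate, population)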

Regulatory genotype-to-phenotype mappings improve evolvability in genetic programming

Most genotype-to-phenotype mappings in EAs are redundant, i.e., multiple genotypes can map to the same phenotype. Phenotypes are accessible from one another through point mutations; however, these mutational connections can be unevenly distributed among phenotypes. Quantitative analysis of such connections helps to better characterize the robustness and evolvability of an EA. In this study, we propose two genotype-to-phenotype mapping mechanisms for linear genetic programming (LGP), where the execution and output of a linear genetic program are varied by a regulator. We investigate how such regulatory mappings can alter the genotypic connections among different phenotypes and the robustness and evolvability of phenotypes. We also compare the search ability of LGP using the conventional mapping versus the regulatory mappings, and observe that the regulatory mappings improve efficiency in all three search scenarios: random walk, hill climbing, and novelty search.

POSTER SESSION: Neuroevolution

The diversity-accuracy duality in ensembles of classifiers

A typical approach to defining a classifier ensemble requires two phases: creating a large set of potential classifiers, then selecting an appropriate subset to form an ensemble. It is generally accepted that the ensemble should be behaviourally diverse in order to deliver maximum accuracy, hence evolutionary search methods have been used to explicitly create diversity. Here we investigate a range of search methods that span the full spectrum of favouring accuracy, diversity, or different combinations of both, across both phases defined above. Surprisingly, we show that multiple unique combinations between a diversity metric and accuracy give rise to similar results, including cases that assign the most importance to accuracy. This equivalence between utilising diversity or accuracy objectives enables us to posit the existence of a diversity-accuracy duality in ensembles of classifiers, which challenges notions about the need to find a trade-off between the two.

MFENAS: multifactorial evolution for neural architecture search

Neural Architecture Search (NAS) aims to automatically find neural network architectures competitive with human-designed ones. Despite the remarkable progress achieved, existing NAS methods still suffer from vast computational cost. Inspired by the multifactorial evolutionary algorithm (MFEA), we model the NAS task as a two-factorial problem and propose a multifactorial evolutionary neural architecture search (MFENAS) algorithm to solve it. MFENAS divides a population into two subgroups according to factors, and the factors then influence the evolution and the knowledge transfer between subgroups. Experimental results on NATS-Bench demonstrate the efficiency of the proposed MFENAS in finding optimal structures under resource constraints compared to other state-of-the-art methods.

Accelerating neural architecture exploration across modalities using genetic algorithms

Neural architecture search (NAS), the study of automating the discovery of optimal deep neural network architectures for tasks in domains such as computer vision and natural language processing, has seen rapid growth in the machine learning research community. While there have been many recent advancements in NAS, there is still a significant focus on reducing the computational cost incurred when validating discovered architectures by making search more efficient. Evolutionary algorithms, specifically genetic algorithms, have a history of usage in NAS and continue to gain popularity versus other optimization approaches as a highly efficient way to explore the architecture objective space. Most NAS research efforts have centered around computer vision tasks and only recently have other modalities, such as natural language processing, been investigated in depth. In this work, we show how genetic algorithms can be paired with lightly trained objective predictors in an iterative cycle to accelerate multi-objective architectural exploration in the modalities of both machine translation and image classification.

Synaptic pruning with MAP-elites

Reducing the number of parameters in deep learning models is a current challenge in machine learning. We exploit the capability of MAP-Elites to illuminate the search space in a reinforcement learning problem, comparing neural networks with different numbers of connections. In this work, we focus specifically on the OpenAI BipedalWalker [2], a widely employed reinforcement learning benchmark, and on a deep learning model that successfully solved that task. The resulting architectures show a reduction in synaptic connectivity of approximately 90%, equivalent to the state-of-the-art pruning techniques usually employed in supervised or generative learning. Specifically, our approach allows us to evaluate the respective impacts of sparsity and dropout. Results show that both sparsity and dropout of hidden units are uncorrelated with performance in our generalization test.

CGP-NAS: real-based solutions encoding for multi-objective evolutionary neural architecture search

Convolutional Neural Networks (CNNs) have achieved remarkable performance on difficult computer vision tasks. In previous years, human experts have developed a number of specialized CNN architectures to deal with complex image datasets. However, the automatic design of CNNs through Neural Architecture Search (NAS) has gained importance as a way to reduce, and possibly avoid, human expert intervention. One of the main challenges in NAS is to design less complex and yet highly accurate CNNs when the two objectives conflict. This study extends Cartesian Genetic Programming (CGP) for CNN representation in NAS through multi-objective evolutionary optimization for image classification tasks. The proposed CGP-NAS algorithm is built on CGP by combining a real-based solution representation with the well-established Non-dominated Sorting Genetic Algorithm II (NSGA-II). A detailed empirical assessment shows that CGP-NAS achieved competitive performance compared to other state-of-the-art proposals while significantly reducing the complexity of the evolved CNN architectures as well as the GPU-days required.

Neuroevolution based multi-objective algorithm for gene selection and microarray classification

Microarrays allow the expression-level analysis of thousands of genes simultaneously; thus, they are a common technique used for cancer detection and diagnosis. However, existing microarray datasets have huge dimensionality and class imbalance; therefore, it is important to find relevant genes that accurately set classes apart and allow building more reliable classification models. A multi-objective algorithm is proposed to evolve artificial neural networks' topology and connection weights for microarray classification by minimizing the number of selected genes and the cross-entropy loss. The algorithm is based on the evolutionary multi-objective algorithm SMS-EMOA along with the genetic encoding and the crossover and mutation operators from the neuroevolution algorithms NEAT/N3O. Moreover, a speciation algorithm was implemented to protect the diversity of the selected features within the solutions. To test the algorithm's performance, open datasets were used, most of them gathered from the Curated Microarray Database (CuMiDa). Performance was measured by the geometric mean, the number of selected features, and the population hypervolume, and the algorithm was compared against N3O on microarray binary classification problems with competitive results. Furthermore, the results were tested for statistical significance and show that the novel algorithm has promising performance on the problem at hand.

Neuroevolution of recurrent architectures on control tasks

Modern artificial intelligence systems typically train the parameters of fixed-size deep neural networks using gradient-based optimization techniques. Simple evolutionary algorithms have recently been shown to also be capable of optimizing deep neural network parameters, at times matching the performance of gradient-based techniques, e.g. in reinforcement learning settings. In addition to optimizing network parameters, many evolutionary computation techniques are also capable of progressively constructing network architectures. However, constructing network architectures from elementary evolution rules has not yet been shown to scale to modern reinforcement learning benchmarks. In this paper we therefore propose a new approach in which the architectures of recurrent neural networks dynamically evolve according to a small set of mutation rules. We implement a massively parallel evolutionary algorithm and run experiments on all 19 OpenAI Gym state-based reinforcement learning control tasks. We find that in the majority of cases, dynamic agents match or exceed the performance of gradient-based agents while utilizing orders of magnitude fewer parameters. We believe our work opens avenues for real-life applications where network compactness and autonomous design are of critical importance. We provide our source code, final model checkpoints and full results at github.com/MaximilienLC/nra.

Artificial neural networks are computing systems that have become central to the field of artificial intelligence. Through waves of innovation, these networks have gotten bigger, more efficient, and increasingly competent on a wide range of tasks. Most often, the parameters of these artificial neural networks are optimized using first-order gradient-based optimization techniques, yet their architecture is for the most part still constructed manually.

Efficient guided evolution for neural architecture search

Neural Architecture Search (NAS) methods have been successfully applied to image tasks with excellent results. However, NAS methods are often complex and tend to converge quickly to local minima. In this paper, we propose G-EA, a novel approach for guided NAS. G-EA guides the evolution by exploring the search space, generating and evaluating several architectures in each generation at the initialisation stage using a zero-proxy estimator, where only the highest-scoring architecture is trained and kept for the next generation. By generating several offspring from an existing architecture at each generation, G-EA continuously extracts knowledge about the search space without added complexity. Moreover, G-EA forces exploitation of the most performant architectures through descendant generation, while at the same time forcing exploration through parent mutation and by favouring younger architectures over older ones. Experimental results demonstrate the effectiveness of the proposed method, showing that G-EA achieves state-of-the-art results in NAS-Bench-101 and in all NAS-Bench-201 search space datasets: CIFAR-10, CIFAR-100 and ImageNet16-120, with mean accuracies of 93.99%, 72.62% and 46.04%, respectively.

ONE-NAS: an online neuroevolution based neural architecture search for time series forecasting

Time series forecasting (TSF) is one of the most important tasks in data science, as accurate time series (TS) predictions can drive and advance a wide variety of domains including finance, transportation, health care, and power systems. However, real-world utilization of machine learning (ML) models for TSF suffers due to data drift. To address this, models must be periodically retrained or redesigned, which requires significant human and computational resources. This work presents the Online NeuroEvolution based Neural Architecture Search (ONE-NAS) algorithm, which to the authors' knowledge is the first neural architecture search algorithm capable of automatically designing and training new recurrent neural networks (RNNs) in an online setting. Without any pretraining, ONE-NAS utilizes populations of RNNs that are continuously updated with new network structures and weights in response to new multivariate input data. ONE-NAS is tested on real-world, large-scale multivariate wind turbine data and is shown to outperform traditional statistical time series forecasting, including naive, moving average, and exponential smoothing methods.

A new grammatical evolution method for generating deep convolutional neural networks with novel topologies

Neuroevolution, a sub-field of AutoML, utilizes evolutionary algorithms to automate the process of creating Deep Neural Network architectures. For problems with complex objects, such as neural networks, Grammar-based Evolutionary Algorithms (GEs) can simplify implementation and experimentation by using grammar rules to describe what the components of the complex object are and how they can be connected; that is, the grammar elegantly describes the search space of the problem. In this work, we propose a GE algorithm based on Structured Grammatical Evolution to generate deep convolutional neural networks. Our work has two major contributions: first, the neural networks may contain an arbitrary number and arrangement of skip connections; second, our skip connections may upscale lower-resolution inputs, allowing the generation of architectures such as U-Net. Our best model achieved 0.85 accuracy on CIFAR-10.

Towards optimizing neural networks' connectivity and architecture simultaneously with feature selection

We propose an extension of NeuroEvolution of Augmenting Topologies (NEAT), called Heterogeneous Activation Feature Deselection NEAT (HA-FD-NEAT), that evolves the weights and topography (architecture and activation functions) of Artificial Neural Networks (ANNs) while performing feature selection. The algorithm is evaluated against its ancestors: NEAT, which evolves the weights and topology; FD-NEAT, which evolves the weights and topology while performing feature selection; and HA-NEAT, which evolves the topography and the weights. Performance is described by (i) median classification accuracy, (ii) computational efficiency (number of generations), (iii) network complexity (number of nodes and connections) and (iv) the ability to select the relevant features. On the artificial 2-out-100 XOR problem, used as a benchmark for feature selection, HA-FD-NEAT reaches 100% accuracy in a few generations. It is significantly better than NEAT and HA-NEAT and exhibits the same performance as FD-NEAT. Even though HA-FD-NEAT needs to search a larger space that includes weights, activation functions, topology and inputs, it achieves the same performance as FD-NEAT. To conclude, the proposed method reduces the burden on human designers by determining the network's inputs, weights, topology and activation functions while achieving very good performance.

Novelty driven evolutionary neural architecture search

Evolutionary algorithm (EA) based neural architecture search (NAS) involves evaluating each architecture by training it from scratch, which is extremely time-consuming. This cost can be reduced by using a supernet to estimate the fitness of an architecture, thanks to weight sharing among all architectures in the search space. However, the estimated fitness is very noisy due to the co-adaptation of the operations in the supernet, which results in NAS methods getting trapped in local optima. In this paper, we propose a method called NEvoNAS wherein the NAS problem is posed as a multi-objective problem with two objectives: (i) maximize architecture novelty, and (ii) maximize architecture fitness/accuracy. Novelty search is used to maintain a diverse set of solutions at each generation, which helps avoid local optima traps, while the architecture fitness is calculated using the supernet. NSGA-II is used to find the Pareto-optimal front for the NAS problem, and the best architecture in the Pareto front is returned as the searched architecture. Experimentally, NEvoNAS gives better results on two different search spaces while using significantly fewer computational resources than previous EA-based methods. The code for our paper can be found here.

Neuroevolutionary multi-objective approaches to trajectory prediction in autonomous vehicles

The incentive for using Evolutionary Algorithms (EAs) for the automated optimization and training of deep neural networks (DNNs), a process referred to as neuroevolution, has gained momentum in recent years. The configuration and training of these networks can be posed as optimization problems. Indeed, most recent work on neuroevolution has focused on single-objective optimization. Moreover, the little research carried out at the intersection of neuroevolution and evolutionary multi-objective optimization (EMO) has focused predominantly on one type of DNN, convolutional neural networks (CNNs), using well-established standard benchmark problems such as MNIST. In this work, we make a leap in the understanding of these two areas (neuroevolution and EMO), regarded here as neuroevolutionary multi-objective optimization, by using and studying a rich DNN composed of a CNN and a Long Short-Term Memory network. Moreover, we use a robust and challenging vehicle trajectory prediction problem. Using the well-known Non-dominated Sorting Genetic Algorithm-II, we study the effects of five different objectives, tested in categories of three, allowing us to show how these objectives have either a positive or detrimental effect in neuroevolution for trajectory prediction in autonomous vehicles.

An effective metaheuristic-based pruning method for convolutional neural network

Selecting the filters to be pruned and determining the pruning ratio are two important research issues in model compression, because both strongly impact the accuracy, size, and computation time of a neural network. In this study, we propose an effective algorithm, based on a promising metaheuristic called search economics, that finds the pruning ratio and the filter importance of each convolutional layer in order to remove unimportant filters. Experimental results show that the proposed algorithm can find better solutions than the other pruning methods compared in this study.

POSTER SESSION: Real world applications

Multi-objective evolutionary beer optimisation

Food production is a complex process which can benefit from many optimisation approaches. However, there is growing interest in methods that support customisation of food properties to satisfy individual consumer preferences. This paper addresses the personalisation of beer properties. Having identified components of the production process for beers, we introduce a system which enables brewers to map the desired beer properties into ingredients dosage and combination. Previously explored approaches include direct use of structural equations as well as global machine learning methods. We introduce a framework which uses an evolutionary method supporting multi-objective optimisation. This work identifies problem-dependent objectives, their associations, and proposes a workflow to automate the discovery of multiple novel recipes based on user-defined criteria. The quality of the solutions generated by the multi-objective optimiser is compared against solutions from multiple runs of the method, and those of a single objective evolutionary technique. This comparison provides a road-map allowing the users to choose among more varied options or to finetune one of the preferred solutions. The experiments presented here demonstrate the usability of the framework as well as the transparency of its criteria.

Minimal criterion artist collective

Minimal criterion co-evolution (MCC) is an evolutionary algorithm that uses a simple reproduction constraint between two interacting populations to drive an open-ended search process. While it has previously been applied to parameterise simple agents and environments, in this work we extend its use to the generation of art: synthesising both images and music. As a creative AI tool which does not require any data, the use of MCC emphasises the design of the creative medium.

On generalizing the power function exponent constructions with genetic programming

Many works investigate Almost Perfect Nonlinear (APN) functions and, in particular, APN power functions. Such functions are of the form F(x) = x^d, and they have practical relevance as, in characteristic 2, they reach the best possible differential uniformity. This work investigates whether genetic programming (GP) can "reinvent" the known expressions used to obtain exponent values d resulting in APN functions. The ultimate goal is to find classes of exponents that would be "transversal" to the known infinite classes of APN exponents and would contain new APN exponents (for values of n necessarily larger than those for which an exhaustive search has been possible so far). This would already be a breakthrough, and our hope is to find in this way new infinite classes of APN exponents. Our results show this is possible but difficult, and a careful trade-off between finding new values and skipping known values is required.

Wind farm layout optimisation using set based multi-objective bayesian optimisation

Wind energy is one of the cleanest renewable electricity sources and can help in addressing the challenge of climate change. One of the drawbacks of wind-generated energy is the large space necessary to install a wind farm; this arises from the fact that placing wind turbines in a limited area would hinder their productivity and therefore not be economically convenient. This naturally leads to an optimisation problem with three specific challenges: (1) multiple conflicting objectives, (2) computationally expensive simulation models, and (3) optimisation over design sets instead of design vectors. The first and second challenges can be addressed by using multi-objective Bayesian optimisation (BO). However, traditional BO cannot be applied directly, as the objective function in this problem relies on design sets instead of design vectors. This paper extends the applicability of multi-objective BO to set-based optimisation for solving the wind farm layout problem. We use a set-based kernel in a Gaussian process to quantify the correlation between wind farms (with different numbers of turbines). The results on the given data set of wind energy and direction clearly show the potential of using set-based multi-objective BO.
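
One standard construction for such a set-based kernel (a sketch; whether this matches the paper's exact kernel is not stated in the abstract) averages a point kernel over all turbine pairs of two farms, which remains well defined when the farms contain different numbers of turbines:

    import numpy as np

    def rbf(a, b, ls=1.0):
        # Squared-exponential kernel between two turbine coordinates.
        return np.exp(-np.sum((a - b) ** 2) / (2 * ls**2))

    def set_kernel(farm_a, farm_b, ls=1.0):
        # Mean-map kernel between two wind farms, each given as an
        # array of turbine coordinates (possibly of different length).
        k = sum(rbf(a, b, ls) for a in farm_a for b in farm_b)
        return k / (len(farm_a) * len(farm_b))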

Human activity recognition using grammar-based genetic programming

Smart devices provide a way of acquiring useful data for human activity recognition (HAR). The identification of activities is a task applicable to a wide range of situations, such as automatically providing aid to someone in need. Machine learning techniques can solve this problem, but their capacity to provide understanding of the resulting classification is usually limited. Here, we propose a Grammar-based Genetic Programming (GGP) method to generate interpretable models for HAR. A context-free grammar defines a language that the models belong to, providing a way to read them and extract knowledge. The results show that the proposed GGP generates models better than another Genetic Programming method and standard machine learning approaches. Moreover, the models created provide an understanding of the features associated with the activities.

Routing for bridge inspecting robots using a metaheuristic genetic algorithm

Deteriorating bridge infrastructure accounts for billions per year in inspection costs worldwide, and recent advances in robotics and autonomy for bridge inspection can significantly decrease these costs. However, generating optimal tours for inspection robots to cover all members of a bridge truss maps to the well-known but NP-hard Min-Max k-Chinese Postman arc-routing problem. We thus attack this problem with a new meta-heuristic genetic algorithm that quickly produces near-optimal balanced tours for k robots. The tour quality produced by the meta-heuristic genetic algorithm is statistically indistinguishable from the best known results on common benchmarks. Scaling up to real-world bridge sizes, our genetic algorithm produces significantly better (15.24%) tours in a fraction (0.05) of the time compared to a prior genetic algorithm approach using a direct encoding. These results show the potential of our new approach for the broad class of arc-routing problems, and specifically for quickly generating high-quality tours for robot-assisted real-world bridge inspection tasks.

Evolving polydisperse soft robotic jamming grippers

Granular systems react to changes in external pressure by adapting their density through complex grain contact interactions. Granular packings subjected to small pressures are loose and fluidic, but are jammed into compressed, rigid packings at higher pressures. Common soft robotic jamming grippers are composed of a vacuum pump connected to a flexible membrane filled with granular material, e.g. ground coffee. The membrane encompasses an object, and the pump activates. The grains jam, deforming the membrane and grasping the object, with grain morphology playing a critical role in determining gripper performance. Bespoke grippers can be designed to effectively grasp specific objects by evolving their constituent grains. Evolved grippers have to date used exclusively monodisperse granular materials (grains with identical size and shape). However, while not conceived through evolution, polydisperse grippers comprised of natural grains varying in size and morphology can often perform better in real-world grasping experiments. We employ the Discrete Element Method and NSGA-III to optimise grasps on disparate objects by evolving distributions of superellipsoidal grains (varying both their shapes and volumes) within the gripper. The results elucidate the successful application of multi-objective evolution to the design of bespoke polydisperse jamming grippers, and how variations in grain surface curvatures and volumes influence grasping performance.

Evolving spaceships with a hybrid L-system constrained optimisation evolutionary algorithm

Content generation for video games is a time-consuming process that impacts both game development and gameplay in the case of sandbox games. One approach to improve this process is to use procedural content generation, which can automatically generate diverse content for the game. Lindenmayer systems (L-systems) with handcrafted rules have successfully been used to generate content for games, but rely on undirected expansion of the rules. In contrast, by combining L-systems with an evolutionary algorithm for constrained optimisation, we are able to efficiently utilise simple L-system rules with directed search to find functionally-viable and appealing spaceship designs within the Space Engineers game.

Heuristic strategies for solving the combinatorial optimization problem in real-world credit risk assessment

Credit risk assessment plays an important role in financial institutions and banks. A common method of predicting credit risks is to build an interpretable rule-set model to distinguish high-risk and non-risk users. This further assists experts and analysts to make critical decisions. In a rule-set model, the quality of selected rules dramatically influences the prediction accuracy. It is challenging to automatically select rules for a rule-set learning task.

The rule selection problem in credit risk assessment can be considered a combinatorial optimization problem: the goal is to find a subset of rules that maximizes the coverage of risky users under given constraints. In this paper, we employ four methods built on traditional heuristic strategies: Simulated Annealing (SA), Beam Search (BS), Roll-In Roll-Out (RIRO), and Dynamic-step Search (DS). Experiments on real-world datasets show that the heuristics based on SA, BS, and RIRO can detect the most high-risk users across different datasets.

PreDive: preserving diversity in test cases for evolving digital circuits using grammatical evolution

The ever-present challenge in the domain of digital devices is how to test their behavior efficiently. We tackle the issue in two ways. We switch to automated circuit design using Grammatical Evolution (GE), and we provide two diversity-based methodologies to improve testing efficiency. The first approach extracts a minimal number of test cases from subsets formed through clustering; moreover, the way we perform clustering is problem-agnostic and can easily be applied in other domains. The second uses the complete test set and introduces a novel fitness function, hitPlex, which incorporates a test-case diversity measure to speed up the evolutionary process.

Experimental and statistical evaluations on six benchmark circuits establish that the automatically selected test cases result in good coverage and enable the system to evolve a highly accurate digital circuit. Evolutionary runs using hitPlex indicate promising improvements, with up to 16% improvement in convergence speed and up to 30% in success rate for complex circuits when compared to the system without the diversity extension.

Selection schemes from evolutionary computing show promise for directed evolution of microbes

Directed microbial evolution harnesses evolutionary processes in the laboratory to construct microorganisms with enhanced or novel functional traits. Directing evolutionary processes for applied goals is fundamental to evolutionary computation, which harnesses the principles of Darwinian evolution as a general purpose search engine for solutions to computational problems. Despite overlapping aims, artificial selection methods from evolutionary computing are not commonly applied to living systems in the laboratory. Here, we summarize recent work wherein we ask if parent selection algorithms from evolutionary computation might be useful for directing the evolution of microbial populations when selecting for multiple functional traits. To do so, we developed an agent-based model of directed microbial evolution, which we used to evaluate how well three selection schemes from evolutionary computing (tournament selection, lexicase selection, and non-dominated elite selection) performed relative to schemes used in the laboratory (elite and top-10% selection). We found that lexicase selection and non-dominated elite selection generally outperformed the commonly used directed evolution approaches. Our results are informing ongoing work to transfer these techniques into the laboratory and motivate future work testing more sophisticated selection schemes from evolutionary computation in a directed evolution context.

Optimizing sample diversity with fairness constraints on imbalanced, sparse, hiring data

There are many cases where one may wish to retrieve non-random, diverse, and fair samples from an imbalanced dataset. With over 90K tech job descriptions and the corresponding resumes submitted to those jobs, we describe our approach using evolutionary algorithms to derive a diverse and gender-fair subset for use in validating ML algorithms. Since three quarters of the applicants were male, the dataset was imbalanced. We describe how, through the use of evolutionary algorithms, we were able to discover different characteristics between genders as well as recognize issues with sparse representations. We constructed additional optimizing objectives to rectify these issues and ultimately unearth the desired sample.

An evolutionary approach to the discretization of gene expression profiles to predict the severity of COVID-19

In this work, we propose to use a state-of-the-art evolutionary algorithm to set the discretization thresholds for gene expression profiles, using feedback from a classifier in order to maximize the accuracy of the predictions based on the discretized gene expression levels, while at the same time minimizing the number of different profiles obtained, to ease the understanding of the expert. The methodology is applied to a dataset containing COVID-19 patients that developed either mild or severe symptoms. The results show that the evolutionary approach performs better than a traditional discretization based on statistical analysis, and that it does preserve the sense-making necessary for practitioners to trust the results.

Facility location problem and permutation flow shop scheduling problem: a linked optimisation problem

There is a growing literature spanning several research communities that studies multiple optimisation problems whose solutions interact, leading researchers to consider suitable approaches to their joint solution. Real-world problems, like supply chains, are systems characterised by such interactions: a decision made at one point of the supply chain can have significant consequences that ripple through linked production and transportation systems. Such interactions require complex algorithmic designs. This paper therefore investigates the linkages between a facility location problem and a permutation flow shop scheduling problem in a distributed manufacturing system with identical factories (FLPPFSP). We formulate a novel mathematical model from a linked optimisation perspective with the objectives of minimising facility cost and makespan. We present three algorithmic approaches to tackling FLPPFSP: a Non-dominated Sorting Genetic Algorithm for Linked Problems (NSGALP), a Multi-Criteria Ranking Genetic Algorithm for Linked Problems (MCRGALP), and a sequential approach. To understand the FLPPFSP linkages, we conduct a pre-assessment by randomly generating 10000 solution pairs on all combined problem instances and computing their respective correlation coefficients. Finally, we conduct experiments comparing the results obtained by the selected algorithmic methods on 620 combined problem instances. Empirical results demonstrate that NSGALP outperforms the other two methods based on relative hypervolume, hypervolume, and epsilon metrics.

A hybrid optimization tool for active magnetic regenerator

Active Magnetic Regenerator (AMR) refrigeration is an innovative technology which can reduce energy consumption and the depletion of the ozone layer. However, developing a commercially applicable AMR design remains an open issue because of the difficulty of finding a configuration of AMR parameters suitable for various application needs. In this work, we focus on an optimization method for finding common parameters of the AMR model in two application modes: a magnetic refrigeration system and a thermo-magnetic generator. This paper proposes a robust optimisation tool which ensures scalability with respect to the number of objectives and allows different optimisation experiments to be set up easily. A validation of the tool is presented. We expect that this tool can help to make a qualitative leap in the development of AMR refrigeration.

Rethinking of controller placement problem from static optimization to multi-objective dynamic optimization

The controller placement problem (CPP) is modeled as a dynamic multi-objective combinatorial optimization problem (DMOCPP). A novel algorithm, the Dynamic Multi-Objective Controller Placement Algorithm (DMOCPA), based on the multi-population, multi-objective quantum-inspired Salp Swarm Algorithm (MMQSSA), is introduced to solve the DMOCPP. The proposed work advances the field of traditional networking by addressing the issues that arise when networks are highly dynamic and complex.

Multi-objective path planning for environmental monitoring using an autonomous surface vehicle

Intelligent environmental monitoring is an exciting and impactful use case of autonomous vehicles. In this work, we explore measuring water quality parameters via a battery-powered Autonomous Surface Vehicle to obtain an accurate spatial model of the quality of a water resource while minimizing the distance traveled (alternatively, time, or battery consumption). We develop an objective function for this class of problems and propose an evolutionary multi-objective optimization algorithm that selects the measurement locations while using a quick novel construction heuristic for determining the path. Our method leverages new crossover and mutation operators that seem to be of independent interest. We demonstrate the efficacy of the proposed approach in obtaining results for two different waterbodies: Lake Ypacarai in Paraguay, and Mar Menor in Spain.

Public transport timetable and charge optimization using multiple electric buses types

Electrification of transportation brings novel opportunities and challenges in designing new sustainable public transport systems. The adoption of electric buses implies the need for new models that consider the autonomy of batteries, charging opportunities, users' demand, and the capacity of vehicles, among others. We model the problem of a bus lane in Los Angeles as a combinatorial optimization problem and use the available information to obtain the optimal timetable and fleet of buses. For that, an accurate mathematical model to calculate the consumption of each type of bus is defined, and the problem is solved with a genetic algorithm. Thanks to the strategic schedules found by the proposed approach, the buses end the day with 26.5% of their energy on average, just 6.5% over the minimum safety threshold (20%) required to preserve the battery state of health. Our approach optimizes the number of buses needed and their battery utilization based on the energy consumption per trip, regenerative braking, and overnight recharging.

Adapting mutation and recombination operators to range-aware relations in real-world application data

Optimisation problems with higher-dimensional search spaces usually come not only with equality or inequality constraints, but also with dependencies between the different variables. In real-world applications, especially in experimental data from the material sciences, these relations, as well as the constraints, may not hold over the entire search space but only in certain areas, while other constraints or relations hold in different areas. We build on correlation-aware mutation and recombination operators used in genetic algorithms and adjust them so that they can deal with area-specific constraints and relations. This behaviour can be configured by a domain expert using a domain-specific language. Our approach is evaluated with well-known benchmark functions, carefully designed distributions, and data description files, and shows an improved capability of generating feasible solutions in the population.

Exploration of unknown environments via evolution of behavioral and morphological properties of miniaturized sensory agents

In this work, we propose a framework to evolve the morphology and behavior of resource-constrained miniaturized sensory agents. One key application of this framework is the exploration of unknown environments. Consequently, we demonstrate its ability to solve multi-objective optimization problems by applying it to an environment exploration task. The goal is to find the position of a pollution source in a U-shaped fluid pipe with unknown fluid dynamics properties. First, morphological evolution finds the optimal physical shape for an agent so that it passes through six target points inside the pipe, i.e., it follows a certain trajectory. Based on that, behavioral evolution then optimizes the agent's sensor controller and tries to solve a Pareto optimization problem whose objectives are (1) to accurately measure chemical particle concentrations, (2) to measure the agent's depth in the environment, and (3) to minimize the power consumed by the sensors by altering their operation frequencies. We executed 24,000 simulations; the best agent hit the target points with 91% accuracy and reached the maximum possible performance in the first and second objectives while reducing power usage by 86.6% compared to a fixed frequency rate.

Genetic heterogeneity analysis using genetic algorithm and network science

Genome-wide association studies (GWAS) have linked thousands of genetic variants to the susceptibility of many common human diseases. However, the genetic explanations of diseases are often heterogeneous, imposing a substantial challenge for GWAS. We propose a feature construction method using a genetic algorithm (GA) to recognize the heterogeneous risk effects of different groups of genetic variables. Multiple GA-based feature selection runs are used to collect an ensemble of high-performing feature subsets. We generate a feature co-selection network from the ensemble, where nodes represent genetic variables and edges represent their co-selection frequencies. A new synthetic feature, the community risk score (CRS), is created for each network community. The CRS quantifies the risk of a community of variables and allows for more effective heterogeneity analysis. We applied our method to two colorectal cancer GWAS datasets, one for training and the other for validation. We ran the GA-based feature selection on the training dataset and constructed the co-selection network. A CRS was then created for each community in the network. We identified three colorectal cancer subtypes using the CRSs and clustering algorithms on the validation dataset. Functional enrichment analysis of our results further highlighted gastric cancer related genes, tumor suppressors, and DNA methylation genes.
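A minimal sketch of how such a co-selection network can be assembled from an ensemble of selected subsets, using networkx and illustrative variable names; the paper's community detection and CRS computation details are not reproduced here:

```python
# Sketch: build a feature co-selection network from GA-selected subsets.
# Variable names ("SNP_a", ...) are illustrative placeholders only.
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Each GA-based feature selection run yields one high-performing subset.
selected_subsets = [
    {"SNP_a", "SNP_b", "SNP_c"},
    {"SNP_a", "SNP_b", "SNP_d"},
    {"SNP_c", "SNP_d"},
]

G = nx.Graph()
for subset in selected_subsets:
    for u, v in combinations(sorted(subset), 2):
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1  # edge weight = co-selection frequency
        else:
            G.add_edge(u, v, weight=1)

# Communities of frequently co-selected variables; one CRS per community.
for i, community in enumerate(greedy_modularity_communities(G, weight="weight")):
    print(f"community {i}: {sorted(community)}")
```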

A hyper-heuristic approach for artificial teeth scheduling

Finding efficient machine schedules in the area of teeth manufacturing is a challenging task, as complex constraints need to be fulfilled and multiple cost objectives should be minimized.

This paper investigates a hyper-heuristic solution approach for the artificial teeth scheduling problem which originates from real-life production sites. We propose low-level heuristic strategies which can be utilized by state-of-the-art selection-based hyper-heuristic strategies to efficiently solve large problem instances.

An extensive set of experiments on the benchmark instances shows that the proposed approach can improve results for several realistic scenarios.

An optimization strategy for the complex large-scale stockpile blending problem

The large-scale stockpile blending problem is a decision problem that seeks to maximize the valued materials in the final products by determining a schedule for blending materials from stockpiles and generating parcels. Resource capacity constraints are imposed on all stockpiles subject to a long-term mine schedule, and the downstream side and customer demand impose constraints on parcel characteristics. We adopt an existing continuous nonlinear model of the stockpile blending problem and introduce an optimization strategy to tackle the complex large-scale problem. The strategy splits the problem into several unit problems whose solutions depend on each other. An evolutionary algorithm is first used to solve each unit problem; then a selection operator chooses a set of solutions that is passed on to the next unit problem. This paper investigates the performance of different selection operators within the optimization strategy and of different evolutionary algorithms for solving the unit problems.
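A minimal sketch of this chaining of unit problems, with hypothetical solve_unit and select_k placeholders standing in for the EA and the selection operator described above; nothing here reflects the actual blending model:

```python
# Sketch of the decomposition strategy: unit problems are solved in order,
# each seeded with solutions selected from the previous unit.
import random

def solve_unit(unit_index, seeds, pop_size=20):
    """Placeholder EA run on one unit problem, seeded with inherited
    solutions; individuals are just random tuples for illustration."""
    population = [tuple(random.random() for _ in range(5)) for _ in range(pop_size)]
    return population + seeds  # ... EA iterations would refine these ...

def select_k(solutions, k=5):
    """Placeholder selection operator: keep the k best (dummy scoring)."""
    return sorted(solutions, key=sum, reverse=True)[:k]

inherited = []
for unit in range(3):  # three unit problems, solved sequentially
    solutions = solve_unit(unit, inherited)
    inherited = select_k(solutions)  # passed on to the next unit problem
```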

A surrogate-assisted multi-objective evolutionary algorithm for shelter locating and evacuation planning

With the increase of urban population, emergency evacuation is an important subject in the field of urban planning and management, and how to set up shelters efficiently is crucial for it. One challenge in the shelter location problem is how to handle uncertainties, e.g., the sudden unavailability of some evacuation shelters. To tackle this issue, this paper formulates the problem as a multi-objective optimization problem. Due to the uncertainty, evaluating the objective function value of a plan is very time-consuming, as many simulations are required. To address this, we develop a surrogate-assisted multi-objective evolutionary algorithm. Instead of using a regression surrogate model to approximate the objective function values, the proposed method uses Levenberg-Marquardt back-propagation (LMBP) to predict whether new solutions are better than the current Pareto front. To verify the performance of the proposed approach, we perform numerical experiments on several urban models. The experimental results demonstrate the effectiveness of the proposed method.

POSTER SESSION: Search-based software engineering

Evolutionary-based automated testing for GraphQL APIs

The Graph Query Language (GraphQL) is a powerful language for API manipulation in web services. It has recently been introduced as an alternative solution for addressing the limitations of RESTful APIs. This paper introduces an automated solution for testing GraphQL APIs. We present a full framework for automated API testing, from schema extraction to test case generation. Our approach is based on evolutionary search: test cases are evolved to intelligently explore the solution space while maximizing code coverage criteria. The proposed framework is implemented and integrated in the open-source EVOMASTER tool. Experiments on two open-source GraphQL APIs show a statistically significant improvement of the evolutionary approach compared to the baseline random search.

A bi-level evolutionary approach for the multi-label detection of smelly classes

This paper presents a new evolutionary method and tool called BMLDS (Bi-level Multi-Label Detection of Smells) that optimizes a population of classifier chains for the multi-label detection of smells. As a chain is sensitive to the order of labels (i.e., smell types), the chain induction task is framed as a bi-level optimization problem in which the upper level searches for the optimal order of each considered chain while the lower level generates the chains. This allows the interactions between smells to be taken into consideration in the multi-label detection process. The statistical analysis of the experimental results reveals the merits of our proposal with respect to several existing works.

A safety checking algorithm with multi-swarm particle swarm optimization

Model checking is a formal verification technique that automatically decides whether a software system conforms to its specification. One important feature of model checking is that it outputs a counterexample if a violating execution is detected. Traditional exhaustive techniques fail to complete the verification of large-scale systems because of a shortage of computational resources; this efficiency problem is called the state explosion problem. On the other hand, users of model checking want comprehensible, short outputs. Balancing efficiency and comprehensibility is not easy, however, since an exhaustive investigation of the system's executions is necessary to find a short counterexample. In this paper, we tackle this balancing problem with a search-based software engineering approach. We propose a novel verification algorithm using multi-swarm Particle Swarm Optimization. Our technical breakthrough is to use the specification to dynamically decompose the verification problem into small subproblems. Each swarm solves one of the subproblems efficiently, and the aggregation of the partial solutions found by the swarms provides sufficiently short counterexamples. Our numerical experiments show that the proposed algorithm outperforms traditional exhaustive techniques and state-of-the-art non-exhaustive techniques in terms of both efficiency and comprehensibility.

Combining GIN and PMD for code improvements

Software developers are increasingly dependent on question-and-answer portals and blogs for coding solutions. While such interfaces provide useful information, there are concerns that code hosted there is often incorrect, insecure, or incomplete. Previous work indeed detected a range of faults in code provided on Stack Overflow through the use of static analysis. Static analysis may go a long way towards quickly establishing the health of software code available online, and mechanisms that enable rapid automated program improvement may then enhance such code. Accordingly, we present this proof of concept. We use the PMD static analysis tool to detect performance faults for a sample of Stack Overflow Java code snippets, before performing mutations on these snippets using GIN. We then re-analyse the performance faults in these snippets after the GIN mutations. GIN's RandomSampler was used to perform 17,986 unique line and statement patches on 3,034 snippets, and PMD violations were removed from 770 patched versions. Our outcomes indicate that static analysis techniques may be combined with automated program improvement methods to enhance publicly available code with very few resource requirements. We discuss our planned research agenda in this regard.

Towards an interactive ranking operator for NSGA-II

Knowing the decision maker's (DM) preferences during the search process may help algorithms find solutions adequate to the DM's profile. For this purpose, some interactive approaches allow the DM to evaluate solutions during the search process, rating them with scores. These scores represent the adequacy of the solutions with respect to the DM's preferences and should influence the evolution of the search algorithm. Nevertheless, in existing interactive approaches, this prioritization is partial and/or specific to a problem domain. In this context, this work proposes an initial version of an interactive ranking operator for NSGA-II whose goal is to support the ranking of solutions considering both the DM's preferences and the fitness of the solutions, for any software engineering domain. To validate the proposal, we incorporate this operator in an interactive approach for Product Line Architecture design. The results point out that the new ranking operator can properly deal with the DM's preferences, giving a greater chance of survival to solutions with higher scores without compromising the diversity of solutions.
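As an illustration only, a toy ranking that blends a solution's Pareto rank with a DM score; the linear weighting is an assumption, not the operator proposed in the paper:

```python
# Illustrative sketch: bias survival ranking towards solutions the DM
# scored highly, while still respecting non-dominated sorting ranks.
def interactive_rank(population, w_dm=0.5):
    """population: dicts with 'pareto_rank' (1 = best front) and
    'dm_score' (0..10, higher = preferred by the decision maker)."""
    def key(sol):
        # Lower key is better: blend Pareto rank with inverted DM score.
        return (1 - w_dm) * sol["pareto_rank"] + w_dm * (10 - sol["dm_score"])
    return sorted(population, key=key)

pop = [
    {"id": "A", "pareto_rank": 1, "dm_score": 3},
    {"id": "B", "pareto_rank": 2, "dm_score": 9},
    {"id": "C", "pareto_rank": 1, "dm_score": 9},
]
print([s["id"] for s in interactive_rank(pop)])  # ['C', 'B', 'A']
```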

POSTER SESSION: Theory

Counterexample to the best-case running time of efficient non-dominated sorting algorithm

Non-dominated sorting is one of the important steps in Pareto dominance-based multiobjective evolutionary algorithms. In this paper, we show that the best-case number of comparisons reported for the Efficient Non-dominated Sort (ENS) in the paper "An Efficient Approach to Nondominated Sorting for Evolutionary Multiobjective Optimization" by Zhang et al. is not correct. For this purpose, we have identified a scenario in which the approach performs fewer comparisons than reported in the paper.

TUTORIAL SESSION: Introductory tutorials

A (biased) introduction to benchmarking

"In the course of your work, you will from time to time encounter the situation where the facts and the theory do not coincide. In such circumstances [...], it is my earnest advice to respect the facts."

- Igor Sikorsky

Evolutionary diversity optimization for combinatorial optimization: tutorial at GECCO'22, Boston, USA

• The majority of approaches consider diversity in objective space.

• Ulrich and Thiele [UT11] considered diversity in the search space (see, e.g., Tamara Ulrich's PhD thesis [Ulr12]).

• Diversity with respect to other properties (features) could be useful in various domains.

Bayesian optimization: tutorial

Quality-diversity optimisation

A gentle introduction to theory (for non-theoreticians)

GECCO 2022#: embedding knowledge into optimization process

Difficulties in fair performance comparison of multiobjective evolutionary algorithms

Graph-based genetic programming

Selection hyper-heuristics

Automated algorithm configuration and design

Evolution of neural networks

Transfer learning in evolutionary spaces

Representations for evolutionary algorithms

Introductory mathematical programming for EC

Learning classifier systems: cognitive inspired machine learning for eXplainable AI

Generative hyper-heuristics

Model-based evolutionary algorithms: GECCO 2022 tutorial

Graybox optimization and next generation genetic algorithms

Evolutionary computation for feature selection and feature construction

Evolutionary continuous dynamic optimization

TUTORIAL SESSION: Advanced tutorials

CMA-ES and advanced adaptation mechanisms

GECCO 2022 tutorial on benchmarking multiobjective optimizers 2.0

Constraint-handling techniques used with evolutionary algorithms

Evolutionary Algorithms (EAs) have been found successful in the solution of a wide variety of optimization problems. However, EAs are unconstrained search techniques. Thus, incorporating constraints into the fitness function of an EA is an open research area.
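As a minimal illustration of the most basic such technique, a static penalty function folds constraint violations into the fitness; the coefficients and example problem below are illustrative only:

```python
# Sketch of the classic static penalty approach to constraint handling:
# infeasible solutions pay a cost proportional to their violation.
def penalized_fitness(x, f, constraints, penalty=1e3):
    """f: objective to minimize; constraints: list of g with g(x) <= 0
    meaning feasible."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return f(x) + penalty * violation

# Example: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
fit = penalized_fitness(0.5, lambda x: x * x, [lambda x: 1.0 - x])
print(fit)  # 0.25 + 1000 * 0.5 = 500.25
```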

Benchmarking and analyzing iterative optimization heuristics with IOH profiler

Statistical analyses for multi-objective stochastic optimization algorithms: GECCO 2022 tutorial

Genetic improvement: taking real-world source code and improving it using computational search methods

Lexicase selection

Runtime analysis of population-based evolutionary algorithms

Evolutionary submodular optimisation: tutorial

Sequential experimentation by evolutionary algorithms

Theory and practice of population diversity in evolutionary computation

"The principle of divergence is simplicity itself: the more the coinhabitants of an area differ from each other in their ecological requirements, the less they will compete with each other; therefore natural selection will tend to favor any variation toward greater divergence" (Mayr, 1992)

Coevolutionary computation for adversarial deep learning

TUTORIAL SESSION: Specialized tutorials

Introduction to automated design of scheduling heuristics with genetic programming

Ant colony optimization for software engineers

Ant colony optimization imitates the foraging behaviour of ants as an optimization process to find the best path between the nest and a food source.
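In its classic Ant System form (given here for orientation; the tutorial itself may cover other variants), the central mechanism is the pheromone update on each edge $(i,j)$:

$$\tau_{ij} \leftarrow (1-\rho)\,\tau_{ij} + \sum_{k=1}^{m} \Delta\tau_{ij}^{k}, \qquad \Delta\tau_{ij}^{k} = \begin{cases} Q/L_k & \text{if ant } k \text{ traversed edge } (i,j),\\ 0 & \text{otherwise,} \end{cases}$$

where $\rho$ is the evaporation rate, $Q$ a constant, and $L_k$ the length of ant $k$'s tour.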

Optimization challenges at the european space agency

Decomposition multi-objective optimisation: current developments and future opportunities

Evolutionary computation and machine learning in security

Evolutionary computation and evolutionary deep learning for image analysis, signal processing and pattern recognition

WORKSHOP SESSION: Workshop analysing algorithmic behaviour of optimisation heuristics

Survivor selection in a crossoverless evolutionary algorithm

The plant propagation algorithm is a crossoverless population-based evolutionary algorithm that by default deploys the plus-selection method for survivor selection: combining the parent and offspring populations and then selecting the best popSize individuals. In this study, we explore eight different survivor selection methods (plus and comma selection, tournament selection with and without replacement, elitist tournament selection, linear ranking selection, and (elitist) roulette wheel selection) on 59 continuous benchmark test function instances and compare the results.
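For concreteness, a minimal sketch of the default plus selection and its comma counterpart, assuming fitness minimisation over (solution, fitness) pairs; the remaining six methods from the study are not shown:

```python
# Two classic survivor selection schemes; lower fitness is better.
def plus_selection(parents, offspring, pop_size):
    """(mu + lambda): parents compete with offspring for survival."""
    combined = parents + offspring
    return sorted(combined, key=lambda ind: ind[1])[:pop_size]

def comma_selection(parents, offspring, pop_size):
    """(mu, lambda): only offspring survive; needs len(offspring) >= pop_size."""
    return sorted(offspring, key=lambda ind: ind[1])[:pop_size]
```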

In the results, the tournament selection methods outperform PPA's default selection method on 83% to 88% of the 59 benchmark functions. The results also show that the performance of all selection methods diminishes as the dimension of the benchmark function instances increases. Analysis indicates that the tournament selection methods exploit more by decreasing population diversity. Future research should establish whether this also benefits performance on other benchmark functions and industrial problems.

Exactly characterizable parameter settings in a crossoverless evolutionary algorithm

The number of offspring and the population size greatly influence the performance of the Plant Propagation Algorithm, a crossoverless metaheuristic, when applied to the Euclidean traveling salesman problem. However, with large volumes of experimental data, a clearly characterizable relation emerges, facilitating an exact parameterization as well as opening up perspectives on a precisely predictable outcome. These results directly question its position between exact, approximation, and PTAS algorithms.

Although the metaheuristic is thereby officially parameter sensitive when deployed on the Euclidean traveling salesman problem, it appears largely insensitive to different problem instances. This is very different from, indeed completely opposite to, earlier results on continuous optimization. Why is that, and what does it mean for our practices in algorithmic benchmarking?

The effect of decoding fairness on particle swarm optimization for the p-median problem

Solution encoding and decoding have a direct impact on metaheuristic optimization methods. The mapping between search and solution spaces outlines the conditions for the metaheuristics and affects their ability to solve particular problems. The issue of encoding becomes especially pronounced when continuous metaheuristics are applied to discrete problems, in particular combinatorial problems with strict positional dependencies in solution representations. This work takes a closer look at the decoding of combinations (fixed-length subsets) in continuous metaheuristics, demonstrates the inherent bias of simple combination decoding, and studies the effect of fair decoding in the context of the particle swarm optimization algorithm and the p-median problem.
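As an illustration of the kind of simple combination decoding under study, here is a sketch of an argsort-style decoder mapping a particle's continuous position vector to a fixed-length subset of indices; the decoder design is an assumption, and the paper's fair decoding variant is not reproduced:

```python
# Decode a continuous position vector into a fixed-length subset:
# take the indices of the k largest components.
import numpy as np

def decode_subset(position, k):
    """position: continuous vector, one component per candidate item."""
    return {int(i) for i in np.argsort(position)[-k:]}

rng = np.random.default_rng(0)
particle = rng.uniform(-1.0, 1.0, size=10)
print(decode_subset(particle, k=3))  # indices of the 3 largest components
```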

Examining algorithm behavior using recurrence quantification and landscape analyses

Differences in performance between algorithms can be attributed to the interaction between their unique rule sets and the characteristics of the instance's landscape. However, understanding this interaction can be difficult because algorithms are often composed of multiple elements and, in the worst cases, are described using opaque notation and metaphors. In this paper, we introduce a methodology for the behavioral analysis of optimization algorithms, based on comparing algorithm dynamics on a given problem instance. At the methodology's core lies the hypothesis that if two algorithms with the exact same initial conditions have similar dynamics, then their rule sets are also similar. An examination of Grey Wolf Optimization shows that it exhibits a bias leading to similar behavioral patterns regardless of the function.

Dynamic computational resource allocation for CFD simulations based on pareto front optimization

Computational Fluid Dynamics (CFD) simulations can be extremely computationally demanding and usually rely on high-performance computing (HPC) using both CPU and GPU resources. Modeling the behavior of bubble size distributions often leads to a symmetric function evaluation problem. This paper proposes a dynamic computational resource allocation based on Pareto-optimal solutions. The solutions are obtained by formulating the resource-constrained symmetric function evaluation problem as a multi-objective problem. After the Pareto front is obtained, we suggest a dynamic method for selecting the solutions that utilize the existing resources. To solve the multi-objective problem, ϵ-MOEA, an algorithm known to obtain a good diversity of Pareto-front solutions, is applied. As the formulated problem is new, brute-force search and two heuristics designed specifically for this problem are implemented and tested to serve as baselines. The methods are tested and compared on three dimensionalities of the problem. The results show that ϵ-MOEA can successfully approximate the Pareto front, allowing the resources to be utilized optimally at each simulation time step.

Using structural bias to analyse the behaviour of modular CMA-ES

The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is a commonly used iterative optimisation heuristic for optimising black-box functions. CMA-ES comes in many flavours with different configuration settings. In this work, we investigate whether CMA-ES suffers from structural bias and which modules and parameters affect the strength and type of that bias. Structural bias occurs when an algorithm, or a component of it, biases the search towards a specific direction in the search space irrespective of the objective function. In addition to this investigation, we propose a method to assess the relationship between structural bias and the performance of configurations with different types of bias on the BBOB suite of benchmark functions. Surprisingly for such a popular algorithm, 90.3% of the 1,620 CMA-ES configurations were found to exhibit structural bias. Some interesting patterns between module settings and bias types are presented, and further insights are discussed.

WORKSHOP SESSION: Workshop black box optimization benchmarking

Constrained blackbox optimization with the NOMAD solver on the COCO constrained test suite

The context of this work is constrained blackbox optimization. It describes the mesh adaptive direct search (MADS) derivative-free optimization algorithm using the progressive barrier strategy to handle quantifiable and relaxable constraints. Through its implementation in the NOMAD solver, MADS is tested on the new bbob-constrained suite of analytical constrained problems from the COCO platform, and compared with the CMA-ES heuristic. Computational tests are illustrated with the postprocessing graphs from the COCO platform, as well as with data profiles, an established tool in the derivative-free optimization community, adapted here for the constrained case. The results illustrate that researchers must be very careful with the use of these tools, which are complementary and should ideally be used together. Their many variations may show different outcomes, and hence many graphs are necessary in order to provide the best overall view.

Benchmarking several strategies to update the penalty parameters in AL-CMA-ES on the bbob-constrained testbed

In this paper, we benchmark several versions of a population-based evolution strategy with covariance matrix adaptation, handling constraints with an Augmented Lagrangian fitness function. The versions only differ in the strategy to adapt the penalty parameter of the fitness function. We compare the resulting algorithms, AL-CMA-ES, with random search and Powell's derivative-free COBYLA on the recently released bbob-constrained test suite for constrained continuous optimization in dimensions ranging from 2 to 40. The experimental results allow identifying classes of problems where one algorithm is more advantageous to use. They also reveal features of the merit function used for performance assessment and in particular situations where even on simple problems the targets can be hard to meet for algorithms based on Lagrange multipliers.
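For orientation, one common form of an augmented Lagrangian fitness for inequality constraints $g_i(x) \le 0$, with multipliers $\lambda_i$ and penalty parameters $\omega_i$, is given below; the specific penalty-parameter update strategies benchmarked in the paper are not reproduced here:

$$h(x) = f(x) + \sum_i \begin{cases} \lambda_i\, g_i(x) + \dfrac{\omega_i}{2}\, g_i(x)^2 & \text{if } \lambda_i + \omega_i\, g_i(x) \ge 0,\\[4pt] -\dfrac{\lambda_i^2}{2\,\omega_i} & \text{otherwise.} \end{cases}$$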

Benchmarking of two implementations of CMA-ES with diagonal decoding on the bbob test suite

In this paper, we present a comparative benchmark of two implementations of CMA-ES, both with and without diagonal decoding. The benchmarked variants of CMA-ES with diagonal decoding adaptively split the update of the covariance matrix into an update with the original CMA-ES method and an update with the separable CMA-ES method. Thus, diagonal decoding should allow for improved performance on separable functions with minimal loss on nonseparable ones. To gain insight into how diagonal decoding affects CMA-ES runs, the performance gain or loss due to diagonal decoding relative to the original CMA-ES was assessed on bbob problems using the COCO platform. We were also interested in variances that might emerge from differences between the implementations. The data presented in this paper show improved performance of CMA-ES on separable functions when using diagonal decoding, without any apparent loss on nonseparable ones. In addition, a few performance variances were spotted on the weakly structured functions, which appeared uncorrelated with the use of diagonal decoding. However, as the data suggest, they can be traced back to implementation differences, such as stopping conditions that may result in different runs.

Benchmarking CMA-ES with margin on the bbob-mixint testbed

The CMA-ES with Margin (CMA-ESwM) is a recently proposed CMA-ES variant for mixed-integer black-box optimization (MI-BBO) that introduces a lower bound on the marginal probability associated with integer variables. The CMA-ESwM shows promising performance compared to existing methods on simple benchmark functions. However, its performance has not been comprehensively investigated on other function classes, such as multimodal ones. In this work, we investigate the performance of the CMA-ESwM on the bbob-mixint testbed, which includes problems with various properties for MI-BBO. The experimental results show that the CMA-ESwM outperforms the other MI-BBO methods at higher dimensions, while its performance at low dimensions is competitive with the compared methods.

Benchmarking ϵMAg-ES and BP-ϵMAg-ES on the bbob-constrained testbed

The ϵMAg Evolution Strategy (ϵMAg-ES) and its bi-population version (BP-ϵMAg-ES), which performed well on the CEC 2018 and CEC 2020 benchmark sets, are evaluated using the COCO bbob-constrained 2.6.2 benchmark release. Furthermore, the performance of Matlab's fmincon is considered as a baseline. Unlike the results of the CEC competitions, the outcomes obtained on the COCO 2.6.2 test set do not reveal any advantage of using a bi-population approach. The reasons for this observation still need to be investigated in future research.

Benchmarking an algorithm for expensive high-dimensional objectives on the bbob and bbob-largescale testbeds

We report benchmarks for a recently developed algorithm on the bbob and bbob-largescale benchmarking testbeds in COCO. This algorithm is designed for expensive high-dimensional multimodal objectives (such as arise in hyperparameter optimization or via simulations), and this regime introduces challenges for benchmarking. In particular, while the COCO experimental procedure yields evidence that this algorithm improves on the state of the art, the COCO framework also exhibits shortcomings for the very low evaluation budgets involved here. Consequently, we also report the results of ad hoc experiments that demonstrate a more obvious if less rigorous advantage for the algorithm over its nearest competition.

Benchmarking some variants of the CMAES-APOP using keeping search points and mirrored sampling combined with active CMA on the BBOB noiseless testbed

In this paper, we evaluate some variants of CMAES-APOP that use two additional methods. The first is the combination of mirrored sampling with active covariance matrix adaptation. The second is to keep some search points from the previous iteration for the evolution of the current iteration when the population size is large. Moreover, we propose a strategy to reduce the step-size when the population size stays at its upper bound for several consecutive iterations. We show that these techniques can enhance the performance on some multi-modal functions, especially the weakly-structured ones, as well as the overall performance in all dimensions.
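For concreteness, a minimal sketch of mirrored sampling, in which every sampled perturbation is paired with its negation; step-size and covariance adaptation details of the actual algorithm are omitted:

```python
# Mirrored sampling: candidates come in antithetic pairs around the mean m,
# i.e. m + sigma*z and m - sigma*z for each sampled perturbation z.
import numpy as np

def mirrored_offspring(mean, sigma, n_pairs, rng):
    dim = mean.shape[0]
    z = rng.standard_normal((n_pairs, dim))
    return np.concatenate([mean + sigma * z, mean - sigma * z])

rng = np.random.default_rng(42)
offspring = mirrored_offspring(np.zeros(5), 0.3, n_pairs=4, rng=rng)
print(offspring.shape)  # (8, 5): four mirrored pairs
```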

Benchmarking the hooke-jeeves method, MTS-LS1, and BSrr on the large-scale BBOB function set

This paper investigates the performance of three black-box optimizers that exploit separability (the Hooke-Jeeves method, MTS-LS1, and BSrr) on the 24 large-scale BBOB functions. Although BSrr was not specifically designed for large-scale optimization, the results show that it achieves state-of-the-art performance on the five separable large-scale BBOB functions. The results also show that asymmetry significantly influences the performance of MTS-LS1, and that the Hooke-Jeeves method performs better than MTS-LS1 on the unimodal separable BBOB functions.

WORKSHOP SESSION: Workshop evolutionary computation and explainable AI

Improving the search of learning classifier systems through interpretable feature clustering

Learning Classifier Systems (LCS) are a well-known machine learning method that produces sets of interpretable rules in order to solve a variety of problems. Despite this, a common issue these systems run into is the creation of unhelpful rules, caused by multiple features in the data representing similar areas of knowledge. While we may logically know that such rules will not be useful in conjunction with each other, this is much more difficult for the algorithm to know innately.

This paper presents an exploration of using clustering algorithms for feature selection in LCS, selecting features that represent each major cluster of feature information. Combined with the innate power of LCS at finding nonlinear decision boundaries, these selected features can achieve results close to those of the full feature set while reducing the training time required to reach them. The feature selection performed is highly interpretable, allowing different features to be selected while maintaining the information spread of the feature subset.

Learning Classifier Systems (LCS) [1] are an Evolutionary Computation method that learns a population of rules in order to perform tasks. One common application of LCS systems is classification, where each rule makes an individual mapping from an input to a class, and these heuristics are combined to form an overall class prediction.
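A minimal sketch of the general idea of clustering correlated features and keeping one representative per cluster; the concrete clustering algorithm and the choice of representative here are assumptions, not the paper's method:

```python
# Group correlated features via hierarchical clustering on a correlation
# distance, then select one representative feature per cluster.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_select(X, n_clusters):
    """X: (samples, features). Returns one feature index per cluster."""
    corr = np.corrcoef(X, rowvar=False)        # feature-feature correlation
    dist = 1.0 - np.abs(corr)                  # similar features are "close"
    Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    # Keep the first feature of each cluster as its representative.
    return [int(np.flatnonzero(labels == c)[0]) for c in np.unique(labels)]

X = np.random.default_rng(1).normal(size=(100, 6))
X[:, 3] = X[:, 0] + 0.01 * X[:, 3]             # make features 0 and 3 redundant
print(cluster_select(X, n_clusters=4))
```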

The intersection of evolutionary computation and explainable AI

In the past decade, Explainable Artificial Intelligence (XAI) has attracted great interest in the research community, motivated by the need for explanations in critical AI applications. Some recent advances in XAI are based on Evolutionary Computation (EC) techniques, such as Genetic Programming; we call this trend EC for XAI. We argue that the full potential of EC methods has not yet been exploited in XAI, and we call on the community for future efforts in this field. Likewise, we find that there is a growing concern in EC regarding the explanation of population-based methods, i.e., their search process and outcomes. While some attempts have been made in this direction (although, in most cases, not explicitly framed in the context of XAI), we believe that there are still several research opportunities and open research questions that, in principle, may promote a safer and broader adoption of EC in real-world applications; we call this trend XAI within EC. In this position paper, we briefly overview the main results in the two trends above, and suggest that the EC community may play a major role in the achievement of XAI.

Interpretable AI for policy-making in pandemics

Since the first wave of the COVID-19 pandemic, governments have applied restrictions in order to slow down its spreading. However, creating such policies is hard, especially because the government needs to trade off the spreading of the pandemic against the economic losses. For this reason, several works have applied machine learning techniques, often with the help of special-purpose simulators, to generate policies that were more effective than the ones devised by governments. While the performance of such approaches is promising, they suffer from a fundamental issue: since they are based on black-box machine learning, the resulting policies cannot be analyzed or tested, and thus they are not trustworthy, which limits their real-world applicability. In this work, we employ a recently developed hybrid approach, which combines reinforcement learning with evolutionary computation, to generate interpretable policies for containing the pandemic. These policies, trained on an existing simulator, aim to reduce the spreading of the pandemic while minimizing the economic losses. Our results show that our approach is able to find solutions that are extremely simple, yet very powerful. In fact, our approach achieves significantly better performance (in simulated scenarios) than both previous work and government policies.

Towards the evolutionary assessment of neural transformers trained on source code

In recent years, deep learning has become popular for solving tasks in a wide range of domains. With this growing diffusion, combined with architectures becoming increasingly complex and sophisticated, understanding how deep models make their predictions is now a crucial and challenging research issue.

In this work, we are interested in analysing the behaviour of networks that deal with source code, and we address the problem along two experimental directions. In the first, we study the activations of the neurons of a transformer trained to detect software vulnerabilities, so as to identify if (and, eventually, where) some human-understandable concepts emerge in the network. In the second, we generate programs by applying a grammar-based evolutionary algorithm with a fitness function that favours individuals which stimulate (or weaken) the activations of neurons where given concepts mainly emerge. We then study the output of the network on sets of such evolved programs, i.e., how evolutionary pressure along the direction of a concept affects the prediction.

We finally discuss how this combination of evolutionary algorithms with fitness functions derived from the neural activations can be effective for explaining the decision process of deep models, and we suggest further research directions.

Evolving explainable rule sets

Most AI systems work like black boxes tasked with generating reasonable outputs for given inputs. Many domains, however, have explainability and trustworthiness requirements that these approaches do not fulfil. Various methods exist to analyze or interpret black-box models after training, but in sensitive domains where white-box models are mandated, a better choice is to use transparent models in the first place. In this work, we present a method that evolves explainable rule sets using inherently transparent ordinary logic. We showcase some sample domains we tackled and discuss their major desirable properties, such as bias detection, knowledge discovery, and modifiability, to name a few.

Towards explainable metaheuristic: mining surrogate fitness models for importance of variables

Metaheuristic search algorithms look for solutions that either maximise or minimise a set of objectives, such as cost or performance. However, most real-world optimisation problems are nonlinear, with complex constraints and conflicting objectives.

The process by which a GA arrives at a solution remains largely unexplained to the end user, and a poorly understood solution will dent the confidence that users have in it. We propose that investigating the variables that strongly influence solution quality, and their relationships, would be a step toward explaining the near-optimal solution presented by a metaheuristic.

Using four benchmark problems, we train a surrogate model on the population data generated by a Genetic Algorithm (GA) and investigate how the surrogate learns the search space. We compare what the surrogate has learned after being trained on the population data of the first generation with a surrogate model trained on the population data from all generations.

We show that the surrogate model picks out key characteristics of the problem as it is trained on population data from each generation. By mining the surrogate model we can build a picture of the GA's learning process, and thus an explanation of the solution it presents. The aim is to build the end user's trust and confidence in the solution presented by the GA and to encourage adoption of the model.
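A minimal sketch of the mining step, assuming a random-forest surrogate (one plausible choice, not necessarily the authors') fitted to logged (genome, fitness) pairs:

```python
# Fit a surrogate to GA population data, then read off variable importances.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
# Stand-in for logged GA population data: rows are genomes, y their fitness.
X = rng.uniform(-5, 5, size=(500, 4))
y = 10 * X[:, 0] ** 2 + X[:, 1] ** 2 + 0.1 * X[:, 2]  # variable 3 irrelevant

surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for i, imp in enumerate(surrogate.feature_importances_):
    print(f"variable {i}: importance {imp:.3f}")  # variable 0 should dominate
```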

An explainable visualisation of the evolutionary search process

Comprehension of the Evolutionary Algorithm (EA) search process is often hindered by the lack of transparency inherent to black-box EAs, which affects algorithm enhancement and hyperparameter optimisation. In this work, we develop algorithm insight by introducing the Population Dynamics Plot (PopDP), a novel and intuitive visualisation of the population of solutions, the parent-offspring lineage, the solution perturbation operators, and the journey of the search process. We apply PopDP to NSGA-II to demonstrate the insight attained and its effectiveness for visualising algorithm search on a series of discrete dual- and many-objective knapsack problems of different complexities; our results demonstrate that the method produces visualisations in which the lineage of solutions can be clearly seen. We also assess the efficacy of the proposed explainable visualisation against emerging approaches to benchmarking explainable AI methods and consider the accessibility of the resulting visualisations.

WORKSHOP SESSION: Workshop evolutionary computation and decision making

Desirable properties of performance indicators for assessing interactive evolutionary multiobjective optimization methods

Interactive methods support decision makers in finding the most preferred solution of a multiobjective optimization problem. They iteratively incorporate the decision maker's preference information to find the best balance among conflicting objectives. Several interactive methods have been developed in the literature. However, choosing the most suitable interactive method for a given problem can prove challenging, and appropriate indicators are needed to compare interactive methods. Some indicators exist for a priori methods, where preferences are provided at the beginning of the solution process; we present numerical experiments that illustrate why these indicators are not suitable for interactive methods. As the main contribution of this paper, we propose a set of desirable properties of indicators for assessing interactive methods, as a first step towards filling this gap in the literature. We discuss each property in detail and provide simple examples to illustrate their behavior.

Preliminary results of advanced heuristic optimization in the risk-based energy scheduling competition

In this paper, multiple evolutionary algorithms are applied to solve an energy resource management problem in the day-ahead context involving a risk-based analysis, as proposed in the 2022 competition on evolutionary computation. We test numerous evolutionary algorithms for a risk-averse day-ahead operation to show preliminary results for the competition, following the competition guidelines. Results show that the HyDE algorithm obtains a better solution with lower costs when compared to the other tested algorithms, thanks to its minimization of the worst-scenario impact.

R-MBO: a multi-surrogate approach for preference incorporation in multi-objective bayesian optimisation

Many real-world multi-objective optimisation problems rely on computationally expensive function evaluations. Multi-objective Bayesian optimisation (BO) can be used to reduce the computation time needed to find an approximated set of Pareto optimal solutions. In many real-world problems, a decision-maker has some preferences regarding the objective functions. One approach to incorporating preferences in multi-objective BO is to use a scalarising function and build a single surrogate model on it (the mono-surrogate approach). This approach has two major limitations. First, the fitness landscape of the scalarising function and that of the objective functions may not be similar. Second, the approach assumes that the scalarising function distribution is Gaussian, so that a closed-form expression of an acquisition function, e.g., expected improvement, can be used. We overcome these limitations by building independent surrogate models on each objective function (the multi-surrogate approach) and showing that the distribution of the scalarising function is not Gaussian; we approximate it instead with a generalised extreme value distribution. We present an a priori multi-surrogate approach to incorporate desirable objective function values (or a reference point) as the preferences of a decision-maker in multi-objective BO. The results and a comparison with the existing mono-surrogate approach on benchmark and real-world optimisation problems show the potential of the proposed approach.
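For reference, under the Gaussian assumption mentioned above, the expected improvement acquisition function has the familiar closed form (stated here for minimisation, with incumbent best $f^{*}$, predictive mean $\mu(x)$, and standard deviation $\sigma(x)$):

$$\mathrm{EI}(x) = \bigl(f^{*} - \mu(x)\bigr)\,\Phi(z) + \sigma(x)\,\varphi(z), \qquad z = \frac{f^{*} - \mu(x)}{\sigma(x)},$$

where $\Phi$ and $\varphi$ denote the standard normal CDF and PDF; it is precisely this closed form that is lost when the scalarising function is not Gaussian.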

Interactive MOEA/D with multiple types of preference information

Multiobjective optimization has the ultimate aim of helping a decision-maker find a satisfying solution to a problem with multiple conflicting objectives. Usually, multiobjective evolutionary algorithms have been utilized to approximate the complete Pareto optimal set, although some such algorithms incorporate preference information to direct the search toward a region of interest. Interactive methods allow decision-makers to learn more about the problem and update their preference information iteratively. Although preference information can be represented in multiple ways, most multiobjective evolutionary algorithms restrict the decision-maker to a single type of preference information. This article proposes an interactive version of MOEA/D that incorporates three types of preference information: reference points, preferred solutions, and preferred ranges. The reference vectors assigned to the solutions that meet the preferences are kept in the population for the next iteration, while the remaining ones are rearranged. The proposed method is applied to solve a river pollution problem to show its potential to support decision-makers in finding a satisfying solution without limiting them to a single type of preference, which gives the decision-maker more flexibility in directing the search for the most preferred solution. In addition, we compare our method with interactive RVEA on some benchmark problems.

Interactive evolutionary multiobjective optimization with modular physical user interface

Incorporating the preferences of a domain expert, a decision-maker (DM), in solving multiobjective optimization problems has increased in popularity in recent years. The DM can choose to use different types of preferences depending on his/her comfort, requirements, or the problem being solved. Most papers proposing preference-based and interactive algorithms pay no attention to user interfaces and input devices; those that do use character- or graphics-based preference input methods. We propose the option of using a physical, tactile input device that gives the DM a better sense of control when providing his/her preferences. However, off-the-shelf hardware devices are not tailored to solving multiobjective optimization problems and provide many controls that may increase the DM's cognitive load. In this paper, we propose a fully modular physical user interface for inputting preference information when solving multiobjective optimization problems. The modularity allows each input module to be arranged in various ways depending on the algorithm, the DM's requirements, or the problem being solved. The device can be used with any computer and uses web-based visualizations. We demonstrate the potential of the physical interface by solving a real-world problem with an interactive decomposition-based multiobjective evolutionary algorithm.

WORKSHOP SESSION: Workshop evolutionary computation for the automated design of algorithms

Why functional program synthesis matters (in the realm of genetic programming)

In Genetic Programming (GP) systems, particularly those that target general program synthesis problems, it is common to use imperative programming languages to represent the evolving code. In this work, we consider the benefits of using a purely functional, rather than an imperative, approach. We then demonstrate some of these benefits via an experimental comparison of the pure functional language Haskell and the imperative language Python when solving program synthesis benchmarks within a grammar-guided GP system. Notably, we find that the Haskell programs yield a higher success rate on unseen data and that the evolved programs often have a higher degree of interpretability. We also discuss the broader issues of adapting a grammar-based GP system to functional languages and highlight some of the challenges involved in carrying out comparisons using existing benchmark suites.

Reinforcement learning based adaptive metaheuristics

Parameter adaptation, that is, the capability to automatically adjust an algorithm's hyperparameters depending on the problem being faced, is one of the main trends in evolutionary computation applied to numerical optimization. While several handcrafted adaptation policies have been proposed over the years to address this problem, only a few attempts have been made so far to apply machine learning to learn such policies. Here, we introduce a general-purpose framework for performing parameter adaptation in continuous-domain metaheuristics based on state-of-the-art reinforcement learning algorithms. We demonstrate the applicability of this framework on two algorithms, namely Covariance Matrix Adaptation Evolution Strategies (CMA-ES) and Differential Evolution (DE), for which we learn, respectively, adaptation policies for the step-size (for CMA-ES) and for the scale factor and crossover rate (for DE). We train these policies on a set of 46 benchmark functions at different dimensionalities, with various inputs to the policies, in two settings: one policy per function, and one global policy for all functions. Compared, respectively, to the Cumulative Step-size Adaptation (CSA) policy and to two well-known adaptive DE variants (iDE and jDE), our policies are able to produce competitive results in the majority of cases, especially in the case of DE.
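To make the setup concrete, here is a conceptual sketch of such a control loop, with a placeholder policy standing in for a trained RL agent and a stubbed DE generation; the state, action, and reward definitions are assumptions for illustration, not the paper's framework:

```python
# RL-driven parameter adaptation for DE (conceptual): a policy observes
# simple search statistics and outputs (F, CR) for the next generation.
import random

def random_policy(state):
    """Placeholder for a trained RL policy mapping state -> (F, CR)."""
    return random.uniform(0.4, 0.9), random.uniform(0.1, 0.9)

def de_generation(population, F, CR):
    """Placeholder DE generation; returns (population, best fitness)."""
    return population, min(sum(ind) for ind in population)

population = [[random.random() for _ in range(5)] for _ in range(20)]
prev_best = min(sum(ind) for ind in population)
for gen in range(50):
    state = (gen, prev_best)            # observed search statistics
    F, CR = random_policy(state)        # action: DE control parameters
    population, best = de_generation(population, F, CR)
    reward = prev_best - best           # improvement, fed back to the trainer
    prev_best = best
```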

WORKSHOP SESSION: Workshop enhancing generative machine learning with evolutionary computation

Exploring expression-based generative adversarial networks

Before the advent of Generative Adversarial Networks (GANs) in 2014, Evolutionary Computation approaches made up the majority of the state of the art in image generation. Such approaches have since been replaced with adversarial models using deep convolutional networks specifically tailored for GPU computing. Nevertheless, motivated by recent successes in GPU-accelerated Genetic Programming (GP), and given the well-established disposition of expression-based solutions towards image evolution, the prospect of using symbolic expressions, instead of neural networks, as generators for GANs seems enticing.

In this paper, we propose a novel GAN model called TGPGAN, in which the traditional generator network is replaced with a GP approach. The generator iteratively evolves a population of expressions that are then passed to the discriminator module, which is trained with traditional backpropagation. Our experimental results show that it is possible to achieve results comparable to a typical Deep Convolutional GAN while benefiting from the flexibility enabled by an expression-based genotype. Some experimentation has also been performed to mitigate the computational cost of our model. This work serves as a proof of concept for the evolution of symbolic expressions within adversarial models.

COIL: Constrained optimization in learned latent space: learning representations for valid solutions

Constrained optimization problems can be difficult because their search spaces have properties not conducive to search, e.g., multimodality, discontinuities, or deception. To address such difficulties, considerable research has been performed on creating novel evolutionary algorithms or specialized genetic operators. However, if the representation that defines the search space could be altered so that it only permits valid solutions satisfying the constraints, the task of finding the optimum would become more feasible without any need for specialized optimization algorithms. We propose Constrained Optimization in Latent Space (COIL), which uses a VAE to generate a learned latent representation from a dataset comprising samples from the valid region of the search space according to a constraint, thus enabling the optimizer to optimize the objective in the new space defined by the learned representation. Preliminary experiments show promise: compared to an identical GA using a standard representation that cannot meet the constraints or find fit solutions, COIL with its learned latent representation can perfectly satisfy different types of constraints while finding high-fitness solutions.
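A minimal sketch of this kind of latent-space search loop, with a stand-in decoder and objective (both hypothetical); COIL's actual VAE training and constraint data are not shown:

```python
# A GA searches latent vectors; a trained VAE decoder maps them back to
# (ideally valid) solutions before fitness evaluation.
import numpy as np

def decoder(z):
    """Placeholder for the trained VAE decoder: latent -> solution space."""
    return np.tanh(z)  # stand-in mapping

def fitness(x):
    """Placeholder objective defined on the original solution space."""
    return -np.sum((x - 0.5) ** 2)

rng = np.random.default_rng(3)
latent_pop = rng.standard_normal((30, 8))    # GA individuals live here
scores = np.array([fitness(decoder(z)) for z in latent_pop])
elite = latent_pop[np.argsort(scores)[-5:]]  # selection in latent space
# ... mutation/crossover would then operate on the latent vectors ...
```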

Multi-target evolutionary latent space search of a generative adversarial network for human face generation

This article presents an evolutionary approach for multi-target synthesized human face image generation based on exploring the latent space of generative adversarial networks. The proposed approach seeks to generate different human face images that share similarities with two given target images. The optimization applies generative adversarial networks for face generation, facial recognition for similarity evaluation, and an ad-hoc evolutionary algorithm for exploring the search space. The main results show that realistic images are generated that properly blend the main features of the two target images and deceive a well-known facial recognition system.

Evolving SimGANs to improve abnormal electrocardiogram classification

Machine learning models often require a large amount of data in order to be successful. This is troublesome in domains where collecting real-world data is difficult and/or expensive. Data simulators do exist, but the data they produce does not sufficiently reflect real-world data due to factors such as a lack of real-world noise. Generative adversarial networks (GANs) have been modified to refine simulated image data to better fit real-world characteristics, using the SimGAN method. While evolutionary computing has been used for GAN evolution, there are currently no frameworks that can evolve a SimGAN. In this paper we (1) extend the SimGAN method to refine one-dimensional data, (2) modify Easy Cartesian Genetic Programming (ezCGP), an evolutionary computing framework, to create SimGANs that more accurately refine simulated data, and (3) create new feature-based quantitative metrics to evaluate refined data. We also use our framework to augment an electrocardiogram (ECG) dataset, a domain that suffers from the issues mentioned above. In particular, while healthy ECGs can be simulated, there are currently no simulators of abnormal ECGs. We show that by using an evolved SimGAN to refine simulated healthy ECG data to mimic real-world abnormal ECGs, we can improve the accuracy of ECG classifiers.

WORKSHOP SESSION: Workshop on evolutionary optimization in uncertainty quantification models

Evolutionary approach to model calibration with uncertainty: an application to breast cancer growth model

Dynamical systems in most scientific areas can be modelled using ordinary differential equations or difference equations. However, when analysing and simulating real-world phenomena, model and data uncertainty arises, mainly due to the exclusion of minor aspects of the phenomenon from the model or to experimental measurement error in the data. Therefore, more and more models take uncertainty and its quantification into account.

One type of such models are random models, in which the parameters are assumed to be random variables. However, this approach to capturing the uncertainty of phenomena has so far been little employed in real-world problems, both because of its inherent difficulty and because of the lack of data to support the correct choice of model parameter distributions. This problem can be addressed by designing probabilistic calibration algorithms capable of finding distributions of model parameters that explain and capture the uncertainty in the data.

In this sense, we propose here a technique based on the multi-objective particle swarm optimisation (MOPSO) algorithm to find the parameter probability distributions of a random model that capture the uncertainty in the data. To illustrate the method in a real application, a complete analysis is performed on a simple first-order difference model for breast cancer tumour growth.

Probability density function computation in evolutionary model calibration with uncertainty

Time-continuous dynamical systems governed by ordinary differential equations form a large and influential family of mathematical models used in physics, chemistry, biology, and engineering. When analyzing and simulating real-world phenomena, uncertainty in the model itself and the tolerance errors of measurement devices must be considered. This paper discusses the role of the Liouville-Gibbs equation in the calibration of mathematical models based on ordinary differential equations with uncertainty. A brief introduction to some numerical tools is given. Finally, these techniques and concepts are coupled with the multi-objective particle swarm optimization algorithm to calibrate a probabilistic breast cancer growth model with real-world data.
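For orientation (the paper's own derivation is not reproduced here), the Liouville (Liouville-Gibbs) equation for the probability density $f(x,t)$ of the state of a system $\dot{x} = g(x,t)$ takes the standard form

$$\frac{\partial f(x,t)}{\partial t} + \sum_{i=1}^{n} \frac{\partial}{\partial x_i}\bigl(g_i(x,t)\,f(x,t)\bigr) = 0,$$

which transports the density along the trajectories of the underlying ordinary differential equation.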

Approaching epistemic and aleatoric uncertainty with evolutionary optimization: examples and challenges

Uncertainty quantification is an emerging area in theory and applications, and there are various approaches for modeling and dealing with uncertainty. However, when modeling real-world problems, calibration under uncertainty is a relevant issue that has been little studied. We think that the applicability of evolutionary computation can be much more significant when dealing with uncertainty, due to the ability of these kinds of algorithms to find suitable solutions within reasonable time budgets. In this context, evolutionary computation approaches have not been investigated extensively, except for a few papers that successfully use metaheuristics and evolutionary algorithms to calibrate models with uncertainty. This paper aims to motivate researchers to study and propose evolutionary algorithms for calibrating models with uncertainty in real-world problems and to investigate new proposals. Uncertainty is present in data, measuring processes, evaluation, and models; hence it offers real challenges for the evolutionary computation community to propose more sophisticated algorithms and methods. In this paper, we give a general review of how uncertainty quantification has been treated in mathematical modeling, provide ideas on how to interpret uncertainty in specific problems, highlight its drawbacks, and show two case studies in medicine.

WORKSHOP SESSION: Workshop 11th international workshop on genetic improvement

Genetic improvement of shoreline evolution forecasting models

Coastal development and climate change are changing the geography of our coasts, while more and more people are moving towards the coasts. Recent advances in artificial intelligence allow for automatic analysis of observational data. Cartesian Genetic Programming (CGP) is a form of Genetic Programming (GP) that has been successfully used in a large variety of tasks including data-driven symbolic regression. We formulate the problem of shoreline evolution forecasting as a Genetic Improvement (GI) problem using CGP to encode and improve upon ShoreFor, an equilibrium shoreline prediction model, to study the effectiveness of CGP in GI in forecasting tasks. This work presents an empirical study of the sensitivity of CGP to a number of evolutionary configurations and constraints and compares the performances of the evolved models to the base ShoreFor model.

Towards evolution-based autonomy in large-scale systems

To achieve truly autonomous behaviour, systems need to adapt not only to previously unseen circumstances but also to previously unimagined ones. A hierarchical evolutionary system is proposed which is capable of displaying the emergent behaviour necessary to achieve this goal. This paper reports on the practical challenges encountered in implementing this proposed approach in a large-scale open-source system.

In "Evolutionary Autonomous Networks" [3], Harvey et al. present their vision for an autonomous network utilising online evolution to achieve guided emergent behaviour. Their approach presents a radical strategy to improve a network's ability to cope with uncertainty and change. The aim of this paper is to present the initial steps we have taken towards realising this vision by examining how evolutionary-based autonomy may be introduced into an example large-scale system-a content delivery network (CDN). This goal is not without its challenges as there are significant hurdles imposed by the design and implementation of the CDN which must be overcome.

Automatically exploring computer system design spaces

While much research has focused on using search to optimize software, it is possible to use these same approaches to optimize other parts of the computer system. With decreased costs in silicon customization, and the return of centralized systems carrying out specialized tasks, the time is right to begin thinking about tools to optimize systems for the needs of specific software targets. In the approach outlined, the standard Genetic Improvement process is flipped: source code is considered static, and the remainder of the computer system is altered to the needs of the software. The project proposed is preliminary research into combining grammar-based GP with an advanced computer architecture simulator to automatically design computer systems. I argue this approach has the potential to significantly improve the design of computer systems while reducing manual effort.

Opportunities for genetic improvement of cryptographic code

Cryptography is one of the main tools underlying the security of our connected world. Cryptographic code must achieve both high security requirements and high performance. Automatic generation and genetic improvement of such code are underexplored, making cryptographic code a prime target for future research.

With the proliferation of computers into all aspects of human life, the amount of sensitive data being processed keeps increasing. As cryptography is one of the main tools underlying the security of our modern and connected world, cryptographic software must meet not only high security requirements, but also exhibit excellent nonfunctional properties, such as high performance and low energy consumption. Hence, we see cryptography as a prime target domain for single- and multi-objective code optimization.

Amaru: a framework for combining genetic improvement with pattern mining

We present Amaru, a framework for Genetic Improvement utilizing Abstract Syntax Trees directly at the interpreter and compiler level. Amaru also enables the mining of frequent, discriminative patterns from Genetic Improvement populations. These patterns in turn can be used to improve the crossover and mutation operators to increase population diversity, reduce the number of individuals failing at run-time, and increase the number of successful individuals in the population.

Genetic improvement in the shackleton framework for optimizing LLVM pass sequences

Genetic Improvement is a search technique that aims to improve a given acceptable solution to a problem. In this paper, we present the novel use of genetic improvement to find problem-specific optimized LLVM Pass sequences. We develop a Pass-level edit representation in the linear genetic programming framework Shackleton to evolve the modifications to be applied to the default optimization Pass sequences. Our GI-evolved solution achieves a mean runtime improvement of 3.7% over the default LLVM optimization level '-O3', which targets runtime. The proposed GI method provides an automatic way to find a problem-specific optimization sequence that improves upon a general solution without any expert domain knowledge. In this paper, we discuss the advantages and limitations of the GI feature in the Shackleton Framework and present our results.

Dissecting copy/delete/replace/swap mutations: insights from a GIN case study

Research studies are increasingly critical of publicly available code due to evidence of faults. This has led researchers to explore ways to improve such code, with static analysis and genetic code improvement previously singled out. Previous work has evaluated the feasibility of these techniques, using PMD (a static analysis tool) and GIN (a program repair tool) to enhance Stack Overflow Java code snippets. Results reported in this regard pointed to the potential of these techniques, especially GIN's removal of PMD-flagged performance faults from 58 programs. In this study, we explore these mutations through a contextual lens to evaluate the promise of these techniques. The outcomes show that while the programs were syntactically correct after GIN's mutations (i.e., they compiled), many of GIN's mutations changed the semantics of the code, rendering its purpose questionable. However, certain code mutations tend to retain code semantics more than others. In addition, GIN's mutations at times affected PMD's parsing ability, potentially increasing false negatives. Overall, while these approaches may prove useful, full utility cannot be claimed at this time. Finally, we outline ways to improve the utility of these techniques and identify multiple future research directions.

The case for grammatical evolution in test generation

Generating tests for software is an important, but difficult, task. Search-based test generation is promising, as it reduces the time required from human experts, but it suffers from several problems and limitations, namely the inability to fully incorporate a tester's domain knowledge into the search, difficulty in creating very complex objects, and the problems associated with variable-length tests. This paper illustrates how Grammatical Evolution could address each of these concerns and provide a possible solution.

Leveraging fuzzy system to reduce uncertainty of decision making in software engineering automation

Decision making in automated GI-based software engineering tasks can significantly affect the performance of the system. However, modern SE usually presents high uncertainty in such decision-making processes due to the existence of multiple solutions that rely on heuristics. We propose to apply the theory of fuzzy systems to obtain a final decision with lower uncertainty and higher accuracy. We also present a motivating example and discuss the challenges and opportunities for applying fuzzy systems to SE tasks.

Py2Cy: a genetic improvement tool to speed up python

Due to its ease of use and wide range of custom libraries, Python has quickly gained popularity and is used by a wide range of developers all over the world. While Python allows for fast writing of source code, the resulting programs are slow to execute compared to programs written in other programming languages like C. One of the reasons for its slow execution is the dynamic typing of variables. Cython is an extension to Python which can achieve execution speed-ups through compiler optimization. One possibility for improvement is the use of static typing, which developers can add to Python scripts. To alleviate the need for manual effort, we create Py2Cy, a Genetic Improvement tool for automatically converting Python scripts to statically typed Cython scripts. To show the feasibility of improving runtime with Py2Cy, we optimize a Python script for generating Fibonacci numbers. The results show that Py2Cy is able to speed up the execution time by up to a factor of 18.
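
For illustration only, the kind of rewrite Py2Cy automates might look as follows; the Cython output shown here is hypothetical and hand-written, not the tool's actual output:

```python
# Original Python input: dynamically typed, interpreted.
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Hypothetical statically typed Cython output (kept in a string, since
# Cython's cdef syntax is not valid plain Python):
FIB_PYX = """
def fib(int n):
    cdef long a = 0
    cdef long b = 1
    cdef long tmp
    cdef int i
    for i in range(n):
        tmp = a
        a = b
        b = tmp + b
    return a
"""
```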

Evaluation of genetic improvement tools for improvement of non-functional properties of software

Genetic improvement (GI) improves both functional properties of software, such as bug repair, and non-functional properties, such as execution time, energy consumption, or source code size. There are studies summarising and comparing GI tools for improving the functional properties of software; however, there is no such study for the improvement of non-functional properties using GI. Therefore, this research aims to survey and report on the existing GI tools for the improvement of non-functional properties of software. We conducted a literature review of available GI tools and ran multiple experiments on the open-source tools we found to examine their usability. We applied a cross-testing strategy to check whether the available tools can work on different programs.

Overall, we found 63 GI papers that use a GI tool to improve non-functional properties of software, of which 31 are accompanied by open-source code. We were able to successfully run eight GI tools, and found that ultimately only two, Gin and PyGGI, can be readily applied to new general software.

WORKSHOP SESSION: Workshop evolutionary reinforcement learning

Safety-informed mutations for evolutionary deep reinforcement learning

Evolutionary Algorithms have been combined with Deep Reinforcement Learning (DRL) to address the limitations of the two approaches while leveraging their benefits. In this paper, we discuss objective-informed mutations to bias the evolutionary population toward exploring the desired objective. We focus on Safe DRL domains to show how these mutations exploit visited unsafe states to search for safer actions. Empirical evidence on a 12-degrees-of-freedom locomotion benchmark and a practical navigation task confirms that we improve the safety of the policy while maintaining a return comparable to that of the original DRL algorithm.

Learning to acquire novel cognitive tasks with evolution, plasticity and meta-meta-learning

A hallmark of intelligence is the ability to learn new flexible, cognitive behaviors, that is, behaviors that require memorizing and exploiting a certain information item for each new instance of the task. In meta-learning, agents are trained with an external, human-designed reinforcement learning algorithm to learn a specific cognitive task. However, animals are able to pick up such cognitive tasks automatically, from stimuli and rewards alone, as a result of their evolved neural architecture and synaptic plasticity mechanisms: evolution has designed animal brains as self-contained reinforcement (meta-)learning systems, capable not just of performing specific cognitive tasks, but of acquiring novel cognitive tasks, including tasks never seen during evolution. Can we harness this process to generate artificial agents with such abilities? Here we evolve neural networks, endowed with plastic connections and neuromodulation, over a sizable set of simple meta-learning tasks based on a framework from computational neuroscience. The resulting evolved networks can automatically acquire a novel simple cognitive task, never seen during evolution, through the spontaneous operation of their evolved neural organization and plasticity system. We suggest that attending to the multiplicity of loops involved in natural learning may provide useful insight into the emergence of intelligent behavior.

An important feature of intelligent behavior is the ability to learn not just simple, reactive tasks (associating a stimulus with a response), but also more complex, cognitive tasks. By "cognitive", we mean a task that requires the acquisition, storage, processing and appropriate exploitation of novel, unpredictable pieces of information from the environment for each new instance of the task. This ability to "learn how to learn", or meta-learning, has been studied quantitatively in animals [9] and implemented in artificial neural networks [3, 5, 10, 18, 22, 24].

On the effect of the sampling ratio of past trajectories in the combination of evolutionary algorithm and deep reinforcement learning

Combinations of evolutionary approaches (EA) and deep reinforcement learning approaches (DRL) have been proposed to take advantage of both: the stability of EA and the sample efficiency of DRL. CEM-RL, which combines the cross-entropy method (CEM) and twin delayed deep deterministic policy gradient (TD3), is a promising approach showing competitive performance in this field. In the TD3 part, interaction histories obtained by CEM-oriented agents and TD3-oriented agents are mixed in a single replay buffer, and mini-batch samples are drawn uniformly at random from the replay buffer. The quality of the training in the TD3 part must depend on the distribution of samples in the replay buffer; hence it depends on the ratio of CEM-oriented to TD3-oriented samples, called the replay sample ratio. In this paper we investigate the impact of the replay sample ratio in CEM-RL on 5 MuJoCo environments and confirm that a good ratio leads to outperforming the original CEM-RL on some environments. Based on this observation, we propose an adaptation mechanism for the replay sample ratio, and empirically show that the proposed variant, RSR-CEM-RL, outperforms the original CEM-RL on some environments.
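
A minimal sketch of the core idea, under the simplifying assumption that the two kinds of transitions are kept in separate lists (the original CEM-RL mixes them in one shared buffer); all names are ours:

```python
import random

def sample_batch(cem_transitions, td3_transitions, batch_size, ratio):
    """Draw a mini-batch in which a fraction `ratio` of the samples comes
    from CEM-oriented transitions and the rest from TD3-oriented ones."""
    n_cem = min(int(round(batch_size * ratio)), len(cem_transitions))
    n_td3 = min(batch_size - n_cem, len(td3_transitions))
    return (random.sample(cem_transitions, n_cem)
            + random.sample(td3_transitions, n_td3))
```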

WORKSHOP SESSION: Workshop evolutionary computation software systems

DynStack: a benchmarking framework for dynamic optimization problems in warehouse operations

Dynamic optimization problems (DOPs) are an underrepresented class in the benchmarking of evolutionary computation systems (ECS). Most benchmarks focus on problems that are more or less expensive to evaluate but never change during the optimization. In real-world logistics operations, however, dynamic changes and even uncertainty are natural and have to be dealt with. While evolutionary algorithms are certainly well-suited methods for tackling such problems, the field lacks public, open-source, easy-to-use, yet still complex dynamic environments for comparing and further developing these methods. In this work, we highlight the framework that we have created and open-sourced as part of the DynStack competition, first held at GECCO 2020. We present the underlying principles of the framework, the architecture that eases its application, and potential ways to benchmark a range of methods. The environments implemented in this framework are real-world industrial scenarios that have been simplified but still convey the practical challenges of applying ECS to real-world problems.

Recent developments in HNCO

We present the recent developments in HNCO, which started as a C++ framework for the optimization of black-box functions defined on bit vectors. Beyond changes in the library, we emphasize three new features: representations for data types other than booleans (integers, real and complex floating-point numbers, categorical values, and permutations), multi-objective optimization, and Python bindings. With the help of a command-line application, compiled once, it is now possible to define and optimize a wide range of functions at runtime. Those functions can be written as mathematical expressions or plain Python code. We also present the experiments available in HNCO, which are built upon the same command-line application. Most experiments can be run in parallel, locally or remotely.

Facilitating the hybridization of parallel evolutionary algorithms in cluster computing environments

Evolutionary Algorithms (EAs) need domain-specific adaptations to achieve better results, tend to converge to suboptimal solutions, and are computationally expensive when applied to complex and large-scale optimization problems. To address these challenges, hybridizing EAs with other algorithms and methods and parallelizing them in cluster computing environments are essential. In the present paper, a new software solution for supporting the hybridization of parallel EAs is proposed. Unlike other software solutions for hybridizing EAs, the proposed one provides a flexible, generic and scalable mechanism for integrating any algorithmic approach, such as a Machine Learning (ML) algorithm, to seed the initial population of parallel EAs. It is designed on the basis of three modern software technologies, namely microservices, container virtualization and the publish/subscribe messaging paradigm. The applicability and generality of the presented software solution are tested by hybridizing the General Learning Evolutionary Algorithm and Method (GLEAM) with ML techniques for solving the problem of scheduling Distributed Energy Resources (DERs). The benchmarking tests are performed in a cluster computing environment. The obtained results show that the new software solution successfully facilitates the hybridization of parallel EAs, paving the road for future applications of EAs in several domains.

JGEA: a modular java framework for experimenting with evolutionary computation

We present the Java General Evolutionary Algorithm (JGEA) framework, a modular Java framework for experimenting with Evolutionary Computation (EC). We designed JGEA to be (a) aimed at providing a general interface to potentially all Evolutionary Algorithms (EAs), yet (b) solid and easy to use for people who rely on EC as a tool. To this end, we developed JGEA to include a range of ready-to-use EAs, backed by a modular architecture comprising diverse layers of abstraction, which simplifies the implementation of new EAs and the addition of new features. Here, we detail the general structure of JGEA, focusing on its high-level components, and present the use case of introducing a new EA into the framework. To complete the picture, we illustrate the application of JGEA to solving a real-world problem, from its formal definition in the framework to the saving and processing of results. The source code of JGEA is available at https://github.com/ericmedvet/jgea.

WORKSHOP SESSION: Workshop industrial applications of metaheuristics

Multi-depot periodic vehicle routing with variable visit patterns

In Periodic Vehicle Routing Problems (PVRPs), customers require multiple deliveries over a given time horizon. This paper investigates a generalization of the PVRP, introduced to us by a logistics company specializing in the transportation of valuable goods, in which two different customer types are considered. The first type has a set of fixed visit patterns from which one must be selected. The second type of customer defines a maximum number of days between visits that must be respected, resulting in variable visit patterns which enable greater flexibility when scheduling visits across the time horizon. Including this second type of customer considerably increases the size of the search space while also providing additional opportunities to improve solutions. We propose a heuristic for the problem and assess its quality by comparing its results against those achieved by state-of-the-art methods for a closely related problem from the literature. The results indicate that our heuristic is not only competitive but also produces many new best-known solutions for benchmark instances. Finally, we introduce new instances based on real-world data and conduct an empirical analysis of the impact of the two customer types on solution quality, resource utilization, and possible trade-offs when solving the problem.

Algorithmically-guided postharvest protocols by experimental combinatorial optimization

The application of search and learning to experimental domains, where the objective function cannot be accurately simulated but instead requires measurement in real industrial settings, is the focus of this study. We consider the problem of devising treatment protocols for fresh cucumbers, whose quality deteriorates rapidly once harvested, by searching the combinatorial space of possible postharvest practices. The overall target is to prescribe a combination of treatments, with specified activation levels, that minimizes the cucumbers' quality loss after 4 weeks in two storage environments: 10°C and 20°C. This study engaged a postharvest laboratory with industrial settings to research and develop a sequential experimentation procedure, operating in a closed feedback loop and subject to strict budget and timeline constraints. The laboratory measurements comprise assays of color, stiffness and mass, as well as external blemishes, at both harvest time and after four weeks. Their deviations constitute the aggregated objective function that is minimized for both temperatures. After formulating the optimization problem, we outline our approach and report on the attained results. The obtained protocols significantly outperform the best-known human reference practice, and their nature is visualized and analyzed. Finally, we discuss the impact and outlook for industry.

One-shot optimization for vehicle dynamics control systems: towards benchmarking and exploratory landscape analysis

Many real-world black-box optimization problems from industry are computationally expensive. Because of its advantage in wall-clock time, fully parallel sampling (one-shot search) is often chosen over iterative search and adaptive sampling approaches. Our contribution shows how using a surrogate model (one-shot optimization with surrogate) can enhance the best solution found within the initial sample, requiring no further problem evaluations.

We test several surrogate types for one-shot optimization on a real-world problem from the field of vehicle dynamics control systems and the 24 well-known BBOB benchmark test functions.

For the real-world problem and most of the benchmark functions considered, a multi-layer perceptron (neural network) as the surrogate model for one-shot optimization leads to worse solutions than the plain one-shot search, in contrast to random forest and support vector machine surrogates. Moreover, our results show that mean squared error, a commonly used quality metric for regression models, is not a suitable criterion for selecting a surrogate model for one-shot optimization.

To characterize the considered problems and to assess the similarity between the real-world problem and the benchmark functions, exploratory landscape analysis was performed. We provide some guidance on how to utilize this information to select a surrogate type for specific problems.
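
A minimal sketch of one-shot optimization with a surrogate, assuming a scikit-learn random forest as the surrogate model; function names are ours:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def one_shot_with_surrogate(f, bounds, n_eval=100, n_cand=100_000, seed=0):
    """One-shot search enhanced by a surrogate: only the initial sample is
    evaluated on the true problem; the surrogate's pick costs nothing extra."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T    # bounds = [(lo, hi), ...]
    X = rng.uniform(lo, hi, size=(n_eval, len(lo)))
    y = np.array([f(x) for x in X])               # the only true evaluations
    surrogate = RandomForestRegressor(random_state=seed).fit(X, y)
    cand = rng.uniform(lo, hi, size=(n_cand, len(lo)))
    # return (surrogate's recommended point, plain one-shot best point)
    return cand[np.argmin(surrogate.predict(cand))], X[np.argmin(y)]
```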

WORKSHOP SESSION: Workshop large-scale evolutionary optimization and learning

Accelerating genetic algorithm evolution via ant-based mutation and crossover for application to large-scale TSPs

Genetic Algorithms (GAs) have been commonly applied to difficult permutation problems such as the Traveling Salesman Problem (TSP). However, GAs have scalability issues when applied to very large-scale TSPs without resorting to local search. A GA relies on successively improving a population of solutions through crossover and mutation operations. However, these actions are performed blindly, relying on natural selection to combine and improve upon good solutions. Alternatively, Ant Colony Optimisation simulates ants which learn the problem structure at a localised level via a pheromone matrix. This paper proposes that a GA can be accelerated by utilising the attributes of ants to construct solutions in a more intelligent manner, in effect hot-wiring the GA. This hypothesis is tested using ant-based mutation and an ant-based edge recombination crossover operator, ER-ACO. Measured on art-based TSP instances of up to 200,000 cities, both ant-based mutation and crossover improve GA evolution considerably.
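
The ant component can be illustrated by the standard ACO construction step that such operators build on (a generic sketch, not the paper's exact operator):

```python
import random

def choose_next_city(current, unvisited, pheromone, dist, alpha=1.0, beta=2.0):
    """Standard ACO construction step: roulette-wheel choice of the next
    city, biased by pheromone strength and inverse distance."""
    weights = [pheromone[current][c] ** alpha * (1.0 / dist[current][c]) ** beta
               for c in unvisited]
    return random.choices(unvisited, weights=weights, k=1)[0]
```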

Lexicase selection at scale

Lexicase selection is a semantic-aware parent selection method that assesses individuals on test cases presented in a randomly shuffled order. It has demonstrated success in multiple research areas including genetic programming, genetic algorithms, and, more recently, symbolic regression and deep learning. One potential drawback of lexicase selection and its variants is that the selection procedure requires evaluating training cases in a single data stream, making it difficult to handle tasks where evaluation is computationally heavy or the dataset is large-scale, e.g., deep learning. In this work, we investigate how weighted shuffle methods can be employed to improve the efficiency of lexicase selection. We propose a novel method, fast lexicase selection, which combines lexicase selection and weighted shuffle with partial evaluation. Experiments on both classic genetic programming and deep learning tasks indicate that the proposed method can significantly reduce the number of evaluation steps needed for lexicase selection to select an individual, improving its efficiency while maintaining the performance.
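
For reference, standard lexicase selection, which the proposed fast variant accelerates, can be sketched as follows:

```python
import random

def lexicase_select(population, errors):
    """Standard lexicase selection. errors[i][j] is the error of
    individual i on test case j; lower is better."""
    cases = list(range(len(errors[0])))
    random.shuffle(cases)                    # the randomly shuffled data stream
    pool = list(range(len(population)))
    for c in cases:
        best = min(errors[i][c] for i in pool)
        pool = [i for i in pool if errors[i][c] == best]
        if len(pool) == 1:
            break
    return population[random.choice(pool)]
```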

High-performance cartesian genetic programming on GPU for the inference of gene regulatory networks using scRNA-seq time-series data

The inference of Gene Regulatory Networks (GRNs) is an important topic with biotechnological and health applications, as comprehending patterns of gene interactions can lead to findings regarding living organisms. Evolutionary computation techniques, such as genetic programming, have been successfully applied to infer GRNs. However, evaluating candidate GRNs during the evolutionary search is computationally expensive, and the adoption of this type of method becomes impractical when a large number of genes is involved. In this paper, we propose a strategy to infer GRN models from gene expression data using Cartesian Genetic Programming (CGP) and high-performance computing to reduce its processing time. In this regard, Graphics Processing Units (GPUs) are used through OpenCL to parallelize and accelerate the evaluation of the models. The results obtained in the computational experiments show that the high-performance CGP on GPU is suitable for inferring GRNs due to its significant reduction in processing time when compared to the standard sequential CGP. Also, the proposal obtained better or competitive results when compared to state-of-the-art algorithms. Furthermore, the proposal outputs an interpretable solution that can help domain experts in the field of Systems Biology.

WORKSHOP SESSION: Workshop international workshop on learning classifier systems

XCS on embedded systems: an analysis of execution profiles and accelerated classifier deletion

With the more widespread use of Learning Classifier Systems (LCS) in real-world applications, they are increasingly deployed on embedded computing systems with constrained computational power. We profile the most popular classifier system, XCS, on both an embedded system with an ARM CPU and an off-the-shelf desktop computer. A comparison of the two systems yields distinct differences: classifier deletion is the most runtime-consuming function on the embedded system, while matching dominates execution on the desktop computer. To accelerate classifier deletion and make XCS more amenable to deployment on embedded systems, we propose and evaluate two different deletion mechanisms. While the first is an algorithmic modification of the original deletion approach that preserves its behavior, the second is a hybrid scheme utilizing both global and local deletion. We find that the latter can achieve a speedup of up to 1.47 for the overall execution of XCS.

XCSF under limited supervision

XCSF is a well-researched function approximator for supervised, online regression learning tasks. XCSF relies on receiving a ground truth after each prediction step in order to update its internal rule base. As there are various conceivable scenarios where the amount of external supervision might be limited, we investigate how XCSF can be adapted to be applicable in such environments. We change the update mechanism such that only inexperienced classifiers are updated, and we introduce an external covering procedure as well as a cache for saving external supervisions. Our experiments show that, despite performing significantly worse than a non-limited XCSF, our adaptations still provide acceptable results and noticeably reduce the need for external supervision.

An overview of LCS research from 2021 to 2022

The International Workshop on Learning Classifier Systems (IWLCS) is an annual workshop at the GECCO conference where new concepts and results regarding learning classifier systems (LCSs) are presented and discussed. One recurring part of the workshop agenda is a presentation that reviews and summarizes the advances made in the field over the last year; this is intended to provide an easy entry point to the most recent progress and achievements. The 2020 and 2021 presentations were accompanied by survey workshop papers, a practice which we hereby continue. We give an overview of all the LCS-related publications from 11 March 2021 to 10 March 2022. The 42 publications we review are grouped into six overall topics: Formal theoretic advances, new LCS architectures, LCS-based reinforcement learning, algorithmic improvements to existing LCSs, combinations of LCS and Deep Learning models and, finally, a variety of applications of LCSs.

Preliminary tests of an anticipatory classifier system with experience replay

The paper describes the first attempts toward designing and evaluating the Anticipatory Classifier System ACS2 in conjunction with Experience Replay (ER). Promising results verified by statistical tests are obtained on both single- and multi-step problems, albeit limited to deterministic and discrete tasks. The analysis indicates that ACS2 using memorized experiences has the potential for significant improvements in learning efficiency and knowledge generality.

WORKSHOP SESSION: Workshop neuroevolution at work

Heed the noise in performance evaluations in neural architecture search

Neural Architecture Search (NAS) has recently become a topic of great interest. However, there is a potentially impactful issue within NAS that remains largely unrecognized: noise. Due to stochastic factors in neural network initialization, training, and the chosen train/validation dataset split, the performance evaluation of a neural network architecture, which is often based on a single learning run, is also stochastic. This may have a particularly large impact if a dataset is small. We therefore propose to reduce this noise by evaluating architectures based on average performance over multiple network training runs using different random seeds and cross-validation. We perform experiments for a combinatorial optimization formulation of NAS in which we vary noise reduction levels. We use the same computational budget for each noise level in terms of network training runs, i.e., we allow less architecture evaluations when averaging over more training runs. Multiple search algorithms are considered, including evolutionary algorithms which generally perform well for NAS. We use two publicly available datasets from the medical image segmentation domain where datasets are often limited and variability among samples is often high. Our results show that reducing noise in architecture evaluations enables finding better architectures by all considered search algorithms.
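
The proposed noise reduction reduces, in essence, to the following wrapper (a minimal sketch; `train_and_score` is a placeholder for a full training-plus-validation run):

```python
def evaluate_architecture(train_and_score, arch, n_runs=5):
    """Noise-reduced architecture evaluation: average the validation score
    over several training runs with different random seeds (the same idea
    extends to averaging over cross-validation folds)."""
    return sum(train_and_score(arch, seed=s) for s in range(n_runs)) / n_runs
```

Note the trade-off the paper studies: with a fixed budget of training runs, higher `n_runs` means fewer distinct architectures can be evaluated.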

Evolution of activation functions for deep learning-based image classification

Activation functions (AFs) play a pivotal role in the performance of neural networks. The Rectified Linear Unit (ReLU) is currently the most commonly used AF. Several replacements for ReLU have been suggested, but improvements have proven inconsistent. Some AFs exhibit better performance for specific tasks, but it is hard to know a priori how to select the appropriate one(s). Studying both standard fully connected neural networks (FCNs) and convolutional neural networks (CNNs), we propose a novel three-population coevolutionary algorithm to evolve AFs, and compare it to four other methods, both evolutionary and non-evolutionary. Tested on four datasets (MNIST, FashionMNIST, KMNIST, and USPS), coevolution proves to be a performant algorithm for finding good AFs and AF architectures.

On the impact of body material properties on neuroevolution for embodied agents: the case of voxel-based soft robots

Artificial agents required to perform non-trivial tasks are commonly controlled with Artificial Neural Networks (ANNs), which need to be carefully fine-tuned. This is where ANN optimization comes into play, often in the form of Neuroevolution (NE). Among artificial agents, the embodied ones are characterized by a strong body-brain entanglement, i.e., a strong interdependence between the physical properties of the body and the controller. In this work, we aim at characterizing said interconnection, experimentally evaluating the impact body material properties have on NE for embodied agents. We consider the case of Voxel-based Soft Robots (VSRs), a class of simulated modular soft robots which achieve movement through the rhythmical contraction and expansion of their modules. We experiment varying several physical properties of VSRs and assess the effectiveness of the evolved controllers for the task of locomotion, together with their robustness and adaptability. Our results confirm the existence of a deep body-brain interrelationship for embodied agents, and highlight how NE fruitfully exploits the physical properties of the agents to give rise to a wide gamut of effective and adaptable behaviors.

WORKSHOP SESSION: Workshop on quality diversity algorithm benchmarks

A discretization-free metric for assessing quality diversity algorithms

While Quality-Diversity algorithms attempt to produce a set of high-quality solutions that are diverse throughout descriptor space, in reality decision makers are often interested in solutions with specific descriptor values. In this paper we argue that current methods of evaluating Quality-Diversity algorithm performance do not properly account for a decision maker's preferences in a continuous descriptor space, and we propose three approaches that attempt to capture the real-world trade-off between a solution's objective performance and its distance from a desired set of target descriptors.

In this paper we propose a randomised metric, a process of Monte-Carlo sampling of n target points in descriptor space and a small number of random weights that represent different tolerances for mis-specification in a solution's descriptor values. This sampling allows us to simulate the requirements of all possible combinations of target-tolerance pairs and, by taking sufficient samples, estimate average performance.

We go on to formulate three simple methods for comparing the average performance of algorithms, including a Continuous Quality Diversity score (CQD) and the hypervolume of the objective/distance Pareto front. We show that these measures are simple to implement and are robust measures of performance that avoid introducing an artificial discretisation of the descriptor space.
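
A hedged sketch of such a randomised metric (the exact aggregation and all names are ours, not necessarily the paper's):

```python
import math
import random

def continuous_qd_score(archive, sample_target, n_samples=1000,
                        tolerances=(0.5, 1.0, 2.0)):
    """Monte-Carlo estimate of a continuous QD score (illustrative).

    archive: list of (objective, descriptor) pairs; sample_target() draws
    a random target point in descriptor space.  Each sampled target and
    tolerance weight credits the archive with its best quality/distance
    trade-off for that preference.
    """
    total = 0.0
    for _ in range(n_samples):
        t = sample_target()
        w = random.choice(tolerances)   # tolerance for descriptor mis-specification
        total += max(obj - w * math.dist(desc, t) for obj, desc in archive)
    return total / n_samples
```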

A collection of quality diversity optimization problems derived from hyperparameter optimization of machine learning models

The goal of Quality Diversity Optimization is to generate a collection of diverse yet high-performing solutions to a given problem at hand. Typical benchmark problems are, for example, finding a repertoire of robot arm configurations or a collection of game-playing strategies. In this paper, we propose a set of Quality Diversity Optimization problems that tackle hyperparameter optimization of machine learning models, a so far underexplored application of Quality Diversity Optimization. Our benchmark problems involve novel feature functions, such as interpretability or resource usage of models. To allow for fast and efficient benchmarking, we build upon YAHPO Gym, a recently proposed open-source benchmarking suite for hyperparameter optimization that makes use of high-performing surrogate models and returns these surrogate model predictions instead of evaluating the true expensive black-box function. We present results of an initial experimental study comparing different Quality Diversity optimizers on our benchmark problems. Furthermore, we discuss future directions and challenges of Quality Diversity Optimization in the context of hyperparameter optimization.

WORKSHOP SESSION: Workshop surrogate-assisted evolutionary optimisation

Mono-surrogate vs multi-surrogate in multi-objective bayesian optimisation

Bayesian optimisation (BO) has been widely used to solve problems with expensive function evaluations. In multi-objective optimisation problems, BO aims to find a set of approximated Pareto-optimal solutions. There are typically two ways to build surrogates in multi-objective BO: one surrogate built by aggregating the objective functions (using a scalarising function, also called the mono-surrogate approach) and multiple surrogates (one for each objective function, also called the multi-surrogate approach). In both approaches, an acquisition function (AF) is used to guide the search process. The mono-surrogate approach has the advantage that only one model is used; however, it has two major limitations. Firstly, the fitness landscapes of the scalarising function and the objective functions may not be similar. Secondly, the approach assumes that the scalarising function distribution is Gaussian, so that a closed-form expression of the AF can be used. In this work, we overcome these limitations by building a surrogate model for each objective function and show that the scalarising function distribution is not Gaussian. We approximate the distribution using the generalised extreme value distribution. The results and comparison with existing approaches on standard benchmark and real-world optimisation problems show the potential of the multi-surrogate approach.
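
The multi-surrogate construction itself is straightforward to sketch, assuming scikit-learn Gaussian-process regressors as the per-objective models:

```python
from sklearn.gaussian_process import GaussianProcessRegressor

def fit_multi_surrogate(X, Y):
    """Multi-surrogate approach: fit one Gaussian-process model per
    objective (one column of Y each), instead of a single model of a
    scalarised aggregate of the objectives."""
    return [GaussianProcessRegressor(normalize_y=True).fit(X, Y[:, j])
            for j in range(Y.shape[1])]
```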

WORKSHOP SESSION: Workshop genetic and evolutionary computation in defense, security, and risk management

Feature encoding with autoencoder and differential evolution for network intrusion detection using machine learning

With the increasing use of computer networks and distributed systems, network security and data privacy are becoming major concerns for our society. In this paper, we present an approach based on an autoencoder trained with differential evolution for feature encoding of network data, with the goal of improving security and reducing data transfers. One of the novel elements of our use of differential evolution for intrusion detection is an enhanced fitness function that incorporates the performance of a machine learning algorithm. We conducted an extensive evaluation of six machine learning algorithms for network intrusion detection using encoded data from the well-known, publicly available UNSW-NB15 network dataset. The experiments clearly showed the superiority of random forest, support vector machine, and K-nearest neighbors in terms of accuracy, which was not greatly affected by reducing the number of features. Furthermore, the machine learning algorithm that was used during training (the Linear Discriminant Analysis classifier) achieved a 14-percentage-point increase in accuracy. Our results also showed clear improvements in execution times, in addition to the obvious security benefits of encoded data. Additionally, the proposed method outperformed one of the most commonly used feature reduction methods, Principal Component Analysis.

Chaos engineering: stress-testing algorithms to facilitate resilient strategic military planning

We present our Chaos Engineering framework that accelerates the emergence of resilient strategic plans to maintain high levels of performance given the turbulent and unpredictable conditions in real-world military deployments. Our proposed Chaos Engine will stress agents by intelligently searching over scenario chaos factors reflecting real-world events such as weather changes, loss of communication, basing and access, platform failures, and enemy force composition and mission plans. Our work is split into four technical phases: 1) Evaluate: model and characterize agent performance in adaptively generated simulation environments; 2) Challenge: systematically find chaos factors that expose failures; 3) Fitness: balance challenge and feasibility; and 4) Diversity: maintain a population of scenarios that expose the agent to a range of challenges. The work presented here focuses on the Challenge phase and provides preliminary results for our adaptive domain generation paradigm, known as the adversarial architect. The adversarial architect will use an evolutionary-based optimizer to generate diverse scenarios that start simple and become more challenging as agents learn. We demonstrate the efficacy of our approach in the Acrobot domain, where we show the adversarial architect generating an Auto Curricula of inertial parameters that effectively guides the learning agent to resilient and effective policies.

CyberEvo: evolutionary search of knowledge-based behaviors in a cyber attack campaign

The ever-growing complexity of computer networks and the advancement of cyber attack tactics have made exhaustively searching for potential attack sequences within most networks infeasible. We present CyberEvo, a cyber-agent framework to describe and simulate attacking and defensive agent behaviors on an abstracted network. In CyberEvo, threat actors have an initial starting point in a network, iteratively observe network stimuli, decide upon tactics using rules, and take actions to extend their knowledge of the network and fulfill their goal. CyberEvo employs evolutionary search, in this example, to efficiently obtain optimal attack sequences. Evolutionary adaptation acts upon the parameters of the fuzzy logic used in manually written rules, and uses a fitness function that prioritizes undetected and minimal actions. In a scenario with 2^30 possible fuzzy logic configurations, evolutionary search found, with fewer than 1000 evaluations, all 26 globally best configurations, which are equivalent attack sequences for a simple network.

WORKSHOP SESSION: Workshop quantum optimization

Probabilistic reasoning as quadratic unconstrained binary optimization

Probabilistic reasoning is an important tool for handling uncertainty in AI, especially for automated reasoning. Partial probability assessments are a way of expressing partial probabilistic knowledge on a set of events. These assessments contain only information about "interesting" events (hence they can easily be elicited from an expert). On the other hand, partial assessments can cause consistency problems. In this paper we show how to formulate the main tasks of probabilistic reasoning on partial probability assessments, namely coherence checking, correction, and inference, as QUBO problems. This transformation allows these problems to be solved with a quantum or a digital annealer, thus providing new computational methods to perform these tasks.

A novel quantum-inspired evolutionary computation-based quantum circuit synthesis for various universal gate libraries

The quantum circuit is the key ingredient for realizing quantum computation. This paper proposes a novel quantum-inspired evolutionary computation method to synthesize quantum circuits effectively and efficiently. Recently, the Clifford+T gate library has become popular, as the T gate has better fault-tolerance and error-correction efficacy, which is essential for near-term quantum computation. The proposed quantum-inspired evolutionary computation is more flexible in constructing diverse solutions than classical synthesis methods. The entangled local search mechanism further enhances the ability to discover optimal solutions on an efficient frontier. The encoding can be slightly changed to synthesize circuits for various gate libraries, whether quantum or reversible Boolean circuits. The experiments demonstrate that our proposed method composes more compact quantum circuits than the state-of-the-art method. Meanwhile, the T-depth can attain an optimal value without ancilla bits. Furthermore, we provide an illustration of constructing a functionally equivalent quantum circuit with the NOT, CNOT, Controlled-Square-Root-of-NOT gate library (NCV). We also conduct an exhaustive search to prove that our synthesized circuit is optimal, while our method requires far fewer resources in time and evaluations than exhaustive search.

Evolutionary quantum architecture search for parametrized quantum circuits

Recent advancements in quantum computing have shown promising computational advantages in many problem areas. As one of those areas with increasing attention, hybrid quantum-classical machine learning systems have demonstrated the capability to solve various data-driven learning tasks. Recent works show that parameterized quantum circuits (PQCs) can be used to solve challenging reinforcement learning (RL) tasks with provable learning advantages. While existing works demonstrate the potential of PQC-based methods, the design choices of PQC architectures and their influence on the learning tasks are generally underexplored. In this work, we introduce EQAS-PQC, an evolutionary quantum architecture search framework for PQC-based models, which uses a population-based genetic algorithm to evolve PQC architectures by exploring the search space of quantum operations. Experimental results show that our method can significantly improve the performance of hybrid quantum-classical models in solving benchmark reinforcement learning problems. We also model the probability distributions of quantum operations in top-performing architectures to identify essential design choices that are critical to the performance.

The applicability of reinforcement learning for the automatic generation of state preparation circuits

State preparation is currently the only means to provide input data for quantum algorithms, but finding the shortest possible sequence of gates to prepare a given state is not trivial. We approach this problem using reinforcement learning (RL), first with an agent that is trained to prepare only a single fixed quantum state. Despite the overhead of training a whole network to produce just one data point, gradient-based backpropagation appears competitive with genetic algorithms in this scenario, and single-state preparation thus seems a worthwhile task. In a second case we then train a single network to prepare arbitrary quantum states with some degree of success, despite a complete lack of structure in the training data set. In both cases we find that training is greatly improved by using QR decomposition to automatically map the agents' outputs to unitary operators, which mitigates the sparse rewards that usually make this task challenging.
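
The QR-based mapping can be sketched as follows; the output layout (real and imaginary parts concatenated) is our assumption:

```python
import numpy as np

def outputs_to_unitary(raw, dim):
    """Map a real vector of length 2*dim*dim (e.g., a network's output)
    to a unitary matrix via QR decomposition."""
    m = np.asarray(raw, dtype=float).reshape(dim, 2 * dim)
    a = m[:, :dim] + 1j * m[:, dim:]     # arbitrary complex square matrix
    q, r = np.linalg.qr(a)
    # fix the phase ambiguity of QR so the mapping is well defined
    # (assumes a is non-singular, which holds almost surely)
    d = np.diag(r)
    return q @ np.diag(d / np.abs(d))
```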

Modifying the quantum-assisted genetic algorithm

Based on the quantum-assisted genetic algorithm (QAGA) [11] and related approaches we introduce several modifications of QAGA to search for more promising solvers on (at least) graph coloring problems, knapsack problems, Boolean satisfiability problems, and an equal combination of these three. We empirically test the efficiency of these algorithmic changes on a purely classical version of the algorithm (simulated-annealing-assisted genetic algorithm, SAGA) and verify the benefit of selected modifications when using quantum annealing hardware. Our results point towards an inherent benefit of a simpler and more flexible algorithm design.

Hybrid quantum-classical heuristic for the bin packing problem

Optimization is one of the most challenging applications of quantum computers, as well as one of the most relevant. As a consequence, it has attracted huge efforts to obtain a speedup over classical algorithms using quantum resources. Up to now, many problems of different natures have been addressed from the perspective of this revolutionary computation paradigm, but many questions remain open. In this work, a hybrid classical-quantum approach is presented for dealing with the one-dimensional Bin Packing Problem (1dBPP). The algorithm comprises two modules, each designed to be executed in a different computational ecosystem. First, a quantum subroutine seeks a set of feasible bin configurations of the problem at hand. Second, a classical subroutine builds complete solutions to the problem from the subsets given by the quantum subroutine. Being a hybrid solver, we have called our method H-BPP. To test our algorithm, we built 18 different 1dBPP instances as a benchmarking set, on which we analyse the fitness, the number of solutions, and the performance of the quantum subroutine. Based on these figures of merit, we verify that H-BPP is a valid technique for addressing the 1dBPP.

Techniques to enhance a QUBO solver for permutation-based combinatorial optimization

Many combinatorial optimization problems can be formulated as the problem of determining the order of a sequence or finding a corresponding mapping of the objects; we call such problems permutation-based optimization problems. Many of them can be formulated as a quadratic unconstrained binary optimization (QUBO) or Ising model by introducing a penalty coefficient on the permutation constraint terms. While classical and quantum annealing approaches have been proposed to solve QUBOs, they face issues with optimality and feasibility. Here we treat a given QUBO solver as a black box and propose techniques to enhance its performance. First, to ensure an effective search for good-quality solutions, a smooth energy landscape is needed; we propose a data scaling approach that reduces the amplitudes of the input without compromising optimality. Second, the penalty coefficient needs to be tuned; we illustrate that for certain problems, randomly sampling penalty coefficients suffices to achieve good performance. Finally, to handle possible infeasibility of the solution, we introduce a polynomial-time projection algorithm. We apply these techniques along with a divide-and-conquer strategy to solve some large-scale permutation-based problems and present results for TSP and QAP.
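
Two of these techniques are easy to sketch; `solve` below is a stand-in for the black-box QUBO solver, and the exact scoring is our assumption:

```python
import numpy as np

def scale_qubo(Q):
    """Amplitude reduction: dividing the whole QUBO matrix by a positive
    constant changes no solution ranking, so optimality is preserved."""
    return Q / np.max(np.abs(Q))

def tune_penalty(solve, Q_obj, Q_con, n_trials=20, low=0.1, high=10.0, seed=0):
    """Random sampling of the penalty coefficient lambda."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_trials):
        lam = rng.uniform(low, high)
        x = np.asarray(solve(scale_qubo(Q_obj + lam * Q_con)))
        # rank by constraint violation first, then by true objective value
        score = (float(x @ Q_con @ x), float(x @ Q_obj @ x))
        if best is None or score < best[0]:
            best = (score, lam, x)
    return best
```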

AutoQubo: data-driven automatic QUBO generation

This paper presents a general method to automatically derive a Quadratic Unconstrained Binary Optimisation (QUBO) formulation of, in principle, any combinatorial optimisation problem from its high-level description, given in any programming language in a free format. Currently, ad-hoc approaches that formulate one problem at a time as a QUBO are the norm. The proposed method goes beyond the state of the art in that it is general in its applicability. The method is data-driven and based on sampling, yet for optimisation problems occurring in practice the obtained QUBO is exact rather than only an approximation of the original problem. The method is of particular interest in the context of Quantum Ising Machines, quantum computers specialised for optimisation tasks. It can be understood as a compiler that makes these machines usable without specific expertise by automating the translation of high-level descriptions of combinatorial optimisation problems into a low-level format suitable for the quantum hardware.
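
One way to understand the sampling idea: if the objective really is (at most) quadratic in its binary variables, evaluating it on the all-zeros vector, the unit vectors, and the two-hot vectors recovers every QUBO coefficient exactly. A minimal sketch of that special case (the actual framework is more general):

```python
import numpy as np

def infer_qubo(f, n):
    """Recover the offset and QUBO matrix of a function that is (at most)
    quadratic in n binary variables by sampling O(n^2) points."""
    e = lambda *idx: np.array([1 if i in idx else 0 for i in range(n)])
    offset = f(e())                               # f(0, ..., 0)
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, i] = f(e(i)) - offset                # linear terms
    for i in range(n):
        for j in range(i + 1, n):                 # pairwise interactions
            Q[i, j] = f(e(i, j)) - offset - Q[i, i] - Q[j, j]
    return offset, Q
```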

Algorithmic QUBO formulations for k-SAT and hamiltonian cycles

Quadratic Unconstrained Binary Optimization (QUBO) can be seen as a generic language for optimization problems. QUBOs attract particular attention since they can be solved with quantum hardware, like quantum annealers or quantum gate computers running QAOA. In this paper, we present two novel QUBO formulations for k-SAT and Hamiltonian Cycles that scale significantly better than existing approaches. For k-SAT we reduce the growth of the QUBO matrix from O(k) to O(log(k)). For Hamiltonian Cycles the matrix no longer grows quadratically in the number of nodes, as it does currently, but linearly in the number of edges and logarithmically in the number of nodes.

We present these two formulations not as mathematical expressions, as most QUBO formulations are, but as meta-algorithms that facilitate the design of more complex QUBO formulations and allow easy reuse within larger ones.

Quantum parametric circuit optimization with estimation of distribution algorithms

Variational quantum algorithms (VQAs) offer some promising characteristics for carrying out optimization tasks on noisy intermediate-scale quantum devices. These algorithms aim to minimize a cost function by optimizing the parameters of a quantum parametric circuit. Thus, the overall performance of these algorithms heavily depends on the classical optimizer that sets the parameters. In recent years, gradient-based and gradient-free approaches have been applied to optimize the parameters of the quantum circuit. In this work, we follow the latter approach and propose the use of estimation of distribution algorithms for parameter optimization in a specific case of VQAs, the quantum approximate optimization algorithm. Our results show a statistically significant improvement in cost function minimization compared to traditional optimizers.
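
A generic continuous estimation-of-distribution algorithm for tuning circuit parameters can be sketched as follows (a simple Gaussian EDA, not necessarily the exact variant used in the paper):

```python
import numpy as np

def gaussian_eda(cost, dim, pop=50, elite=10, iters=100, seed=0):
    """Simple continuous EDA: repeatedly fit a Gaussian to the best
    parameter vectors and resample from it. `cost` would be the
    expectation value returned by executing the parametric circuit."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        theta = rng.normal(mu, sigma, size=(pop, dim))  # candidate parameters
        order = np.argsort([cost(t) for t in theta])
        best = theta[order[:elite]]                     # truncation selection
        mu, sigma = best.mean(axis=0), best.std(axis=0) + 1e-6
    return mu
```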

Enhancing a QUBO solver via data driven multi-start and its application to vehicle routing problem

Quadratic unconstrained binary optimization (QUBO) models have garnered growing interest as a strong alternative modelling framework for solving combinatorial optimization problems. A wide variety of optimization problems that are usually studied using conventional Operations Research approaches can be formulated as QUBO problems. However, QUBO solvers do not guarantee optimality when solving optimization problems; instead, obtaining high-quality solutions from QUBO solvers entails the tuning of multiple parameters. In this work, we conjecture that the initial-state adjustment method used in QUBO solvers can be improved, and that careful tuning will yield overall better results. We propose a data-driven multi-start algorithm that complements the QUBO solver (namely the Fujitsu Digital Annealer) in solving constrained optimization problems. We implemented our algorithm on a variety of capacitated vehicle routing problem instances, and empirical results show that different initial-state adjustment methods do make a difference in the quality of the solutions obtained by the QUBO solver.

Quantum neuron selection: finding high performing subnetworks with quantum algorithms

Gradient descent methods have long been the de facto standard for training deep neural networks. Millions of training samples are fed into models with billions of parameters, which are slowly updated over hundreds of epochs. Recently, it has been shown that large, randomly initialized neural networks contain subnetworks that perform as well as fully trained models. This insight offers a promising avenue for training future neural networks by simply pruning weights from large, random models. However, this problem is combinatorially hard, and classical algorithms are not efficient at finding the best subnetwork. In this paper, we explore how quantum algorithms could be formulated and applied to this neuron selection problem. We introduce several methods for local quantum neuron selection that reduce the entanglement complexity that large-scale neuron selection would require, making this problem more tractable for current quantum hardware.

WORKSHOP SESSION: Workshop on symbolic regression

Genetic programming, standardisation, and stochastic gradient descent revisited: initial findings on SRBench

The use of Z-score standardisation to improve the performance of genetic programming is well known within symbolic regression. Additionally, Z-score standardisation is known to be a key element in the effective application of stochastic gradient descent. However, a thorough treatment of genetic programming with stochastic gradient descent (GPZGD) does not exist in the literature. This paper introduces a recalibrated variant of GPZGD and tests its performance within the recently-proposed SRBench framework: the resulting variant of GPZGD demonstrates excellent performance relative to existing symbolic regression methods. Additionally, this paper provides some exploration of SRBench itself and suggests areas of potential improvement to increase the utility of the SRBench framework.
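
For reference, Z-score standardisation itself is a one-liner per statistic (a generic sketch; how GPZGD integrates it internally is not shown here):

```python
import numpy as np

def z_standardise(X, y):
    """Z-score standardisation of features and target, as typically needed
    for stable stochastic gradient descent; returns the statistics so
    predictions can be mapped back to the original scale."""
    X_mu, X_sd = X.mean(axis=0), X.std(axis=0) + 1e-12
    y_mu, y_sd = y.mean(), y.std() + 1e-12
    return (X - X_mu) / X_sd, (y - y_mu) / y_sd, (X_mu, X_sd, y_mu, y_sd)
```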

Interaction-transformation evolutionary algorithm with coefficients optimization

Symbolic Regression is the task of finding a mathematical expression that describes the relationship between one or more independent variables and a dependent variable. The search space can be vast and include any algebraic function; thus, finding optimal values for coefficients may not be a trivial task. The Interaction-Transformation representation alleviates this problem by enforcing that the coefficients of the expression form part of a linear transformation, allowing the application of least squares. However, this solution also limits the search space of the expressions. This paper proposes four different strategies to optimize the coefficients of the nonlinear part of the Interaction-Transformation representation. We benchmark the proposed strategies by applying the Interaction-Transformation Evolutionary Algorithm (ITEA) to six well-known data sets, evaluating four optimization heuristics that combine linear and non-linear methods. The results show that optimizing the non-linear and linear coefficients separately was the best strategy for finding better-performing expressions, at the cost of higher run-time and expression size. The non-linear optimization method alone was the worst-performing method.
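
A minimal sketch of the Interaction-Transformation representation may help: each term applies a transformation function to a product of variables raised to integer exponents, and the linear (outer) coefficients are then fitted by least squares. The exponents, transformation functions, and toy data below are illustrative assumptions, not the paper's benchmark setup.

```python
import numpy as np

def it_features(X, exponents, transforms):
    """Interaction-Transformation features: each column is a transformation
    applied to a product of variables raised to integer exponents."""
    cols = [t(np.prod(X ** k, axis=1)) for k, t in zip(exponents, transforms)]
    return np.column_stack(cols)

# Toy IT expression with two terms: sin(x0 * x1) and cos(x0^2)
exponents = [np.array([1, 1]), np.array([2, 0])]
transforms = [np.sin, np.cos]

X = np.random.rand(100, 2)
y = 3 * np.sin(X[:, 0] * X[:, 1]) + 0.5 * np.cos(X[:, 0] ** 2)

Phi = it_features(X, exponents, transforms)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear coefficients via least squares
```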

Bingo: a customizable framework for symbolic regression with genetic programming

In this paper, we introduce Bingo, a flexible and customizable yet performant Python framework for symbolic regression with genetic programming. Bingo maintains a modular code structure for simple abstraction and easily swappable components. Fitness functions, selection methods, and constant optimization methods allow for easy problem-specific customization. Bingo also includes several features for increased efficiency, such as parallelism, equation simplification, and a C++ backend. We compare Bingo's performance to that of other genetic programming for symbolic regression (GPSR) methods to show that it is both competitive and flexible.
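
The swappable-components pattern the abstract describes can be illustrated generically, as below. Note that this is not Bingo's actual API (see its documentation for that); the names `GPConfig` and `evolve` are hypothetical and serve only to show how fitness, selection, and variation can be plugged in without touching the evolution loop.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class GPConfig:
    """Swappable components: any callable with the right shape can be
    substituted without changing the evolution loop below."""
    fitness: Callable[[object], float]
    select: Callable[[List[object], List[float]], List[object]]
    vary: Callable[[List[object]], List[object]]

def evolve(pop, config, generations=50):
    for _ in range(generations):
        scores = [config.fitness(ind) for ind in pop]
        parents = config.select(pop, scores)
        pop = config.vary(parents)
    return pop
```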

Coefficient mutation in the gene-pool optimal mixing evolutionary algorithm for symbolic regression

Currently, the genetic programming version of the gene-pool optimal mixing evolutionary algorithm (GP-GOMEA) is among the top-performing algorithms for symbolic regression (SR). A key strength of GP-GOMEA is its way of performing variation, which dynamically adapts to the emergence of patterns in the population. However, GP-GOMEA lacks a mechanism to optimize coefficients. In this paper, we study how fairly simple approaches to coefficient optimization can be integrated into GP-GOMEA. In particular, we consider two variants of Gaussian coefficient mutation. We performed experiments using different settings on 23 benchmark problems and used machine learning to estimate which aspects of coefficient mutation matter most. We find that the most important aspect is that the number of coefficient mutation attempts needs to be commensurate with the number of mixing operations that GP-GOMEA performs. We applied GP-GOMEA with the best-performing coefficient mutation approach to the data sets of SRBench, a large SR benchmark, for which a ground-truth underlying equation is known. We find that coefficient mutation can substantially help in rediscovering the underlying equation, but only when no noise is added to the target variable. In the presence of noise, GP-GOMEA with coefficient mutation discovers alternative but similarly accurate equations.
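
The paper studies two variants of Gaussian coefficient mutation; the sketch below shows one plausible form, in which each constant in an expression is perturbed with some probability and the noise is scaled by the coefficient's own magnitude. The parameter names and the magnitude-scaling choice are assumptions of this illustration, not taken from the paper.

```python
import numpy as np

def mutate_coefficients(coeffs, sigma=0.1, rate=0.5, rng=np.random.default_rng()):
    """Gaussian coefficient mutation: perturb each constant with
    probability `rate`, scaling the noise by the coefficient's own
    magnitude so large and small constants are treated comparably."""
    coeffs = np.asarray(coeffs, dtype=float).copy()
    mask = rng.random(coeffs.shape) < rate
    scale = np.where(coeffs == 0, 1.0, np.abs(coeffs))
    coeffs[mask] += rng.normal(0, sigma, coeffs.shape)[mask] * scale[mask]
    return coeffs

print(mutate_coefficients([1.5, -0.2, 0.0]))
```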

Uncertainty in equation learning

Equation learning is a deep learning framework for explainable machine learning in regression settings, with applications in engineering and the natural sciences. Equation learners typically do not capture uncertainty about the model or its predictions, although uncertainty is often highly structured and particularly relevant for these kinds of applications. We show how simple yet effective forms of Bayesian deep learning can be used to build structured and explainable uncertainty estimates over a set of found equations. Specifically, we use a mixture of Laplace approximations, where each mixture component captures a different equation structure and the local Laplace approximations capture the parametric uncertainty within one family of equations. We present results on both synthetic and real-world examples.
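
Under the usual Laplace construction, the resulting posterior approximation can be written as a mixture of Gaussians, one per discovered equation structure. The notation below (mixture weights \(\pi_k\), MAP estimates \(\theta_k^*\), Hessians \(H_k\)) is the standard form and is given here as a sketch, not as the paper's exact formulation:

```latex
q(\theta) \;=\; \sum_{k} \pi_k \,
  \mathcal{N}\!\left(\theta;\; \theta_k^{*},\; H_k^{-1}\right),
\qquad
H_k \;=\; -\left.\nabla_{\theta}^{2}\,
  \log p(\theta \mid \mathcal{D})\right|_{\theta = \theta_k^{*}}
```

The predictive distribution then averages over both the parametric uncertainty within each component and the mixture over equation structures.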

WORKSHOP SESSION: Student workshop

Coevolution of neural networks for agents and environments

Evolutionary strategies have recently shown great results in the field of policy search. In contrast to the gradient-based training of artificial neural networks commonly used in reinforcement learning, evolutionary strategies generate populations of agents that are evaluated on a specific task. The algorithm detailed here demonstrates how artificial neural networks can be evolved through neuroevolution and used as agents in the 2D game Zelda, producing relevant behaviors. Moreover, to increase the diversity and quantity of available maps, this paper shows how environments can be generated using evolutionary strategies and neuroevolution, as well as neural cellular automata. Finally, to evolve populations of environments and agents cohesively, a coevolution algorithm was developed. The results demonstrate the potential of coevolution in the field of video games, creating a wide range of diverse environments and agent strategies that solve those levels. However, the results also highlight the difficulty of continuously generating novelty, as agents and maps tend to converge quickly on similar patterns.
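
A minimal sketch of such a coevolution loop follows, with the fitness and variation operators left abstract. All function names here are hypothetical, and the paper's actual algorithm may differ in how the two populations interact.

```python
def coevolve(agents, envs, agent_fitness, env_fitness, vary, generations=100):
    """Minimal coevolution loop: agents are scored against the current
    environments, environments against the current agents, then each
    population is varied independently based on its scores."""
    for _ in range(generations):
        a_scores = [agent_fitness(a, envs) for a in agents]
        e_scores = [env_fitness(e, agents) for e in envs]
        agents = vary(agents, a_scores)
        envs = vary(envs, e_scores)
    return agents, envs
```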

GUI-based, efficient genetic programming for Unity3D

Unity3D is a game development environment that could be co-opted for agent-based machine learning research. We present a GUI-driven, efficient Genetic Programming (GP) system for this purpose. Our system, ABL-Unity3D, addresses the challenges entailed in co-opting Unity3D: making the simulator serve agent learning rather than humans playing a game, lowering fitness evaluation time to make learning computationally feasible, and interfacing GP with an AI Planner to support hybrid algorithms that could improve performance. We achieve this through the development of a GUI using the Unity3D editor's programmable interface, together with performance optimizations that yield at least a 3x speed-up. We describe ABL-Unity3D by explaining how to use it for an example experiment combining GP and AI Planning.

Multi-objective recommender system for corporate MOOC

Mandarine Academy is a major MOOC (Massive Open Online Course) operator with more than 100 active e-learning platforms in multiple languages and 550K users overall. The company provides a wide range of content types to help users in their learning process. With thousands of pedagogical online resources in an ever-growing catalog, users can have a hard time finding relevant content. Mandarine Academy is looking to improve the overall user experience while minimizing both information overload and drop-out rates. This paper details the design and implementation of a multi-objective e-learning recommender system. The proposed approach takes advantage of multiple known techniques, such as content-based and collaborative filtering, to generate an initial population of solutions. A custom genetic operator is proposed and compared to classical operators using multiple algorithms (NSGA-II, NSGA-III, SPEA-2, IBEA, and MOEA/D). Using the best configurations found by the tuning process, we showcase the initial results obtained by each approach when applied to real-world data from a public MOOC provided by the company.
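
The seeding idea can be sketched as follows: the initial population mixes solutions produced by existing recommenders with perturbed copies of them. The helper names and the shuffle-based perturbation below are illustrative assumptions, not the paper's operators.

```python
import random

def seed_population(content_based, collaborative, pop_size, list_len):
    """Seed a multi-objective GA with recommendation lists drawn from
    existing recommenders, plus perturbed copies for diversity; each
    individual is a fixed-length list of item ids."""
    pop = [content_based(list_len), collaborative(list_len)]
    while len(pop) < pop_size:
        ind = random.choice(pop[:2]).copy()
        random.shuffle(ind)  # perturb a seeded solution
        pop.append(ind)
    return pop
```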

Evolutionary training of deep neural networks on heterogeneous computing environments

Deep neural networks are typically trained using gradient-based optimizers such as error backpropagation. This study proposes a framework based on Evolutionary Algorithms (EAs) to train deep neural networks without gradients. The network parameters, which may number in the millions, are treated as optimization variables. We demonstrate the training of an encoder-decoder segmentation network (U-Net) and a Long Short-Term Memory (LSTM) model using a (μ + λ)-ES, a Genetic Algorithm, and Particle Swarm Optimization. The framework can train models with forward propagation on machines with different hardware in a cluster computing environment. We compare prediction results from the two models trained using our framework and backpropagation. We show that the neural networks can be trained in less time on CPUs than on specialized, compute-intensive GPUs.
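
A minimal sketch of the gradient-free training loop: flatten the network's weights into one vector and run a (μ + λ)-ES over it, so only forward passes (loss evaluations) are required. The hyperparameters and the plain Gaussian mutation are assumptions of this illustration, not the paper's configuration.

```python
import numpy as np

def es_train(loss, dim, mu=10, lam=40, sigma=0.05, generations=200):
    """(mu + lambda)-ES over a flattened parameter vector: only loss
    evaluations (forward passes) are needed, no gradients."""
    rng = np.random.default_rng(0)
    parents = rng.normal(0, 0.1, size=(mu, dim))
    for _ in range(generations):
        children = np.repeat(parents, lam // mu, axis=0)
        children += rng.normal(0, sigma, children.shape)
        pool = np.vstack([parents, children])          # plus-selection
        scores = np.array([loss(p) for p in pool])
        parents = pool[np.argsort(scores)[:mu]]
    return parents[0]

# e.g. `loss` unflattens the vector into the model's weights and returns
# the training loss after a forward pass.
best = es_train(lambda p: np.sum(p ** 2), dim=100)
```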

Untangling phylogenetic diversity's role in evolutionary computation using a suite of diagnostic fitness landscapes

Diversity is associated with success in evolutionary algorithms. To date, diversity in evolutionary computation research has mostly been measured by counting the number of distinct candidate solutions in the population at a given time point. Here, we aim to investigate the value of phylogenetic diversity, which takes into account the evolutionary history of a population. To understand how informative phylogenetic diversity is, we run experiments on a set of diagnostic fitness landscapes under a range of different selection schemes. We find that phylogenetic diversity can be more predictive of future success than traditional diversity metrics under some conditions, particularly for fitness landscapes with a single, challenging-to-find global optimum. Our results suggest that phylogenetic diversity metrics should be more widely incorporated into research on diversity in evolutionary computation.
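
One common phylogenetic diversity metric is the mean pairwise distance between extant taxa on the phylogeny; a minimal sketch is below. The paper evaluates a range of metrics, so this is only one representative, and the `parent` child-to-parent mapping is an assumed encoding of the phylogeny.

```python
from itertools import combinations

def pairwise_phylo_distance(a, b, parent):
    """Distance between two taxa: edges from each back to their most
    recent common ancestor in the `parent` (child -> parent) mapping."""
    def ancestors(n):
        path = [n]
        while n in parent:
            n = parent[n]
            path.append(n)
        return path
    pa, pb = ancestors(a), ancestors(b)
    common = set(pa) & set(pb)
    mrca_a = next(i for i, n in enumerate(pa) if n in common)
    mrca_b = next(i for i, n in enumerate(pb) if n in common)
    return mrca_a + mrca_b

def mean_pairwise_distance(taxa, parent):
    """Mean pairwise distance: one common phylogenetic diversity metric."""
    pairs = list(combinations(taxa, 2))
    return sum(pairwise_phylo_distance(a, b, parent) for a, b in pairs) / len(pairs)

parent = {"a": "r", "b": "r", "c": "b"}
print(mean_pairwise_distance(["a", "c"], parent))  # a->r (1 edge) + c->b->r (2 edges) = 3.0
```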

Towards landscape-aware parameter tuning for the (1 + (λ, λ)) genetic algorithm for permutations

The choice of parameter values greatly affects the performance of evolutionary algorithms. Many current parameter tuning approaches require multiple runs of the tuned algorithm, which makes it hard to use them in budget-constrained environments. Recently, an approach to parameter tuning on binary string problems was proposed, which uses machine learning and fitness landscape analysis to transfer knowledge on good parameter choices between similar problem instances. This approach allows using the performance data obtained on simple benchmark problems with different fitness landscape features to quickly choose suitable parameters for a given problem instance.

In this paper, we aim to extend this approach to permutation-based problems by tuning the recently proposed version of the (1 + (λ, λ)) genetic algorithm for permutations. To do this, we develop a set of fitness landscape features that can be computed for permutations. We collect a dataset of the algorithm's performance on multiple instances of the W-model benchmark problem layered over the Ham problem for permutations. Finally, we present a preliminary experimental evaluation of the (1 + (λ, λ)) genetic algorithm tuned by the proposed approach.
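
To give a flavour of fitness landscape features on permutations, the sketch below computes a classic one: the autocorrelation of fitness along a random walk, here using adjacent transpositions as the neighbourhood. The paper develops its own feature set; this example is illustrative only, and the toy fitness in the usage line is an assumption.

```python
import random

def random_walk_autocorrelation(fitness, n, steps=200, lag=1):
    """Classic landscape feature: autocorrelation of fitness values along
    a random walk over permutations, stepping by adjacent transpositions."""
    perm = list(range(n))
    values = []
    for _ in range(steps):
        values.append(fitness(perm))
        i = random.randrange(n - 1)
        perm[i], perm[i + 1] = perm[i + 1], perm[i]
    mean = sum(values) / len(values)
    num = sum((values[t] - mean) * (values[t + lag] - mean)
              for t in range(len(values) - lag))
    den = sum((v - mean) ** 2 for v in values)
    return num / den if den else 0.0

# e.g. on a toy "count fixed points" fitness over permutations of size 20:
rho = random_walk_autocorrelation(lambda p: sum(i == v for i, v in enumerate(p)), n=20)
```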

Genetic algorithm cleaning in sequential data mining: analyzing solutions to parsons' puzzles

Sequence mining has been receiving increasing attention from the data science community, as it helps reveal the underlying patterns in a given phenomenon. For instance, clustering sequences of "actions" taken by users provides understandable trajectories of events. However, such sequences often include inessential actions recorded during data collection, and it is generally challenging to distinguish between noise and information during clustering. This paper proposes using a Genetic Algorithm (GA) to denoise the sequences before they undergo clustering, with the intent of improving clustering quality as measured by the Silhouette Score. We present preliminary results obtained by applying this approach to data collected by a well-established Intelligent Tutoring System (Epplets.org) used by novice programmers. Our findings reveal that the GA can discover sequence cleanings that improve the quality of the clustering process. These results motivate further research into applying Evolutionary Techniques to data cleaning in sequence mining.
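
A minimal sketch of the fitness evaluation this implies: a GA individual is a binary mask over action types, sequences are cleaned accordingly, embedded (here via a simple bag-of-actions), clustered, and scored by silhouette. The embedding and the use of KMeans are illustrative choices of this sketch, not necessarily the paper's pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def clean(sequences, keep_mask, action_types):
    """Drop every action whose type is switched off in the binary mask."""
    kept = {a for a, keep in zip(action_types, keep_mask) if keep}
    return [[a for a in seq if a in kept] for seq in sequences]

def to_vectors(sequences, action_types):
    """Simple bag-of-actions embedding so the sequences can be clustered."""
    index = {a: i for i, a in enumerate(action_types)}
    X = np.zeros((len(sequences), len(action_types)))
    for r, seq in enumerate(sequences):
        for a in seq:
            X[r, index[a]] += 1
    return X

def fitness(keep_mask, sequences, action_types, k=3):
    """GA fitness: silhouette score of clustering the cleaned sequences."""
    cleaned = clean(sequences, keep_mask, action_types)
    X = to_vectors(cleaned, action_types)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(X)
    return silhouette_score(X, labels)
```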

Analysis of neutral rewrite operator effects on arithmetic domain

We investigate the potential benefits of semantically neutral rewrites in Genetic Programming. These rewrites are expressed in the form of domain-specific axioms together with building-block strategies for applying them. We develop these strategies with the goal of improving the best-of-run fitness convergence of the evolutionary process. The approach is evaluated using benchmark problems defined on the arithmetic domain.

The performance of a classic Koza-style Genetic Programming approach, augmented with axiom-based semantically neutral rewrites, is compared to the basic version, with and without random mutations. The results suggest that semantically neutral rewrites improve convergence speed, measured by best-of-run fitness, on the majority of the considered benchmark problems.
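
A minimal sketch of axiom-based neutral rewriting on expression trees: the two axioms below (x + 0 → x, x · 1 → x) are applied bottom-up in the simplifying direction, yielding a different tree that computes the same function. The paper's building-block strategies also apply such axioms in the expanding direction, which is not shown here.

```python
# Expressions as nested tuples: ('op', left, right), with strings/numbers as leaves.
NEUTRAL_AXIOMS = {
    ('+', 0): lambda x: x,          # x + 0 -> x
    ('*', 1): lambda x: x,          # x * 1 -> x
}

def rewrite(expr):
    """Apply semantically neutral axioms bottom-up; the result computes
    the same function but has a different (here, smaller) syntax tree."""
    if not isinstance(expr, tuple):
        return expr
    op, a, b = expr
    a, b = rewrite(a), rewrite(b)
    for (ax_op, ident), rule in NEUTRAL_AXIOMS.items():
        if op == ax_op and b == ident:
            return rule(a)
        if op == ax_op and a == ident:   # both operators are commutative
            return rule(b)
    return (op, a, b)

print(rewrite(('+', ('*', 'x', 1), 0)))  # -> 'x'
```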