GECCO '20: Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion

Full Citation in the ACM Digital Library

SESSION: Competition entry: Competition evolutionary computation in the energy domain: Smart grid applications

Applying some EDAs and hybrid variants to the ERM problem under uncertainty

The Energy Resource Management (ERM) under uncertainty of a smart grid is a highly complex problem (a Mixed Integer Non-Linear Problem) with the objective of maximizing profits by reducing the need to buy energy from the day-ahead market or external suppliers at high prices. In this paper, several hybrid Estimation of Distribution Algorithm (EDA) variants were implemented to solve the ERM problem under uncertainty. In addition, using the WCCI/GECCO 2020 ERM competition framework, we compared the implemented variants with the framework's baseline algorithm and with the algorithms of the previous competitions. The experimental results showed that these EDAs and hybrid methodologies are good tools for solving the ERM problem under uncertainty.

SESSION: Competition entry: Competition on the optimal camera placement problem (OCP) and the unicost set covering problem (USCP)

Weighting-based parallel local search for optimal camera placement and unicost set covering

This paper presents a weighting-based parallel local search algorithm (WPLS) for solving the optimal camera placement problem (OCP) and the unicost set covering problem (USCP). The neighborhood exploration of WPLS is parallelizable and hence can be accelerated using parallel computing architectures such as multi-core CPUs and GPGPUs.

SESSION: Competition entry: Competition open optimization competition 2020

Deep statistics: more robust performance statistics for single-objective optimization benchmarking

The performance measures and statistical techniques we select affect the conclusions we can draw about the behavior of algorithms. For this reason, we propose more robust performance statistics for addressing statistical and practical significance, as well as for investigating the exploration and exploitation powers of stochastic optimization algorithms. These statistics are introduced by the Deep Statistical Comparison approach and its variants, and their implementations are available as part of the DSCTool, which provides web services for robust ranking and hypothesis testing, including a proper selection of an omnibus statistical test and post-hoc tests if needed.

PerformViz: a machine learning approach to visualize and understand the performance of single-objective optimization algorithms

To visually present the overall performance of several algorithms tested on several benchmark problems in a single plot, we present a machine learning approach called performViz. It allows one to see clearly, from a single plot, which algorithms are most suited for a given problem, the influence of each problem on the overall algorithm performance, and similarities among both algorithms and problems. It consists of four steps: i) selecting a performance measure, ii) selecting a statistic to calculate a representative value of the performance measure for each algorithm on each problem, iii) performing hierarchical clustering, and iv) using heatmaps to visualize the clustering result.
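
As an illustration of these four steps, a minimal sketch follows (placeholder data and names, not the authors' implementation); seaborn's clustermap combines the clustering and heatmap stages:

    import numpy as np
    import pandas as pd
    import seaborn as sns

    rng = np.random.default_rng(0)

    # Steps i)-ii): stand-in for real data -- one representative value (e.g.
    # the median best fitness) per algorithm (rows) and problem (columns).
    data = pd.DataFrame(
        rng.random((5, 10)),
        index=[f"algo_{i}" for i in range(5)],
        columns=[f"problem_{j}" for j in range(10)],
    )

    # Steps iii)-iv): hierarchical clustering of both axes plus a heatmap.
    grid = sns.clustermap(data, method="average", metric="euclidean")
    grid.savefig("performviz_sketch.png")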

SESSION: Competition entry: Competition on single objective bound constrained numerical optimization

SOMA-CL for competition on single objective bound constrained numerical optimization benchmark: a competition entry on single objective bound constrained numerical optimization at the genetic and evolutionary computation conference (GECCO) 2020

This paper describes a competition entry for the CEC 2020 benchmark set on single objective bound-constrained numerical optimization at the Genetic and Evolutionary Computation Conference (GECCO) 2020, using a novel algorithm titled Self-organizing Migrating Algorithm with CLustering-aided migration (SOMA-CL).

SESSION: Competition entry: Competition on single objective constrained numerical optimization

A modified covariance matrix adaptation evolution strategy for real-world constrained optimization problems

Most real-world black-box optimization problems involve multiple non-linear and non-convex constraints, making them difficult to solve. In this work, we introduce a variant of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) with linear time complexity that handles the constraints of Constrained Optimization Problems (COPs). CMA-ES is already well known as a powerful algorithm for solving continuous, non-convex, black-box optimization problems by fitting a second-order model to the underlying objective function (similar in spirit to the Hessian approximation used by Quasi-Newton methods in mathematical programming). The proposed algorithm utilizes an ε-constraint-based ranking and a repair method to handle constraint violations. The experimental results on a group of real-world optimization problems show that the proposed algorithm outperforms several other state-of-the-art algorithms in terms of constraint handling and robustness.
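
As background, a generic ε-constraint comparison can be sketched as follows (an illustrative sketch only; the paper's exact ranking rule may differ):

    # Generic epsilon-constraint comparison for minimization: solutions whose
    # total constraint violation is within epsilon are compared by objective
    # value; otherwise the less-violating solution is preferred.
    def eps_less(f_a, viol_a, f_b, viol_b, eps):
        """Return True if solution a is preferred over solution b."""
        if (viol_a <= eps and viol_b <= eps) or viol_a == viol_b:
            return f_a < f_b      # compare by objective value
        return viol_a < viol_b    # compare by constraint violation

    # Example: with eps = 0.1, a slightly infeasible but better solution wins.
    print(eps_less(1.0, 0.05, 2.0, 0.0, eps=0.1))  # -> True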

A self-adaptive spherical search algorithm for real-world constrained optimization problems

Determining the global optimum of complex non-convex real-world optimization problems remains a challenging task. Many researchers have been developing various types of effective direct-search-based methods to tackle these problems. In this paper, we introduce a new variant of the recently developed Spherical Search (SS) algorithm, which contains a powerful and effective self-adaptation structure to enhance performance. To analyze its performance, the proposed algorithm is tested on 57 test problems collected from different real-world applications. The obtained results statistically confirm the efficacy and efficiency of the proposed algorithm.

SESSION: Hot off the press

Sharp bounds for genetic drift in estimation of distribution algorithms (Hot-off-the-press track at GECCO 2020)

Estimation of distribution algorithms (EDAs) are a successful branch of evolutionary algorithms (EAs) that evolve a probabilistic model instead of a population. Analogous to genetic drift in EAs, EDAs also encounter the phenomenon that the random sampling in the model update can move the sampling frequencies to boundary values not justified by the fitness. This can result in a considerable performance loss.

This work gives the first tight quantification of this effect for three EDAs and one ant colony optimizer, namely for the univariate marginal distribution algorithm, the compact genetic algorithm, population-based incremental learning, and the max-min ant system with iteration-best update. Our results allow one to choose the parameters of these algorithms in such a way that, within a desired runtime, no sampling frequency approaches the boundary values without a clear indication from the objective function.

This paper for the Hot-off-the-Press track at GECCO 2020 summarizes the work "Sharp Bounds for Genetic Drift in Estimation of Distribution Algorithms" by B. Doerr and W. Zheng, which has been accepted for publication in the IEEE Transactions on Evolutionary Computation [5].

The univariate marginal distribution algorithm copes well with deception and epistasis

In their recent work, Lehre and Nguyen (FOGA 2019) show that the univariate marginal distribution algorithm (UMDA) needs time exponential in the parent population size to optimize the DeceptiveLeadingBlocks (DLB) problem. They conclude from this result that univariate EDAs have difficulties with deception and epistasis.

In this work, we show that this negative finding is caused by an unfortunate choice of the parameters of the UMDA. When the population sizes are chosen large enough to prevent genetic drift, then the UMDA optimizes the DLB problem with high probability with at most λ(n/2 + 2e ln n) fitness evaluations. Since an offspring population size λ of order n log n can prevent genetic drift, the UMDA can solve the DLB problem with O(n² log n) fitness evaluations. Together with the result of Lehre and Nguyen, our result for the first time rigorously proves that running EDAs in the regime with genetic drift can lead to drastic performance losses.
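
For concreteness, the arithmetic behind the O(n² log n) total, using only the quantities stated above, is:

    λ(n/2 + 2e ln n)  with  λ = Θ(n log n)
      = Θ(n log n) · O(n)          [since n/2 + 2e ln n = O(n)]
      = O(n² log n).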

This extended abstract summarizes our work "The Univariate Marginal Distribution Algorithm Copes Well with Deception and Epistasis", which appeared in the Proceedings of Evolutionary Computation in Combinatorial Optimization (EvoCOP), 2020, pp. 51--66, and won the conference's best-paper award.

Is the statistical significance between stochastic optimization algorithms' performances also significant in practice?

To transfer knowledge gained from benchmarking studies of stochastic optimization algorithms, we need a way to decide whether a statistically significant difference between their performances is also important for real-world applications. For this reason, we have recently proposed the practical Deep Statistical Comparison (pDSC), which takes practical significance into account when making a statistical comparison of meta-heuristic stochastic optimization algorithms for single-objective optimization problems. Experimental results showed that the approach is very promising.

Large scale biomedical data analysis with tree-based automated machine learning

Tree-based Pipeline Optimization Tool (TPOT) is an automated machine learning (AutoML) system that recommends optimal pipelines for supervised learning problems by scanning data for novel features, selecting appropriate models, and optimizing their parameters. However, like other AutoML systems, TPOT may reach computational resource limits when working on big data such as whole-genome expression data. We developed two novel features for TPOT, Feature Set Selector and Template, which leverage domain knowledge, greatly reduce the computational expense, and flexibly extend TPOT's applicability to biomedical big data analysis.

Empirical linkage learning

Linkage learning is employed by many state-of-the-art evolutionary methods designed for solving problems in discrete domains. The effectiveness of these methods depends on linkage quality. The linkage may suffer from two different types of inaccuracy. If some of the gene dependencies are not discovered, missing linkage takes place. Conversely, if the linkage identifies gene dependencies that do not exist, false linkage takes place. To the best of our knowledge, all linkage learning techniques proposed so far predict that some genes are dependent and can commit both types of mistake. Therefore, we propose a more direct approach: we disturb the genotype and check how these disturbances influence the effects of local search. We prove that the proposed Linkage Learning based on Local Optimization (3LO) will never report any false linkage, although it may still miss some true gene dependencies. 3LO is fundamentally different from other linkage learning techniques. Its main disadvantage is a high computational cost, which makes it unsuitable for existing evolutionary methods that compute linkage frequently. Therefore, we propose an evolutionary method that employs 3LO. More details concerning 3LO, linkage quality, and diversity may be found in the original paper [5].

SGP-DT: towards effective symbolic regression with a semantic GP approach based on dynamic targets

Semantic Genetic Programming (SGP) approaches have demonstrated remarkable results in different domains. SGP-DT is one of the latest such approaches. Notably, SGP-DT proposes a dynamic-target approach that combines multiple GP runs without relying on any form of crossover. On eight well-known datasets, SGP-DT achieves small RMSE, on average 25% smaller than that of state-of-the-art approaches.

Using exploratory landscape analysis to visualize single-objective problems

Selecting the relevant algorithm for a given problem is a crucial first step for achieving good optimization algorithm performance. Exploratory Landscape Analysis can help with this problem by calculating landscape features that numerically describe individual problems.

To understand the problem space in single-objective numerical optimization, we recently presented a preliminary study of how Exploratory Landscape Analysis can be used to visualize different optimization benchmark problems, with the ultimate goal of visualizing problems that are similar to one another close together. In this paper, we examine how the selection of landscape features affects such a visualization, and show that proper preprocessing of landscape features is crucial for producing a good visualization. In particular, we show that only a subset of landscape features is invariant to simple transforms such as translation and scaling. Further, we examine how such an approach can be used to visually compare problems from different optimization benchmark sets.

SESSION: Late-breaking abstract

Combined kriging surrogate model for efficient global optimization using the optimal weighting method

When solving design optimization problems using evolutionary algorithms, the optimization process can be computationally expensive. To accelerate it, ordinary Kriging (OK) surrogate models are often used within the efficient global optimization (EGO) framework. However, in some cases the EGO framework can lead to a globally inaccurate OK surrogate model when many sample points are close to each other. One way to tackle this issue is to use a regression OK model instead of an interpolation OK model. In this paper, we propose an interpolation method that solves the issue by combining a local and a global OK model fitted to different sets of the sample points. This paper describes the optimal weighting method used to combine the different Kriging models and compares the performance of the new method with interpolation and regression OK on the modified Branin test function. We find that when many sample points exist close to each other, the combined Kriging method outperforms both the interpolation and the regression OK models.
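
A minimal sketch of combining a local and a global Kriging-style model is given below; the pointwise inverse-variance weighting shown is an illustrative assumption, not necessarily the paper's optimal weighting method:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    X = rng.uniform(-5.0, 10.0, size=(40, 1))
    y = np.sin(X).ravel() + 0.1 * rng.standard_normal(40)

    # Global model on all samples; local model on a region of interest.
    global_gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0)).fit(X, y)
    near = (X[:, 0] > 0.0) & (X[:, 0] < 5.0)
    local_gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X[near], y[near])

    def combined_predict(x):
        mu_g, sd_g = global_gp.predict(x, return_std=True)
        mu_l, sd_l = local_gp.predict(x, return_std=True)
        w = sd_g**2 / (sd_g**2 + sd_l**2 + 1e-12)  # trust the lower-variance model
        return w * mu_l + (1.0 - w) * mu_g

    print(combined_predict(np.array([[2.5]])))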

A daily stock index predictor using feature selection based on a genetic wrapper

In this experiment, we applied genetic feature selection to global macroeconomic indicators and used support vector regression to create a model for predicting the daily increase/decrease rate of the Korea Composite Stock Price Index (KOSPI). Experiments were conducted by dividing the 2007 to 2018 time series into various yearly time intervals. Overall, the mean absolute error was approximately 15% lower when the genetic-algorithm-based feature selection method was used. The feature selection improved the predictive performance by assembling discriminatory subsets of macroeconomic indicators that effectively represent the stock market trends in a given time interval.
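
A minimal sketch of such a genetic wrapper is shown below (synthetic stand-in data and illustrative GA settings; not the authors' implementation):

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVR

    rng = np.random.default_rng(1)
    X = rng.standard_normal((300, 20))      # stand-in macroeconomic indicators
    y = X[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(300)

    def fitness(mask):                      # wrapper: score SVR on the subset
        if not mask.any():
            return -np.inf
        return cross_val_score(SVR(), X[:, mask], y, cv=3,
                               scoring="neg_mean_absolute_error").mean()

    pop = rng.integers(0, 2, size=(20, 20)).astype(bool)
    for _ in range(10):                     # a few GA generations
        fit = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(fit)[-10:]]           # truncation selection
        partners = parents[rng.permutation(10)]
        cut = rng.integers(1, 19)                      # one-point crossover
        children = np.concatenate([parents[:, :cut], partners[:, cut:]], axis=1)
        children ^= rng.random(children.shape) < 0.05  # bit-flip mutation
        pop = np.concatenate([parents, children])

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print("selected indicators:", np.flatnonzero(best))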

A genetic algorithm to optimize SMOTE and GAN ratios in class imbalanced datasets

Class imbalance is one of the problems commonly encountered in the fields of data analysis and machine learning. When the training dataset is imbalanced, machine learning models become biased and learn inaccurate classifiers. To resolve such data imbalance problems, a strategy that increases the volume of data of minority classes is often used by applying the synthetic minority oversampling technique (SMOTE). Furthermore, the use of generative adversarial networks (GANs) for data oversampling has recently become more common. This research used a genetic algorithm to search for and optimize combinations of oversampling ratios based on the SMOTE and GAN techniques. The proposed method was compared with using a single technique to train either the imbalanced data or the oversampled data. The results establish that the classifier trained on data oversampled with the ratio optimized by the proposed method achieves superior classification performance.
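
For illustration, one candidate of such a GA could be encoded and evaluated as below (a sketch under assumptions: the GAN sampler is replaced by a jitter-based stand-in, and all names and ratios are placeholders):

    import numpy as np
    from imblearn.over_sampling import SMOTE
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=600, weights=[0.9, 0.1], random_state=0)
    rng = np.random.default_rng(0)

    def gan_like_oversample(X, y, n_new):
        """Stand-in for a trained GAN: jittered copies of minority points."""
        minority = X[y == 1]
        picks = rng.integers(0, len(minority), n_new)
        X_new = minority[picks] + 0.05 * rng.standard_normal((n_new, X.shape[1]))
        return np.vstack([X, X_new]), np.concatenate([y, np.ones(n_new, dtype=int)])

    def fitness(chromosome):
        smote_ratio, gan_count = chromosome   # the combination the GA optimizes
        X_s, y_s = SMOTE(sampling_strategy=smote_ratio,
                         random_state=0).fit_resample(X, y)
        X_g, y_g = gan_like_oversample(X_s, y_s, gan_count)
        clf = RandomForestClassifier(random_state=0)
        return cross_val_score(clf, X_g, y_g, scoring="f1", cv=3).mean()

    print(fitness((0.5, 50)))   # evaluate one candidate ratio combination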

Data driven building of realistic neuron model using IBEA and CMA evolution strategies

Recently, the building of large neuronal circuits from realistic neuron models has gained traction. This bottom-up approach relies on the accurate description of the primitive elements composing the brain, such as neurons and astrocytes, which are then aggregated into larger and larger circuits. However, as of today, this data-intensive approach is slowed down by the lack of the complete biological descriptions that would be needed to build such models. In the present study, we compare the use of different optimizers in the context of building numerical neuron models that present realistic electrical behaviours despite only having a sparse description of the original neuron behaviour. To do so, we perform single- and multi-objective optimization of the neuron model parameters, using as targets electrical features (e-features) extracted from voltage recordings obtained through patch-clamp experiments. The purpose of the optimizers is therefore to find the optimal set of parameters for the neuron model such that it presents a firing behaviour similar to that of the original neurons when exposed to the same stimuli. This neuron model building approach is not new [7, 10]; however, to the authors' knowledge, it is the first time that a multi-objective covariance matrix adaptation evolution strategy has been used in such a high-dimensional parameter space with e-features obtained from experimental recordings.

A bio-inspired approach for the spectrum allocation problem in IoT networks

This paper addresses the spectrum allocation optimization problem for the Internet of Things. We aim to maximize the signal-to-interference ratio and the number of attended active devices. To this end, we apply a binary particle swarm optimization algorithm, a genetic algorithm, and an artificial bee colony algorithm. To compare the results obtained from these bio-inspired algorithms, we use Monte Carlo simulation, a classical method usually applied to solve this type of problem.

A surrogate model using deep neural networks for optimal oil skimmer assignment

Optimization of equipment assignments for oil spill responses entails several real-world constraints. We propose a surrogate model that utilizes a deep neural network for the optimization of oil-removal equipment assignments. The surrogate model was constructed by applying machine learning, based on deep neural networks, to 20,000 assignment plans, all of which satisfy the various constraints. Compared to the existing optimization model, the constructed model showed a 61% increase in efficiency.

Usage of a genetic algorithm for optimizing stock usage

We present a common supply chain problem and show how we used a Genetic Algorithm (GA) to solve it. We give a short introduction to the problem, describe the GA used, and report its preliminary results, closing with future work and areas for improvement.

Automatic evolutionary learning of composite models with knowledge enrichment

This paper provides the main concepts of the knowledge-enriched AutoML approach and briefly describes the current results of the proof-of-concept implementation within the FEDOT framework. By knowledge enrichment, we mean the insertion of domain-specific models and expert-like meta-heuristics. We also involve multi-scale learning as a part of the identification of complex models. The proposed concepts make it possible to create effective and interpretable composite models.

An RTS-based algorithm for noisy optimization by strategic sample accumulation

This paper proposes a noisy optimization algorithm that effectively and efficiently deals with noise in the objective function by appropriately allotting samples to individuals in the population of an overlapping-generation model. During survival selection, our algorithm makes probabilistic decisions about when and to whom additional samples should be given, aiming to save as many samples as possible. Since the samples allotted to individuals are accumulated as long as they remain in the population, long-lasting individuals tend to have a large accumulation and thus quite accurate evaluations. Test results with benchmark functions confirm the performance of the proposed algorithm.
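
The sample-accumulation idea can be sketched as follows (illustrative only; the paper embeds this in its RTS-based survival selection, which is not reproduced here):

    import random

    class Individual:
        def __init__(self, genome):
            self.genome = genome
            self.samples = []                 # accumulated noisy evaluations

        def evaluate(self, noisy_f):
            self.samples.append(noisy_f(self.genome))

        @property
        def fitness(self):                    # running mean over all samples
            return sum(self.samples) / len(self.samples)

    def noisy_sphere(x):
        return sum(v * v for v in x) + random.gauss(0.0, 1.0)

    random.seed(0)
    a, b = Individual([0.1, 0.2]), Individual([1.0, 1.0])
    for ind in (a, b):
        ind.evaluate(noisy_sphere)

    # Probabilistically grant extra samples only while the contest is close,
    # saving samples; survivors keep (and keep growing) their accumulation.
    while abs(a.fitness - b.fitness) < 1.0 and random.random() < 0.9:
        a.evaluate(noisy_sphere)
        b.evaluate(noisy_sphere)
    winner = min((a, b), key=lambda ind: ind.fitness)
    print(len(winner.samples), round(winner.fitness, 3))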

Teng-Yue algorithm: a novel metaheuristic search method for fast cancer classification

This contribution is about a novel swarm search algorithm designed to enhance feature selection (FS) performance in building an incremental Bayesian network (IBN) for efficient cancer data classification. IBN has the advantages of finding causality relations between variables and offering reasonable prediction accuracy, both of which are in demand for medical data analysis (MDA). FS is challenging in this scenario because the chosen feature subset has to satisfy both accuracy and generalization objectives for the IBN. An extensive simulation experiment was carried out comparing a dozen swarm-based FS methods driven by different contemporary metaheuristics. In particular, a new search method called the Teng-Yue Algorithm (TYA), whose name refers to the motions of "gallop" and "leap", is formulated; it mimics no living creature but just the two fundamental dynamics of local intensification and global exploration of the very original metaheuristic search design. It was observed from the experimental results that TYA outperformed all other metaheuristics when paired with correlation-based feature subset selection and IBN, given careful tuning of the balance between local and global search effort. The result is encouraging, as TYA is relatively simple and able to empower the fast and accurate BN inference that MDA may leverage.

Constraint handling in genotype to phenotype mapping and genetic operators for project staffing

Project staffing in many organisations involves the assignment of people to multiple projects while satisfying multiple constraints. The use of a genetic algorithm with constraint handling performed during a genotype to phenotype mapping process provides a new approach. Experiments show promise for this technique.

Antimander: open source detection of gerrymandering through multi-objective evolutionary algorithms

Redrawing congressional district boundaries for political advantage (i.e. gerrymandering) is a recognized problem in the United States. Legal cases opposing gerrymandering have been stymied by the lack of objective measures showing that a districting is unnecessarily biased relative to other viable designs, given a set of competing considerations (such as fairness, compactness, and competitiveness). As a result, there is interest in methods that can show that a candidate districting's fairness could be significantly improved without sacrificing any other considerations. We propose multi-objective evolutionary algorithms as a promising approach for identifying gerrymandering, and districting as a real-world benchmark for the field. Our contributions are (1) to design an encoding and operators appropriate to the problem, and explore enhancements such as novelty search and feasible-infeasible search, (2) to set baseline results, and (3) to release an open-source tool called Antimander, with the hope of inspiring future research aimed at solving an important political problem.

Towards improvement of SUNA in multiplexers with preliminary results of simple logic gate neuron variation

Spectrum-Diverse Neuroevolution with Unified Neural Models (SUNA) has been shown to be a successful alternative to NeuroEvolution of Augmenting Topologies (NEAT). Requiring fewer parameters than NEAT while possessing a more unified representation power and effective spectrum-based diversity preservation, SUNA outperformed NEAT on most of the problems tested. However, we think a simple approach can improve SUNA's efficiency on the strategic decision-making problem on which the model itself was tested, i.e. the multiplexer problem. In the proposed method, we incorporate the idea of logic gates into the hidden neurons of the model, suggesting to it, in the form of neurons, the solutions that solve the problem in the real world. It is shown that with these simple logic-gate neuron variations, SUNA can be slightly enhanced at solving the multiplexer problem.

Quality enhancement of stochastic feature subset selection via genetic algorithms. Assessment in bioinformatics data sets

This contribution introduces a new approach for enhancing the quality of the solutions reached by a stochastic feature subset selection procedure based on a scatter search algorithm, by following it with a genetic-algorithm-based feature subset selector. The scatter search approach makes use of correlation measures in the context of classification algorithms, and in a natural way the genetic approach also explores and exploits the search space through a correlation-based metric. Data preparation is known to improve the effectiveness of classification models as well as to reduce the feature space. Applying feature subset selection once is very common; applying the procedure twice is less frequent and is the scenario of this paper. The results are better than those of the proposal with a single meta-heuristic, and the reduction of the feature space is very appealing.

On the order of variables for multitasking optimization

As an emerging paradigm in evolutionary computation, multitasking algorithms can solve multiple tasks simultaneously by using hidden parallelism to transfer knowledge between tasks. To the best of our knowledge, the order of the variables of a task has not been explored in the literature so far. In this paper, we therefore analyze the disturbed individuals and the genetic mechanism in a multitasking environment with only two tasks. The experimental results on four instances reveal that the effect of the reverse order on the multitasking algorithm is not significant in practice.

An effective variable transfer strategy in multitasking optimization

As an emerging paradigm in evolutionary computation, multitasking evolutionary algorithms can solve multiple self-contained tasks simultaneously. Their performance largely relies on task similarity. In this paper, a novel variable transfer strategy is proposed to boost task similarity and reduce negative knowledge transfer between tasks. The experimental results on nine instances reveal that the proposed strategy has great exploitation ability, enhancing convergence speed.

On the co-authorship network in evolutionary computation

With increased research achievements in the field of evolutionary computation, accurate identification and analysis of co-authorship characteristics have become more important than before. Therefore, in this study, a co-authorship network was used to analyze the aspects of collaboration in the evolutionary computation field. Decennial co-authorship networks were constructed from a bibliography database, after which the macroscopic network properties and the changes in which authors play a central role were analyzed. In particular, for the macroscopic features, the results were compared with those from the entire computer science field to derive the differences.

POSTER SESSION: Ant colony optimization and swarm intelligence

Evolutionary curriculum learning approach for transferable cellular automata rule optimization

This paper proposes a novel method for supervised optimization of cellular automata rules using curriculum learning. The optimized edge detector manages to generalize a rule from synthetic data that is applicable to magnetic resonance images, removing the need for manual annotation of medical data. The method achieves competitive results with classical edge detectors on our test set and may be incorporated in computer-aided diagnosis systems.

The effect of differential quality and differential zealotry in the best-of-n problem

In this paper, we study the interplay between differential option quality and differential quantity of individuals with fixed option (henceforth called zealots), in a best-of-n problem with n = 2 options. We study how the consensus equilibria change with respect to these two factors. We perform systematic computer simulations in an antagonistic scenario whereby one option has a higher quality but a minority of zealots compared to the other option.

Multi-objective ant colony optimization for task allocation in vehicle-based crowdsourcing

With the continuous popularization of various mobile devices, spatial crowdsourcing tasks have also developed rapidly. Recently, vehicles are being used to complete spatial crowdsourcing tasks. Drivers and passengers can provide the ability to actively complete query tasks, while sensors on the vehicle can passively complete sensing tasks. In this paper, we consider a spatial crowdsourcing scenario where a worker can complete a location-based query task and also a sensing task at the same time. The formal description of the problem is given, and then a multi-objective ant colony algorithm for vehicle-based crowdsourcing is proposed to optimize both task query reliability and sensing coverage. The performance of the algorithm is evaluated on a real-world traffic trace dataset.

Improving BPSO-based feature selection applied to offline WI handwritten signature verification through overfitting control

This paper investigates the presence of overfitting when using Binary Particle Swarm Optimization (BPSO) to perform feature selection in the context of Handwritten Signature Verification (HSV). SigNet is a state-of-the-art deep CNN model for feature representation in the HSV context, and its feature vectors have 2048 dimensions. Some of these dimensions may include redundant information in the dissimilarity representation space generated by the dichotomy transformation (DT) used by the writer-independent (WI) approach. The analysis is carried out on the GPDS-960 dataset. Experiments demonstrate that the proposed method is able to control overfitting during the search for the most discriminant representation.

POSTER SESSION: Complex systems (artificial life/artificial immune systems/generative and developmental systems/evolutionary robotics/evolvable hardware)

Evolving hypernetworks for game-playing agents

This work investigates the evolution of indirectly-encoded neural networks through a hypernetwork approach. We find that for some Atari games, a hypernetwork with over 14 times fewer parameters can compete with or even outperform directly-encoded policy networks. While hypernetworks perform worse than directly encoded networks in the game Frostbite, in the game Gravitar the approach reaches a higher score than any other evolutionary method and outperforms complicated deep reinforcement learning setups such as Rainbow.

Comparing indirect encodings by evolutionary attractor analysis in the trait space of modular robots

In evolutionary robotics, the representation of the robot is of primary importance. Often indirect encodings are used, whereby a complex developmental process grows a body and a brain from a genotype. In this work, we aim at improving the interpretability of robot morphologies and behaviours resulting from indirect encoding. We develop and use a methodology that focuses on the analysis of evolutionary attractors, represented in what we call the trait space: Using trait descriptors defined in the literature, we define morphological and behavioural Cartesian planes where we project the phenotype of the final population. In our experiments we show that, using this analysis method, we are able to better discern the effect of encodings that differ only in minor details.

A preliminary study towards an improved shepherding model

Shepherding is a biologically inspired swarm control methodology based on the herding of sheep by sheepdogs. The approach is applicable not only to sheep herding, but also to the guidance of one group of agents (e.g. robots) by another. The pertinent research questions involve modeling the dynamics of the sheep and sheepdog agent behaviours, in order to fully understand the sources of complexity in the problem and improve shepherding performance. To this end, we examine an existing model by Strömbom et al. and present a preliminary study towards improving it.

Safer reinforcement learning through evolved instincts

An important goal in reinforcement learning is to create agents that can quickly adapt to new goals but at the same time avoid situations that might cause damage to themselves or their environments. One way agents learn is through exploration mechanisms, which are needed to discover new policies. However, in deep reinforcement learning, exploration is normally done by injecting noise in the action space. While performing well in many domains, this setup has the inherent risk that the noisy actions lead agents to unsafe environment states. In this paper, we introduce a novel approach called Meta-Learned Instinctual Networks (MLIN) that allows agents to perform lifetime learning while avoiding hazardous states. At the core of the approach is a plastic network trained through reinforcement learning and an evolved "instinctual" network, which does not change during the agent's lifetime but can modulate the noisy output of the plastic network. We test our idea on a simple 2D navigation task with hazard zones, in which the agent has to learn to approach new targets during deployment. While a standard meta-trained network performs poorly in these tasks, MLIN allows agents to learn to navigate to new targets while minimizing collisions with hazard zones. These results suggest that meta-learning augmented with an instinctual network is a promising approach for safe AI.
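
The modulation idea can be sketched as follows (a minimal sketch, not the MLIN implementation; the two networks are reduced to toy functions):

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 2))

    def plastic_policy(obs):                  # changes during the lifetime
        return np.tanh(obs @ W)

    def instinct_gate(obs):                   # evolved, fixed during lifetime
        return 1.0 / (1.0 + np.exp(obs.sum()))  # gate in (0, 1)

    def act(obs):
        noisy = plastic_policy(obs) + rng.normal(0.0, 0.3, size=2)  # exploration
        return instinct_gate(obs) * noisy     # instinct can suppress risky noise

    print(act(np.ones(4)))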

Exploring the BipedalWalker benchmark with MAP-Elites and curiosity-driven A3C

The MAP-Elites algorithm can efficiently explore reinforcement learning problems and optimize collections of solutions over user-defined ranges of possible behaviors. However, MAP-Elites can be difficult to scale to highly dimensional problems, such as the optimization of large deep neural networks. Traditional deep reinforcement learning can train agents with a complex network model by relying on human-designed extrinsic rewards. In complex problems, this translates into reward landscapes that are extremely sparse and hard to explore. This has led to the development of curiosity-driven reinforcement learning algorithms that use intrinsic rewards to continuously optimize for novel policies. While this approach encourages exploration, it remains to be seen if it can be used similarly to MAP-Elites to search for a collection of highly diverse solutions.

Diversity-based design assist for large legged robots

We combine MAP-Elites and highly parallelisable simulation to explore the design space of a class of large legged robots, which stand at around 2m tall and whose design and construction are not well-studied. The simulation is modified to account for factors such as motor torque and weight, and presents a search space of reasonable fidelity. A novel robot encoding allows for bio-inspired features such as legs scaling along the length of the body. The impact of three possible control generation schemes is assessed in the context of body-brain co-evolution, showing that even constrained problems benefit strongly from coupling-promoting mechanisms. A two-stage process is implemented. In the first stage, a library of possible robots is generated, treating user requirements as constraints. In the second stage, the most promising robot niches are analysed and a suite of human-understandable design rules is generated, related to the values of their feature variables. These rules, together with the library, are then ready to be used by a (human) robot designer as a Design Assist tool.

Diversity in swarm robotics with task-independent behavior characterization

Evolutionary computation provides methods to automatically generate controllers for swarm robotics. Many approaches rely on optimization, where the targeted behavior is quantified in the form of a fitness function. Other methods, like novelty search, increase exploration by putting selective pressure on unexplored behavior space using a domain-specific behavioral distance function. In contrast, minimize surprise leads to the emergence of diverse behaviors by using an intrinsic motivation as fitness, namely high prediction accuracy. We compare a standard genetic algorithm, novelty search, and minimize surprise in a swarm robotics setting to evolve diverse behaviors, and show that minimize surprise is competitive with novelty search.

Evolving an artificial creole

There has been a significant amount of research on computational modeling of language evolution to understand the origins and evolution of communication [5--8]. However, there has been relatively little computational modeling of the environmental factors that enable the evolution of creole languages [2, 9]; specifically, modeling lexical term transmission between intersecting language groups, within the context of artificial creole language evolution [4]. This study used an iterative agent-based naming game [8] simulation to investigate the impact of population size and lexical similarity [7] of interacting language groups on the evolution of an artificial creole lexicon. We applied the synthetic methodology [6], using agent-based artificial language evolution as an experimental platform to investigate two objectives. First, to investigate the impact of population size of interacting groups (with differing lexicons) on the evolution of a common (creole) lexicon. Second, to evaluate the concurrent impact of lexical similarity between interacting agent groups on the evolution of a creole lexicon.

The expense of neuro-morpho functional machines

An unsolved problem in both natural and artificial evolutionary systems is determining the exact environmental and evolutionary conditions that enable complexity to evolve [16]. This is especially pertinent in evolutionary robotics [4], where possible problem-solving behavior is constrained by brain (controller) and body (morphology) complexity [12]. We evaluate the impact of environments and complexity costs on robotic controller and morphology evolution across various evolutionary robotics task scenarios. This study uses evolutionary robotics as an experimental platform to investigate the arrow of complexity hypothesis [3], previously demonstrated to hold in artificial evolutionary systems given an imposed complexity cost [2]. Specifically, we test whether energy costs imposed on evolving robot controller and morphology complexity enable the evolution of increasingly complex controller and morphological designs concomitant with increasing environment complexity. Morphological complexity was equated with possible sensor configurations for a physical counterpart Khepera III mobile robot [8], and neural complexity was equated with artificial neural network topological configurations coupled with a robot's evolved morphology.

Adaptive reinforcement learning through evolving self-modifying neural networks

The adaptive learning capabilities seen in biological neural networks are largely a product of the self-modifying behavior emerging from online plastic changes in synaptic connectivity. Current methods in Reinforcement Learning (RL) only adjust to new interactions after reflection over a specified time interval, preventing the emergence of online adaptivity. Recent work addressing this by endowing artificial neural networks with neuromodulated plasticity has been shown to improve performance on simple RL tasks trained using backpropagation, but has yet to scale up to larger problems. Here we study the problem of meta-learning in a challenging quadruped domain, where each leg of the quadruped has a chance of becoming unusable, requiring the agent to adapt by continuing locomotion with the remaining limbs. Results demonstrate that agents evolved using self-modifying plastic networks are more capable of adapting to complex meta-learning tasks, even outperforming the same network updated using gradient-based algorithms while taking less time to train.

Divergent search for image classification behaviors

When data is unlabelled and the target task is not known a priori, divergent search offers a strategy for learning a wide range of skills. Having such a repertoire allows a system to adapt to new, unforeseen tasks. Unlabelled image data is plentiful, but it is not always known which features will be required for downstream tasks. We propose a method for divergent search in the few-shot image classification setting and evaluate with Omniglot and Mini-ImageNet. This high-dimensional behavior space includes all possible ways of partitioning the data. To manage divergent search in this space, we rely on a meta-learning framework to integrate useful features from diverse tasks into a single model. The final layer of this model is used as an index into the 'archive' of all past behaviors. We search for regions in the behavior space that the current archive cannot reach. As expected, divergent search is outperformed by models with a strong bias toward the evaluation tasks. But it is able to match and sometimes exceed the performance of models that have a weak bias toward the target task or none at all. This demonstrates that divergent search is a viable approach, even in high-dimensional behavior spaces.

Novelty producing synaptic plasticity

A learning process with the plasticity property often requires reinforcement signals to guide the process. However, in some tasks (e.g. maze-navigation), it is very difficult to measure the performance of an agent to provide reinforcements, since the position of the goal is not known. This requires finding the correct behavior among a vast number of possible behaviors without having any feedback. In these cases, an exhaustive search may be needed. However, this might not be feasible especially when optimizing artificial neural networks in continuous domains. In this work, we introduce novelty producing synaptic plasticity (NPSP), where we evolve synaptic plasticity rules to produce as many novel behaviors as possible to find the behavior that can solve the problem. We evaluate the NPSP on deceptive maze environments that require the achievement of subgoals. Our results show that the proposed NPSP produces more novel behaviors compared to Random Search and Random Walk.

An artificial sequential immune responses model for anomaly detection

Self-non-self discrimination and danger theory have long been the fundamental model of modern theoretical immunology. Based on these principles, some effective and efficient artificial immune algorithms have been proposed and applied to a wide range of engineering applications. Over the past years, a new idea called sequential immune responses (SIR) has been developed to challenge the classical negative selection model and danger theory. In this conceptual paper, we look at SIR from the perspective of artificial immune system (AIS) practitioners. An overview of the SIR is presented with particular emphasis on analogies in the AIS world. A number of potential application areas are then used to provide a framing for a critical assessment of the concept, and its relevance for AIS.

POSTER SESSION: Digital entertainment technologies and arts

Adea - Evolving glyphs for aiding creativity in typeface design

Being creative in Graphic Design often requires protected experimentation processes. We present an evolutionary engine for generating glyphs, aiding designers to explore during the creative process. The system employs a Genetic Algorithm to evolve glyphs using both interactive and automatic fitness assignment. We present topological variation operators that promote the exploration of adequate topologies, and we compare the performance of topological and conventional operators for generating uppercase "A"s. The results demonstrate that the topological crossover operator performs more efficiently regarding both fitness and phenotypes.

Evolving neural network agents to play atari games with compact state representations

Recent success in solving hard reinforcement learning problems can be partly credited to the use of deep neural networks, which can extract high-level features and learn compact state representations from high-dimensional inputs, such as images. However, the large networks required to learn both state representation and policy using this approach limit the effectiveness and benefits of neuroevolution methods that have proven effective at solving simpler problems in the past. One potential solution to this problem is to separate state representation and policy learning and only apply neuroevolution to the latter. We extend research following this approach by using NEAT to evolve small policy networks for Atari games that learn from compact state representations provided by the recently released Atari Annotated RAM Interface (Atari ARI). Our results show that it is possible to evolve agents that exceed expert human performance using these compact state representations, and that, for some games, successful policy networks can be evolved that contain only a few or even no hidden nodes.

POSTER SESSION: Evolutionary combinatorial optimization and metaheuristics

Evolving search trajectories

Solving an optimization problem with a local search algorithm consists in navigating a search landscape, guided by a fitness function often defined directly from the objective function of the problem. The exploration of the resulting fitness landscape may become difficult, in particular in the presence of ruggedness and multiple local optima. We shift the problem from searching for a solution to searching for a fitness function that maximizes the efficiency of a basic local search algorithm. We use evolution strategies to generate basic hill-climbing algorithms that aim at reaching the best possible solutions.

An efficient evolutionary solution to the joint order batching - order picking planning problem

This paper introduces an innovative solution to the joint Order Batching - Order Picking Planning Problem. Previous research in this field has mainly focused on solving the two problems separately. By failing to consolidate the two components, however, this approach leads to sub-optimal solutions. In this work we propose a Genetic Algorithm for Joint Optimization (GAJO), which optimizes the integrated problem. Thorough experimentation shows GAJO to be both more effective and faster than the baseline models.

QoS-constrained multi-objective distributed data-intensive web service composition - NSGA-II with repair method

Distributed Data-intensive Web service composition (DWSC) aims to find service compositions with multiple conflicting objectives, i.e. minimising response time and cost. Additionally, it often needs to satisfy user-defined QoS constraints, e.g. budget constraints or response time constraints. To solve QoS-constrained multi-objective distributed DWSC problems, we propose a knowledge-based repair method for NSGA-II to effectively and efficiently search for Pareto-optimal service composition solutions that satisfy QoS constraints. Experiments show that our method outperforms an existing state-of-the-art repair method on several commonly used standard benchmark datasets.

A preliminary approach to evolutionary multitasking for dynamic flexible job shop scheduling via genetic programming

Genetic programming, as a hyper-heuristic approach, has been successfully used to evolve scheduling heuristics for job shop scheduling. However, the environments of job shops vary in configurations, and the scheduling heuristic for each job shop is normally trained independently, which leads to low efficiency for solving multiple job shop scheduling problems. This paper introduces the idea of multitasking to genetic programming to improve the efficiency of solving multiple dynamic flexible job shop scheduling problems with scheduling heuristics. It is realised by the proposed evolutionary framework and knowledge transfer mechanism for genetic programming to train scheduling heuristics for different tasks simultaneously. The results show that the proposed algorithm can dramatically reduce the training time for solving multiple dynamic flexible job shop tasks.

POSTER SESSION: Evolutionary machine learning

Evolving network structures for text classification using genetic algorithms

Convolutional Neural Networks (CNNs) have shown themselves in recent years to be strong contenders for text classification and sentiment analysis, achieving high accuracies that challenge other state-of-the-art methods. However, their architectures need to be designed manually and their parameters tuned manually. They require good domain knowledge to design the architecture for each individual problem, which often isn't available and can increase operating costs for companies wishing to implement such methods. In this paper, two different methods are proposed that use Genetic Algorithms to automatically evolve CNNs without requiring any human intervention. The experimental results show that one of the proposed methods rivals other CNN-based methods in classification accuracy, reaching high accuracy on the IMDB baseline dataset while requiring only a few hours of training.

An adaptive neuroevolution-based hyperheuristic

According to the No-Free-Lunch theorem, an algorithm that performs efficiently on any type of problem does not exist. In this sense, algorithms that exploit problem-specific knowledge usually outperform more generic approaches, at the cost of a more complex design and parameter tuning process. Trying to combine the best of both worlds, the field of hyperheuristics investigates the automated generation and hybridization of heuristic algorithms. In this paper, we propose a neuroevolution-based hyperheuristic approach. Particularly, we develop a population-based hyperheuristic algorithm that first trains a neural network on an instance of a problem and then uses the trained neural network to control how and which low-level operators are applied to each of the solutions when optimizing different problem instances. The trained neural network maps the state of the optimization process to the operations to be applied to the solutions in the population at each generation.

Multi-objective data stream clustering

A data stream is a massive sequence of data arriving continuously. Clustering this type of data requires restrictions in time and memory. Most clustering algorithms follow only one cluster validity measure; given different data properties, a single validity measure does not work well for all datasets. In this paper, we introduce MOC-Stream, based on multi-objective clustering and data stream concepts. The goal of MOC-Stream is to find clusters by applying several algorithms corresponding to several objective functions. It uses a two-phase process: 1) an online phase, creating several clustering solutions based on different algorithms and genetic operators; 2) an offline phase, building an optimal partition from the discovered clusters. Experiments on large stream datasets show the effectiveness of MOC-Stream in detecting arbitrarily shaped, compact, and well-separated clusters with better execution time.

A first step toward incremental evolution of convolutional neural networks

We introduce a novel algorithm - ConvNEAT - that evolves a convolutional neural network (CNN) from a minimal architecture. Convolutional and dense nodes are evolved without restriction to the number of nodes or connections between nodes. The proposed work advances the field with ConvNEAT's ability to evolve arbitrary minimal architectures with multi-dimensional inputs using GPU processing.

Automatically extracting features for face classification using multi-objective genetic programming

This paper proposes a new multi-objective feature extraction algorithm using genetic programming (GP) for face classification. The new multi-objective GP-based feature extraction algorithm uses the idea of non-dominated sorting, aiming to maximise the objective of classification accuracy and minimise the objective of the number of extracted features. The results show that the proposed algorithm achieves significantly better performance than the baseline methods on two different face classification datasets.

Diversity-driven wide learning for training distributed classification models

In order to address scalability issues which are a challenge for Deep Learning methods, we propose Wide Learning --- a model that scales horizontally rather than vertically enabling distributed learning. This approach first trains a repertoire of architecturally diverse neural networks of low complexity in parallel. Each network in the repertoire extracts a set of features from the dataset; these are then aggregated in a second short training phase in a centralised model to solve the classification task. The repertoire is generated using a quality diversity evolutionary algorithm (MAP-Elites) which returns a set of neural networks which are diverse w.r.t. feature descriptors partially describing their architecture and optimised w.r.t. their ability to solve the task. The technique is shown to perform well on two benchmark classification problems which have been tackled in the literature with Deep Learning techniques. Additional experiments provide insight into the role that diversity plays in contributing to the performance of the repertoire. We propose that evolving neural networks by promoting architectural diversity could in future lead to better results in some domains where current approaches have fallen short.

Turing learning with hybrid discriminators: combining the best of active and passive learning

We propose a hybrid formulation of Turing Learning and study its application in mobile robotics. Instead of using a single type of discriminator, in the hybrid formulation, both active and passive discriminators are used. Active discriminators come to their judgments while interacting with the system under investigation, which helps improve model accuracy. Passive discriminators come to their judgments while only observing the system, allowing the reuse of data samples, which for real robots would be costly to obtain. To validate these ideas, we present a case study where a simulated embodied robot is required to calibrate its distance sensor through a process of self-modeling, and without metric information of where it resides within the environment. The results show that the hybrid formulation achieves a good level of accuracy with significantly fewer data samples from the robot. The findings suggest that the self-modeling process could be realized on a mobile physical robot with a limited time and energy budget.

Evolving deep autoencoders

Autoencoders have seen wide success in domains ranging from feature selection to information retrieval. Despite this success, designing an autoencoder for a given task remains a challenging undertaking due to the lack of firm intuition on how the backing neural network architectures of the encoder and decoder impact the overall performance of the autoencoder. In this work we present a distributed system that uses an efficient evolutionary algorithm to design a modular autoencoder. We demonstrate the effectiveness of this system on the tasks of manifold learning and image denoising. The system beats random search by nearly an order of magnitude on both tasks while achieving near linear horizontal scaling as additional worker nodes are added to the system.

Enabling XCSF to cope with dynamic environments via an adaptive error threshold

The learning classifier system XCSF is a variant of XCS employed for function approximation. Although XCSF is a promising candidate for deployment in autonomous systems, its dependence on parameters imposes a significant hurdle, as a priori parameter optimization is not feasible for complex and changing environmental conditions. One of the most important parameters is the error threshold, which can be interpreted as a target bound on the approximation error and has to be set according to the approximated function. To enable XCSF to reliably approximate functions that change during runtime, we propose the use of an error threshold that is adapted at run-time based on the currently achieved approximation error. We show that XCSF with an adaptive error threshold achieves superior results over static thresholds in dynamic scenarios, where in general no one-fits-all static threshold exists.
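
A minimal sketch of one possible adaptation rule follows (an illustrative assumption, not necessarily the paper's exact update): the threshold tracks an exponential moving average of the observed approximation error, kept at a small margin above it:

    class AdaptiveErrorThreshold:
        def __init__(self, beta=0.05, margin=1.2):
            self.ema = 1.0        # moving average of the approximation error
            self.beta = beta      # adaptation rate
            self.margin = margin  # keep the bound slightly above the average

        def update(self, observed_error):
            self.ema += self.beta * (observed_error - self.ema)
            return self.margin * self.ema     # current error threshold

    threshold = AdaptiveErrorThreshold()
    for err in (0.8, 0.6, 0.5, 2.0, 1.5):     # error jumps as the function changes
        print(round(threshold.update(err), 3))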

Towards a Pittsburgh-style LCS for learning manufacturing machinery parametrizations

We present a first evaluation of a new accuracy-based Pittsburgh-style learning classifier system (LCS) for supervised learning of multi-dimensional continuous decision problems: the SupRB-1 (Supervised Rule-Based) learning system. Designed primarily for finding parametrizations for industrial machinery, SupRB-1 learns an approximation of a continuous quality function from examples (consisting of situations, choices, and associated qualities---all continuous, the first two possibly multi-dimensional) and is then able to make an optimal choice as well as predict the quality of a choice in a given situation. This paper shows and discusses preliminary results of SupRB-1's performance on an additive manufacturing problem.

The data-driven physical-based equations discovery using evolutionary approach

Modern machine learning methods allow one to obtain data-driven models in various ways. However, the more complex the model is, the harder it is to interpret. In this paper, we describe an algorithm for discovering mathematical equations from given observational data. The algorithm combines genetic programming with sparse regression.

This algorithm can obtain different forms of the resulting models. For example, it can be used for the discovery of governing analytical equations as well as of partial differential equations (PDEs).

The main idea is to collect a bag of building blocks (simple functions or their derivatives of arbitrary order) and then draw them from the bag to create combinations, which represent terms of the final equation. The selected terms are passed to an evolutionary algorithm, which evolves the selection. The evolutionary steps are combined with sparse regression to keep only the significant terms. As a result, we obtain a short and interpretable expression that describes the physical process underlying the data.

In the paper, two example applications of the algorithm are described: PDE discovery for metocean processes and function discovery for acoustics.
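
A minimal sketch of the term-library-plus-sparse-regression step described above, under illustrative assumptions: the library of candidate terms is hand-built here and the target is synthetic, whereas in the paper the library would be evolved by genetic programming:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical data: observations u(t) of some process and a derivative.
t = np.linspace(0.0, 10.0, 200)
u = np.sin(t)    # stand-in observation
du = np.cos(t)   # stand-in first derivative

# "Bag of building blocks": candidate terms combined into a library.
library = np.column_stack([u, u**2, du, u * du, np.ones_like(t)])
names = ["u", "u^2", "du/dt", "u*du/dt", "1"]

# Target: pretend the left-hand side is d2u/dt2, which here equals -u.
target = -np.sin(t)

# Sparse regression keeps only the significant terms of the equation.
model = Lasso(alpha=0.01, fit_intercept=False).fit(library, target)
for name, coef in zip(names, model.coef_):
    if abs(coef) > 1e-3:
        print(f"{coef:+.3f} * {name}")  # recovers approximately -1.0 * u
```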

An evolution-based approach for efficient differentiable architecture search

We propose a hybrid neural architecture search method combining evolution-based and gradient-based approaches. In this method, the architecture variables are optimized by an evolutionary algorithm and exploited by gradient-based optimization. This hybridization allows an efficient architecture search while maintaining the extensibility of the evolution-based search. On the CIFAR-10 dataset, our method discovers architectures that achieve competitive performance with state-of-the-art models.

A Multi-objective architecture search for generative adversarial networks

This study aims to search for better network architectures and hyper-parameters for generative adversarial networks (GANs). To this end, we employ an evolutionary algorithm to search for network architectures and their hyper-parameters. We apply our method to several datasets and demonstrate that it can discover promising network architectures together with appropriate hyper-parameters.

Towards evolving robust neural architectures to defend from adversarial attacks

Neural networks are known to misclassify a class of subtly modified images known as adversarial samples. Recently, numerous defences have been proposed against these adversarial samples; however, none have improved the robustness of neural networks consistently. Here, we propose to use adversarial samples in the fitness evaluation of a search for robust neural architectures that can resist such attacks. Experiments on existing neural architecture search algorithms from the literature reveal that, although accurate, they are not able to find robust architectures. An essential cause for this lies in their confined search space. By creating a novel neural architecture search, we were able to evolve an architecture that is intrinsically accurate on adversarial samples. Thus, the results demonstrate that more robust architectures exist, and they open up a new range of possibilities for the development and exploration of neural networks using neural architecture search.

Equilibrium in classification: a new game theoretic approach to supervised learning

A new Framework based on Optimization and Game Theory (FROG) for the supervised classification problem is proposed. The problem is converted into a normal-form game in which players are instances of the data that have to choose their class in order to maximize a payoff computed from the F1 score. The Nash equilibrium of the game represents the correct classification and can be computed as the optimum of a real-valued function. This function is used to estimate the parameters of a classification model such that the resulting probabilities correspond to, or at least approximate, the game equilibrium. CMA-ES is used to minimize this function. Numerical experiments illustrate the potential of this approach.

Improving an evolutionary wrapper for attack detection by including feature importance information

Attack detection is a major problem in cybersecurity, due to the wide variety of attack types, behaviors, and dynamic structures. In this paper, we include a special crossover operator that considers feature importance, providing additional information to a wrapper evolutionary algorithm that uses random forest as its classification technique. Results show that random forest improves its classification quality when the wrapper evolutionary algorithm includes feature importance information for feature selection.

DarwiNN: efficient distributed neuroevolution under communication constraints

Neuroevolution (NE), defined as the application of evolution-based training methods to Deep Neural Networks (DNNs), has recently demonstrated encouraging results on a variety of learning tasks. NE is highly parallel and relies on DNN inference as its main computational kernel and therefore can potentially leverage large-scale distributed inference-specific hardware infrastructure in the cloud or edge. We introduce chromosome updates (CU), a novel communication-optimized method for distributing NE computation, and DarwiNN, an open-source, GPU-accelerated distributed NE toolbox based on PyTorch, which implements CU and other algorithms for distributing NE.

Generation of consistent sets of multi-label classification rules with a multi-objective evolutionary algorithm

Multi-label classification consists in classifying an instance into two or more classes simultaneously. Recently, the interest in interpretable classification models has grown, partially as a consequence of regulations such as the General Data Protection Regulation. In this context, we propose a multi-objective evolutionary algorithm that generates multiple rule-based multi-label classification models, allowing users to choose among models that offer different compromises between predictive power and interpretability. The most important contributions of this work are: the generated models are based on sets (unordered collections) of rules, and the rule creation mechanism employs a conflict-avoidance algorithm which guarantees that all rules within a model are consistent with each other. We conducted experiments on synthetic and real-world datasets, compared our results with state-of-the-art algorithms in terms of predictive performance (F-Score) and interpretability (model size), and demonstrate that our best models had comparable F-Scores and smaller model sizes.

Searching for activation functions using a self-adaptive evolutionary algorithm

The introduction of the ReLU function in neural network architectures yielded substantial improvements over sigmoidal activation functions and allowed for the training of deep networks. Ever since, the search for new activation functions in neural networks has been an active research topic. However, to the best of our knowledge, the design of new activation functions has mostly been done by hand. In this work, we propose the use of a self-adaptive evolutionary algorithm that searches for new activation functions using a genetic programming approach, and we compare the performance of the obtained activation functions to ReLU. We also analyze the shape of the obtained activations to see if they have any common traits such as monotonicity or piece-wise linearity, and we study the effects of the self-adaptation to see which operators perform well in the context of a search for new activation functions. We perform a thorough experimental study on datasets of different sizes and types, using different types of neural network architectures. We report favorable results obtained from the mean and standard deviation of the performance metrics over multiple runs.

BACS: integrating behavioral sequences to ACS2

In many real-world environments, only partial observations are provided, thus presenting challenges for Anticipatory Learning Classifier Systems (ALCS). The perceptual aliasing issue occurs when systems cannot differentiate situations that are truly distinct. To tackle the perceptual aliasing issue, ALCS classifiers can be chained in order to build behavioral sequences. Those sequences permit ALCS to deal with this issue, but they have never been implemented within ACS2 (Anticipatory Classifier System 2), although this is one of the most advanced ALCS. This paper introduces a novel learning classifier system, BACS, that integrates behavioral sequences into ACS2. This integration required the adaptation of the action selection policy, the integration of an aliasing detection algorithm that lets the system build behavioral classifiers, and the adaptation of the anticipatory learning process. The results obtained over a maze environment benchmark show that behavioral sequences are a promising approach to address the perceptual aliasing issue.

A genetic programming method for classifier construction and cost learning in high-dimensional unbalanced classification

Cost-sensitive learning has been widely used to address the problem of class imbalance. However, cost matrices are often manually designed, and in many real-world applications cost values are unknown because of limited domain knowledge. This paper proposes a new genetic programming method to construct cost-sensitive classifiers that do not require manually designed cost values. The experimental results show that the proposed method often outperforms existing GP methods.

Evolutionary super-resolution

Super-resolution increases the resolution of an image. Using evolutionary optimization, we optimize the noise injection of a super-resolution method for improving the results. More generally, our approach can be used to optimize any method based on noise injection.

Population-based evolutionary distributed SGD

Neural model training is a time-consuming task where exploiting parallelism is of utmost importance. Employing data-parallelism in stochastic gradient descent (SGD) by partitioning the training dataset is a popular approach; however, algorithmic inefficiencies when operating at large minibatch sizes limit the degree of parallelism, a problem termed the scalability limit. In the face of this scalability challenge, we propose using Evolutionary Algorithms (EA) as a meta-algorithm together with SGD for training neural models. Our training scheme, Population-based Evolutionary SGD (PESGD), combines local SGD training on each training node with a periodic evolutionary step which selects the best-performing models to generate a new population of models for the next iteration. We believe that the complementarity of SGD and EA for neural model training can be exploited well in PESGD.
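
A toy sketch of the PESGD-style alternation described above, using a quadratic stand-in for the training loss and simple truncation selection as the evolutionary step. All functions and constants here are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def loss(w):                        # toy stand-in for the training loss
    return float(np.sum((w - 3.0) ** 2))

def local_sgd(w, steps=20, lr=0.05):
    """Noisy gradient steps on one 'node' between evolutionary rounds."""
    for _ in range(steps):
        g = 2.0 * (w - 3.0)         # gradient of the toy loss
        w = w - lr * (g + rng.normal(scale=0.1, size=w.shape))
    return w

population = [rng.normal(size=5) for _ in range(8)]
for generation in range(10):
    population = [local_sgd(w) for w in population]   # parallel in PESGD
    ranked = sorted(population, key=loss)
    parents = ranked[: len(population) // 2]          # keep the best models
    population = parents + [p + rng.normal(scale=0.05, size=p.shape)
                            for p in parents]         # repopulate with variants
print(f"best loss after evolution: {loss(min(population, key=loss)):.4f}")
```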

HyperFDA: a bi-level optimization approach to neural architecture search and hyperparameters' optimization via fractal decomposition-based algorithm

This paper introduces Hyper-FDA: the application of the metaheuristic Fractal Decomposition Algorithm (FDA) to the optimization of the hyperparameters of deep neural network architectures. FDA is a metaheuristic that was recently proposed to solve high-dimensional continuous optimization problems. This approach is based on a geometric fractal decomposition which divides the search space while looking for the optimal solution. The presented approach uses bi-level optimization to separate the architecture search, composed of discrete parameters, from the hyperparameter optimization with continuous parameters. The architecture search is conducted by a random walker aiming to generate chain-structured architectures. The best ones are selected to be fine-tuned using FDA. The CIFAR-10 benchmark is used to test the approach, and the results are among the state of the art considering the low number of parameters and the chain-structured architecture of the network. The encouraging results are emphasized by the fact that the entire process was conducted using low computational power, with only 3 GPUs.

Evolutionary neural network structure search for DNN pruning and features separation

In deep neural networks (DNNs), removing unnecessary sub-networks can reduce computational redundancy, and extracting sub-networks associated with particular features can help explain how the DNN works. In this paper, we propose a neural network structure search algorithm based on an evolutionary algorithm (EA) for DNN pruning and feature separation, paying particular attention to evolutionary efficiency and individual evaluation. We validate the method on the CIFAR-10 (C10) and CIFAR-100 (C100) datasets. With classification accuracy held within +/- 1% as a precondition, our method significantly reduces the amount of computation.

POSTER SESSION: Evolutionary multiobjective optimization

An EDA with swarm intelligence for the multi-objective flexible job-shop problem

The Flexible Job-Shop Problem (FJSP) is one of the most complicated scheduling problems. Particle Swarm Optimization (PSO) is a technique based on swarm intelligence, and Estimation of Distribution Algorithms (EDAs) are evolutionary algorithms based on probabilistic models. Many evolutionary algorithms have been proposed to solve the FJSP, such as DIPSO (Particle Swarm Optimization with Diversity, a hybrid PSO) and SEDA (Simple Estimation of Distribution Algorithm). Both have particular characteristics when searching for a position in the search space. In this study, a Simple Estimation of Distribution Algorithm with Swarm Intelligence (SEDASI) is proposed for the multi-objective FJSP (MOFJSP). The algorithm is a hybrid evolutionary technique with aspects of both PSO and EDA. The results show a good perspective, with a small improvement over SEDA.

A generic and computationally efficient automated innovization method for power-law design rules

Automated Innovization (AutoI) originally aimed to extract power-law based rules from a design optimization task without any human intervention. Existing AutoI methods have employed Evolutionary Algorithms (EAs) twice: first to search for Pareto-optimal (PO) solutions, and second to extract rules hidden in them. Furthermore, these methods are limited in scope, in that they are not capable of tackling both continuous and discrete variables for either single-cluster or multi-cluster rules. In a unique departure from the state of the art, this paper presents a computationally efficient, single EA-based AutoI method capable of simultaneously extracting single-cluster or multi-cluster rules for both continuous and discrete variable spaces. The robustness of the proposed method is evident from its successful rule extraction even on small-size datasets, where existing AutoI methods fail to apply. The generic scope, computational efficiency, and robustness of the proposed method are demonstrated through a number of benchmark design problems.

Framework to select an improved radiographic image using Speed-constrained modified particle swarm optimization

Radiographic images suffer from contrast problems. Contrast Limited Adaptive Histogram Equalization (CLAHE) is a widely used local contrast enhancement algorithm that can address this. However, there are no fixed values of the CLAHE input parameters that always produce the best output image in terms of contrast enhancement for radiographic images, so to find the parameters that are optimal for a given radiographic image we use Speed-constrained Modified Multi-objective Particle Swarm Optimization (SMPSO). Since the output is a set of solutions (the Pareto set), in this paper we propose to use and compare a-posteriori decision methods with a multi-objective approach, in order to select the solutions of the Pareto front that are aligned with the visual criteria of an expert. Three decision-making methods were used (utility-function-based, distance-based, and a fuzzy method). To validate which decision methods gave better results, they were evaluated by a medical expert, who concluded that the methods based on utility functions and distance yield better-quality information. The fuzzy method was discarded because it selects solutions with a lot of distortion.

An improved multiobjective optimization evolutionary algorithm based on decomposition with hybrid penalty scheme

The multiobjective evolutionary algorithm based on decomposition (MOEA/D) decomposes a multiobjective optimization problem (MOP) into a number of single-objective subproblems. Penalty boundary intersection (PBI) is one of the most popular decomposition approaches in MOEA/D and has attracted significant attention. In this paper, we investigate two recent improvements on PBI, i.e., the adaptive penalty scheme (APS) and the subproblem-based penalty scheme (SPS), and demonstrate their strengths and weaknesses. Based on these observations, we further propose a hybrid penalty scheme (HPS), which adjusts the PBI penalty factor for each subproblem in two phases, to ensure the diversity of boundary solutions and a good distribution of intermediate solutions. HPS specifies a distinct penalty value for each subproblem according to its weight vector. All the penalty values of the subproblems increase with the same gradient during the first phase and are kept unchanged during the second phase.
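
For reference, the standard PBI scalarizing function from the MOEA/D literature, whose penalty factor θ is what APS, SPS, and the proposed HPS adjust (the notation follows common usage, not necessarily this paper's):

```latex
g^{\mathrm{pbi}}(x \mid w, z^{*}) = d_1 + \theta\, d_2, \qquad
d_1 = \frac{\left| (F(x) - z^{*})^{T} w \right|}{\lVert w \rVert}, \qquad
d_2 = \left\lVert F(x) - \Bigl( z^{*} + d_1 \frac{w}{\lVert w \rVert} \Bigr) \right\rVert
```

Here w is the subproblem's weight vector and z* the ideal point; a larger θ emphasises staying close to the weight direction over pure convergence.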

A local hypervolume contribution schema to improve spread of the pareto front and computational time

Nowadays, it is well known that incorporating hypervolume-based selection mechanisms in Multi-objective Evolutionary Algorithms (MOEAs) has some drawbacks. For instance, it is computationally expensive to use these MOEAs for solving many-objective problems. Further, the optimization process tends to concentrate around the knee of the Pareto front of some Multi-objective Problems (MOPs). Here, we propose a modified hypervolume-based selection mechanism which adopts an angle distance for choosing the worst candidate solutions when computing the hypervolume contribution. Our approach tries to spread the solutions along the Pareto front to obtain a trade-off between the hypervolume indicator and the angle reference distance. We incorporated this modified hypervolume-based selection mechanism in a MOEA and show that our approach can solve MOPs with irregular Pareto shapes, even many-objective problems, with an improvement in running times when compared with the typical contribution scheme.

Time complexity analysis of the dominance degree approach for non-dominated sorting

Non-dominated sorting is a procedure which is heavily used by many evolutionary multiobjective algorithms. Although it can be done in polynomial time, it remains an asymptotic bottleneck in many of these algorithms. What is more, some algorithms for non-dominated sorting, which are fast on average, are ridiculously slow in the worst case. Here we prove that a recent algorithm, the dominance degree approach from the paper "Ranking Vectors by Means of the Dominance Degree Matrix" by Zhou et al., has a worst-case time complexity of Θ(MN² + N³). The Θ(N³) term, which is unusually high for this problem, indicates that this particular algorithm cannot be recommended for performance-critical applications.

Learning-based multi-objective optimization through ANN-assisted online Innovization

Learning effective problem information from the early iterations of an optimization run and utilizing it to improve convergence properties is an important direction in evolutionary computation (EC) research. In this paper, we propose an ANN-assisted modeling approach which learns, from pairs of solutions, the improvements in the variable space, and uses the resulting ANN as an innovized repair operator in an iterative manner. Although the concept can easily be applied to single-objective problems, in this paper we develop the overall procedure for multi-objective optimization. On a number of test and engineering problems involving two and three objectives, we demonstrate faster convergence in terms of the hypervolume metric. The results are encouraging and motivate further exploration on more challenging problems.

Performance evaluation of the MOEA/D algorithm for the solution of a microgrid planning problem

Microgrids have been introduced as a powerful concept to tackle the transition toward decentralized power supply. For this reason, complex methodologies for their planning have been proposed, and the selection of effective algorithms for their solution has become a critical task. In this work, the Multiobjective Evolutionary Algorithm Based on Decomposition (MOEA/D) is tested on the PG&E 69-bus system. The optimization planning and results show the performance and advantages of its implementation for solving this practical problem.

PaletteStarViz: a visualization method for multi-criteria decision making from high-dimensional pareto-optimal front

Visual representation of a many-objective Pareto-optimal front in a four- or more-dimensional objective space requires a large number of data points. Moreover, choosing a single point from a large set, even with certain preference information, is problematic, as it places a large cognitive burden on the decision-makers. Therefore, many-objective optimization and decision-making practitioners have been interested in effective visualization methods that enable them to filter a large set down to a few critical points for further analysis. Most existing visualization methods are borrowed from other data-analytic domains and are too generic to be effective for many-criteria decision making. In this paper, we propose an alternative visualization method, following an earlier concept, using star-coordinate plots for effectively visualizing many-objective trade-off solutions. The proposed PaletteStarViz respects some basic topological, geometric, and functional decision-making properties of high-dimensional trade-off points mapped to a "two-and-a-half"-dimensional space. We demonstrate the use of PaletteStarViz on a number of high-dimensional Pareto-optimal fronts.

Using gradient-free local search within MOEAs for the treatment of constrained MOPs

Evolutionary algorithms are widely used for the treatment of multi-objective optimization problems due to their global nature, robustness, and their minimal assumptions on the model. In turn, it is widely accepted that they still need quite a few resources in order to obtain a suitable finite size approximation of the Pareto set/front of a given problem. In this work, we make a first effort to study the effect of computing multi-objective descent directions for local search within evolutionary algorithms without explicitly using gradient information. Numerical results on some bi-objective problems show the benefit of the chosen approach.

Dynamic Multi-objective Optimisation Problems with Intertemporal Dependencies

A core goal of research in dynamic multi-objective optimisation (DMOO) is to develop algorithms that can find the best possible trade-off solutions for real-world DMOO problems (RWPs). A useful comparison of DMOO algorithms for RWPs requires benchmark functions that are representative of RWPs. However, only a few standard DMOO benchmark functions contain the complex intertemporal dependencies found in RWPs.

This study evaluates the performance of two DMOO algorithms on two benchmark functions (BFs) with various combinations of frequency and severity of change, as well as extended versions of these BFs that include intertemporal dependencies. The results indicate that the performance of the algorithms was significantly worse on the BFs with intertemporal dependencies.

Improving NSGA-III for flexible job shop scheduling using automatic configuration, smart initialization and local search

This paper provides a short summary of a novel algorithm tailored towards multi-objective flexible job shop scheduling problems (FJSP). The results show that for challenging real-world problems in combinatorial optimization, off-the-shelf implementations of multi-objective evolutionary algorithms (MOEAs) might not work, but by using various adaptations, these methods can be tailored to provide excellent results. This is demonstrated for a state-of-the-art MOEA, namely NSGA-III, with the following adaptations: (1) initialization approaches to enrich the first-generation population, (2) various crossover operators to create better diversity of offspring, (3) parameter tuning with the MIP-EGO configurator to determine the optimal mutation probabilities, and (4) local search strategies to explore the neighborhood for better solutions. Using these measures, NSGA-III has been enabled to solve benchmark multi-objective FJSPs, and experimental results show excellent performance.

A many-objective route planning benchmark problem for navigation

Route planning is one of the key elements in logistics, mobile robotics, and other applications where engineers face many conflicting objectives. However, most current route planning algorithms consider at most three objectives. In this paper, we propose a scalable many-objective benchmark problem covering most of the key features for routing applications, based on real-world data. We define five objective functions representing distance, travelling time, delays, and two route-specific features, curvature and elevation. We define one instance of this test problem and provide its true Pareto front to analyse the problem's difficulties. We apply three well-known evolutionary algorithms. Since this test benchmark can easily be transferred to real-world routing problems, we also construct a routing problem from OpenStreetMap data. We evaluate the three optimisation algorithms and observe that we can provide promising results for such a real-world application. The proposed benchmark represents a scalable many-objective route planning optimisation problem enabling researchers and engineers to evaluate their navigation and routing approaches.

Worst-case conditional hardness and fast algorithms with random inputs for non-dominated sorting

We study the computational complexity of the non-dominated sorting problem (NDS): given a set P of n points in R^m, for each point p ∈ P, compute ℓ, the length of the longest domination chain p₁ ≻ p₂ ≻ ··· ≻ p_ℓ = p, where x dominates y (denoted as x ≻ y) if x is not larger than y in every coordinate. A special case of NDS, which we label NDS1, is to find all the non-dominated points in P. NDS has emerged as a critical component for multi-objective optimization problems (MOPs). For m ≤ 3, a Θ(n log n)-time algorithm is known. For a fixed small m > 3, the best bound is O(n log^(m-2) n · log log n). For larger m, the best result is an O(mn²)-time algorithm.

We show that the O(mn²) running time is nearly optimal by proving an almost matching conditional lower bound: for any ε > 0, and ω(log n) ≤ m ≤ (log n)^O(1), there is no O(mn^(2-ε))-time algorithm for NDS or NDS1 unless a popular conjecture in fine-grained complexity theory is false. To complete our results, we present an algorithm for NDS with an expected running time of O(mn + n²/m + n log² n) on uniform random inputs.
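
For concreteness, a naive O(mn²) computation of NDS1 (all non-dominated points), using the weak-domination convention stated in the abstract; this is only an illustrative baseline, not the paper's algorithm:

```python
def non_dominated(points):
    """Naive O(m n^2) NDS1: return the non-dominated points of P.
    Here x dominates y when x is no larger than y in every coordinate
    and the two points differ (weak domination, as in the abstract)."""
    result = []
    for y in points:
        dominated = any(
            all(xi <= yi for xi, yi in zip(x, y)) and x != y
            for x in points
        )
        if not dominated:
            result.append(y)
    return result

print(non_dominated([(1, 4), (2, 2), (3, 3), (4, 1)]))
# -> [(1, 4), (2, 2), (4, 1)]  ((3, 3) is dominated by (2, 2))
```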

POSTER SESSION: Evolutionary numerical optimization

Solving min-max optimisation problems by means of bilevel evolutionary algorithms: a preliminary study

Min-max optimisation is a special instance of a bilevel problem. It deals with the minimisation of the maximum output over all scenarios of a given problem. In this paper, numerical experiments are conducted to assess the accuracy and efficiency of three bilevel algorithms, known to perform well on general bilevel problems, on 13 unconstrained min-max test functions. This study aims to bring the bilevel and min-max evolutionary communities together and create common ground for both optimisation problems.

A view of estimation of distribution algorithms through the lens of expectation-maximization

We show that a large class of Estimation of Distribution Algorithms, including, but not limited to, Covariance Matrix Adaptation, can be written as a Monte Carlo Expectation-Maximization algorithm, and as exact EM in the limit of infinite samples. Because EM sits on a rigorous statistical foundation and has been thoroughly analyzed, this connection provides a new coherent framework with which to reason about EDAs.

Continuous optimization by hierarchical gaussian mixture with clustering embedded resource allocation

Continuous optimization problems, which are highly relevant to real-world problems, are difficult because they are multimodal and rugged. In this paper, we introduce a new algorithm called Global and Local Adaptation with Clustering Embedded Resource Allocation (GLACERA) to solve such problems under relaxed conditions. We believe that continuous optimization problems are solvable as long as the local optima can serve as clues for finding the global optimum. To solve these problems more efficiently, a new clustering technique is proposed to help identify potential local optima. Then, a new multi-armed bandit technique, aiming to reach the global optimum with greater probability, is presented to allocate resources to each local optimum. Finally, a new formula is proposed for balancing exploration and exploitation according to the remaining number of function evaluations. We compare GLACERA with four milestone algorithms commonly used for continuous optimization, and show that it can solve a new class of functions on which the others fail, while remaining competitive on the IEEE CEC 2005 and IEEE CEC 2013 benchmark problems.

Performance2vec: a step further in explainable stochastic optimization algorithm performance

When working on a new stochastic optimization algorithm, one task that should be performed is to compare its performance with those of state-of-the-art algorithms. The literature suggests that the most commonly applied approaches for comparing algorithms' performances use statistical analyses. However, to provide a more meaningful explanation about algorithms' performances, we propose a methodology, named performance2vec, which computes a vector representation of each algorithm's performance by embedding it in some performance space determined by a set of benchmark problems. Experimental results show that the proposed embeddings, paired with clustering approaches, provide a more in-depth explanation regarding the algorithms' performance by exploring the relations between them and the benchmark problems.

Look-ahead natural evolutionary strategies

Natural evolution strategy (NES) algorithms have seen a variety of successful applications recently but still need to improve algorithmic convergence. To address this issue, we propose the use of a look-ahead scheme to predictively estimate the natural acceleration in the algorithm. This estimate assists faster convergence towards the global optimum. The resulting algorithm is termed the "Look-Ahead Natural Evolutionary Strategy" (LANES) algorithm. It is tested on a number of well-known benchmark problems. Compared with the original NES, LANES increases the optimizer overheads but offers a steady improvement in optimality, accuracy, and convergence.

Parametrized Benchmarking: an outline of the idea and a feasibility study

Performance of real-parameter global optimization algorithms is typically evaluated using sets of test problems. We propose a new methodology of extending these benchmarks to obtain a more balanced experimental design. This can be done by selectively removing some of the transformations originally used in the definitions of the test problems such as rotation, scaling, or translation. In this way, we obtain several variants of each problem parametrized by interpretable, high-level characteristics. These binary parameters are used as predictors in a multiple regression model explaining the algorithmic performance. Linear models allow for the attribution of strength and direction of performance changes to particular characteristics of the optimization problems and thus provide insight into the underlying mechanics of the investigated algorithms. The proposed ideas are illustrated with an application example showing the feasibility of the new benchmark. Parametrized benchmarking is a step towards obtaining multi-faceted insight into algorithmic performance and the optimization problems. The overall goal is to systematize a method of matching problems to algorithms and in this way constructively address the limitations imposed by the no free lunch theorem.

A quantum simulation algorithm for continuous optimization

This paper presents a novel multi-threaded quantum inspired optimization algorithm targeted at global search in continuous domains. The proposed approach is based on a Diffusion Monte Carlo (DMC) physical model and is characterized by a set of parallel quantum walk processes. The effectiveness of the proposed algorithm is demonstrated by experimental results on the 24 noiseless functions from the Black Box Optimization Benchmark of the Comparing Continuous Optimization benchmarking platform (COCO).

Expediting the convergence of evolutionary algorithms by identifying promising regions of the search space

Evolutionary algorithms (EAs) are widely used for optimization. Their use is particularly beneficial for problems where the underlying objective functions and/or constraints are highly non-linear or black-box and classical exact techniques cannot be applied. However, EAs evolve a population of solutions over a large number of generations by applying variation operators and selection schemes. The migration of the population towards the optimum solution can be slow and require a large number of function evaluations. In this paper, we propose to expedite this convergence by identifying smaller sub-spaces that potentially envelop the optimum using `bump hunting', followed by intensified offspring generation in these regions. We implement the method within a Differential Evolution (DE) framework and demonstrate the improvement over a range of benchmark test problems.

Bayesian CMA-ES: a new approach

This paper introduces a novel, theoretically sound approach to the celebrated CMA-ES algorithm. Assuming the parameters of the multivariate normal distribution for the minimum follow a conjugate prior distribution, we can derive the optimal update at each iteration step using Bayesian statistics. The update formulae are very similar to those of the vanilla (μ/λ) CMA-ES. We also use variance contraction and dilatation to accommodate local and global search. As a result, this Bayesian framework provides a justification for the update of the CMA-ES algorithm and yields a new version of CMA-ES, assuming a Normal-Inverse-Wishart prior, that has convergence speed and efficiency similar to the vanilla (μ/λ) CMA-ES on test functions ranging from the cone and Rastrigin functions to the Schwefel 1 and 2 functions.
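
For reference, the standard conjugate update for a multivariate normal with a Normal-Inverse-Wishart prior NIW(μ₀, κ₀, ν₀, Ψ₀), on which such a Bayesian treatment builds (the paper's contraction/dilatation modifications are not reproduced here). Given n samples with mean x̄:

```latex
\mu_n = \frac{\kappa_0 \mu_0 + n \bar{x}}{\kappa_0 + n}, \qquad
\kappa_n = \kappa_0 + n, \qquad
\nu_n = \nu_0 + n,
```
```latex
\Psi_n = \Psi_0 + S + \frac{\kappa_0 n}{\kappa_0 + n}
         (\bar{x} - \mu_0)(\bar{x} - \mu_0)^{T},
\qquad S = \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^{T}.
```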

Differential Evolution with Reversible Linear Transformations

Differential evolution (DE) is a well-known type of evolutionary algorithm (EA). Like other EA variants, it can suffer from small populations and lose diversity too quickly. This paper presents a new approach to mitigate this issue: we propose to generate new candidate solutions by applying reversible linear transformations to triplets of solutions from the population. In other words, the population is enlarged using newly generated individuals without evaluating their fitness. We assess our method on three problems: (i) benchmark function optimization, (ii) discovering parameter values of the gene repressilator system, and (iii) learning neural networks. The empirical results indicate that the proposed approach outperforms vanilla DE and a version of DE with differential mutation applied three times on all testbeds.
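
A sketch of one reversible construction in this spirit: each output of a chained differential-mutation step feeds into the next, so the triplet map is linear and invertible. This follows the abstract's description, but the paper's exact transformation may differ, and the scale factor F is an assumption:

```python
import numpy as np

def reversible_triplet(x1, x2, x3, F=0.5):
    """Map a triplet to a new triplet via chained differential steps.
    Each step reuses already-produced outputs, so the map is invertible:
    given (y1, y2, y3) and F, the original triplet can be recovered."""
    y1 = x1 + F * (x2 - x3)
    y2 = x2 + F * (x3 - y1)
    y3 = x3 + F * (y1 - y2)
    return y1, y2, y3

def invert_triplet(y1, y2, y3, F=0.5):
    """Recover the original triplet by undoing the steps in reverse order."""
    x3 = y3 - F * (y1 - y2)
    x2 = y2 - F * (x3 - y1)
    x1 = y1 - F * (x2 - x3)
    return x1, x2, x3

x = [np.array([1.0, 2.0]), np.array([0.5, -1.0]), np.array([2.0, 0.0])]
y = reversible_triplet(*x)
assert all(np.allclose(a, b) for a, b in zip(invert_triplet(*y), x))
```

Because the map is invertible, no two distinct parent triplets collapse onto the same offspring, which is one way such transformations can enlarge the population without destroying diversity.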

Differential evolution with explicit control of diversity for constrained optimization

Evolutionary Algorithms (EAs) have been quite successful both in constrained and unconstrained single-objective optimization. However, in the constrained case little attention has been paid to controlling the diversity explicitly with the aim of avoiding premature convergence. This paper proposes Differential Evolution with Clustering-based Diversity (DECD). DECD uses a novel replacement strategy that controls the diversity explicitly. Particularly, the replacement strategy forces the selection of some distant individuals but it simultaneously allows the survival of some other individuals that form clusters to promote intensification. The experimental validation shows the important benefits provided by the explicit control of diversity.

POSTER SESSION: Genetic algorithms

Impact of additional hardware resources on a parallel genetic algorithm

All road authorities are required to make sound maintenance investment decisions to maximise value from available budgets. As an indication of the complexity of this task, the schedule of pavement maintenance and rehabilitation for a small pavement network consisting of 200 segments, with four treatment alternatives over a planning period of five years, has (200⁴)⁵ = 1.05 × 10⁴⁶ possible alternatives. This study investigates the number and quality of solutions obtained by adding computing resources to a budget-constrained implementation of a Parallel Genetic Algorithm based pavement management treatment scheduling system.

A parallel and distributed multi-population GA with asynchronous migrations: energy-time analysis for heterogeneous systems

Speed and energy efficiency are two concepts that should currently be taken into account when creating parallel code, especially in time-consuming applications such as GAs. Although the performance of single-computer systems continues to increase, it does not improve at the same rate as computing requirements do. The end of Moore's law, the high frequencies that microprocessors already reach, and the difficulty of continuing to shrink lithography are some of the reasons single-computer systems are not suitable for many applications. In response, distributed systems have emerged to overcome hardware limitations and meet application requirements. With this in mind, this paper provides an efficient multi-population GA with asynchronous migrations and parallelism at multiple levels, exploiting the capabilities of a heterogeneous four-node cluster. The procedure is evaluated from an energy-time point of view and compared to a synchronous version. The results show the importance of developing efficient methods to achieve good performance, and demonstrate that energy-aware computing is the way to continue on the right track.

A fast GA for automatically evolving CNN architectures

Convolutional neural networks (CNNs) have proven to be effective models for a series of challenging computer vision tasks. However, designing CNN architectures with good performance is still challenging. Automatic CNN architecture search algorithms proposed in recent years can find competitive CNN architectures without manual intervention, but they usually consume considerable computational time and resources. In addition, they use only deep blocks and ignore wide blocks of CNNs, which limits the performance of the evolved CNNs. To address these issues, this paper proposes a fast approach for automatically evolving CNN architectures based on deep and wide blocks (FAE-CNN) using genetic algorithms. In FAE-CNN, a refined fitness evaluation method based on divided datasets is designed to speed up the running time. By introducing the Inception block, FAE-CNN can evolve optimal CNN architectures in both the deep and wide directions. Experimental results on CIFAR10 and CIFAR100 show that FAE-CNN can automatically design CNNs with flexible architectures and better performance in a very short running time.

Efficient machine learning through evolving combined deep neural networks

The usage of Artificial Neural Networks (ANNs) with a fixed topology is becoming more popular in daily life. However, there are problems for which it is difficult to build an ANN manually. Therefore, genetic algorithms like NeuroEvolution of Augmenting Topologies (NEAT) have been developed to find topologies and weights. The downside of NEAT is that it often generates inefficiently large ANNs for different problems.

In this paper, we introduce an approach called Turbo NEAT, which combines divide and conquer methods with NEAT to allow a symbiosis of specialised smaller ANNs. In addition, we optimise the weights of the ANNs through backpropagation in order to better compare the topologies. Experiments on several problems show that these approaches allow the handling of complex problems and lead to efficient ANNs.

SHX: search history driven crossover for real-coded genetic algorithm

In evolutionary algorithms, genetic operators iteratively generate new offspring, which constitute a potentially valuable search history. To boost the performance of crossover in real-coded genetic algorithms (RCGA), in this paper we propose to exploit the search history cached so far, in an online fashion during the iterations. Specifically, survivor individuals over the past few generations are collected and stored in an archive to form the search history. We introduce a simple yet effective crossover model driven by the search history (abbreviated SHX). In particular, the search history is clustered, and each cluster is assigned a score for SHX. In essence, the proposed SHX is a data-driven method which exploits the search history to perform offspring selection after offspring generation. Since no additional fitness evaluations are needed, SHX is favorable for tasks with a limited budget or expensive fitness evaluations. We experimentally verify the effectiveness of SHX on four benchmark functions. Quantitative results show that SHX can significantly enhance the performance of RCGA in terms of accuracy.

Cooperative coevolutionary genetic algorithm using hierarchical clustering of linkage tree

Genetic algorithms (GAs) are stochastic optimization methods that mimic the evolution of living organisms and have been applied in various fields. Cooperative Coevolution (CC) is one method for making GAs more efficient. CC divides the decision variables into several sub-problems and evolves the individuals of each sub-problem independently. Variables with dependencies need to be optimized jointly. In this paper, we focus on the Linkage Tree as a method for analyzing the dependencies of decision variables. The Linkage Tree computes the dependency between decision variables from the distribution of individuals, treats it as a distance between the decision variables, and builds a tree by hierarchical clustering. We propose an algorithm that can efficiently search problems with dependencies between variables by introducing the Linkage Tree into CC methods. We evaluated the search performance of the proposed method using benchmark functions and confirmed the performance improvement by comparing it with conventional methods.

A test problem with difficulty in decomposing into sub-problems for model-based genetic algorithms

Fully or nearly decomposable problems have often been used as test problems in genetic algorithm (GA) research. The two types of problems share the feature that all bits in each sub-problem are used for calculating its fitness value. Present model-based GAs appropriately decompose such a problem into parts and solve it efficiently. However, so far there has been no test problem in which sub-problems overlap and the bits of each sub-problem in the overlapped parts that are used for calculating the sub-problem's fitness value vary depending on the bit pattern of a given solution candidate. This type of problem might make the decomposition difficult for model-based GAs. In this study we propose the overlapped chains problem (OCP) as a problem of this type. The results of applying several GAs to instances of the OCP show that some model-based GAs cannot yield better search performance than a simple operator-based GA.

Illuminating super mario bros: quality-diversity within platformer level generation

We aim to build on the current understanding of how Quality-Diversity (QD) algorithms can be applied to Procedural Content Generation (PCG) for video games with our platform, Illuminated-Mario. The work presented here has two core aims: to explore the viability of our approach to PCG and to compare the performance of MAP-Elites and SHINE, two state-of-the-art QD algorithms. Our experimental results suggest that our approach to level generation is a viable foundation for future PCG systems, and that while the SHINE variants implemented do not significantly outperform MAP-Elites, they warrant further investigation.

POSTER SESSION: General evolutionary computation and hybrids

Hybrid bayesian evolutionary optimization for hyperparameter tuning

In this paper, we present a Hybrid Bayesian-Evolutionary tuning algorithm (HBEtune) for tuning machine learning algorithms or evolutionary algorithms, and analyze its performance. HBEtune combines meta-EA and Bayesian optimization techniques.

As hyperparameter tuning is a noisy, black-box optimization problem with expensive target functions, practical tuners must aim to minimize the number of necessary samples. In our method, we guide the EA's recombination operator towards more promising samples by employing the expected improvement acquisition criterion commonly used in Bayesian optimization. The expected improvement is evaluated on a surrogate model using a Gaussian process regression.
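
For reference, the expected improvement criterion has a standard closed form under a Gaussian process posterior with mean μ(x) and standard deviation σ(x); for minimization with best observed value f*:

```latex
\mathrm{EI}(x) = \mathbb{E}\!\left[\max\bigl(0,\, f^{*} - f(x)\bigr)\right]
             = \bigl(f^{*} - \mu(x)\bigr)\,\Phi(z) + \sigma(x)\,\varphi(z),
\qquad z = \frac{f^{*} - \mu(x)}{\sigma(x)},
```

where Φ and φ are the standard normal distribution and density functions. The two summands make the exploitation/exploration trade-off explicit: the first grows where the surrogate mean is good, the second where the surrogate is uncertain.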

HBEtune shows generally competitive performance when compared with the state of the art irace tuner. Performance is analyzed across a suite of synthetic and real-world benchmark problems.

Automated design of efficient swarming behaviours: a Q-learning hyper-heuristic approach

While the sector of unmanned aerial vehicles (UAVs) has been experiencing exponential growth for several years, the majority of applications consider single devices, which come with limitations such as flight duration or payload capacity. A promising way to overcome these limitations is to use multiple autonomous UAVs synergistically, also referred to as swarms. Many metaheuristics have been manually designed to optimise the performance of swarms of unmanned vehicles. However, developing and fine-tuning efficient collective behaviours can be a challenging and time-consuming task. This article proposes to automate the generation of UAV swarming behaviours for the Coverage of a Connected UAV Swarm (CCUS) problem, where both coverage time and network connectivity are considered. To this end, we introduce a novel generative hyper-heuristic based on Q-Learning (QLHH) and compare the performance of the heuristics it generates with that of manually designed heuristics, using state-of-the-art coverage and connectivity metrics. The obtained results demonstrate the capacity of QLHH to generate efficient distributed heuristics for the CCUS optimisation problem.
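
For context, a Q-learning-based hyper-heuristic rests on the standard tabular update below; the concrete state and action encodings QLHH uses (presumably components of swarming behaviours) are not detailed in the abstract:

```latex
Q(s, a) \leftarrow Q(s, a) + \alpha \left[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \right],
```

where α is the learning rate, γ the discount factor, and r the reward observed after applying action a in state s.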

Finding a better basis on binary representation through DNN-based epistasis estimation

Epistasis, which indicates problem difficulty, can be used to evaluate the basis of a problem space. However, calculating epistasis is difficult, as it requires enumerating all solutions. In this study, a method that estimates epistasis using deep neural networks is proposed for basis evaluation. When applied to the Variant-OneMax problem, the proposed method successfully estimated epistasis and significantly reduced the computing time.

Parallelized bayesian optimization for problems with expensive evaluation functions

Many black-box optimization problems rely on simulations to evaluate the quality of candidate solutions. These evaluations can be computationally expensive and very time-consuming. We present an approach to mitigate this problem that takes two factors into consideration: the number of evaluations and the execution time. We aim to keep the number of evaluations low by using Bayesian optimization (BO), known to be sample-efficient, and to reduce wall-clock time by executing evaluations in parallel. Four parallelization methods using BO as the optimizer are compared against the inherently parallel CMA-ES. Each method is evaluated on all 24 objective functions of the Black-Box Optimization Benchmarking test suite in their 20-dimensional versions. The results show that parallelized BO outperforms the state-of-the-art CMA-ES on most of the test functions, including in higher dimensions.

Evolving genetic programming trees in a rule-based learning framework

Rule-based machine learning (RBML) algorithms such as learning classifier systems (LCS) are well suited to classification problems with complex interactions and heterogeneous associations. Alternatively, genetic programming (GP) has a complementary set of strengths and weaknesses best suited to regression problems and homogeneous associations. Both approaches yield largely interpretable solutions. An ideal ML algorithm would have the capacity to adapt and blend representation to best suit the problem at hand. In order to combine the strengths of these respective algorithm representations, a framework allowing coexistence and co-evolution of trees and rules is needed. In this work, we lay the empirical groundwork for such a framework by demonstrating the capability of GP trees to be evolved within an LCS-algorithm framework with comparable performance to a set of standard GP frameworks. We discuss how these results support the feasibility of a GP-LCS framework and next-step challenges to be addressed.

On the Effect of Walsh/Fourier transform in surrogate-assisted genetic algorithms

There have been many investigations into constructing efficient surrogate models to reduce the computation time of black-box optimization problems that require extensive computation. Here, we transformed the basis of the training data in both the continuous and the discrete domain and constructed a surrogate model using the transformed solutions. In the continuous domain, the discrete Fourier transform was used; in the discrete domain, the discrete Walsh transform. Various benchmark problems were experimented with, and practical evidence for the accuracy of the proposed model was provided. In particular, it was confirmed that when the fitness function of the genetic algorithm was substituted with the proposed model, the genetic algorithm could find a better solution.
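
A minimal sketch of the discrete Walsh(-Hadamard) transform used for the discrete-domain basis change, applied here to a toy 3-bit fitness table (illustrative only; how the surrogate is then built on the transformed basis is the paper's contribution):

```python
import numpy as np

def walsh_hadamard(f):
    """Walsh(-Hadamard) transform of a function table over {0,1}^k,
    given as an array of length 2**k (in-place butterfly, O(k 2^k))."""
    a = np.asarray(f, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x = a[i:i + h].copy()
            y = a[i + h:i + 2 * h].copy()
            a[i:i + h] = x + y          # butterfly: sum ...
            a[i + h:i + 2 * h] = x - y  # ... and difference
        h *= 2
    return a  # unnormalised Walsh coefficients

# Toy example: ONEMAX on 3 bits (fitness = number of ones in the index).
f = [bin(i).count("1") for i in range(8)]
print(walsh_hadamard(f))
```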

POSTER SESSION: Genetic programming

Benchmarking parent selection for program synthesis by genetic programming

In genetic programming, the parent selection method determines which individuals in the population are selected to be parents for the next generation, and how many children they create. This process directly impacts the search performance by determining on which areas of the search space genetic programming focuses its attention and how it balances exploration and exploitation. Many parent selection methods have been proposed in the literature, with aims of improving problem-solving performance or other characteristics of the GP system. This paper aims to benchmark many recent and common parent selection methods by comparing them within a single system and set of benchmark problems. We specifically focus on the domain of general program synthesis, where solution programs must make use of multiple data types and control flow structures, and use an existing benchmark suite within the domain. We find that a few methods, all variants of lexicase selection, rise to the top and demand further study, both within the field of program synthesis and in other domains.

Counterexample-driven genetic programming without formal specifications

Counterexample-driven genetic programming (CDGP) uses specifications provided as formal constraints in order to generate the training cases used to evaluate the evolving programs. It has also been extended to combine formal constraints and user-provided training data to solve symbolic regression problems. Here we show how the ideas underlying CDGP can also be applied using only user-provided training data, without formal specifications. We demonstrate the application of this method, called "informal CDGP," to software synthesis problems.

Transfer Learning of Genetic Programming Instruction Sets

The performance of a genetic programming system depends partially on the composition of the collection of elements out of which programs can be constructed, and on the relative probability of different instructions and constants being chosen for inclusion in randomly generated programs or for introduction by mutation. In this paper we develop a method for the transfer learning of instruction sets across different software synthesis problems. These instruction sets outperform unlearned instruction sets on a range of problems.

General controllers evolved through grammatical evolution with a divergent search

In this work, we analyse the performance of Novelty Search (NS) in a set of generalization experiments in a navigation task with Grammatical Evolution. Agents are trained on a single, simple environment, and tested on a selection of related, increasingly more difficult environments. We show that agents discovered with NS, although using a tiny number (six) of training samples, successfully generalise to these more difficult environments.

Image feature learning with a genetic programming autoencoder

Learning features from raw data is an important topic in machine learning. This paper presents a novel GP approach to learn high-level features from 2D images. It is a generative approach that resembles the concept of an autoencoder. Our approach executes multiple GP runs, each run generates a (partial) model that focuses on a particular high-level feature of the training images. Then, it combines the models generated by each run into a parametric function that reconstructs the observed images. We evaluated our approach on the popular MNIST dataset of 2D images representing handwritten digits. Our evaluation results show that our parametric approach can precisely reconstruct the MNIST hand-written digits.

Why and when are loops useful in genetic programming?

Executing a code segment repeatedly using loops is crucial for human programmers to write complex software. Similarly, in genetic programming, mechanisms that enable the multiple executions of a group of instructions seem to be useful in increasing the success rates for a number of software synthesis problems. In this paper, we investigate the effects of loops on the success rates for different types of problems.

Feature engineering for improving robustness of crossover in symbolic regression

Isolating the fitness contribution of substructures is typically difficult in Genetic Programming (GP); hence, useful substructures are lost when the overall structure (model) performs poorly. Furthermore, while crossover is heavily used in GP, it typically produces offspring models with significantly lower fitness than that of the parents. In symbolic regression, this degradation also occurs because the coefficients of an evolving model lose utility after crossover. This paper proposes isolating the fitness contribution of various substructures and reducing the negative impact of crossover by evolving a set of features instead of monolithic models. The method then leverages multiple linear regression (MLR) to optimise the coefficients of these features. Since adding new features cannot degrade the accuracy of an MLR-produced model, MLR-aided GP models can bloat. To penalise such additions, we use the adjusted R² as the fitness function. The paper compares the proposed method with standard GP and GP with linear scaling. Experimental results show that the proposed method matches the accuracy of the competing methods within only one tenth of the number of generations. Also, the method significantly decreases the rate of post-crossover fitness degradation.
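
For reference, the adjusted R² penalises every additional feature; with n training samples, p features, and coefficient of determination R², it is the standard:

```latex
\bar{R}^{2} = 1 - \left(1 - R^{2}\right) \frac{n - 1}{n - p - 1}
```

A new feature therefore increases fitness only if its gain in R² outweighs the loss of a residual degree of freedom, which is what keeps the MLR-aided models from bloating freely.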

Refined typed genetic programming as a user interface for genetic programming

Evolutionary algorithms often have to balance local search with the evolutionary search. Restricting the language of individuals acts as local search inside a Genetic Programming (GP) approach. We propose the use of Refined Typed Genetic Programming, an enhanced hybrid of Strongly Typed Genetic Programming (STGP) and Grammar-Guided Genetic Programming (GGGP) that features an advanced type system with polymorphism and dependent refinements. This approach allows the end-user to express the restrictions on the search problem in the same language as the solution.

POSTER SESSION: Real world applications

A GA for non-uniform sampling harmonic analysis

In this paper, we use a real-valued genetic algorithm (GA) to model a large noisy periodic signal. The information that must be extracted is the amplitude, angular velocity and phase of each sine composing the signal. The algorithm outperforms the Fourier Transform (FT) method, which has limitations when it comes to determining the phase component of a real-valued signal. The GA returns the phase component of the signal without the need for de-noising in a very short time, thanks to an efficient parallelization on GPGPU cards (≈400× speedup compared to the same sequential code).
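
As a hedged illustration of such an encoding (all names, parameters and toy data are ours, not the paper's), each GA individual can hold (amplitude, angular velocity, phase) triples whose fitness is the negative mean squared error against the measured signal:

```python
import numpy as np

def decode(chromosome: np.ndarray) -> np.ndarray:
    """Interpret a flat chromosome as (amplitude, angular velocity, phase)
    triples, one per sine component."""
    return chromosome.reshape(-1, 3)

def fitness(chromosome: np.ndarray, t: np.ndarray, signal: np.ndarray) -> float:
    """Negative mean squared error between the measured signal and the
    sum of sines encoded by the chromosome (higher is better)."""
    model = np.zeros_like(t)
    for amplitude, omega, phase in decode(chromosome):
        model += amplitude * np.sin(omega * t + phase)
    return -float(np.mean((signal - model) ** 2))

# Toy usage: one noisy sine and one candidate individual.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1000)
signal = 2.0 * np.sin(1.5 * t + 0.3) + rng.normal(scale=0.5, size=t.size)
print(fitness(np.array([2.0, 1.5, 0.3]), t, signal))
```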

Beer organoleptic optimisation: utilising swarm intelligence and evolutionary computation methods

This paper addresses the personalisation of beer properties in the specific case of craft beers, where the production process is more flexible. The problem is investigated using three swarm intelligence and evolutionary computation techniques that enable brewers to map physico-chemical properties to target organoleptic properties in order to design a specific brew. The process is illustrated by a number of experiments designing craft beers, in which popular commercial brands are "cloned" based on their known properties. The proposed approach allows for the discovery of new recipes, personalisation, and alternative high-fidelity reproduction of existing ones.

Multi-objective optimization for worker cross-training: the tri-objective case

Worker cross-training is a problem arising in many industries and companies that involve human work, since workers who possess multiple skills, i.e., a qualification profile, may be employed more flexibly on a day-to-day basis. At the same time, it can be assumed that these workers also incur a higher personnel cost. It is therefore of high interest to a company to balance the available skills such that customer deadlines can be met in a cost-efficient way. In this work we extend a simulation-based optimization approach with a third objective and apply NSGA-II.

NGAP: a novel hybrid metaheuristic algorithm for round-trip carsharing fleet planning

The growing awareness of the environmental movement greatly influences the transportation scene of this century, leading to several transportation alternatives. One of them is carsharing, a service which has been gaining traction and support in major cities around the globe. It is also undeniable that the location planning of the fleet vehicles can contribute to its success. The fleet vehicles must be easily accessed and in the proximity of various transportation hubs and facilities. In this paper, we study the Vehicle Placement Problem (VPP) for round-trip carsharing and propose a novel hybrid algorithm, NGAP, which combines NSGA-III and Pareto Local Search (PLS) to enhance the quality of the results over NSGA-III. The proposed algorithm is tested on ten synthetic and four real-world instances. NGAP is shown to be significantly more efficient than NSGA-III on almost all instances in terms of Inverted Generational Distance (IGD) and Hypervolume.

Cable-stayed bridge optimization solution space exploration

In this work, we explore and optimise solutions for a controlled cable-stayed footbridge with a pre-determined length and width. The problem consists of designing a complete and cost-efficient bridge solution with geometry, cross-section, prestress and control design variables. The dynamic and static constraints create a deceptive landscape of the solution space, which, in turn, makes this problem hard to optimise via standard optimisation. We start with a conventional genetic approach and show that we are able to fulfill the structural requirements. Afterwards, we explore the variables and conditions that enable us to traverse the solution space, optimising structural requirements while lowering the cost of solutions. The results present the sensitivities of all design variables for the baseline solution. We observed that most design variables could not be changed in any direction without compromising the structural constraints or cost. The results demonstrate that the optimisation of cable-stayed bridges requires finding a very delicate balance between multiple structural constraints and cost, which poses an interesting benchmark problem for the use of evolutionary algorithms in structural optimisation.

Multi-objective exploration of a granular matter design space

Granular materials, such as sands, grains, soils and powders are central to many industrial processes from mining and food production to pharmaceuticals and construction. These materials display many interesting properties, including their ability to flow like a liquid at low densities and jam into a solid state at high densities. However, controlling the microscopic properties of granular media to elicit bespoke functional granular systems remains challenging due to the complex relationship between the individual particle morphologies and the related emergent behaviour of the bulk state. Here, we investigate the use of evolution to explore the functional landscapes of granular systems. We employ a superellipsoid representation of the particle shape, and examine a range of bi-disperse systems of prolate particles. Results show the ability to determine optimal particle morphologies and trade-offs for density and two different structural order measures. This represents an important further step towards the creation of bespoke jammed systems with a range of practical applications across broad swathes of industry.

A realistic scooter rebalancing system via metaheuristics

This paper addresses a realistic electric scooter rebalancing task that includes business rules (e.g., time constraints, available truck fleet). We explore an integer encoding approach and three metaheuristics (hill climbing, simulated annealing, genetic algorithm), discussing the obtained results, current limitations and future work directions.

Wind-turbine design optimization using a many-objective evolutionary algorithm

In this work, we study the performance of an evolutionary algorithm for solving a real-world wind turbine design optimization problem which was a part of the Evolutionary Computation Competition 2019 organized by the Japanese Society of Evolutionary Computation. The problem involves 5 objectives, 32 continuous variables and 22 constraints, which are evaluated using WISDEM and OpenMDAO tools. The results obtained by the proposed algorithm are compared with several state-of-the-art algorithms to demonstrate its effectiveness.

Optimising word embeddings with search-based approaches

Word embeddings have rapidly become an all-purpose tool for a diverse range of real-world applications. This development is nurtured by the availability and applicability of pre-trained models. However, they risk being inaccurate when used in domains different from those they were trained on.

In this paper, we formulate the adaptation of word embeddings as a vector multiplication problem, which enables us to apply search methods to explore potential word embedding adaptations with respect to their semantic correctness. To assess the effectiveness of our proposal, we empirically investigate the use of both local and global search-based approaches (i.e. Hill Climbing, Tabu Search and Genetic Algorithm) in order to maximise the semantic correctness of a popular Word2Vec pre-trained model (namely GoogleNews) when applied to another domain (i.e. the MEN dataset).

The results of our study reveal that Hill Climbing, Tabu Search and Genetic Algorithm perform equally well and all outperform the original GoogleNews model as well as a baseline model based on Random Search. This shows that optimising word embeddings with search-based approaches is possible and effective.
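
The paper's exact encoding is not given; as a rough illustration (all names and the toy data are ours), the following sketch reads the "vector multiplication" formulation as an element-wise scaling vector, optimised by Hill Climbing to maximise the correlation between embedding cosine similarities and MEN-style human similarity ratings:

```python
import numpy as np

def semantic_fitness(scale: np.ndarray, emb: dict, pairs: list) -> float:
    """Pearson correlation between human similarity ratings and cosine
    similarities of the element-wise rescaled embeddings."""
    cos, human = [], []
    for w1, w2, rating in pairs:
        v1, v2 = emb[w1] * scale, emb[w2] * scale
        cos.append(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
        human.append(rating)
    return float(np.corrcoef(cos, human)[0, 1])

def hill_climb(emb, pairs, dim, steps=1000, sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    scale = np.ones(dim)                      # identity: original embeddings
    best = semantic_fitness(scale, emb, pairs)
    for _ in range(steps):
        candidate = scale + rng.normal(scale=sigma, size=dim)
        f = semantic_fitness(candidate, emb, pairs)
        if f > best:                          # accept only improvements
            scale, best = candidate, f
    return scale, best

# Toy usage with a 3-word vocabulary and made-up ratings.
rng = np.random.default_rng(1)
emb = {w: rng.normal(size=10) for w in ("cat", "dog", "car")}
pairs = [("cat", "dog", 0.9), ("cat", "car", 0.2), ("dog", "car", 0.25)]
print(hill_climb(emb, pairs, dim=10, steps=200)[1])
```

Tabu Search or a Genetic Algorithm would replace only the acceptance and variation logic around `semantic_fitness`.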

A new objective function for super-resolution deconvolution of microscopy images by means of a genetic algorithm

The SUPPOSe algorithm for super-resolution image deconvolution relies on the assumption that the image source distribution can be modeled as a superposition of point sources of equal intensities, achieving a fivefold improvement in spatial resolution. A genetic algorithm is used to find the positions of the sources by optimizing an objective function that also depends on their intensities. In this work we present a new objective function for the SUPPOSe algorithm that is independent of the source intensities. We compare both methods and show the same performance, but without the need to fit the intensities. This will allow the optimization of the intensities to be replaced with an optimization step that fits the instrument response function for a blind deconvolution.

A genetic algorithm for matching oil spill particles

Learning the movement of oil spills is a challenging task. To address this, we propose an algorithm that extracts particles from the expansion aspect of an oil spill and tracks the locations of the particles in time series of oil spill data. Using principal component analysis, the oil spill data were divided and the particles matched; this approach aims to minimize the variance of the distance of each oil spill through the use of a genetic algorithm and principal component analysis. Using the oil spill visualization data set from a previous study, the algorithm was determined to be suitable for oil spill monitoring, with an average particle data error of 3.2%.

An improved brain storm optimization algorithm for fuzzy distributed hybrid flowshop scheduling with setup time

The distributed hybrid flowshop scheduling (DHFS) problem, as one of the typical scheduling problems, has been researched in both academic and industrial fields in recent years. Considering DHFS with fuzzy processing times and setup time constraints, we develop an improved version of brain storm optimization (BSO), named IBSO. The objective is to minimize the maximum fuzzy completion time among all the factories. First, each solution is represented by a two-dimensional vector. The two realistic constraints, i.e., the fuzzy processing time under an uncertain environment and the setup time, bring the problem close to reality. Then, a novel constructive heuristic based on the Nawaz-Enscore-Ham (NEH) method, called distributed NEH, is proposed. Several local search heuristics considering the problem features and the objective are developed to enhance the local search abilities. Moreover, a simulated-annealing-based acceptance criterion is embedded to enhance the exploration abilities. Experimental results verify that the proposed algorithm is efficient and effective in solving the considered DHFS problems in comparison with other recently published efficient algorithms.
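
The embedded SA-based acceptance criterion is presumably the classic Metropolis rule from simulated annealing; a minimal sketch (the function name and toy values are ours):

```python
import math
import random

def sa_accept(current_cost: float, candidate_cost: float, temperature: float) -> bool:
    """Classic simulated-annealing acceptance: always take improvements,
    accept worse solutions with probability exp(-delta / T)."""
    delta = candidate_cost - current_cost
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / temperature)

# Toy usage: a slightly worse neighbour is sometimes accepted.
random.seed(0)
print(sa_accept(current_cost=100.0, candidate_cost=102.0, temperature=5.0))
```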

Batch correction of genomic data in chronic fatigue syndrome using CMA-ES

Modern genomic sequencing machines can measure thousands of probes from different specimens. Nevertheless, theoretically comparable datasets can show considerably different properties, depending on both platform and specimen, a phenomenon known as batch effect. Batch correction is the technique aiming at removing this effect from the data. A possible approach to batch correction is to find a transformation function between different datasets, but optimizing the weights of such a function is not trivial: as there is no explicit gradient to follow, traditional optimization techniques would fail. In this work, we propose to use a state-of-the-art evolutionary algorithm, Covariance Matrix Adaptation Evolution Strategy, to optimize the weights of a transformation function for batch correction. The fitness function is driven by the classification accuracy of an ensemble of algorithms on the transformed data. The case study selected to test the proposed approach is mRNA gene expression data of Chronic Fatigue Syndrome, a disease for which there is currently no established diagnostic test. The transformation function obtained from three datasets, produced from different specimens, remarkably improves the performance of classifiers on the task of diagnosing Chronic Fatigue. The presented results are an important stepping stone towards a reliable diagnostic test for this syndrome.
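
The paper's transformation function and classifier ensemble are not specified here; the sketch below assumes an affine per-probe correction and substitutes a single logistic-regression classifier for the ensemble, purely to illustrate how CMA-ES (via the `cma` package's ask/tell interface) can drive such a weight optimization on synthetic data:

```python
import cma                                   # pip install cma
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_probes = 20
# Source batch (labelled) and a target batch with a synthetic batch effect.
X_src = rng.normal(size=(80, n_probes))
y_src = rng.integers(0, 2, 80)
X_tgt = rng.normal(size=(80, n_probes)) * 1.5 + 0.7
y_tgt = rng.integers(0, 2, 80)

def fitness(w):
    """Negative cross-validated accuracy on the pooled data after applying
    an (assumed) affine per-probe correction to the target batch."""
    scale, shift = w[:n_probes], w[n_probes:]
    X = np.vstack([X_src, X_tgt * scale + shift])
    y = np.concatenate([y_src, y_tgt])
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=3).mean()
    return -acc                              # CMA-ES minimises

x0 = np.concatenate([np.ones(n_probes), np.zeros(n_probes)])  # identity start
es = cma.CMAEvolutionStrategy(x0, 0.5, {"maxiter": 30, "verbose": -9})
while not es.stop():
    solutions = es.ask()
    es.tell(solutions, [fitness(np.asarray(s)) for s in solutions])
print("best (negative) accuracy:", es.result.fbest)
```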

Effective image clustering using self-organizing migrating algorithm

Image segmentation is an important task in computer vision. Clustering is a common image segmentation approach which divides an image into homogeneous regions, but conventional clustering algorithms such as k-means tend to get stuck in local optima. In this paper, we propose a novel clustering algorithm based on the Self-Organizing Migrating Algorithm (SOMA). In particular, we adapt SOMA Team To Team Adaptive (SOMA T3A), a recent variant of SOMA, for image clustering. Experimental results on a set of benchmark images show excellent image clustering performance, also in comparison to other state-of-the-art metaheuristics.

Evolved ensemble of detectors for gross error detection

In this study, we evolve an ensemble of detectors to check for the presence of gross systematic errors in measurement data. We use the Fisher method to combine the outputs of different detectors and then test the hypothesis about the presence of gross errors based on the combined value. We further develop a detector selection approach in which a subset of detectors is selected for each sample. The selection is conducted by comparing the output of each detector to its associated selection threshold. The thresholds are obtained by minimizing the 0-1 loss function on training data using the Particle Swarm Optimization method. Experiments conducted on a simulated system confirm the advantages of the ensemble and evolved-ensemble approaches.
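
Fisher's method itself is standard: detector p-values are combined as X = -2 * sum(log p_i), which follows a chi-squared distribution with 2k degrees of freedom under the null hypothesis. A minimal sketch of the combination step (the toy p-values are ours):

```python
import numpy as np
from scipy.stats import chi2

def fisher_combine(p_values) -> float:
    """Fisher's method: combine independent detector p-values into a
    single p-value via X = -2 * sum(log p) ~ chi2 with 2k dof."""
    p = np.asarray(p_values, dtype=float)
    statistic = -2.0 * np.log(p).sum()
    return float(chi2.sf(statistic, df=2 * p.size))

# Toy usage: three detectors, two of which flag an anomaly.
print(fisher_combine([0.01, 0.03, 0.40]))
```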

Real-time genetic optimization of large file transfers

Transfer configurations play a significant role in the achieved throughput of file transfers in high-speed networks. However, finding an optimal setting for a given transfer task is an intractable, non-linear problem. Existing solutions thus rely on offline models to configure only a subset of parameters, yielding suboptimal performance when network conditions deviate from what is observed in historical data. In this paper, we apply genetic algorithms to discover optimal configurations for file transfer settings in real time. Experimental results show that the genetic algorithm yields up to 33% higher transfer throughput compared to state-of-the-art solutions by finding near-optimal transfer settings with minimal overhead.

Moving target defense through evolutionary algorithms

Moving target defense is a technique for protecting internet-facing systems via the creation of a variable attack surface, that is, a changing profile that is nonetheless able to provide the same service to legitimate users. In the case of internet servers, it can be achieved via the generation of different configurations that change the service profile, which can be included in a policy of restarting services with new configurations after a random time and with a random frequency. In this paper we present a method based on evolutionary algorithms that uses industry-standard practices to score the vulnerability of a server and is designed to generate multiple configurations with optimized scores in every run of the algorithm. We improve over a previous version of the method by tuning the evolutionary algorithm to cope with the very costly fitness evaluation, which allows only a very limited evaluation budget.

An application of GA and EDA for passive in-building distributed antenna systems

The continuous growth in mobile data traffic has made it increasingly difficult to rely on outdoor base stations to support the traffic generated indoors. A passive in-building distributed antenna system (IB-DAS) provides indoor data coverage by connecting antennas to a base station through coaxial cables and passive components. This paper focuses on the automated design of IB-DAS based on the real-world requirements of a telecom service provider. In particular, it presents a binary-representation-based solution to the vertical design of the DAS problem and reports initial results on the performance of four binary Evolutionary Algorithms (EAs). The aim is to incorporate better-performing algorithms into a DAS network planning tool used by our industrial partner.

Evolving multi-objective ranking models for GMV optimization in E-Commerce

In this paper, we focus on the challenge of GMV optimization in an e-commerce marketplace. We develop a production grade ranking system capable of evolving neural networks to efficiently trade off between different components while minimizing the impact on conversion. We further validate our approach with both offline analysis and online A/B experiments in a large scale production search engine. Finally, we contribute our production source code to open source to encourage further exploration in this space.

Preliminary study of applied binary neural networks for neural cryptography

Adversarial neural cryptography is considered a promising area of research that could provide a different perspective in the post-quantum cryptography age, especially for the secure transmission of information. Nevertheless, it is still underexplored, with only a handful of publications on the subject. This study proposes a theoretical implementation of a neuroevolved binary neural network based on boolean logic functions only (BiSUNA), with the purpose of encrypting/decrypting a payload between two agents while hiding information from a competitor.

Towards realistic optimization benchmarks: a questionnaire on the properties of real-world problems

Benchmarks are a useful tool for empirical performance comparisons. However, one of the main shortcomings of existing benchmarks is that it remains largely unclear how they relate to real-world problems. What does an algorithm's performance on a benchmark say about its potential on a specific real-world problem? This work aims to identify properties of real-world problems through a questionnaire on real-world single-, multi-, and many-objective optimization problems. Based on initial responses, a few challenges that have to be considered in the design of realistic benchmarks can already be identified. A key point for future work is to gather more responses to the questionnaire to allow an analysis of common combinations of properties. In turn, such common combinations can then be included in improved benchmark suites. To gather more data, the reader is invited to participate in the questionnaire at: https://tinyurl.com/opt-survey

Dynamic vessel-to-vessel routing using level-wise evolutionary optimization

Modern practical optimization problems are often complex, nonlinear, large-dimensional, and sometimes dynamic, making gradient-based and convex optimization methods inefficient. Moreover, many such problems must be solved for a reasonably approximate solution routinely, every few hours or every day, and therefore demand a computationally fast algorithm. In this paper, we present a formulation of a dynamic vessel-to-vessel service ship scheduling problem. In a span of several hours, the service ship must visit as many moving vessels as possible and complete the trip in as small a travel time as possible. Thus, the problem is bi-objective in nature and involves a time-dependent traveling salesman problem. We develop a level-wise customized evolutionary algorithm to find multiple trade-off solutions in a generative manner. Compared to a mixed-integer programming (MIP) algorithm, we demonstrate that our customized evolutionary algorithm achieves similar-quality schedules in a fraction of the time required by the MIP solver. We are currently developing an interactive decision support tool based on our proposed method for finding multiple trade-off schedules simultaneously and selecting a single preferred one.

Stochastic simulation optimization benchmarking method in consideration of finite period of service

Stochastic Simulation Optimization (SSO) is one of the most useful decision-making tools for those who simulate real-world problems in which uncertainty appears, such as manufacturing or logistics issues. In recent years many SSO algorithms, such as stochastic surrogate models, optimal computing budget allocation, and various evolutionary optimization algorithms, have been developed. We have exploited test functions for benchmarking the algorithms, in which a deterministic part whose optimal values are known is accompanied by stochastic noise with zero expected value. Using the test functions, we have evaluated the algorithms from the viewpoint of closeness between the obtained fitness and the optimal value, which indicates how good the obtained solution will be over an infinitely long period, since the fitness converges to the deterministic value at infinity. In real-world decision making, however, we often use the obtained solution during a finite service time, and after that, we make a decision again. From the perspective of real-world application, we propose a method to evaluate SSO algorithms considering a finite period of service. We illustrate that the optimal solution may differ depending on the period of service when the simulation has heterogeneous noise, which may influence the performance ranking of the algorithms. Using the well-known COCO benchmarking platform, we carry out numerical experiments on noisy test functions and show how the service time affects the performance.

Multiobjective direction driven local search for constrained supply chain configuration problem

Supply chain configuration (SCC) is the problem of making optimal choices for the members in supply chain management. The configurations of members with different choices influence the total cost (totalCost) and the total time (totalTime) of the supply chain's production. This paper focuses on a constrained SCC (CSCC) problem: minimizing totalCost while meeting a totalTime constraint (i.e., a deadline limitation). To handle the constrained problem, existing methods may use constraint handling techniques such as penalties or repair to help obtain feasible solutions. Differently, this paper considers the totalTime constraint as another objective and proposes a multiobjective direction driven local search (MDDLS) algorithm to solve the CSCC problem. MDDLS explores the search path from a solution with smaller totalCost (or totalTime) to another solution with smaller totalTime (or totalCost). This way, MDDLS can obtain a set of solutions with different totalCost and totalTime values. The solution with the best (smallest) totalCost value among all feasible solutions (those whose totalTime values satisfy the time constraint) is taken as the final solution. Experiments show that MDDLS outperforms several compared algorithms on the CSCC problem.

POSTER SESSION: Search-based software engineering

Combining sequential model-based algorithm configuration with default-guided probabilistic sampling

General-purpose automated algorithm configuration procedures have enabled impressive improvements in the state of the art in solving challenging problems from AI, operations research and other areas. The most successful configurators combine multiple techniques to search vast combinatorial spaces of parameter settings for a given algorithm as efficiently as possible. Specifically, two of the most prominent general-purpose algorithm configurators, SMAC and irace, can be seen as combinations of Bayesian optimisation and racing, and of racing and an estimation of distribution algorithm, respectively. Here, we investigate an approach that combines all three of these techniques into one single configurator, while exploiting prior knowledge contained in expert-chosen default parameter values. We demonstrate significant performance improvements over irace and SMAC on a broad range of running time optimisation scenarios from AClib.

Handling uncertainty in code smells detection using a possibilistic SBSE approach

Code smells, also known as anti-patterns, are indicators of bad design solutions. However, two different experts may have different opinions not only about the smelliness of a particular software class but also about the smell type. This causes an uncertainty problem that should be taken into account. Unfortunately, existing works reject uncertain data corresponding to software classes with doubtful labels. Rejecting uncertain data can cause a significant loss of information, which may considerably degrade the performance of the detection process. Motivated by this observation and by the good performance of the possibilistic K-NN classifier in handling uncertain data, we propose in this paper a new evolutionary detection method, named ADIPOK (Anti-pattern Detection and Identification using Possibilistic Optimized K-NN), that is able to cope with the uncertainty factor using possibility theory. Comparative experimental results reveal the merits of our proposal with respect to four relevant state-of-the-art approaches.

Search-based many-criteria identification of microservices from legacy systems

The expensive maintenance of legacy systems leads companies to migrate such systems to a microservice architecture. This migration requires the identification of microservice candidates, which in turn requires the analysis of many criteria. Existing search-based approaches to this problem rely only on the coupling and cohesion criteria. To overcome these limitations, we propose a many-objective search-based approach for identifying microservice candidates. Its five objectives correspond to criteria identified as useful by experienced developers. Our approach was evaluated in the context of a legacy system. The results show that our approach performs very similarly when optimizing the traditional criteria of coupling and cohesion, but much better when the additional criteria are taken into account.

Recommending peer reviewers in modern code review: a multi-objective search-based approach

Modern code review is a common practice used by software developers to ensure high software quality in open source and industrial projects. During code review, developers submit their code changes to be reviewed, via tool-based code review platforms, before they are integrated into the codebase. Reviewers then provide their feedback to developers and may request further modifications before finally accepting or rejecting the submitted code changes. However, the identification of appropriate reviewers remains a tedious task, as the number of code reviews to be performed is inflated by the increasing number of code changes and the increasing size of software development teams in today's large and active software projects. To help developers with the review process, we introduce a multi-objective search-based approach to find an appropriate set of reviewers. We use the Non-dominated Sorting Genetic Algorithm (NSGA-II) to optimize two conflicting objectives: (i) maximize reviewers' expertise with the changed files, and (ii) minimize reviewers' workload in terms of their current open code reviews. We conduct a preliminary evaluation on two open source projects. Results indicate that our approach is efficient compared to state-of-the-art approaches.

Crash reproduction using helper objectives

Evolutionary-based crash reproduction techniques aid developers in their debugging practices by generating a test case that reproduces a crash given its stack trace. In these techniques, the search process is typically guided by a single search objective called Crash Distance. Previous studies have shown that current approaches can only reproduce a limited number of crashes due to a lack of diversity in the population during the search. In this study, we address this issue by applying Multi-Objectivization using Helper-Objectives (MO-HO) to crash reproduction. In particular, we add two helper-objectives to the Crash Distance to improve the diversity of the generated test cases and consequently enhance the guidance of the search process. We assessed MO-HO against single-objective crash reproduction. Our results show that MO-HO can reproduce two additional crashes that were not previously reproducible by the single-objective approach.

Exploiting fault localisation for efficient program repair

Search-based program repair generates variants of a defective program to find its repair. This could reduce the time and effort necessary for the manual software development and maintenance. However, applying even a limited set of mutations on a small piece of code (that repairs only trivial defects) generates a huge number of possible program variants (also called a search space). The reduction of the search space, while preserving the number and quality of repairs, would make these tools more efficient and practical.

We present an end-to-end repair tool for Java programs. It localises the lines of source code that introduced a defect in the program's development history and applies a set of mutations targeting only these lines. In the reduced search space, our tool repaired defects covered by failing tests in an open-source Java program.

On the prediction of continuous integration build failures using search-based software engineering

Continuous Integration (CI) aims at supporting developers in integrating code changes quickly through automated building. However, in this context, the build process is typically time- and resource-consuming. As a response, the use of machine learning (ML) techniques has been proposed to cut the expense of CI builds by predicting their outcome. Nevertheless, existing ML-based solutions are challenged by problems related mainly to the imbalanced distribution of successful and failed builds. To deal with this issue, we introduce a novel approach based on Multi-Objective Genetic Programming (MOGP) to build a prediction model. Our approach aims at finding the best prediction rules based on two conflicting objective functions to deal with both minority and majority classes. We evaluated our approach on a benchmark of 15,383 builds. The results reveal that our technique outperforms state-of-the-art approaches by providing a better balance between both failed and passed builds.

A new approach to distribute MOEA pareto front computation

Multi-Objective Evolutionary Algorithms (MOEAs) offer compelling solutions to many real world problems, including software engineering ones. However, their efficiency decreases with the growing size of the problems at hand, hindering their applicability in practice.

In this paper we propose a novel master-worker approach to distribute the computation of the Pareto Front (PF) for MOEAs (dubbed MOEA-DPF) and empirically evaluate it on a real-world software project management problem. With respect to previous work, our proposal can be used with any MOEA to tackle multiobjective problems regardless of their formulation/representation.

Our results show that MOEA-DPF runs significantly faster (up to 3.1x speed-up using two workers) than its sequential counterpart while maintaining (and even improving) the quality of the PF. We conclude that MOEA-DPF provides an effective and simple solution to speed-up the execution of MOEAs by distributing the PF computation, making them effective for real-world problems.

Divide and conquer: Seeding strategies for multi-objective multi-cloud composite applications deployment

The multi-cloud composite applications deployment problem (MCADP) aims to select proper cloud resources from multiple cloud providers at different locations to deploy applications with shared constituent services in order to optimize performance and cost. In this paper, we propose divide-and-conquer seeding strategies to find diverse and high-quality deployment plans as seeds. By using the real-world datasets, the experimental results demonstrate that the proposed seeding strategies can outperform recently proposed approaches, i.e. AO-Seed and SO-Seed, for MCADP.

POSTER SESSION: Theory

Critical evaluation of sine cosine algorithm and a few recommendations

Sine Cosine Algorithm (SCA) is one of the most highly cited optimization algorithms in the literature. The present study contributes by uncovering three shortcomings of SCA and making a few recommendations. We show that the mathematical model of SCA does not work as explained in the original paper, and that its performance can be improved by modifying the position-updating equation accordingly. Moreover, we empirically and statistically show that the sine and cosine functions, which distinguish this algorithm from others, can be replaced with a simple random variable taking a value in the range [-1, 1] without degrading the overall performance of the algorithm. Furthermore, we demonstrate that the behavior of SCA is biased towards functions having their global optimum at the origin. Finally, on the basis of this analysis of SCA, we make two recommendations for meta-heuristics designers regarding the selection of benchmarks and the mapping of the inspiration when designing a new algorithm.
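
For reference, a sketch of the standard SCA position update next to the random-factor replacement investigated in the study (parameter values, names and toy data are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sca_step(x, best, r1):
    """One SCA position update per dimension, following the standard
    description: r2 in [0, 2*pi], r3 in [0, 2], r4 switches sin/cos."""
    r2 = rng.uniform(0.0, 2.0 * np.pi, size=x.shape)
    r3 = rng.uniform(0.0, 2.0, size=x.shape)
    r4 = rng.uniform(size=x.shape)
    move = np.where(r4 < 0.5, np.sin(r2), np.cos(r2))
    return x + r1 * move * np.abs(r3 * best - x)

def random_step(x, best, r1):
    """The studied replacement: a uniform random factor in [-1, 1]
    instead of the sine/cosine terms."""
    u = rng.uniform(-1.0, 1.0, size=x.shape)
    r3 = rng.uniform(0.0, 2.0, size=x.shape)
    return x + r1 * u * np.abs(r3 * best - x)

x = rng.uniform(-5.0, 5.0, size=10)
best = np.zeros(10)
print(sca_step(x, best, r1=1.0)[:3], random_step(x, best, r1=1.0)[:3])
```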

Population control meets Doob's martingale theorems: the noise-free multimodal case

We study a test-based population size adaptation (TBPSA) method, inspired by population control, in the noise-free multimodal case. In the noisy setting, TBPSA usually recommends, at the end of the run, the center of the Gaussian as an approximation of the optimum. Combined with a more naive recommendation, namely recommending the visited point with the best fitness value so far, TBPSA is also powerful in the noise-free multimodal context.

Radial model of differential evolution dynamics

Despite extensive research into Differential Evolution (DE), a state-of-the-art global optimization technique, its theoretical foundations still need development. This paper provides a model of the dynamics of a population exploiting a radial (centrally symmetric), unimodal function. Derivations are based on the analysis of moments of the population distribution and lead to formulae describing them in consecutive iterations. This allows the execution of DE on radial functions to be simulated without running the algorithm. Analytical insights include an explicit, approximate formula linking the convergence speed of a DE population with the generalized scaling factor, the crossover probability and the search space dimension.
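
As a concrete reference for the modelled dynamics, a minimal DE/rand/1/bin loop minimising the radial sphere function f(x) = ||x||^2, the kind of execution the paper's formulae describe without running the algorithm (all parameter values are ours):

```python
import numpy as np

def de_on_sphere(dim=10, pop_size=40, F=0.5, CR=0.9, generations=200, seed=0):
    """DE/rand/1/bin minimising the radial function f(x) = ||x||^2."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    fit = np.sum(pop ** 2, axis=1)
    for _ in range(generations):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    size=3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])   # differential mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True              # force one mutant gene
            trial = np.where(cross, mutant, pop[i])
            trial_fit = np.sum(trial ** 2)
            if trial_fit <= fit[i]:                      # greedy selection
                pop[i], fit[i] = trial, trial_fit
    return fit.min()

print(de_on_sphere())
```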

Redundant binary representations with rigorous trade-off between connectivity and locality

We design a general mathematical framework of redundant binary representations that are easy to apply for evolutionary algorithms and are efficient with respect to several measures such as synonymity, connectivity, and locality. The proposed representations are easy to scale for any problem dimension and any level of neutrality, and they can exhibit different values of connectivity and locality.

TUTORIAL SESSION: Introductory tutorials

Evolutionary computation: a unified approach

Evolutionary multi-objective optimization: past, present and future

A gentle introduction to theory (for non-theoreticians)

Neuroevolution for deep reinforcement learning problems

Evolutionary many-objective optimization

Runtime analysis of population-based evolutionary algorithms: introductory tutorial at GECCO 2020

Evolution of neural networks

Genetic programming: a tutorial introduction

Representations for evolutionary algorithms

Introductory mathematical programming for EC

Learning classifier systems: from principles to modern systems

Model-based evolutionary algorithms: GECCO 2020 tutorial

Evolutionary computation and games

Hyper-heuristics tutorial

TUTORIAL SESSION: Advanced tutorials

Design principles for matrix adaptation evolution strategies

Quality-diversity optimisation

Statistical analyses for meta-heuristic stochastic optimization algorithms: GECCO 2020 tutorial

Recent advances in particle swarm optimization analysis and understanding

Visualization in multiobjective optimization

Genetic improvement: taking real-world source code and improving it using genetic programming

Solving complex problems with coevolutionary algorithms

Decomposition multi-objective optimisation: current developments and future opportunities

Semantic genetic programming

Evolutionary computation for digital art

Dynamic control parameter choices in evolutionary computation: GECCO 2020 tutorial

Sequential experimentation by evolutionary algorithms

Theory and practice of population diversity in evolutionary computation

Fitness landscape analysis to understand and predict algorithm performance for single- and multi-objective optimization

Benchmarking and analyzing iterative optimization heuristics with IOHprofiler

A hands-on guide to distributed computing paradigms for evolutionary computation

TUTORIAL SESSION: Specialized tutorials

Automated algorithm configuration and design

Evolutionary computer vision

Search based software engineering: challenges, opportunities and recent applications

Evolutionary computation and machine learning in cryptology

Push

Evolutionary algorithms and machine learning: synergies, challenges and opportunities

Addressing ethical challenges within evolutionary computation applications: GECCO 2020 tutorial

Evolutionary algorithms in biomedical data mining: challenges, solutions, and frontiers

Theory of estimation-of-distribution algorithms

Evolutionary computation for feature selection and feature construction

Swarm intelligence in cybersecurity

Evolutionary computation and evolutionary deep learning for image analysis, signal processing and pattern recognition

WORKSHOP SESSION: Workshop evolutionary many-objective optimization

Preliminary study of adaptive grid-based decomposition on many-objective evolutionary optimization

This work proposes a method to adaptively determine the grid granularity for the grid-decomposition-based evolutionary algorithm for many-objective optimization. The decomposition of the objective space is one of the promising approaches for evolutionary many-objective optimization. Although weight-based decomposition has frequently been employed recently, grid-based decomposition is also known to be effective for many-objective optimization. The grid-based evolutionary algorithm (GrEA) decomposes the objective space into a grid environment using a grid division parameter. The grid division parameter determines the grid granularity and affects search performance. To find an appropriate grid division parameter, we need to perform parameter tuning, which executes multiple runs with different parameters and is generally time-consuming. In this work, we propose an adaptive GrEA (A-GrEA) that automatically determines the grid division parameter during a single run of the search. Experimental results using WFG variant problems with 6 to 10 objectives show that the proposed A-GrEA not only determines the grid division parameter adaptively but also achieves comparable or higher search performance than the conventional GrEA with the best grid division parameter, MOEA/D, and NSGA-III.

WORKSHOP SESSION: Workshop the evolution of robots for the real world

Path towards multilevel evolution of robots

Multi-level evolution is a bottom-up robotic design paradigm which decomposes the design problem into layered sub-tasks that involve concurrent search for appropriate materials, component geometry and overall morphology. Each of the three layers operates with the goal of building a library of diverse candidate solutions, which are used either as building blocks for the layer above or provided to the decision maker for final use. In this paper we provide a theoretical discussion of the concepts and technologies that could potentially be used as building blocks for this framework.

If it evolves it needs to learn

We elaborate on (future) evolutionary robot systems where morphologies and controllers of real robots are evolved in the real-world. We argue that such systems must contain a learning component where a newborn robot refines its inherited controller to align with its body, which will inevitably be different from its parents.

Automatically designing the behaviours of falling paper: the emergence of non-trivial behaviours via interaction with the physical world

Biological systems exhibit extraordinary levels of diversity. Embodied behaviours are the emergent property of many loosely-coupled parallel processes, from the microscopic level, e.g. materials, to more abstracted levels, e.g. organs and limbs. We summarise our investigations into using a synthetic methodology, i.e. an understanding-by-building approach, for designing the complex interactions of falling paper shapes. By studying how simple systems such as a falling paper shape behave, we can analyse the specific characteristics of the interaction between morphology and the environment, and how this leads to programmable non-trivial behaviours. We present current results and discuss the implications on the future of design in robotics.

Evolutionary stress factors for adaptable robot 'personalities'

Many studies on real multi-robot systems are undertaken in laboratory environments, whereas the outside world is unstable and risky. I argue that the phenomenon of phenotypic plasticity provides a bioinspiration framework for making such systems more resilient in the field. Such plasticity can result in consistent individual behavioural differences among group members, which have been described as 'personalities' in recent years. Personality axes such as shy-bold can be represented by single variables, and may be intuitive in the context of human-robot interaction. However, for plasticity to emerge in artificial evolution, simulated environments need to be more heterogeneous and unpredictable, as in real-world natural history. Behavioral Reaction Norms (BRNs) from developmental biology are a useful concept for the transparent mapping of relevant sensory inputs to robot behavioral change. In this position paper, I identify candidate stress factors in simulated environments for artificial evolution of adaptive BRNs. I also consider some potential benefits and costs of this approach.

Reusability vs morphological space in physical robot evolution

Few evolved robots have been tested outside simulation due to real world experiments being resource and time expensive. In this paper, we discuss different approaches to physically implement evolved robots and propose a modular robot system with external automatic reconfiguration. While the morphological space is reduced, it provides us with a fast, reusable, fully autonomous system to evolve physical robots in reality.

Real world morphological evolution is feasible

Evolutionary algorithms offer great promise for the automatic design of robot bodies, tailoring them to specific environments or tasks. Most research is done on simplified models or virtual robots in physics simulators, which do not capture the natural noise and richness of the real world. Very few of these virtual robots are built as physical robots, and the few that are will rarely be further improved in the actual environment they operate in, limiting the effectiveness of the automatic design process. We utilize our shape-shifting quadruped robot, which allows us to optimize the design in its real-world environment. The robot is able to change the length of its legs during operation, and is robust enough for complex experiments and tasks. We have co-evolved control and morphology in several different scenarios, and have seen that the algorithm is able to exploit the dynamic morphology solely through real-world experiments.

WORKSHOP SESSION: Workshop industrial applications of metaheuristics

A pareto front-based metric to identify major bitcoin networks influencers

Bitcoin is a novel digital currency that relies on cryptography instead of a central authority to verify transactions. Without a central authority, Bitcoin requires a complete list of all transactions to be made public so that they can be verified by all users. The major network influencers in a Bitcoin network are defined as users that accumulate a disproportionate amount of wealth compared to others. Although some metrics have been defined to identify major network influencers, considering multiple criteria can improve the detection task. In this paper, a multi-criteria metric is applied to identify the major network influencers based on the history of their activities recorded on the blockchain. The proposed metric is based on the Pareto front over multiple criteria, namely the maximum increase in wealth with the least amount of activity, using non-dominated sorting inspired by multi-objective optimization. Descriptive statistics on the extracted data demonstrate the efficiency of the proposed metric in identifying major influencers.
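
A minimal sketch of the non-dominated filter underlying such a metric (field names and toy data are ours), treating wealth increase as a criterion to maximise and activity as one to minimise:

```python
def pareto_influencers(users):
    """Return users not dominated on (wealth gain: higher is better,
    activity: lower is better)."""
    front = []
    for name, gain, activity in users:
        dominated = any(
            g >= gain and a <= activity and (g > gain or a < activity)
            for _, g, a in users
        )
        if not dominated:
            front.append(name)
    return front

# Toy usage: addresses with (wealth increase, number of transactions).
users = [("A", 100.0, 5), ("B", 80.0, 2), ("C", 60.0, 10), ("D", 100.0, 4)]
print(pareto_influencers(users))   # ['B', 'D']: A is dominated by D, C by all
```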

A benchmark of recent population-based metaheuristic algorithms for multi-layer neural network training

Multi-layer neural networks (MLNNs) are extensively used in many industrial applications. Training is the crucial task for MLNNs. While gradient descent-based approaches are most commonly employed here, they suffer from drawbacks such as getting stuck in local optima. One approach to address this is to employ population-based metaheuristic algorithms. Since a variety of such algorithms have been proposed in the literature, in this paper we benchmark the performance of 15 metaheuristic algorithms, including state-of-the-art as well as some of the most recent algorithms, for neural network training. In particular, we evaluate particle swarm optimisation (PSO), differential evolution (DE), artificial bee colony (ABC), imperialist competitive algorithm (ICA), cuckoo search (CS), gravitational search algorithm (GSA), bat algorithm (BA), firefly algorithm (FA), grey wolf optimiser (GWO), ant lion optimiser (ALO), dragonfly algorithm (DA), sine cosine algorithm (SCA), whale optimisation algorithm (WOA), grasshopper optimisation algorithm (GOA), and salp swarm algorithm (SSA), and assess their performance on different classification problems.

Scheduling unrelated parallel machines with family setups and resource constraints to minimize total tardiness

This work considers the unrelated parallel machines scheduling problem with family setups and resource constraints. In this problem, jobs are grouped into families, and setup times are required between jobs belonging to different families. The processing of a job requires a certain amount of resource from a machine, which is supplied by upstream processes. The total resource consumed by the jobs processed on a machine must not exceed the resource supply. The objective is to schedule the jobs on the machines so that the total tardiness of the jobs is minimized. To obtain optimal solutions for small instances, a mixed integer linear programming (MILP) model is proposed. Since the problem is an NP-hard combinatorial optimization problem, we also develop two meta-heuristic algorithms, based on Simulated Annealing and Iterated Greedy, to obtain near-optimal solutions. Computational experiments show that the presented algorithms can be successfully applied to the problem.

WORKSHOP SESSION: Workshop swarm intelligence algorithms: Foundations, perspectives and challenges

A modular hybridization of particle swarm optimization and differential evolution

In swarm intelligence, Particle Swarm Optimization (PSO) and Differential Evolution (DE) have been successfully applied to many optimization tasks, and a large number of variants, in which novel algorithm operators or components are implemented, have been introduced to boost empirical performance. In this paper, we first propose to combine the variants of PSO and DE by modularizing each algorithm and incorporating their variants as different options of the corresponding modules. Then, considering the similarity between the inner workings of PSO and DE, we hybridize the algorithms by creating two populations with the variation operators of PSO and DE respectively, and selecting individuals from these two populations. The resulting novel hybridization, called PSODE, encompasses most up-to-date variants from both sides and, more importantly, gives rise to an enormous number of unseen swarm algorithms via different instantiations of its modules.

In detail, we consider 16 different variation operators originating from existing PSO and DE algorithms, which, combined with 4 different selection operators, allow the hybridization framework to generate 800 novel algorithms. The resulting set of hybrid algorithms, along with the 30 PSO and DE algorithms that can be generated with the considered operators, is tested on the 24 problems from the well-known COCO/BBOB benchmark suite, across multiple function groups and dimensionalities.
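
The module system itself is not reproduced here; the compact sketch below (sphere objective, parameter values, and the shared-global-best coupling are our simplifications) only illustrates the core idea of PSO and DE variation operators acting on two populations that inform each other:

```python
import numpy as np

rng = np.random.default_rng(0)
sphere = lambda X: np.sum(np.atleast_2d(X) ** 2, axis=1)
dim, n = 10, 20                      # problem dimension, size of each subpopulation

pso = rng.uniform(-5, 5, (n, dim))   # PSO subpopulation with velocities
vel = np.zeros((n, dim))
pbest = pso.copy()
de = rng.uniform(-5, 5, (n, dim))    # DE subpopulation

for _ in range(100):
    pool = np.vstack([pso, de])      # both populations inform the global best
    gbest = pool[np.argmin(sphere(pool))]
    # PSO variation: inertia + cognitive + social components.
    vel = (0.7 * vel
           + 1.5 * rng.random((n, dim)) * (pbest - pso)
           + 1.5 * rng.random((n, dim)) * (gbest - pso))
    pso = pso + vel
    improved = sphere(pso) < sphere(pbest)
    pbest[improved] = pso[improved]
    # DE variation: rand/1 mutation with binomial crossover
    # (forcing one mutant gene per individual is omitted for brevity).
    idx = np.array([rng.choice([j for j in range(n) if j != i], 3, replace=False)
                    for i in range(n)])
    mutant = de[idx[:, 0]] + 0.5 * (de[idx[:, 1]] - de[idx[:, 2]])
    trial = np.where(rng.random((n, dim)) < 0.9, mutant, de)
    keep = sphere(trial) < sphere(de)
    de[keep] = trial[keep]

print("best value:", sphere(np.vstack([pso, de])).min())
```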

Discrete self organizing algorithm for pollution vehicle routing problem

This paper introduces a novel discrete variant of the Self Organizing Migrating Algorithm (DSOMA). Developed on the principle of sequence insertions and the NEH paradigm, the DSOMA algorithm provides a unique application to combinatorial optimization problems. The new algorithm is tested on the Pollution Routing Problem (PRP) dataset, and two different minimization objectives, emission and distance, are considered. The comparison of the results shows that the emission objective is highly effective in reducing emissions while not significantly adding to the distance overhead.

gBeam-ACO: a greedy and faster variant of Beam-ACO

Beam-ACO, a modification of traditional Ant Colony Optimization (ACO) algorithms that incorporates a modified beam search, is one of the most effective ACO algorithms for solving the Traveling Salesman Problem (TSP). Although adding beam search to the ACO heuristic search process is effective, it also increases the amount of work (in terms of partial paths) done by the algorithm at each step. In this work, we introduce a greedy variant of Beam-ACO, gBeam-ACO, that uses a greedy path selection heuristic. The exploitation of the greedy path selection is offset by the exploration required in maintaining the beam of paths. This approach has the added benefit of avoiding costly calls to a random number generator and reduces the algorithm's internal state, making it simpler to parallelize. Our experiments demonstrate that not only is gBeam-ACO faster than traditional Beam-ACO, in some cases by an order of magnitude, but it does not sacrifice the quality of the found solution, especially on large TSP instances. We also found that gBeam-ACO was less dependent on hyperparameter settings.
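
A minimal sketch of the greedy path-extension step (the names, the alpha/beta weighting and the toy instance are ours); note that, unlike probabilistic ACO selection, it makes no calls to a random number generator:

```python
import numpy as np

def greedy_next_city(current: int, unvisited: list, tau: np.ndarray,
                     eta: np.ndarray, alpha: float = 1.0, beta: float = 2.0) -> int:
    """Greedy path extension: pick the unvisited city maximising
    pheromone^alpha * heuristic^beta, with no random sampling."""
    scores = [tau[current, j] ** alpha * eta[current, j] ** beta for j in unvisited]
    return unvisited[int(np.argmax(scores))]

# Toy usage on a 4-city instance (eta = inverse distance).
dist = np.array([[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]], float)
eta = np.where(dist > 0, 1.0 / np.where(dist == 0, 1, dist), 0.0)
tau = np.ones_like(dist)
print(greedy_next_city(0, [1, 2, 3], tau, eta))   # the nearest city, 1, wins
```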

Self-organizing migrating algorithm with clustering-aided migration

This paper proposes a novel migration strategy for the Self-organizing Migrating Algorithm (SOMA), which combines the advantages of the explorative All-To-Random migration with a new exploitation-focused All-To-Cluster-Leaders strategy. The main goal of this innovation to SOMA is to deliver competitive results, in particular on the latest CEC 2020 benchmark set for single-objective bound-constrained numerical optimization. The proposed algorithm variant, titled SOMA-CL, has manifested notable potential in such demanding challenges. The results of the proposed algorithm were compared and tested for statistical significance against two other SOMA variants.

Colour quantisation using self-organizing migrating algorithm

Colour quantisation is a common image processing technique to reduce the number of distinct colours in an image. Selecting these colours, which comprise a colour palette, is a challenging task since they determine the resulting image quality. In this paper, we propose a novel colour quantisation algorithm based on the Self-Organizing Migrating Algorithm (SOMA), in particular SOMA Team To Team Adaptive (SOMA T3A), a recent variant of SOMA. SOMA T3A works iteratively in three phases, namely organization, migration, and update, and performs adaptive parameter definition. Migrants are selected from the population and move towards a leader during the organization process. Experimental results on a benchmark set of images show excellent colour quantisation performance, with our approach outperforming several conventional and soft-computing-based colour quantisation algorithms.

High-dimensional multi-level image thresholding using self-organizing migrating algorithm

Multi-level image thresholding is a common approach to image segmentation for which population-based metaheuristic algorithms present an interesting alternative to conventional methods based on an exhaustive search. In this paper, we propose a novel multi-level image thresholding algorithm based on the Self-Organizing Migrating Algorithm (SOMA), in particular SOMA Team To Team Adaptive (SOMA T3A), a recent variant of SOMA, and an entropy-based fitness function. We evaluate our algorithm on a set of benchmark images in high-dimensional search spaces with regard to fitness function value and peak signal-to-noise ratio (PSNR). Experimental results demonstrate excellent thresholding performance, with our algorithm outperforming nine other state-of-the-art metaheuristics.

Archive-based swarms

The Particle Swarm Optimization (PSO) algorithm updates each individual's velocity and position using its own prior best position and the best position found so far by any particle. Effective search for global optima can benefit if each particle may utilize additional historical information regarding the quality of previously visited positions in its proximity. We present an approach for doing so, using an archive containing some prior particle positions, analogous to the archive used in some multi-objective optimization algorithms. A specific instance of this approach is the proposed Fitness-Distance-Ratio Archive-Based Swarm Optimization algorithm, shown to outperform PSO and three other variants, when applied to high-dimensional neural network training problems.
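
The paper's fitness-distance ratio is computed per dimension; the sketch below simplifies it to a per-particle ratio (all names and coefficients are ours) to illustrate how an archived position can enter the velocity update as an extra attractor:

```python
import numpy as np

def fdr_archive_guide(x: np.ndarray, x_fit: float,
                      archive: np.ndarray, archive_fit: np.ndarray) -> np.ndarray:
    """Pick the archived position with the best fitness-distance ratio:
    the largest fitness improvement per unit of distance from x."""
    dist = np.linalg.norm(archive - x, axis=1) + 1e-12    # avoid division by zero
    ratio = (x_fit - archive_fit) / dist                  # minimisation: improvement > 0
    return archive[np.argmax(ratio)]

# A PSO velocity update augmented with the archive guide (w, c1..c3 are ours).
def velocity_update(v, x, pbest, gbest, guide, rng, w=0.7, c1=1.5, c2=1.5, c3=1.0):
    d = x.size
    return (w * v
            + c1 * rng.random(d) * (pbest - x)
            + c2 * rng.random(d) * (gbest - x)
            + c3 * rng.random(d) * (guide - x))   # pull towards good archived points

rng = np.random.default_rng(0)
archive = rng.uniform(-5, 5, (50, 10))
archive_fit = np.sum(archive ** 2, axis=1)        # sphere objective as a toy fitness
x = rng.uniform(-5, 5, 10)
guide = fdr_archive_guide(x, float(np.sum(x ** 2)), archive, archive_fit)
print(velocity_update(np.zeros(10), x, x, archive[0], guide, rng)[:3])
```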

Ensemble of strategies and perturbation parameter based SOMA for optimal stabilization of chaotic oscillations

In this paper, we investigate an original ensemble-based adaptive strategy for the Self Organizing Migrating Algorithm (SOMA), namely the "Ensemble of Strategies and Perturbation Parameter in SOMA" (ESP-SOMA). This algorithm, together with another state-of-the-art metaheuristic (SHADE), is applied to the complex task of time- and quality-optimal stabilization of chaotic system oscillations. Since there is a growing demand for intelligent and fast problem solutions in engineering, this paper offers insight into the applicability and effectiveness of modern adaptive, state-of-the-art metaheuristic optimization algorithms on a constrained and difficult optimization problem. The experiments encompass two different cases of desired behaviour of the chaotic system and two different designs of the objective function in terms of complexity. A simple statistical comparison of the results given by the four metaheuristic algorithms, in pairs of generic and enhanced versions, is also reported.

Applications of swarm intelligence algorithms countering the cyber threats

In recent years, swarm intelligence in particular, and bio-inspired computing in general, have been successfully utilized in a number of areas, from science and engineering to industry. It is therefore logical to investigate how such techniques might contribute to the field of cybersecurity. This paper covers swarm-based intelligence techniques for enhancing cybersecurity and replenishes the literature with fresh reviews of swarm-based methods. Furthermore, challenges and possible future directions for applying swarm-based methods in the field of cybersecurity are also discussed, offering useful insights for researchers and practitioners.

WORKSHOP SESSION: Workshop decomposition techniques in evolutionary optimization

Incremental lattice design of weight vector set

This paper proposes an alternative method to generate the weight vector set for decomposition-based evolutionary multi- and many-objective optimization. Since each weight vector specifies an approximation point on the Pareto front, the distribution of the weight vector set has an enormous impact on the total approximation quality of the Pareto front. The conventional simplex-lattice design scales the number of weight vectors via a decomposition parameter and generates a uniformly distributed weight vector set. However, the generated set involves few inner weight vectors for the approximation of the central area of the Pareto front. A variant, the two-layered simplex-lattice design, generates an additional layer to introduce inner weight vectors but sacrifices the distribution uniformity. The proposed method, the incremental lattice design, also scales the number of weight vectors via a decomposition parameter, and it always includes the intermediate weight vector pointing to the center of the Pareto front for any decomposition parameter while keeping the distribution uniformity. Experimental results show that the proposed method achieves higher approximation quality of the Pareto front than the conventional simplex-lattice and two-layered simplex-lattice designs.
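
For orientation, the conventional simplex-lattice design the paper improves upon can be sketched as follows: for m objectives and decomposition parameter H, it enumerates all integer compositions of H into m parts (a standard stars-and-bars construction), yielding uniformly spaced but boundary-heavy weight vectors.

    from itertools import combinations
    import numpy as np

    def simplex_lattice(m, H):
        """All weight vectors (k1/H, ..., km/H) with non-negative integers
        ki summing to H, via the stars-and-bars enumeration."""
        vectors = []
        for cut in combinations(range(H + m - 1), m - 1):
            parts, prev = [], -1
            for c in cut + (H + m - 1,):
                parts.append(c - prev - 1)
                prev = c
            vectors.append(np.array(parts) / H)
        return np.array(vectors)

    W = simplex_lattice(3, 12)    # C(14, 2) = 91 weight vectors
    # For many objectives (m large relative to H), few vectors lie strictly
    # inside the simplex - the weakness the incremental lattice design targets.
    print(len(W), (W > 0).all(axis=1).sum())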

Multi-objective optimization in the agile software project scheduling using decomposition

Scrum is an agile software development framework followed nowadays by many software companies worldwide. Since it is an iterative and incremental methodology, the software is developed in increments. For each increment, the development team and the customer agree upon a development plan. However, the context of the software project may change due to circumstances that commonly arise, for example, new software requirements or changes in the development team. These factors force the plan to be adjusted. When the plan is modified, at least three criteria must be considered to minimize the economic and operational impact of the changes; this activity can therefore be analyzed as a multi-objective problem. In the last two decades, multi-objective evolutionary algorithms (MOEAs) based on the decomposition principle have become an effective and efficient tool for solving multi-objective problems. In this paper, we evaluate the potential of decomposition-based MOEAs for the agile software project scheduling problem. In particular, we focus our investigation on the performance of emblematic decomposition-based MOEAs on a set of test instances introduced in this study.

WORKSHOP SESSION: Workshop genetic and evolutionary computation in defense, security, and risk management

Adversarial threats to large satellite networks (ATLAS-N): a coevolutionary approach based on flipit

Global connectivity from space using proliferated low Earth orbit (P-LEO) satellite constellations can benefit society at large. An increasing number of public and private actors rely on services provided from space through satellites, including Earth monitoring, communications, and internet access. As society's dependence on satellite constellations grows, threats to P-LEO assets are likely to increase, so developing effective and efficient strategies to defend P-LEO constellations against different types of attacks is warranted. As a first step, we develop a framework that uses coevolutionary algorithms to evolve defense strategies for a low-fidelity constellation system against coevolved low-fidelity adversarial attack strategies. Results show that attackers can easily overcome defenses that are unaware of and non-reactive to executed attacks, and demonstrate that it may be possible to perform meaningful coevolution of attack and defense strategies that consider selective actions on individual satellites in a constellation. Evolved attack and defense strategies have the potential to promote the development of robust, reliable, and secure P-LEO constellations.

Using evolutionary algorithms and pareto ranking to identify secure virtual local area networks

Interconnected devices require different types of network access to maintain functionality; however, access must also be limited to ensure security. Virtual Local Area Networks (VLANs) have become a core technology for securely interconnecting devices in a computer network, since they can associate devices with various groups. Unfortunately, finding the smallest, most manageable number of VLANs that securely provides the necessary interconnections becomes more difficult as the number of devices and the complexity of the security policy increase.

This paper investigates the use of Evolutionary Algorithms (EAs) to discover the minimum number of VLANs, and the associated memberships, necessary for network connectivity and security. In this approach, VLAN configurations are modeled as chromosomes, and a series of selection, recombination, and mutation operations is performed to find suitable configurations. Since the problem is a multi-objective search, a hybrid Pareto-based fitness measure is developed to rank candidate VLAN solutions, where better solutions have fewer VLANs and better security. Simulation results indicate this approach consistently finds concise and secure VLAN groups under various conditions, including an increasing number of interconnected devices and tighter limits on the number of allowed VLANs.
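
A minimal sketch of the Pareto-based ranking idea (the paper's hybrid measure adds more than this): each candidate VLAN configuration is scored on two minimized objectives, and ranked by how many other candidates dominate it.

    def dominates(a, b):
        """a, b: objective tuples, all objectives minimized."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_rank(population, objectives):
        """Rank 0 = non-dominated; higher rank = dominated by more solutions."""
        scores = [objectives(ind) for ind in population]
        return [sum(dominates(o, s) for o in scores) for s in scores]

    # Hypothetical encoding: a chromosome maps each device to a VLAN id, and
    # the second objective is a stand-in for counted security-policy violations.
    pop = [[0, 0, 1, 2], [0, 1, 1, 1], [0, 0, 0, 0]]
    obj = lambda ind: (len(set(ind)), sum(ind))
    print(pareto_rank(pop, obj))    # -> [2, 1, 0]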

Delivering diverse web server configuration in a moving target defense using evolutionary algorithms

Creating diverse service configurations that can be swiftly swapped is the essence of the so-called Moving Target Defense: presenting a different attack surface to attackers profiling a system for further advances. The technique can be applied to many different services and systems. In this paper we focus on using evolutionary algorithms to generate a host of web server configurations. Even though this is an easy target for evolutionary algorithms, we show that they can provide a supply of secure and diverse configurations with minimal expert intervention, even given the added challenge of the time it takes to evaluate a single system.

Securing the software defined perimeter with evolutionary co-optimization

We investigate the robustness of a network defense isolation technique called the Software Defined Perimeter (SDP). SDP limits security risk by isolating users' access based on user accounts, rules, and resources. We show how a competitive co-evolutionary framework can be used to evaluate different SDP configurations: it employs a simulation to test the strength of different SDP configurations against different attackers, and uses a co-evolutionary algorithm to search through the action spaces of SDP configurations and possible attackers. These results enable the comparison of different SDP configurations based on the objective of minimizing the number of potentially compromised high-value resources.

Exploring an artificial arms race for malware detection

The Android platform commands a dramatic majority of the mobile market, and this popularity makes it an appealing target for malicious actors. Android malware is especially dangerous because of the versatility in distribution and acquisition of software on the platform. In this paper, we continue to investigate evolutionary Android malware detection systems, implementing new features in an artificial arms race, and comparing different systems' performances on three new datasets. Our evaluations show that the artificial arms race based system achieves the overall best performance on these very challenging datasets.

WORKSHOP SESSION: Workshop interactive methods

Introducing counterexamples to a robotic agent using clicker training

Clicker training is an interactive approach commonly used by animal trainers to shape an animal's behavior by teaching it sub-tasks over time, gradually building up a new, complex behavior. Clicker training works by rewarding the animal's behavior through interactions between the animal and trainer called clicks. This paper adapts clicker training for use in an online, evolutionary robotic system, allowing novices to train a robot by rewarding it for exhibiting correct behaviors, without the need to understand the robot's capabilities.

Although clicker training traditionally relies on rewarding good behaviors, this paper introduces counterexamples as a way to encourage behavior exploration by guiding the robot away from incorrect actions. This allows trainers to provide more feedback in less time, and allows the evolutionary system to learn from incorrect behaviors as well as correct ones. In this paper, we compare the performance of robots trained with and without counterexamples in a noisy, real-world environment.

Results show two major developments: significant improvement in the behavior accuracy of robots trained with counterexamples compared to those trained with only positive examples and reduced average training times with counterexamples. This represents an important advance because accuracy and training time are two of the most important factors in online learning.

Characteristic analysis of auditory perception and aesthetics in sound composition optimization using revised interactive differential evolution

In this study, we analyse auditory perception and aesthetic characteristics of sound using a sound composition optimization system designed and implemented with a revised interactive differential evolution algorithm. We revise the population initialization and the crossover operation of conventional interactive differential evolution. The initial population of the sound composition system uses a set of well-designed sounds as individuals, rather than individuals randomly generated in a search space, and the crossover operation is revised to keep the search within a feasible range. The differential mechanism of interactive differential evolution provides feasible combinations of these well-designed sounds to create new types of sounds in the timbre domain, and the subjective evaluation guides the sound composition toward the preferences of a subject as far as possible. We invited five subjects to an experimental evaluation in which they created sounds according to their personal preferences, and we analysed their sound-aesthetic characteristics using principal component analysis and k-means clustering on the sound data selected during the optimization process. The characteristics of the selected sounds help to explain the optimization process and the convergence of the algorithm, and assist in better understanding the aesthetic judgement of each subject. We found that the proposed system and method can create sounds satisfying a subject's preference, and that the proposal is also an efficient way to analyse the subjects' auditory perception and sound-aesthetic characteristics.
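
The revision that keeps crossover within a feasible range is not detailed in the abstract; a plausible minimal reading is DE/rand/1 with binomial crossover whose trial vectors are clipped to the valid parameter box, sketched below over normalized sound-parameter vectors.

    import numpy as np

    def bounded_de_trial(x, a, b, c, F=0.5, CR=0.9, lo=0.0, hi=1.0):
        """DE/rand/1 mutant plus binomial crossover, clipped to [lo, hi] so
        the resulting sound-parameter vector stays in the feasible range."""
        rng = np.random.default_rng()
        mutant = np.clip(a + F * (b - c), lo, hi)
        mask = rng.random(len(x)) < CR
        mask[rng.integers(len(x))] = True   # keep at least one gene from the mutant
        return np.where(mask, mutant, x)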

WORKSHOP SESSION: Workshop evolutionary computation software systems

Operon C++: an efficient genetic programming framework for symbolic regression

Genetic Programming (GP) is a dynamic field of research where empirical testing plays an important role in validating new ideas and algorithms. The ability to easily prototype new algorithms by reusing key components and quickly obtain results is therefore important for the researcher.

In this paper we introduce Operon, a C++ GP framework focused on performance, modularity and usability, featuring an efficient linear tree encoding and a scalable concurrency model where each logical thread is responsible for generating a new individual. We model the GP evolutionary process around the concept of an offspring generator, a streaming operator that defines how new individuals are obtained. This approach allows different algorithmic variants to be expressed using high-level constructs within the same basic generational loop. The evaluation routine supports both scalar and dual numbers, making it possible to calculate model derivatives via automatic differentiation; this functionality allows seamless integration with gradient-based local search approaches.
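
To make the dual-number remark concrete, here is a minimal Python sketch of forward-mode automatic differentiation with dual numbers (illustrative of the idea only; Operon's actual implementation is in C++):

    from dataclasses import dataclass

    @dataclass
    class Dual:
        """Value of the form val + eps*der with eps**2 = 0, so arithmetic
        carries derivatives alongside values."""
        val: float
        der: float = 0.0

        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.der + o.der)
        __radd__ = __add__

        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
        __rmul__ = __mul__

    # d/dx of f(x) = x * (x + 3) at x = 2 is 2x + 3 = 7:
    x = Dual(2.0, 1.0)
    print((x * (x + 3)).der)   # -> 7.0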

We discuss the design choices behind the proposed framework and compare against two other popular GP frameworks, DEAP and HeuristicLab. We empirically show that Operon is competitive in terms of solution quality, while being able to generate results significantly faster.

Library for evolutionary algorithms in Python (LEAP)

There are generally three types of scientific software users: practitioners who solve problems using existing science software tools, researchers who explore new approaches by extending existing code, and educators who teach students scientific concepts. Python is a general-purpose programming language that is accessible to beginners, such as students, but it also has a rich scientific programming ecosystem that facilitates writing research software. Additionally, as high-performance computing (HPC) resources become more readily available, software support for parallel processing becomes more relevant to scientific software.

There are currently no Python-based evolutionary computation frameworks that adequately support all three types of scientific software users. Moreover, some only support synchronous concurrent fitness evaluation, which does not use HPC resources efficiently. We propose here a new Python-based EC framework that uses an established, generalized, unified approach to EA concepts to provide an easy-to-use toolkit for users wishing to solve a problem with an EA, for researchers implementing novel approaches, and for lowering the barrier to entry to EA concepts for students. Additionally, this toolkit provides a scalable, HPC-friendly asynchronous fitness evaluation implementation that has been vetted on hardware ranging from laptops to the world's fastest supercomputer, Summit.
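
The asynchronous evaluation idea can be sketched generically: rather than waiting at a generation barrier, each finished evaluation immediately triggers the submission of a new offspring. The sketch below is illustrative only (not the framework's actual API) and uses a toy OneMax fitness as a stand-in for an expensive simulation.

    import random
    from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

    def fitness(genome):
        return sum(genome)              # stand-in for an expensive evaluation

    def async_ea(pop_size=10, length=20, budget=200, workers=4):
        """Asynchronous steady-state evaluation: no worker ever idles while
        the rest of a generation finishes."""
        rng = random.Random(42)
        mutate = lambda g: [b ^ (rng.random() < 0.05) for b in g]
        pop, submitted, pending = [], 0, {}
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for _ in range(workers):
                g = [rng.randint(0, 1) for _ in range(length)]
                pending[pool.submit(fitness, g)] = g
                submitted += 1
            while pending:
                done, _ = wait(pending, return_when=FIRST_COMPLETED)
                for fut in done:
                    genome = pending.pop(fut)
                    pop.append((fut.result(), genome))
                    pop.sort(reverse=True)
                    del pop[pop_size:]                  # steady-state survival
                    if submitted < budget:
                        child = mutate(rng.choice(pop)[1])
                        pending[pool.submit(fitness, child)] = child
                        submitted += 1
        return pop[0]

    print(async_ea()[0])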

Integrating HeuristicLab with compilers and interpreters for non-functional code optimization

Modern compilers and interpreters provide code optimizations during compile and run time, simplifying the development process for the developer and resulting in optimized software. These optimizations are often based on formal proofs, or, in the case of stochastic optimizations, come with recovery paths as a backup. The Genetic Compiler Optimization Environment (GCE) takes a novel approach, using genetic improvement to optimize the run-time performance of code with stochastic machine learning techniques.

In this paper, we propose an architecture that integrates GCE, which interfaces directly with low-level interpreters and compilers, with HeuristicLab, a high-level optimization framework featuring a wide range of heuristic and evolutionary algorithms and a graphical user interface to control and monitor the machine learning process. The architecture supports parallel and distributed execution to compensate for the long run times of the machine learning process caused by abstract syntax tree (AST) transformations, and it does not depend on specific operating systems, programming languages, compilers or interpreters.

GA-lapagos, an open-source C framework including a Python-based system for data analysis

In this work, we introduce GA-lapagos, an open-source genetic algorithm framework written in C with three exemplar optimization problems, and accompanying Python-based data analysis scripts to extract and present results. We created this system to help researchers implement a fast GA-based solver for their problems, allowing them to leverage many ideas from the research literature. Additionally, we provide a number of compilation paths to parallel computational systems such as multi-core CPUs, GPUs, and FPGAs. The executable framework outputs results in comma-separated values (CSV) format, and a set of Python scripts reads the data and creates graphs.
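
The analysis side can be as simple as the following sketch, which reads a GA run exported as CSV and plots convergence; the column names are hypothetical stand-ins, not the framework's documented schema.

    import csv
    import matplotlib.pyplot as plt

    generations, best = [], []
    with open("ga_run.csv", newline="") as f:
        for row in csv.DictReader(f):
            generations.append(int(row["generation"]))       # hypothetical column
            best.append(float(row["best_fitness"]))          # hypothetical column

    plt.plot(generations, best)
    plt.xlabel("generation")
    plt.ylabel("best fitness")
    plt.savefig("convergence.png")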

Implementation matters, also in concurrent evolutionary algorithms

Concurrency in evolutionary algorithms allows researchers to leverage the performance of powerful multi-core desktop architectures by parallelizing tasks using a high-level interface. However, concurrency also introduces additional complexity at both the model and the implementation level. In this paper we describe how parallel execution monitoring tools that check the effective parallelism of the implementation can help work out some of these complexities at the implementation level, which in turn translate into improvements at the algorithmic level. The performance gains range from a higher number of evaluations per second, to better scaling on the running machine, up to superlinear scaling on an off-the-shelf platform. We show that a design based on communicating sequential processes, implemented in the language Raku, is the basis for these improvements.

Open source evolutionary structured optimization

Nevergrad is a derivative-free optimization platform gathering both a wide range of optimization methods and a wide range of test functions to evaluate them on. Some of these functions have very particular structures that standard methods are not able to exploit. The most recent feature of Nevergrad is the ability to conveniently define a search domain, so that many algorithms in Nevergrad can automatically rescale variables and/or take into account their possibly logarithmic or discrete nature, as well as any user-defined mutation or recombination operator. Since many problems are efficiently solved using specific operators, Nevergrad now enables using specific operators within generic algorithms: the underlying structure of the problem is user-defined information that several families of optimization methods can use and benefit from. We explain how this API can help analyze optimization methods and how to use it for the optimization of a structured photonics physical testbed, and show that this can produce significant improvements.
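
A hedged usage sketch against the circa-2020 Nevergrad API (details may differ between releases): declare a structured search domain mixing a log-scaled continuous variable and a discrete choice, then let a generic optimizer handle rescaling and mutation appropriately. The objective below is a toy stand-in.

    import nevergrad as ng

    param = ng.p.Instrumentation(
        lr=ng.p.Log(lower=1e-4, upper=1.0),         # declared logarithmic nature
        activation=ng.p.Choice(["relu", "tanh"]),   # declared discrete nature
    )

    def loss(lr, activation):                       # toy stand-in objective
        return lr * (2.0 if activation == "tanh" else 1.0)

    optimizer = ng.optimizers.OnePlusOne(parametrization=param, budget=50)
    recommendation = optimizer.minimize(loss)
    print(recommendation.kwargs)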

WORKSHOP SESSION: Workshop real-world applications of continuous and mixed-integer optimization

High-dimensional multi-level maximum variance threshold selection for image segmentation: a benchmark of recent population-based metaheuristic algorithms

Maximum variance threshold selection (MVTS) has been extensively used in image thresholding. However, conventional MVTS algorithms are time-consuming, in particular when the number of threshold values increases. One approach to mitigate this problem is to use population-based metaheuristic algorithms but, due to the curse of dimensionality, the efficacy of such algorithms may drop sharply in higher-dimensional search spaces. Since various population-based algorithms have been presented in the literature, in this paper we benchmark the performance of 10 such algorithms for MVTS, focussing in particular on more recent metaheuristics that have received significant attention. The algorithms we evaluate thus include, apart from the more established differential evolution (DE) and particle swarm optimisation (PSO), the artificial bee colony (ABC) algorithm, the imperialist competitive algorithm (ICA), the grey wolf optimiser (GWO), moth flame optimisation (MFO), the dragonfly algorithm (DA), the sine cosine algorithm (SCA), the multi-verse optimiser (MVO), and the salp swarm algorithm (SSA). We assess these algorithms on a set of benchmark images with regard to objective function value and structural similarity index (SSIM), and also perform a non-parametric statistical test, the Wilcoxon signed-rank test, to compare the algorithms statistically.
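
The MVTS objective itself is the classic Otsu between-class variance, generalized to multiple thresholds; a minimal sketch of the fitness each metaheuristic maximizes:

    import numpy as np

    def between_class_variance(hist, thresholds):
        """Weighted between-class variance of the classes induced by the
        thresholds (to be maximized). hist: normalized grey-level histogram."""
        levels = np.arange(len(hist))
        mu_total = (levels * hist).sum()
        bounds = [0] + sorted(thresholds) + [len(hist)]
        variance = 0.0
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            w = hist[lo:hi].sum()
            if w <= 0:
                continue
            mu = (levels[lo:hi] * hist[lo:hi]).sum() / w
            variance += w * (mu - mu_total) ** 2
        return variance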

Uncertainty quantification methods for evolutionary optimization under uncertainty

In this paper, we discuss the role of uncertainty quantification (UQ) in assisting optimization under uncertainty. UQ plays a significant role in quantifying the robustness of solutions, helping the optimizer achieve robust optimum solutions. In this respect, the scientific discipline of UQ addresses various theoretical and practical aspects of uncertainty, including, to name a few, representations of uncertainty and efficient computation of the output uncertainty. However, the UQ community and the evolutionary computation community rarely interact with each other, despite the potential of utilizing advances in UQ for research in evolutionary computation. To that end, this paper serves as a short introduction to the science of UQ for the evolutionary computation community. We discuss several aspects of UQ for robust optimization, such as aleatory and epistemic uncertainty and the formulation of objective functions when uncertainties are considered. A tutorial on an aerodynamic design problem is also given to illustrate the use of UQ in a real-world problem.

WORKSHOP SESSION: Workshop on surrogate-assisted evolutionary optimisation

What do you mean?: the role of the mean function in Bayesian optimisation

Bayesian optimisation is a popular approach for optimising expensive black-box functions. The next location to be evaluated is selected by maximising an acquisition function that balances exploitation and exploration. Gaussian processes, the surrogate models of choice in Bayesian optimisation, are often used with a constant prior mean function equal to the arithmetic mean of the observed function values. We show that the rate of convergence can depend sensitively on the choice of mean function. We empirically investigate 8 mean functions (constant functions equal to the arithmetic mean, minimum, median and maximum of the observed function evaluations, linear and quadratic polynomials, random forests, and RBF networks), using 10 synthetic test problems and two real-world problems, and using the Expected Improvement and Upper Confidence Bound acquisition functions.

We find that for design dimensions ≥ 5, using a constant mean function equal to the worst observed quality value is consistently the best choice on the synthetic problems considered. We argue that this worst-observed-quality mean promotes exploitation, leading to more rapid convergence. However, for the real-world tasks, more complex mean functions capable of modelling the fitness landscape may be effective, although there is no clearly optimal choice.
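
The role of the prior mean is easiest to see in the GP posterior mean formula, mu(x*) = m(x*) + k(x*, X) K^{-1} (y - m(X)): far from the data the prediction reverts to m, so a worst-observed-value mean makes unexplored regions look unpromising and biases the search toward exploitation. A minimal numpy sketch:

    import numpy as np

    def gp_posterior_mean(X, y, Xstar, kernel, mean_fn):
        """GP posterior mean with an explicit prior mean function m(x)."""
        K = kernel(X, X) + 1e-8 * np.eye(len(X))        # jitter for stability
        k_star = kernel(Xstar, X)
        return mean_fn(Xstar) + k_star @ np.linalg.solve(K, y - mean_fn(X))

    rbf = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2)
    X = np.array([0.0, 1.0, 2.0]); y = np.array([3.0, 1.0, 2.0])
    worst = lambda Z: np.full(len(Z), y.max())   # worst observed value (minimization)
    print(gp_posterior_mean(X, y, np.array([1.5, 10.0]), rbf, worst))
    # the prediction at 10.0 (far from the data) reverts to the worst value 3.0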

A surrogate-assisted GA enabling high-throughput ML by optimal feature and discretization selection

Novel lookup-based classification approaches allow machine learning (ML) to be performed at extremely high classification rates for suitable low-dimensional classification problems. A central aspect of such approaches is the crucial importance placed on the optimal selection of features and discretized feature representations. In this work we propose and study a hybrid genetic algorithm (hGAm) approach to this optimization problem. Here, the fitness evaluation function is expensive, as it entails training an ML classifier with the proposed set of features and representations and then evaluating the resulting classifier. We have therefore devised a surrogate problem by casting the feature selection and representation problem as a combinatorial optimization problem in the form of a multiple-choice quadratic knapsack problem (MCQKP). The orders-of-magnitude faster evaluation of the surrogate problem allows a comprehensive hGAm performance evaluation to be performed. The results show that a suitable trade-off exists at around 5000 fitness evaluations, and they also provide a characterization of the parameter behaviors as input to future extensions.

Bayesian methods for multi-objective optimization of a supersonic wing planform

The global design of a supersonic wing planform is demonstrated using gradient-based and gradient-free surrogate-assisted Bayesian optimization utilizing expected hypervolume improvement as the optimization metric. The planform is parameterized using 6 and 11 variables. Representative of a simple supersonic business-jet conceptual-level design, the wing-body is optimized for low inviscid drag and low A-weighted ground-level noise. The speed of convergence to the non-dominated front and the suitability of the two optimization implementations are compared, and the advantages over using a genetic algorithm directly are observed. A novel method for initial candidate sample generation, effective non-dominated front sampling, is proposed to further accelerate the convergence of samples towards Pareto solutions.

WORKSHOP SESSION: Workshop evolutionary algorithms for problems with uncertainty

Solution approaches for the dynamic stacking problem

In this paper we describe a dynamic stacking problem as it arises, in a more complex form, in the steel industry. We describe a simulation model and the simulated processes it implements. The model covers a gantry crane that performs relocations of blocks among three types of stacks. In the real world, crane operators and dispatchers solve complex stacking problems, aiming to minimize relocation effort while adhering to many constraints, meeting various time windows, and satisfying quality demands. While our model does not include all the real-world constraints, the challenges and benefits of solving this problem as a dynamic optimization problem, in contrast to a static formulation, become apparent. We adapt an existing solution approach from a static formulation that is based on the block relocation problem and compare it with a hand-written rule-based approach. Furthermore, we devise a hybrid approach in the form of an open-ended evolutionary algorithm.

WORKSHOP SESSION: Workshop evolutionary computation for permutation problems

Tabu search and iterated greedy for a flow shop scheduling problem with worker assignment

In this work we address a combinatorial optimization problem composed of two sub-problems: the assignment of heterogeneous workers to machines and the flow shop scheduling problem. In the studied problem, m heterogeneous workers have to be assigned to operate m machines, and n jobs have to be scheduled on each of the m machines ordered in series. The objective is to find the best worker assignment and the corresponding job schedule that minimize the maximum completion time (makespan). Given the difficulty of solving this problem exactly, we propose a new heuristic, based on Tabu Search (TS) and the Iterated Greedy (IG) algorithm, to obtain near-optimal solutions. The TS component, which has good local search ability, defines the assignment of workers to machines, while the IG algorithm obtains a schedule of the jobs on the machines. The experimental results show that the proposed heuristic is a state-of-the-art method for the problem under study, as seen through a comparison of the obtained results with the best heuristic available in the literature.
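
The IG component can be sketched generically as a destruction-reconstruction loop (the paper nests it inside the tabu search over worker-to-machine assignments; the details below are illustrative, not the authors' exact procedure):

    import random

    def iterated_greedy(schedule, makespan, d=3, iters=200, seed=1):
        """Remove d random jobs, greedily reinsert each at its best position,
        and keep the result if the makespan improves."""
        rng = random.Random(seed)
        best = schedule[:]
        for _ in range(iters):
            partial = best[:]
            removed = [partial.pop(rng.randrange(len(partial))) for _ in range(d)]
            for job in removed:     # greedy reconstruction
                candidates = [partial[:i] + [job] + partial[i:]
                              for i in range(len(partial) + 1)]
                partial = min(candidates, key=makespan)
            if makespan(partial) < makespan(best):
                best = partial
        return best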

The (1 + (λ, λ)) genetic algorithm for permutations

The (1 + (λ, λ)) genetic algorithm is a prime example of an evolutionary algorithm developed on the basis of theoretical insights. This algorithm uses crossover, and it was shown to asymptotically outperform all mutation-based evolutionary algorithms even on simple problems like OneMax. It was subsequently studied on a number of other problems, but all of these were pseudo-Boolean.

We aim to improve this situation by proposing an adaptation of the (1 + (λ, λ)) genetic algorithm to permutation-based problems. Such an adaptation is required because permutations differ noticeably from bit strings in some key aspects, such as the number of possible mutations and their mutual dependence. We also present the first runtime analysis of this algorithm on a permutation-based problem called Ham, whose properties resemble those of OneMax. On this problem, where simple mutation-based algorithms have a running time of Θ(n² log n) for problem size n, the (1 + (λ, λ)) genetic algorithm finds the optimum in O(n²) fitness queries. We augment this analysis with experiments, which show that the algorithm is also fast in practice.
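
A hedged sketch of the (1 + (λ, λ)) structure transferred to permutations, mirroring the bit-string scheme rather than reproducing the paper's exact operator design: the mutation phase samples λ mutants, each applying k random transpositions, and the crossover phase re-applies each transposition of the best mutant to the parent with probability 1/λ.

    import random

    def opll_perm(n, fitness, lam=4, iters=3000, seed=0):
        rng = random.Random(seed)
        parent = list(range(n)); rng.shuffle(parent)
        for _ in range(iters):
            k = 1 + rng.randrange(lam)                 # mutation strength
            mutants = []
            for _ in range(lam):                       # mutation phase
                moves = [(rng.randrange(n), rng.randrange(n)) for _ in range(k)]
                cand = parent[:]
                for i, j in moves:
                    cand[i], cand[j] = cand[j], cand[i]
                mutants.append((cand, moves))
            _, best_moves = max(mutants, key=lambda cm: fitness(cm[0]))
            offspring = parent[:]                      # crossover phase
            for i, j in best_moves:
                if rng.random() < 1.0 / lam:
                    offspring[i], offspring[j] = offspring[j], offspring[i]
            if fitness(offspring) >= fitness(parent):
                parent = offspring
        return parent

    # Ham-like fitness: positions agreeing with the identity permutation.
    ham = lambda p: sum(i == v for i, v in enumerate(p))
    print(ham(opll_perm(20, ham)))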

An experimental evaluation of the algebraic differential evolution algorithm on the single row facility layout problem

The Algebraic Differential Evolution for Permutations (ADEP) has recently been proposed as an effective evolutionary algorithm for permutation-based optimization problems. ADEP is built upon a framework that exploits the rich algebraic structure of the permutation search space. In this paper we further explore the abilities of ADEP by presenting an implementation for the Single Row Facility Layout Problem (SRFLP): a permutation problem with interesting real-world applications, ranging from designing the layout of machines in certain manufacturing systems to optimally arranging rooms in hospitals. An experimental investigation was conducted on a set of commonly adopted benchmarks, and different settings of ADEP were compared with each other and with other methods from the literature. Interestingly, the experimental results confirm the validity of ADEP by showing its competitiveness with respect to the state-of-the-art results for the SRFLP.

On the solvability of routing multiple point-to-point paths in Manhattan meshes

The solvability of multiple path routing problems in 3D Manhattan meshes is greatly influenced by the order in which the paths are processed. Unsolvable instances can readily be made solvable simply by changing the sequence of paths to be routed, and the inverse also holds. Our results on square meshes with 100 randomly placed terminals show that the routability of an instance can be accurately guessed a priori from its characteristics alone. Furthermore, the routability attainable through random sequence changes can also be accurately guessed, and a tight scaling relation suggests these results hold for a broad range of instance sizes. In our setting, the number of routable paths can increase by as much as 73%, just by changing the order of processing.

dMFEA-II: an adaptive multifactorial evolutionary algorithm for permutation-based discrete optimization problems

The emerging research paradigm coined multitasking optimization aims to solve multiple optimization tasks concurrently by means of a single search process. For this purpose, the exploitation of complementarities among the tasks to be solved is crucial, which is often achieved via the transfer of genetic material, thereby forging the Transfer Optimization field. In this context, Evolutionary Multitasking addresses this paradigm by resorting to concepts from Evolutionary Computation. Within this specific branch, approaches such as the Multifactorial Evolutionary Algorithm (MFEA) have lately gained notable momentum for tackling multiple optimization tasks. This work contributes to this trend by proposing the first adaptation of the recently introduced Multifactorial Evolutionary Algorithm II (MFEA-II) to permutation-based discrete optimization environments. In this adaptation, some concepts, such as parent-centric interactions, cannot be directly applied to discrete search spaces. In this paper we entirely reformulate such concepts, making them suited to permutation-based search spaces without losing the inherent benefits of MFEA-II. The performance of the proposed solver has been assessed over 5 different multitasking setups, composed of 8 datasets of the well-known Traveling Salesman Problem (TSP) and Capacitated Vehicle Routing Problem (CVRP). The obtained results and their comparison to those of the discrete version of the MFEA confirm the good performance of the developed dMFEA-II, and concur with the insights drawn in previous studies for continuous optimization.

Monte carlo tree search on perfect rectangle packing problem instances

We explore the possibilities of Monte Carlo tree search (MCTS), immensely successful in games such as Go and Chess, for solving the perfect rectangle packing problem. Experiments are done on two differently generated problem sets of 1,000 instances each, and we explore six different rollout numbers and two different action-selection strategies for MCTS. We compare the algorithm's performance to an exact depth-first algorithm equipped with efficient pruning techniques. By rating the number of solutions found against the total number of tiles placed, we define a 'computationally economic tradeoff'. Different rollout numbers and strategies lead to different results, both within and between the two problem sets. We discuss these results in the context of other heuristic algorithms for this problem and closely related areas.
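
The search skeleton underlying such a study, namely UCT selection, expansion, random rollout, and backup, can be sketched generically; the packing-specific state (placements of the remaining rectangles) and the two action-selection strategies compared in the paper are abstracted behind a small state interface.

    import math, random

    class Node:
        """UCT tree node; 'state' must offer actions(), apply(action),
        is_terminal() and reward(), with hashable actions."""
        def __init__(self, state, parent=None):
            self.state, self.parent = state, parent
            self.children, self.visits, self.value = {}, 0, 0.0

    def uct_select(node, c=1.4):
        return max(node.children.values(),
                   key=lambda ch: ch.value / ch.visits
                   + c * math.sqrt(math.log(node.visits) / ch.visits))

    def mcts(root, rollouts=1000):
        for _ in range(rollouts):
            node = root
            while not node.state.is_terminal():            # selection / expansion
                untried = [a for a in node.state.actions()
                           if a not in node.children]
                if untried:
                    a = random.choice(untried)
                    node.children[a] = node = Node(node.state.apply(a), node)
                    break
                node = uct_select(node)
            state = node.state                              # random rollout
            while not state.is_terminal():
                state = state.apply(random.choice(state.actions()))
            reward = state.reward()                         # backup
            while node is not None:
                node.visits += 1; node.value += reward; node = node.parent
        return max(root.children, key=lambda a: root.children[a].visits)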

Gradient search in the space of permutations: an application for the linear ordering problem

Gradient search is a classical technique for optimizing differentiable functions that has recently gained much relevance due to its application in neural network training. Despite its popularity, the application of gradient search has been limited to continuous optimization, and its usage in the combinatorial case is confined to a few works, all of which tackle the binary search space. In this paper, we present a new approach for applying gradient search to the space of permutations. The idea consists of optimizing the expected objective value of a random variable defined over permutations. This random variable is distributed according to the Plackett-Luce model, and a gradient search over its continuous parameters is performed. Experiments conducted on a benchmark of the linear ordering problem confirm that the gradient search performs better than its counterpart estimation of distribution algorithm, the Plackett-Luce EDA. Moreover, the results reveal that the scalability of the gradient search is better than that of the PL-EDA.
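
The core computation can be sketched as a score-function (REINFORCE-style) gradient estimate of the expected objective under the Plackett-Luce model; variance-reduction baselines and the paper's exact update are omitted.

    import numpy as np

    def sample_pl(theta, rng):
        """Sample a permutation from the Plackett-Luce model."""
        remaining, perm = list(range(len(theta))), []
        while remaining:
            w = np.exp(theta[remaining]); w /= w.sum()
            perm.append(remaining.pop(rng.choice(len(remaining), p=w)))
        return perm

    def grad_log_pl(theta, perm):
        """Gradient of log P(perm | theta) under the Plackett-Luce model."""
        g, remaining = np.zeros_like(theta), list(perm)
        for item in perm:
            w = np.exp(theta[remaining]); w /= w.sum()
            g[item] += 1.0
            g[remaining] -= w
            remaining.remove(item)
        return g

    def pl_gradient_step(theta, objective, rng, samples=64, lr=0.1):
        grads = np.zeros_like(theta)
        for _ in range(samples):
            perm = sample_pl(theta, rng)
            grads += objective(perm) * grad_log_pl(theta, perm)
        return theta + lr * grads / samples     # ascent on E[objective]

    rng = np.random.default_rng(0)
    theta = np.zeros(5)
    obj = lambda perm: sum(i == v for i, v in enumerate(perm))  # toy objective
    for _ in range(100):
        theta = pl_gradient_step(theta, obj, rng)
    print(sample_pl(theta, rng))    # tends toward the identity permutation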

WORKSHOP SESSION: Workshop parallel and distributed evolutionary inspired methods

Exploiting multi-objective parallel extremal optimization features in dynamic load balancing

This paper focuses on a methodology for using parallel multi-objective Extremal Optimization in load balancing algorithms for distributed systems. In the proposed approach, parallel multi-objective Extremal Optimization algorithms define task migrations as a means for processor load balancing. The studied algorithms use three objectives relevant to distributed processor load balancing as global fitness functions: one dealing with the computational load imbalance in the execution of application tasks on processors, one concerned with the communication between tasks placed on distributed computing nodes, and one concerned with the number of task migrations. The internal properties of the proposed multi-objective Extremal Optimization algorithms are discussed. A number of such algorithms, with different compositions of global and local fitness functions, are presented and verified by simulation experiments. The comparative experiments concerned the execution of distributed programs represented as macro data flow graphs, and their parallel execution speed-up is discussed for different best-solution search methods, namely the compromise, lexicographic, and hybrid approaches. The obtained results show that the parallel multi-objective Extremal Optimization algorithms used in load balancing visibly improve the quality of execution of the tested program graphs.

cMOGA/D: a novel cellular GA based on decomposition to tackle constrained multiobjective problems

Parallel schemes in Evolutionary Algorithms (EAs), and in particular fine-grained or cellular Genetic Algorithms (cGAs), for tackling constrained multiobjective optimization problems (CMOPs) have been only narrowly investigated. The structural properties of cellular GAs balance the exploration and exploitation stages during the search, avoiding stagnation and possible premature convergence. This article proposes cMOGA/D, a novel algorithmic approach that takes advantage of the structural properties of cGAs in combination with core concepts of the well-established MOEA/D technique, namely targeting multiobjective problems via multiple subproblems obtained by Tchebycheff decomposition and its neighbourhood-based conceptual basis, combined with a CMOP constraint handling technique (CHT) called push and pull search (PPS). A thorough empirical assessment shows improved cMOGA/D performance in comparison to previous state-of-the-art approaches.
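
For reference, the Tchebycheff decomposition mentioned above scalarizes an m-objective problem into one subproblem per weight vector \lambda, using the ideal point z^*:

    g^{tch}(x \mid \lambda, z^*) \;=\; \max_{1 \le i \le m} \; \lambda_i \, \bigl| f_i(x) - z_i^* \bigr|

Each cell of the grid then minimizes its own g^{tch}, cooperating with its neighbouring subproblems.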

An evolutionary approach for constructing multi-stage classifiers

Multi-stage classification is a supervised learning approach that distributes a set of features, each with an associated importance and cost of generation, across a series of classifiers (stages). Inputs are processed in a pipeline fashion through the stages, each of which utilizes only a subset of the complete feature set, until a confident classification decision can be made or until all features and stages have been exhausted. This design benefits from processing inputs in parallel and ensures that labels are assigned using only the necessary features, but the number and composition of the stages used by the model can have a significant impact on overall performance. Unfortunately, identifying these critical design aspects becomes more difficult as the number of features and possible stages increases, often making brute-force search or human intuition impractical.

This paper introduces a novel evolutionary approach for discovering multi-stage configurations that provide high classification performance and fast processing times. In this approach, multi-stage classifier configurations are modeled as chromosomes, and a series of selection, recombination, and mutation operations is iteratively performed to find better configurations. Since the problem has multiple objectives, a Pareto-based fitness measure is developed to rank chromosomes, where better chromosomes have high accuracy, high conclusiveness, and fast processing times. Experimental results indicate this approach consistently finds accurate and fast multi-stage classifier configurations under various conditions, including an increasing number of features and different feature synthesis time distributions.

A parallel two-stage genetic algorithm for route planning

Robotic path planning is an area of increasing importance given the interest in developing autonomous vehicles of all types and sizes. Consider, for example, a warehouse robot for a large internet commerce company. The robot's task is to retrieve a large number of items from warehouse shelves and deliver them to the packing area. Because efficiency is important, it is desirable for the robot to travel as short a distance as possible in completing its task. Though related to path planning, this is a distinct problem, as there are multiple, perhaps many, destinations; we call this problem route planning. We present a parallel genetic algorithm that runs in two stages to solve the route planning problem. The two-stage approach significantly improves results over a similar single-stage parallel genetic algorithm for this problem.

ExaEvo: topological optimization and scalability of evolutionary algorithms

As the availability of high-performance computing systems rises to unprecedented levels in the exascale era, vastly scalable parallelism is now an accessible, viable option for a wide range of scientific fields and algorithmic implementations. This abundance of increasingly complex parallel and distributed systems necessitates new approaches to software parallelism and scalability for finding high-quality solutions to complex problems at previously unattainable rates. This research investigates the implicit and explicit use of MPI topologies applied to parallel evolutionary algorithms (EAs) in order to dynamically arrange processes and communication channels at runtime for optimal performance, adapting to the underlying system architecture and state. An island-model EA performing numerical optimization was chosen as the benchmark for evaluating these techniques. Initial experiments using a random search of the topology space are promising, with some cases resulting in both faster execution and improved fitness. This indicates that a multi-objective evolutionary search could provide near-optimal performance with regard to both communication speed and solution quality, resulting in a novel, performance-enhancing meta-EA approach in which a top-level EA evolves the MPI topology on which the lower-level parallel EA executes.

WORKSHOP SESSION: Workshop international workshop on learning classifier systems

An adaption mechanism for the error threshold of XCSF

The Learning Classifier System (LCS) is a class of rule-based learning algorithms that combine reinforcement learning (RL) and genetic algorithm (GA) techniques to evolve a population of classifiers. The most prominent example is XCS, for which many variants have been proposed in the past, including XCSF for function approximation. Although XCSF is a promising candidate for supporting autonomy in computing systems, it must still undergo parameter optimization prior to deployment. However, if the later deployment environment is unknown, a priori parameter optimization is not possible, raising the need for XCSF to determine suitable parameter values automatically at run-time. One of the most important parameters is the error threshold, which can be interpreted as a target bound on the approximation error and has to be set according to the approximated function. To enable XCSF to reliably approximate functions unknown at design-time, we propose an error threshold that is adapted at run-time based on the currently achieved approximation error. Our experimental evaluation shows that the adaption mechanism automatically adjusts the error threshold to a suitable value: on six different target functions, XCSF with an adaptive error threshold leads to a worst-case approximation error that is 11.5% higher than the error achieved with the best-suited static threshold.
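
The abstract does not give the exact update rule, so the following is a purely hypothetical sketch of the idea of adapting the error threshold epsilon_0 toward the currently achieved approximation error:

    def adapt_error_threshold(eps0, current_error, target_ratio=0.9, rate=0.05):
        """Hypothetical run-time adaptation: smooth eps0 toward a fixed
        fraction of the currently achieved approximation error (the paper's
        actual mechanism is not reproduced here)."""
        target = target_ratio * current_error
        return eps0 + rate * (target - eps0)   # exponential smoothing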

Investigating exploration techniques for ACS in discretized real-valued environments

One way of dealing with real-valued input signals is to discretize them, which might influence the process of learning the environmental model by the ACS2 agent. A more sophisticated method of selecting actions can be applied to increase the speed of gaining knowledge by determining the most valuable regions of the input space. This paper compares four ACS2 biasing exploration techniques applied across four real-valued environments. A new class of benchmark problem (the inverted pendulum) and an agent modification, Optimistic Initial Quality (OIQ), are introduced for ACS2, both with promising outcomes.

PEPACS: integrating probability-enhanced predictions to ACS2

Many real-world environments are non-deterministic, which presents learning challenges for Anticipatory Learning Classifier Systems (ALCS). Maze problems have been widely used in the ALCS literature, as they provide such environments. However, few ALCS can run efficiently in such mazes, having trouble building a complete and accurate internal representation of their environment. ALCS can implement Probability-Enhanced Predictions (PEP) in their classifiers. PEP permits ALCS to handle non-deterministic environments, but it has never been tried within ACS2 (Anticipatory Classifier System 2), one of the most advanced ALCS, nor tested on a thorough maze benchmark. This paper introduces PEPACS, an ALCS that integrates PEP into ACS2. PEPACS includes an aliasing detection algorithm that lets the system build PEP-enhanced classifiers, and the learning process was adjusted to avoid over-generalization issues. The presented results show that PEP with ACS2 is a suitable approach to handling at least one kind of non-determinism (namely, the perceptual aliasing issue) without impairing ACS2's functionality, and that PEP allows the system to build a complete and accurate internal representation of its environment.

An overview of LCS research from IWLCS 2019 to 2020

The International Workshop on Learning Classifier Systems (IWLCS) is a yearly workshop at the GECCO conference where new concepts and results all around Learning Classifier Systems (LCSs) are presented and discussed. One recurring part of the workshop agenda is a presentation that reviews and summarizes the advances made in the field over the last year; this is intended to provide an easy entry point to the most recent progress and achievements. As organizers of the upcoming IWLCS, we take the underlying idea a bit further and accompany the presentation with the present workshop paper, hoping that this recurs as well. We give an overview of all LCS-related publications since the last IWLCS, that is, from 13th July 2019 until 10th March 2020. The 33 publications we review are clustered into six overall topics: formal theoretic advances, approaches that combine LCSs with neural networks, contributions to LCS-based reinforcement learning, improvements to existing LCSs, new LCS architectures, and applications of LCSs.

Generic approaches for parallel rule matching in learning classifier systems

The XCS classifier system (XCS) constitutes the most deeply investigated evolutionary rule-based machine learning algorithm. Due to its online learning nature, it is not as computationally intense as deep learning approaches. However, to increase XCS's real-time capabilities, it is worthwhile to improve the runtime of its inference step. An iteration through XCS's main loop involves various steps, among which matching usually constitutes the computationally most expensive part. Various improvements have been proposed, e.g., using specialized hardware or vector instructions. However, existing approaches require either certain hardware coupled with a specific instruction set, or a dedicated multiprocessing programming language. In this context, parallel programs are often evaluated in terms of their speed-up, but qualitative criteria such as flexibility and hardware independence are also relevant. We therefore present a generic parallel approach for matching that relies only on the minimal assumption of a multicore CPU with standard synchronization mechanisms, e.g., locks. These assumptions are satisfied by almost all modern computing systems and high-level programming languages. We show that our approaches not only decrease the runtime, but are also reusable for most learning classifier systems and computing systems.
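
Under exactly these minimal assumptions, match-set formation parallelizes by chunking the population; a Python sketch (thread-based, and illustrative only; actual speed-ups depend on the runtime, e.g. on whether matching releases the interpreter lock):

    from concurrent.futures import ThreadPoolExecutor

    def matches(rule, situation):
        """Ternary condition over {'0', '1', '#'}; '#' matches anything."""
        return all(c == '#' or c == s for c, s in zip(rule, situation))

    def parallel_match(population, situation, workers=4):
        """Chunk the population across threads and filter each chunk
        independently to form the match set."""
        size = max(1, len(population) // workers)
        chunks = [population[i:i + size]
                  for i in range(0, len(population), size)]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            parts = pool.map(
                lambda part: [r for r in part if matches(r, situation)], chunks)
        return [r for part in parts for r in part]

    print(parallel_match(['1#0', '0##', '11#'], '110'))   # -> ['1#0', '11#']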

XCS as a reinforcement learning approach to automatic test case prioritization

Testing is a crucial part of the development of new products. With the rise of test automation methods, companies are relying on an ever higher number of tests. Sometimes it is not feasible to run all tests, and the goal is to determine which tests are crucial and which are less important. This prioritization problem has only recently come into the focus of reinforcement learning: a neural network combined with prioritized experience replay (ER) was used to identify critical tests. We are the first to apply XCS classifier systems (XCS) to this use case, and we reveal that XCS is not only suitable for this problem, but can also be superior to the aforementioned neural network and lead to more stable results. In this work, we adapt XCS's learning mechanism to the task by introducing a batch update based on Monte Carlo control. Further, we investigate whether prioritized ER has the same positive effects on XCS as on the neural network for this test prioritization problem. Our experiments show that, in general, this is not the case for XCS.

Learning classifier systems: appreciating the lateralized approach

Biological nervous systems can learn knowledge from simple and small-scale problems and then apply it to resolve more complex and large-scale problems in similar and related domains. However, rudimentary attempts to apply this kind of transfer learning in artificial intelligence systems have struggled, perhaps due to the homogeneous nature of their knowledge representation. It is believed that the lateral asymmetry of the brain, enabling modular learning at different levels of abstraction, is what facilitates transfer between tasks. Learning classifier systems (LCSs) are a rule-based evolutionary computation technique that automatically clusters inputs into environmental niches, which makes them an ideal candidate for implementing lateralization.

Recently, LCS-based systems have applied lateralization and modular learning at different levels of abstraction to solve complex problems in the Boolean, computer vision, and navigation domains. This paper brings these three separate implementations together for the first time in order to explain the methodology of lateralization and appreciate its benefits. The experimental results demonstrate that the LCS-based lateralized systems outperform state-of-the-art homogeneous systems in solving complex problems. The advances arise from the ability to consider input at both the constituent level and the holistic level simultaneously, such that the most appropriate viewpoint controls the system.

A scikit-learn compatible learning classifier system

In parallel to genetic algorithms and genetic programming, the evolutionary computation (EC) sub-field of learning classifier systems (LCSs) has been an active area of research, development, and application. LCSs are best known for their ability to flexibly adapt to perform well on a wide range of problems, and to outperform other machine learning (ML) algorithms on problems characterized as complex and heterogeneous. However, to date LCSs remain poorly recognized and are thus rarely considered for comparison to other ML approaches, in or outside of the EC community. One likely reason is the general lack of easy-to-use LCS implementations to facilitate comparisons with other well-known ML approaches. The Python-based scikit-learn library has become a very popular way to implement, utilize, and compare ML algorithms in a simple, uniform manner. In this work, we develop and evaluate the first LCS scikit-learn package, called scikit-eLCS. The scikit-eLCS algorithm is a modern Michigan-style supervised-learning LCS descended from eLCS, UCS, and XCS. We demonstrate the efficacy and capabilities of this package on benchmark n-bit multiplexer problems. We expect scikit-eLCS to serve as an algorithmic benchmark facilitating future ML comparisons, as well as a blueprint for the implementation of other scikit-learn compatible LCS algorithms.
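
Because the package is scikit-learn compatible, usage should follow the standard fit/predict pattern; the import path and class name in this sketch are assumptions about the package layout, not taken from the paper.

    import numpy as np
    from skeLCS import eLCS     # assumed module and estimator name

    # 6-bit multiplexer: the first 2 bits address one of the remaining 4 bits.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(500, 6))
    y = X[np.arange(500), 2 + 2 * X[:, 0] + X[:, 1]]

    model = eLCS()              # default hyperparameters; constructor arguments assumed
    model.fit(X, y)
    print((model.predict(X) == y).mean())   # training accuracy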

WORKSHOP SESSION: Workshop neuroevolution at work

Multi-objective evolutionary GAN

The Generative Adversarial Network (GAN) is a generative model designed to imitate real data distributions. The original GAN algorithm has been found to achieve excellent results on the image generation task, but it suffers from problems such as instability and mode collapse. To tackle these problems, many variants of the original model have been proposed; one of them is the Evolutionary GAN (EGAN), in which a population of generators is evolved.

Inspired by EGAN, we propose here a new algorithm, called Multi-Objective Evolutionary Generative Adversarial Network (MOEGAN), which reformulates the problem of training GANs as a multi-objective optimization problem. Thus, Pareto dominance is used to select the best solutions, evaluated using diversity and quality fitness functions.

Preliminary experimental results on synthetic datasets show how the proposed approach can achieve better results than EGAN.

Learning to walk - reward relevance within an enhanced neuroevolution approach

Recent advances in human motion sensing technologies and machine learning have enhanced the potential of AI to simulate artificial agents exhibiting human-like movements. Human movements are typically explored via experimental recordings, with the aim of establishing relationships between neural and mechanical activities. A recent trend in AI research shows that AI algorithms work remarkably well when combined with sufficient computing resources and data. One common criticism is that these methods are gradient-based and involve gradient approximation, thus suffering from the well-known vanishing-gradient problem.

In this paper, the goal is to build an ANN-based controller that enables an agent to walk in a human-like way. In particular, the proposed methodology is a new neuroevolution approach built on NeuroEvolution of Augmenting Topologies (NEAT). The original algorithm has been endowed with a different type of selection-reproduction mechanism and a modified management of the population, with the aim of improving performance and reducing computational effort. Experiments have demonstrated the effectiveness of the proposed approach and have highlighted the interdependence among three key aspects: the reward framework, the evolutionary algorithm chosen, and the hyper-parameter configuration. Consequently, none of these aspects can be ignored, and balancing them is crucial for achieving suitable results and good performance.

Mutational puissance assisted neuroevolution

Artificial Neural Networks (ANNs) are often trained using back-propagation, wherein the interconnection weights are updated based on the error gradient. ANNs have also been evolved using neuroevolutionary techniques, in which the weights are updated randomly by Gaussian mutation: either all weights, or a random subset, are modified, and the performance of the resulting ANN is determined. This paper describes a novel mutation strategy that enhances the performance of such random selection by attaching a dedicated mutational puissance to every weight within the ANN. These puissances are either replenished or consumed based on whether the performance of the ANN improved or deteriorated. Mutational puissance thus tends to preserve, in some form, information on how well the associated weight contributed to the performance of the ANN. We have embedded this technique in a robot to enable it to evolve ANN-based controllers for specific tasks on-the-fly. The experiments and results portrayed in the paper conclusively endorse the efficacy of guiding mutation using the concept of puissance.
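
A hedged sketch of the idea (the paper's exact replenish/consume rule is not reproduced): per-weight puissances scale the Gaussian perturbation, and are increased after an improving step and decreased after a deteriorating one.

    import numpy as np

    def puissance_mutate(weights, puissance, fitness, rng, sigma=0.1, delta=0.02):
        """Weights with larger puissance mutate more strongly; puissance is
        replenished on improvement and consumed on deterioration."""
        probe = weights + rng.normal(0.0, sigma, weights.shape) * puissance
        if fitness(probe) > fitness(weights):    # improvement: keep and replenish
            return probe, np.clip(puissance + delta, 0.0, 1.0)
        return weights, np.clip(puissance - delta, 0.0, 1.0)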

GEVO-ML: a proposal for optimizing ML code with evolutionary computation

Parallel accelerators, such as GPUs, are a key enabler of large-scale Machine Learning (ML) applications. However, programmers often lack detailed knowledge of the underlying architecture and fail to fully leverage their computational power. This paper proposes GEVO-ML, a tool for automatically discovering optimization opportunities and tuning the performance of ML kernels. GEVO-ML extends earlier work on GEVO (Gpu optimization using EVOlutionary computation) by focusing directly on ML frameworks, intermediate languages, and target architectures. It retains the multi-objective evolutionary search developed for GEVO, which searches for edits to GPU code compiled to LLVM-IR and improves performance on desired criteria while retaining required functionality. In earlier work, we studied some ML workloads in GPU settings and found that GEVO could improve kernel speeds by factors ranging from 1.7X to 2.9X, even with access to only a small portion of the overall ML framework. This workshop paper examines the limitations and constraints of GEVO for ML workloads and discusses our GEVO-ML design, which we are currently implementing.

Neuro-evolution using game-driven cultural algorithms

Contemporary deep learning (DL) models have proven to be effective in a wide variety of applications. However, the right network topology for the problem at hand may be complex and not immediately obvious, which has given rise to the secondary field of neural architecture search (NAS). This paper describes a NAS method based on the graph evolution pioneered by NeuroEvolution of Augmenting Topologies (NEAT), but driven by the evolutionary mechanisms underlying Cultural Algorithms (CAs). The CA is a population-based, stochastic optimization system inspired by problem solving in human cultures, well suited to problems such as NAS. We present CATNeuro, a system for evolving DL models guided by CA metaheuristics called Knowledge Sources (KSs). The KSs store knowledge harvested from prior generations and use it to guide subsequent generations through the search space. A knowledge distribution mechanism, which assigns a KS to each individual in the population, is an instrumental part of this process. CATNeuro is applied to find optimal network topologies to play a 2D fighting game called FightingICE (based on "The Rumble Fish" game); a policy-based reinforcement learning method is used to create the training data for network optimization. CATNeuro is still evolving: in this first foray into NAS, we contrast the performance of CATNeuro under two different knowledge distribution mechanisms, the stalwart Weighted Majority (WTD), which represents the "wisdom of the crowd", and a new one based on the Stag Hunt game from evolutionary game theory. We show that Stag Hunt has a statistical edge over WTD in many areas and is thus a better candidate for the future development of CATNeuro.

WORKSHOP SESSION: Workshop visualisation methods in genetic and evolutionary computation

Using PageRank to uncover patterns in search behavior induced by the bit-flip operator

Understanding how the complex interactions of the problem-algorithm combination lead to an algorithm's search performance is arguably one of the most important open questions in metaheuristic algorithm theory. Examination of the fitness landscape does not provide all of the information needed to understand a metaheuristic algorithm's search behavior. We introduce an extension to the fitness landscape, which we call a search behavior diagram, that models a metaheuristic algorithm's expected search behavior across an entire fitness landscape. We then show that analyzing a search behavior diagram can produce insights into the nature of metaheuristic algorithm search behavior on problems from binary optimization, including one interesting insight about the relationship between the distribution of optima and adaptive search behavior.
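
The abstract does not spell out the construction, but a PageRank-style analysis of bit-flip search behavior can be sketched as follows: enumerate a small landscape, build the transition matrix of a hill climber that moves uniformly to strictly improving one-bit-flip neighbours, and compute a damped stationary distribution. The OneMax landscape, first-improvement dynamics, and damping factor below are all illustrative assumptions.

    import itertools
    import numpy as np

    N = 6  # bitstring length; 2**N states, small enough to enumerate

    def fitness(bits):
        return sum(bits)  # toy OneMax landscape

    states = list(itertools.product((0, 1), repeat=N))
    index = {s: i for i, s in enumerate(states)}

    # From each state, move uniformly to strictly improving one-bit-flip neighbours.
    P = np.zeros((len(states), len(states)))
    for s in states:
        nbrs = [s[:i] + (1 - s[i],) + s[i + 1:] for i in range(N)]
        better = [n for n in nbrs if fitness(n) > fitness(s)]
        if better:
            for n in better:
                P[index[s], index[n]] = 1.0 / len(better)
        else:
            P[index[s], index[s]] = 1.0  # local optimum absorbs

    # PageRank-style stationary distribution with damping (restart) factor d.
    d, pi = 0.85, np.full(len(states), 1.0 / len(states))
    for _ in range(200):
        pi = (1 - d) / len(states) + d * pi @ P

    print(sorted(zip(pi, states), reverse=True)[:3])  # mass concentrates on the optimum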

Visual mapping of multi-objective optimization problems and evolutionary algorithms

This work proposes a method to map optimization problems and evolutionary search algorithms each into their own two-dimensional space. In the research domain of evolutionary optimization, benchmark optimization problems and search algorithms have developed in tandem, and a variety of benchmark problems is essential for research on search algorithms. However, it is difficult to represent differences among benchmark problems quantitatively and visually, and the same difficulty arises in describing differences among search algorithms. The proposed method maps optimization problems based on differences in the algorithm rankings each problem induces, and maps search algorithms based on differences in the problem rankings each algorithm induces. In the experiment, we use 26 multi-objective optimization problems from the ZDT, DTLZ, and WFG benchmark suites and 29 evolutionary search algorithms. The results show that similarities of internal functions are reflected in the position of each problem on the problem map, and similarities of design principles are reflected in the position of each algorithm on the algorithm map.
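
A minimal sketch of one plausible realization of such a map follows, assuming a table of algorithm ranks per problem and classical multidimensional scaling; the paper's actual distance measure and embedding may differ.

    import numpy as np
    rng = np.random.default_rng(0)

    # Hypothetical rank table: entry [a, p] is algorithm a's rank on problem p.
    ranks = rng.permuted(np.tile(np.arange(1, 6)[:, None], (1, 8)), axis=0)  # 5 algs x 8 problems

    # Distance between two problems = disagreement between their algorithm rankings.
    P = ranks.shape[1]
    D = np.array([[np.abs(ranks[:, i] - ranks[:, j]).sum() for j in range(P)]
                  for i in range(P)])

    # Classical MDS: double-centre squared distances, embed with the top-2 eigenvectors.
    J = np.eye(P) - np.ones((P, P)) / P
    B = -0.5 * J @ (D ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    coords = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0))
    print(coords)  # 2-D position of each problem; transpose ranks to map algorithms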

WORKSHOP SESSION: Workshop evolutionary computation for the automated design of algorithms

On the sensitivity analysis of cartesian genetic programming hyper-heuristic

Research on Genetic-Programming Hyper-heuristics (GPHH) for the automated design of heuristic-based methods has been very active in recent years. Most efforts have focused on developing or improving GPHH methods or applying them to different problem domains. Studies that target the analysis and understanding of GPHH behavior are still scarce, despite their relevance for easing the application of GPHH in practice and for advancing this research field. To advance the understanding of GPHH behavior, this paper analyzes the impact of its parameters on the evolution, diversity, and quality of generated algorithms. In particular, a Cartesian Genetic-Programming hyper-heuristic (CGPHH) applied to an NP-complete production-planning problem (the multi-level capacitated lot-sizing problem) is considered. The effects of five parameters on response variables reflecting various aspects of CGPHH behavior, such as the diversity and quality of generated algorithms, are analyzed based on a full-factorial design of experiments. Results indicate that three factors chiefly affect CGPHH behavior, in different ways: the mutation rate, the CGP representation, and the number of graph nodes. Nonetheless, the CGPHH still generates competitive algorithms despite the changes applied to its parameters.
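
The full-factorial design itself is straightforward to reproduce. The sketch below enumerates all level combinations of five hypothetical factors and records two response variables per configuration, with a random stand-in where the real CGPHH run would go; factor names and levels are assumptions.

    import itertools
    import random
    import statistics

    # Hypothetical factor levels; the paper's actual factors and levels may differ.
    factors = {
        "mutation_rate": [0.05, 0.15, 0.30],
        "representation": ["standard", "modular"],
        "num_nodes": [50, 100, 200],
        "pop_size": [5, 20],
        "generations": [100, 500],
    }

    def run_cgphh(config, seed):
        # Stand-in for one CGPHH run; returns (quality, diversity) responses.
        r = random.Random(seed)
        return r.random(), r.random()

    results = []
    for combo in itertools.product(*factors.values()):
        config = dict(zip(factors, combo))
        runs = [run_cgphh(config, s) for s in range(10)]  # 10 replications
        results.append((config,
                        statistics.mean(q for q, _ in runs),   # mean quality
                        statistics.mean(d for _, d in runs)))  # mean diversity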

The automated design of local optimizers for memetic algorithms employing supportive coevolution

One promising method of improving Evolutionary Algorithm (EA) performance is to improve its fine-tuning capability by adding a local optimization operator to the evolutionary cycle. This customization of the traditional EA is typically called a Memetic Algorithm (MA). Adding an appropriate local optimization algorithm can increase performance, while a poor choice can decrease it; thus, some knowledge is required to select the correct algorithm. In many cases the optimal selection is not known, and it may not even be static: it could change during evolution.

This investigation combines a method of local optimizer evolution using Push Genetic Programming with a method of automatic self-configuration called Supportive Coevolution. The combination yields a novel MA that coevolves local optimization operators alongside candidate solutions to the target fitness function. The implementation methodology is described, and experimental details with corresponding results are presented. Additional parameters discovered for performance tuning are discussed, along with a study of their impact on the algorithm's performance, followed by interesting insights and suggestions for further investigation. Results show the proposed technique can improve EA performance by providing automatically configured, coevolved local optimization operators to an MA.
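
A minimal sketch of the idea follows, with a Gaussian hill climber whose step size stands in for an evolved Push GP local optimizer; the support population here holds only that one parameter, purely for illustration.

    import random
    random.seed(3)

    def sphere(x):  # target fitness function (minimization)
        return sum(v * v for v in x)

    def local_search(x, step, iters=20):
        # Simple Gaussian hill climber standing in for an evolved local optimizer.
        best = x
        for _ in range(iters):
            cand = [v + random.gauss(0, step) for v in best]
            if sphere(cand) < sphere(best):
                best = cand
        return best

    # Supportive coevolution: a support population of optimizer parameters
    # evolves alongside the main population of candidate solutions.
    solutions = [[random.uniform(-5, 5) for _ in range(5)] for _ in range(10)]
    steps = [random.uniform(0.01, 1.0) for _ in range(10)]

    for _ in range(30):
        scored = []
        for sol in solutions:
            step = random.choice(steps)          # pair solution with a support individual
            improved = local_search(sol, step)
            scored.append((sphere(improved), improved, step))
        scored.sort(key=lambda t: t[0])
        solutions = [s for _, s, _ in scored[:5]] * 2                  # truncate and clone
        good = [st for _, _, st in scored[:5]]
        steps = good + [st * random.uniform(0.5, 2.0) for st in good]  # perturb winners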

Time-dependent automatic parameter configuration of a local search algorithm

In combinatorial optimization, where problems are often NP-hard, metaheuristics and other approximation algorithms frequently have many parameters in order to adapt to a wide range of scenarios. Obtaining good values for these parameters is very often a long and tedious manual task, but automatic algorithm configuration has been shown to overcome this issue. At the same time, it may also be useful for parameter values to change during the search in order to fine-tune the search process; these parameters include low-level heuristic components. In this article, we propose to use automatic parameter configuration coupled with a control mechanism that switches between parameter configurations at specific times during the search, as a middle ground between classic parameter tuning and selection hyper-heuristics. We test this idea on a local search algorithm whose parameters select among different design components, on three combinatorial problems: the Permutation Flowshop Problem, the Traveling Salesman Problem, and the Quadratic Assignment Problem. In comparison with traditional static automatic parameter configuration, the proposed time-dependent approach is shown to perform better. Additionally, better-performing local search component configurations are identified and discussed.
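
A minimal sketch of the control mechanism follows, assuming the offline tuner has produced one configuration per phase of the time budget; the component names and switch points are illustrative assumptions.

    import time

    # Hypothetical output of the offline tuner: (start fraction, configuration).
    schedule = [
        (0.0, {"perturbation": "restart", "neighborhood": "insert"}),  # exploration
        (0.5, {"perturbation": "kick", "neighborhood": "swap"}),       # intensification
        (0.9, {"perturbation": "none", "neighborhood": "2opt"}),       # final polish
    ]

    def step(config):
        pass  # problem-specific move using config["neighborhood"], etc.

    def local_search(budget_seconds):
        start = time.monotonic()
        idx = 0
        while (elapsed := time.monotonic() - start) < budget_seconds:
            frac = elapsed / budget_seconds
            while idx + 1 < len(schedule) and frac >= schedule[idx + 1][0]:
                idx += 1  # switch configuration at the prescribed time
            step(schedule[idx][1])

    local_search(0.1)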

Dynamic primitive granularity control: an exploration of unique design considerations

Dynamic primitive granularity control (DPGC) is a promising avenue for improving the performance of genetic programming (GP), yet it remains almost entirely unexplored. Further, it may pose many unique design and implementation challenges that traditional GP implementations do not. This paper presents an implementation of DPGC in order to determine which aspects of conventional GP design and implementation require special consideration. Some techniques commonly used in GP are found here to negatively impact DPGC's ability to improve performance: parsimony pressure appears to disproportionately penalize low-level primitives, and a mixed-granularity population suffers from heavy biases towards particular granularity levels, seemingly to the detriment of evolution. This paper offers hypotheses as to why these conventional techniques harm DPGC implementations, as well as several potential alternatives that may remedy these detrimental effects.

WORKSHOP SESSION: Workshop good benchmarking practices for evolutionary computation

A benchmark with facile adjustment of difficulty for many-objective genetic programming and its reference set

Several works have shown Multi-Objective GP (MOGP) using Multi-Objective Evolutionary Algorithms (MOEAs) to be effective on function estimation problems for the cutting process of steel, the modeling of non-linear systems, and truss optimization. However, those targets are two- or three-objective GP problems; only little research exists on GP problems with more than four objectives, i.e., Many-Objective Genetic Programming (MaOGP). This is not because real MaOGP problems are rare, but probably because there are few MaOGP benchmarks. This paper therefore proposes a benchmark for MaOGP. The problem combines an analytic function generated by GP with the well-known Many-Objective KnapSack Problem (MaOKSP). Its difficulty can be easily adjusted by changing the non-terminal node set, the number of knapsacks or objectives, the number of items, and so on. Moreover, the IGD# indicator, a slightly improved version of IGD+, is proposed.
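
IGD# is not defined in the abstract, but it is described as a slight improvement of IGD+, which for minimization problems can be computed as below; the reference set and solutions shown are toy values.

    import numpy as np

    def igd_plus(reference, solutions):
        # IGD+: average, over reference points, of the smallest "dominated
        # distance" max(a - z, 0) from any obtained solution a to the point z.
        ref = np.asarray(reference, float)   # shape (R, m)
        sol = np.asarray(solutions, float)   # shape (S, m)
        diff = np.maximum(sol[None, :, :] - ref[:, None, :], 0.0)  # (R, S, m)
        d = np.sqrt((diff ** 2).sum(axis=2))                       # (R, S)
        return d.min(axis=1).mean()

    # Toy check on a 2-objective problem:
    ref = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
    print(igd_plus(ref, [[0.1, 1.0], [1.0, 0.1]]))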

WORKSHOP SESSION: Workshop genetic improvement

An annotated dataset of stack overflow post edits

To improve software engineering, software repositories have been mined for code snippets and bug fixes. Typically, this mining takes place at the level of files or commits. To dig deeper and extract insights at a higher resolution, we present an annotated dataset containing over 7 million edits of code and text on Stack Overflow. Our preliminary study indicates that these edits may be a treasure trove for mining information about fine-grained patches, e.g., for the optimisation of non-functional properties.

Genetic improvement of software efficiency: the curse of fitness estimation

Many challenges arise in applying Genetic Improvement (GI) to improve non-functional properties of software such as energy use and run-time. These challenges centre mainly on the complexity of the search space and on estimating the desired fitness function: such fitness functions are expensive and noisy, and estimating them is not a straightforward task. In this paper, we illustrate some of the challenges in computing such fitness functions and propose a synergy between in-vivo evaluation and machine learning approaches to overcome these issues.
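
A minimal sketch of such a synergy follows: cheap static features of each program variant feed a learned model that estimates the expensive, noisy in-vivo fitness (energy or run-time). The feature names and training data are hypothetical.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    # Hypothetical training data: (loop_count, alloc_count, branch_count) per
    # variant, paired with a noisy in-vivo energy measurement.
    X = rng.integers(0, 50, size=(200, 3)).astype(float)
    y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 5, 200)  # synthetic ground truth

    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    estimate = model.predict([[12.0, 3.0, 7.0]])  # cheap estimate guides the GI search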

Evolving sqrt into 1/x via software data maintenance

While most software automation research concentrates on programs' code, we have started investigating whether Genetic Improvement (GI) of data can assist developers by automating aspects of the maintenance of parameters embedded in source code. We extend recent GI work on optimising compile-time constants to give new functionality, and describe the transformation of a GNU C library square root function into the double-precision reciprocal function, drcp. Multiplying by 1/x (drcp) allows division-free division, without requiring the hardware to support division. The evolution (6 seconds) and indeed the GI double-precision division (7.14 ± 0.012 ns) are both surprisingly fast.
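
The key arithmetic trick is that a reciprocal can be computed with multiplies and subtracts alone. A textbook Newton-Raphson sketch of division-free division follows; this is not the evolved glibc code, and the crude initial guess is illustrative (real implementations seed from the exponent bits).

    # Newton-Raphson reciprocal iteration: x_{n+1} = x_n * (2 - a * x_n),
    # which converges quadratically to 1/a using only multiplication.
    def drcp_like(a, iterations=6):
        x = 0.1 if a > 1 else 1.0          # rough seed for positive a (illustrative)
        for _ in range(iterations):
            x = x * (2.0 - a * x)
        return x

    print(7.0 * drcp_like(3.0))            # 7/3 without a divide: ~2.3333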

Tuning genetic algorithm parameters using design of experiments

Tuning evolutionary algorithms is a persistent challenge in the field of evolutionary computing. The efficiency of an evolutionary algorithm depends on the coding of the algorithm, the design of the evolutionary operators, and the parameter settings for evolution. In this paper, we explore the effect of tuning the operators and parameters of a genetic algorithm for solving the Traveling Salesman Problem using Design of Experiments theory. Small-scale problems are solved with specific settings of parameters, including population size, crossover rate, mutation rate, and the extent of elitism; good parameter values suggested by the experiments are then used to solve large-scale problems. Computational tests show that the parameters selected by this process improve both the quality of the results obtained and the convergence rate, compared with untuned parameter settings.
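
A minimal sketch of the DOE machinery follows, assuming a two-level full factorial over four GA parameters and estimating main effects; the GA run is a random stand-in and all levels are illustrative assumptions.

    import itertools
    import random
    import statistics

    # Two levels per factor, as in a 2^k full-factorial design.
    levels = {
        "pop_size": (50, 200),
        "crossover_rate": (0.6, 0.95),
        "mutation_rate": (0.01, 0.1),
        "elite_count": (1, 5),
    }

    def run_ga(config):
        # Stand-in for a GA run on a TSP instance; returns a tour length.
        return random.Random(str(sorted(config.items()))).uniform(100, 200)

    trials = []
    for combo in itertools.product(*levels.values()):
        config = dict(zip(levels, combo))
        trials.append((config, run_ga(config)))

    # Main effect of a factor: mean response at its high level minus at its low.
    for name, (lo, hi) in levels.items():
        hi_mean = statistics.mean(r for c, r in trials if c[name] == hi)
        lo_mean = statistics.mean(r for c, r in trials if c[name] == lo)
        print(f"{name}: main effect on tour length = {hi_mean - lo_mean:+.2f}")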

Optimising the fit of stack overflow code snippets into existing code

Software developers often reuse code from online sources such as Stack Overflow within their projects. However, the process of searching for code snippets and integrating them into existing source code can be tedious. In order to improve efficiency and reduce time spent on code reuse, we present an automated code reuse tool for the Eclipse IDE (Integrated Development Environment), NLP2TestableCode. NLP2TestableCode not only searches for Java code snippets using natural language tasks, but also evaluates snippets against a user's existing code, modifies them to improve fit and correct errors, and presents the user with the best snippet, all without leaving the editor. NLP2TestableCode also includes functionality to automatically generate customisable test cases and to suggest argument and return types, in order to further evaluate code snippets. In evaluation, NLP2TestableCode found compilable code snippets for 82.9% of tasks and testable code snippets for 42.9%.