This talk summarizes our 15+ years of work on the use of Machine Learning for Search & Optimization. I review the four main approaches that we invented during this time. Since learning during search takes effort, it should not be surprising that we designed three of these approaches for particular target ranges of total function evaluations: from a few dozen, to thousands, to many hundreds of thousands of function evaluations. The last hybrid I review is a surrogate-based approach for optimization under stochastic uncertainty. The wonder of this research area is that each of these four methods defines the state of the art in its respective area, giving significant empirical evidence that learning to optimize can be highly effective.
We have welcomed conversational AI technologies into our homes, workplaces, and institutions, where we interact with them on a daily basis. The proliferation of digital assistants in a multitude of embodiments (e.g., speakers, displays, avatars, robots) in human environments over extended periods of time provides us with new ways to investigate, develop and assess the design of personified AIs that emotionally engage and support people to promote human flourishing across a wide range of applications and usage contexts. In this talk, I highlight a number of research projects where we are developing, fielding, and assessing social robots in homes, schools, and living communities of older adults. We explore different embodiments and develop adaptive algorithmic capabilities for our robots to sustain interpersonal engagement and personalize to people's needs to support novel interventions in education, social engagement, and emotional wellness. In addition to evaluating the impact of these capabilities and features on improving learning, sustaining engagement, nudging behavior, and shifting attitudes, we are also examining the nature of the relationship that people form with these personified AI technologies and how it contributes to these impacts. We conclude by reflecting on the ethical and responsible design of intelligent technologies that emotionally engage and build relationships with people.
Few EC technologies have gone from universities to commercial success. Goodman will describe the SHERPA algorithm, part of the HEEDS design exploration framework, and how Red Cedar Technology, which he co-founded, eventually succeeded. Beginning 20 years ago, SHERPA used a self-adapting ensemble of EC methods (GA, ES, DE, SA, etc.) in each run, requiring no choice of optimization methods or parameters by the engineering user. It is a best-selling engineering design optimizer, still built around the original code developed in 1999-2010, although current owner Siemens now has 20+ developers on HEEDS and SHERPA. Goodman will then turn to a problem outside SHERPA's scope, addressed with unpublished parallel EC methods. NASA provided a futuristic challenge problem to teams of DARPA awardees, to develop ways to optimize the distribution of a set of solid propellant types (eventually to be 3D-printed) in a rocket. Goodman will describe the modeling of the rocket and several problem-specific EC methods used to find feasible solutions to this problem with a design space of size 10^300 to 10^500 and over 700 constraints.
We introduce a novel algorithm for ride-hailing services, called Predictive Leafcutter Ant optimization for Networks (PLAN), that is shown to reduce vehicular fuel costs and customer wait times, especially for clustered requests. PLAN combines traditional ant colony optimization, network flow optimization, and task-partitioning inspired by leafcutter ant foraging patterns. At a high level, PLAN integrates these methods to predictively station vehicles at locations of high activity (hot spots) to minimize both customer wait times and vehicle dispatch costs and to selectively disregard requests if they exceed a dynamic cost-to-benefit ratio. The algorithm is tested on a modeled city with simulated ride requests as well as on real-world New York City commuter data. The performance of PLAN is compared to that of a control algorithm, and the conditions under which PLAN outperforms the control algorithm are identified. Results show that for the New York City data with the presence of quantifiably significant hot spots, PLAN can decrease customer wait times by up to 36% (and proportionally dispatch fuel costs) in a test case with a limited service fleet and produce higher reductions with larger fleets.
This paper proposes a new algorithm based on a new idea inspired by the Ant Colony Optimization (ACO) metaheuristic. The goal of the proposed algorithm is to increase the performance of traditional ACO algorithms by changing the classic role played by ants. Ants are no longer used to construct solutions but to improve the quality of a population of solutions using a specific local search strategy. Thus the new algorithm, called Ant-PLS, is a population-based local search algorithm which uses the knowledge stored in pheromone trails to guide each ant when choosing its neighboring solution from a dynamic candidate list. Ant-PLS is applied to the symmetric traveling salesman problem. The experimental results show the effectiveness of Ant-PLS on large benchmark instances of the TSP: its results are significantly better than those of the compared ACO algorithms.
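To give a flavour of the idea, a minimal pheromone-guided local-search step on a TSP tour might look as follows. The abstract does not specify Ant-PLS's actual neighbourhood, candidate-list construction, or acceptance rule, so `ant_pls_step`, the pheromone-based candidate ranking, and the improving-move acceptance below are all illustrative assumptions, not the authors' method:

```python
import random

def ant_pls_step(tour, dist, pheromone, cand_size=5, rng=None):
    """One hypothetical pheromone-guided 2-opt step: pick a city, choose a
    partner from a candidate list ranked by pheromone strength, and accept
    the resulting 2-opt move only if it shortens the tour."""
    rng = rng or random.Random(0)
    n = len(tour)
    i = rng.randrange(n)
    a = tour[i]
    # candidate list: cities with the strongest pheromone on edge (a, c)
    cands = sorted((c for c in tour if c != a),
                   key=lambda c: pheromone[a][c], reverse=True)[:cand_size]
    b = rng.choice(cands)
    j = tour.index(b)
    if i > j:
        i, j = j, i
    new = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt reversal

    def length(t):
        return sum(dist[t[k]][t[(k + 1) % n]] for k in range(n))

    return new if length(new) < length(tour) else tour
```

In a full algorithm this step would run for each ant in the population, with pheromone trails updated from the best tours found, which is how the stored knowledge would guide subsequent neighbour choices.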
The common method for testing metaheuristic optimisation algorithms is to benchmark against problem test suites. However, existing benchmark problems limit the ability to analyse algorithm performance due to their inherent complexity. This paper proposes a novel benchmark, BTB, whose member functions have known geometric properties and critical point topologies. A given function in the benchmark is a realisation of a specified barrier tree in which funnel and basin geometries, and the values and locations of all critical points, are predetermined. We investigate the behaviour of two metaheuristics, PSO and DE, on the simplest manifestations of the framework, ONECONE and TWOCONES, and relate algorithm performance to a downhill walker reference algorithm. We study success rate, defined as the probability of optimal basin attainment, and inter-basin mobility. We find that local PSO is the slowest optimiser on the unimodal ONECONE but surpasses global PSO on all TWOCONES problem instances below 70 dimensions. DE is the best optimiser when basin depth differences are large, but its performance degrades as the differences become smaller. LPSO is the superior algorithm in the more difficult case where basins have similar depth. DE consistently finds the optimum basin when the basins have equal size and a large depth difference in all dimensions below 100D; the performance of LPSO falls away abruptly beyond 70D.
Designing controllers for robot swarms is challenging because human developers typically have no good understanding of the link between the details of a controller that governs individual robots and the swarm behavior that indirectly results from the interactions between swarm members and the environment. In this paper we investigate whether an evolutionary approach can mitigate this problem. We consider a very challenging task where robots with limited sensing and communication abilities must follow the gradient of an environmental feature, and use Differential Evolution to evolve a neural network controller for simulated robots. We conduct a systematic study to measure the flexibility and scalability of the method by varying the size of the arena and the number of robots in the swarm. The experiments confirm the feasibility of our approach: the evolved robot controllers induced swarm behavior that solved the task. We found that solutions evolved under the harshest conditions (where the environmental cues were the weakest) were the most flexible, and that there is a sweet spot regarding swarm size. Furthermore, we observed collective motion of the swarm, showcasing truly emergent behavior that was neither represented in nor selected for during evolution.
Surrogate-assisted evolutionary algorithms (SAEAs) have performed well on low- and medium-scale expensive optimization problems (EOPs). However, as dimensionality increases, existing SAEAs have trouble obtaining reliable surrogates for solving large-scale EOPs. In this paper, we propose a progressive sampling surrogate-assisted particle swarm optimization (PS-SAPSO) to efficiently solve large-scale EOPs from the perspective of data collection and model training. For data collection, a progressive sampling strategy with a restart operation is proposed to collect new sample solutions during the evolution process for training the radial basis function network (RBFN) surrogate. Specifically, a social learning particle swarm optimization is employed to generate the new sample solutions under the control of a progressively varying stop criterion. For model training, a dynamic tuning strategy is proposed to obtain a reliable RBFN by adaptively adjusting the hyperparameter setting during the evolution process. The experimental results show that PS-SAPSO achieves competitive or better performance compared with four state-of-the-art SAEAs on widely used benchmark functions. Moreover, ablation experiments are conducted to show the effectiveness of the components of the PS-SAPSO algorithm.
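To make the surrogate side of such methods concrete, here is a toy RBFN fit by Gaussian kernel interpolation, where every collected sample doubles as a centre. PS-SAPSO's actual RBFN architecture, progressive sampling schedule, and hyperparameter tuning are not given in the abstract, so `fit_rbfn`, `predict_rbfn`, and the fixed kernel width are illustrative assumptions only:

```python
import numpy as np

def fit_rbfn(X, y, sigma=1.0):
    """Fit a toy RBFN surrogate: place a Gaussian basis function at every
    sample point and solve the (ridge-regularised) interpolation system
    for the output weights."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    Phi = np.exp(-(d / sigma) ** 2)                            # Gaussian design matrix
    w = np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), y)        # output weights
    return X, w

def predict_rbfn(model, Xq, sigma=1.0):
    """Cheap surrogate prediction used in place of expensive evaluations."""
    centres, w = model
    d = np.linalg.norm(Xq[:, None, :] - centres[None, :, :], axis=2)
    return np.exp(-(d / sigma) ** 2) @ w
```

In an SAEA loop, the swarm would query `predict_rbfn` for most fitness evaluations and spend the expensive true evaluations only on the progressively sampled solutions used to refit the model.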
Data-driven evolutionary algorithms (DDEAs) have shown a strong capacity for solving optimization problems with the help of surrogate models. However, current DDEAs often fall into the trap of inaccurate surrogates if no additional data can be obtained to update the surrogates during the optimization process. In addition, when dealing with multiobjective optimization problems, the surrogates in DDEAs further suffer from aggravated cumulative fitness-prediction error. To overcome these difficulties, we propose a two-surrogate collaboration (TSC) management method, in which a Kriging surrogate and a radial basis function network (RBFN) surrogate are adopted to approximate the real fitness evaluation for the parent solutions and the newly generated offspring solutions, respectively. This TSC management method can effectively reduce the prediction uncertainty and improve the prediction performance of each surrogate without any additional surrogate update process. During the optimization process, we use a modified social learning particle swarm optimization (SLPSO) as the basic search method and propose an offline DDEA. With the help of TSC, our resulting SLPSO-TSC algorithm can search for potentially optimal solutions quickly and effectively without any real fitness evaluation during the optimization process. The performance of the proposed SLPSO-TSC algorithm is verified on eleven widely used benchmark problems and compared with five different offline DDEAs. The experimental results show the high competitiveness of SLPSO-TSC for offline data-driven multiobjective optimization.
Adaptation capabilities, like damage recovery, are crucial for the deployment of robots in complex environments. Several works have demonstrated that using repertoires of pre-trained skills can enable robots to adapt to unforeseen mechanical damages in a few minutes. These adaptation capabilities are directly linked to the behavioural diversity in the repertoire. The more alternatives the robot has to execute a skill, the better the chances that it can adapt to a new situation. However, solving complex tasks, like maze navigation, usually requires multiple different skills. Finding a large behavioural diversity for these multiple skills often leads to an intractable exponential growth of the number of required solutions. In this paper, we introduce the Hierarchical Trial and Error algorithm, which uses a hierarchical behavioural repertoire to learn diverse skills and leverages them to make the robot more adaptive to different situations. We show that the hierarchical decomposition of skills enables the robot to learn more complex behaviours while keeping the learning of the repertoire tractable. The experiments with a hexapod robot show that, in the most challenging scenarios, our method solves maze navigation tasks with 20% fewer actions than the best baseline, while having 57% fewer complete failures.
We present a method of generating diverse collections of neural cellular automata (NCA) to design video game levels. While NCAs have so far only been trained via supervised learning, we present a quality diversity (QD) approach to generating a collection of NCA level generators. By framing the problem as a QD problem, our approach can train diverse level generators, whose output levels vary based on aesthetic or functional criteria. To efficiently generate NCAs, we train generators via Covariance Matrix Adaptation MAP-Elites (CMA-ME), a quality diversity algorithm which specializes in continuous search spaces. We apply our new method to generate level generators for several 2D tile-based games: a maze game, Sokoban, and Zelda. Our results show that CMA-ME can generate small NCAs that are diverse yet capable, often satisfying complex solvability criteria for deterministic agents. We compare against a Compositional Pattern-Producing Network (CPPN) baseline trained to produce diverse collections of generators and show that the NCA representation yields a better exploration of level-space.
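For readers unfamiliar with the underlying archive mechanics, a minimal MAP-Elites loop (the plain algorithm, not the CMA-ME variant the paper actually uses) can be sketched as follows. The function names are hypothetical and the behavioural descriptor is assumed to lie in [0, 1]^k:

```python
import random

def map_elites(evaluate, descriptor, sample, mutate, bins=10, iters=1000, seed=0):
    """Minimal MAP-Elites sketch: discretise the descriptor space into a
    grid and keep the highest-fitness solution found in each cell."""
    rng = random.Random(seed)
    archive = {}  # cell index -> (fitness, solution)
    for _ in range(iters):
        if archive:
            x = mutate(rng.choice(list(archive.values()))[1], rng)  # vary an elite
        else:
            x = sample(rng)                                         # bootstrap
        f, b = evaluate(x), descriptor(x)
        cell = tuple(min(int(v * bins), bins - 1) for v in b)  # descriptor in [0,1]^k
        if cell not in archive or f > archive[cell][0]:
            archive[cell] = (f, x)
    return archive
```

In the paper's setting, a "solution" would be the weights of an NCA level generator, the descriptors would be aesthetic or functional level metrics, and the variation step would be handled by CMA-ME's covariance-adapted emitters rather than the naive mutation shown here.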
Quality-Diversity algorithms provide efficient mechanisms to generate large collections of diverse and high-performing solutions, which have been shown to be instrumental for solving downstream tasks. However, most of those algorithms rely on a hand-coded behavioural descriptor to characterise the diversity, hence requiring prior knowledge about the considered tasks. In this work, we introduce Relevance-guided Unsupervised Discovery of Abilities, a Quality-Diversity algorithm that autonomously finds a behavioural characterisation tailored to the task at hand. In particular, our method introduces a custom diversity metric that leads to higher densities of solutions near the areas of interest in the learnt behavioural descriptor space. We evaluate our approach in a simulated robotic environment, where the robot has to autonomously discover its abilities based on its full sensory data. We evaluate the algorithm on three tasks: navigation to random targets, moving forward with a high velocity, and performing half-rolls. The experimental results show that our method manages to discover collections of solutions that are not only diverse, but also well-adapted to the considered downstream task.
Quality-Diversity (QD) algorithms can discover large and complex behavioural repertoires consisting of both diverse and high-performing skills. However, the generation of behavioural repertoires has mainly been limited to simulation environments instead of real-world learning. This is because existing QD algorithms need large numbers of evaluations as well as episodic resets, which require manual human supervision and intervention. This paper proposes Reset-Free QD (RF-QD) as a step towards autonomous learning for robotics in open-ended environments. We build on Dynamics-Aware QD (DA-QD) and introduce a behaviour selection policy that leverages the diversity of the imagined repertoire and environmental information to intelligently select behaviours that can act as automatic resets. We demonstrate this through a task of learning to walk within defined training zones with obstacles. Our experiments show that we can learn repertoires of locomotion controllers autonomously, without manual resets and with high sample efficiency, in spite of harsh safety constraints. Finally, using an ablation of different target objectives, we show that it is important for RF-QD to have diverse types of solutions available for the behaviour selection policy, rather than only solutions optimised for a specific objective. Videos and code available at https://sites.google.com/view/rf-qd.
Locating odour sources with mobile robots is a difficult task with many applications. Over the years, researchers have devised bio-inspired and cognitive methods to enable mobile robots to fulfil this task. Cognitive approaches are effective in large spaces, but computationally heavy. On the other hand, bio-inspired ones are lightweight, but they are only effective in the presence of frequent stimuli. One of the most popular cognitive approaches is Infotaxis, which iteratively computes a probability map of the source location. Another strand of work uses Genetic Programming to produce complete search strategies from bio-inspired behaviours. This work combines the two approaches by allowing Genetic Programming to evolve search strategies that include infotactic and bio-inspired behaviours. The proposed method is tested in a set of environments with distinct airflow and chemical dispersion patterns. Its performance is compared to that of evolved strategies without infotactic behaviours and to the standard infotaxis approach. The statistically validated results show that the proposed method produces search strategies that have significantly higher success rates, whilst being faster than those produced by any of the original approaches. Moreover, the best evolved strategies are analysed, providing insight into when infotaxis is more beneficial.
In this paper, we explore how robots in a swarm can individually exploit collisions to produce self-organizing behaviours at the macroscopic scale. We propose to focus on two behaviours that modify the orientation of a robot during a collision, which are inspired by positive and negative feedback observed in Nature. These two behaviours differ in the nature of the feedback produced after a collision by favouring either (1) the alignment or (2) the anti-alignment of the robot with an external force, whether it is an obstacle or another robot. We describe a social learning algorithm using evolutionary operators to learn individual policies that exploit these behaviours in an online and distributed fashion. This algorithm is validated both in simulation and with real robots to solve two tasks involving phototaxis, one of which requires self-organized aggregation to be completed.
The diversity and quality of natural systems have been a puzzle and inspiration for communities studying artificial life. It is now widely accepted that the adaptation mechanisms enabling these properties are largely influenced by the environments they inhabit. Organisms facing environmental variability have two alternative adaptation mechanisms operating at different timescales: plasticity, the ability of a phenotype to survive in diverse environments, and evolvability, the ability to adapt through mutations. Although vital under environmental variability, both mechanisms are associated with fitness costs that are hypothesized to render them unnecessary in stable environments. In this work, we study the interplay between environmental dynamics and adaptation in a minimal model of the evolution of plasticity and evolvability. We experiment with different types of environments characterized by the presence of niches and a climate function that determines the fitness landscape. We empirically show that environmental dynamics affect plasticity and evolvability differently and that the presence of diverse ecological niches favors adaptability even in stable environments. We perform ablation studies of the selection mechanisms to separate the role of fitness-based selection and niche-limited competition. Results obtained from our minimal model allow us to propose promising research directions in the study of open-endedness in biological and artificial systems.
Digital signal processors are widely used in today's computers to perform advanced computational tasks. However, the selection of digital electronics as the physical substrate for computation a hundred years ago was influenced more by technological limitations than by substrate appropriateness. In recent decades, advances in chemical, physical and material sciences have provided new options. Granular metamaterials are one such promising target for realizing mechanical computing devices. However, their high-dimensional design space and the unintuitive relationship between microstructure and desired macroscale behavior make the inverse design problem formidable. In this paper, we use multiobjective evolutionary optimization to solve this inverse problem: we demonstrate the design of basic logic gates embedded in a granular metamaterial, and that the designed material can be "reprogrammed" via frequency modulation. As metamaterial design advances, more computationally dense materials may be evolved, amenable to reprogramming by increasingly sophisticated programming languages written in the frequency domain.
Organisms in nature have evolved to exhibit flexibility in the face of changes to the environment and/or to themselves. Artificial neural networks (ANNs) have proven useful for controlling artificial agents acting in environments. However, most ANN models used for reinforcement learning-type tasks have a rigid structure that does not allow for varying input sizes. Further, they fail catastrophically if inputs are presented in an ordering unseen during optimization. We find that these two ANN inflexibilities can be mitigated and that their solutions are simple and highly related. For permutation invariance, no optimized parameters can be tied to a specific index of the input elements. For size invariance, inputs must be projected onto a common space that does not grow with the number of projections. Based on these restrictions, we construct a conceptually simple model that exhibits flexibility most ANNs lack. We demonstrate the model's properties on multiple control problems, and show that it can cope with even very rapid permutations of input indices, as well as changes in input size. Ablation studies show that it is possible to achieve these properties with simple feedforward structures, but that it is much easier to optimize recurrent structures.
In this work, we consider the problem of Quality-Diversity (QD) optimization with multiple objectives. QD algorithms have been proposed to search for a large collection of both diverse and high-performing solutions instead of a single set of local optima. Searching for diversity has been shown to be useful in many industrial and robotics applications. On the other hand, most real-life problems exhibit several potentially conflicting objectives to be optimized. Hence, being able to optimize for multiple objectives with an appropriate technique while searching for diversity is important to many fields. Here, we propose an extension of the map-elites algorithm to the multi-objective setting: Multi-Objective map-elites (mome). Namely, it combines the diversity inherited from the map-elites grid algorithm with the strength of multi-objective optimization by filling each cell with a Pareto front. As such, it allows the extraction of diverse solutions in the descriptor space while exploring different compromises between objectives. We evaluate our method on several tasks, from standard optimization problems to robotics simulations. Our experimental evaluation shows the ability of mome to provide diverse solutions while achieving global performance similar to standard multi-objective algorithms.
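The core difference from standard MAP-Elites is the cell update: each grid cell keeps a local Pareto front rather than a single elite. A sketch of such an update, assuming maximisation of all objectives and using hypothetical helper names (`dominates`, `insert_into_cell`) rather than the paper's actual implementation, might look like this:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximisation):
    a is at least as good in every objective and strictly better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def insert_into_cell(front, objs, sol):
    """Insert a candidate (objective vector, solution) into one cell's
    Pareto front, keeping only mutually non-dominated solutions."""
    if any(dominates(f_objs, objs) for f_objs, _ in front):
        return front  # dominated by an incumbent: reject the candidate
    kept = [(f_objs, s) for f_objs, s in front if not dominates(objs, f_objs)]
    kept.append((objs, sol))
    return kept
```

Running this update for the cell matching each offspring's descriptor yields, per cell, a set of compromises between objectives, which is what allows the archive to expose trade-offs as well as behavioural diversity.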
Modularity in robotics holds great potential. In principle, modular robots can be disassembled and reassembled into different robots, possibly to perform new tasks. Nevertheless, actually exploiting modularity remains an unsolved problem: controllers usually rely on inter-module communication, a practical requirement that makes modules not perfectly interchangeable and thus limits their flexibility. Here, we focus on Voxel-based Soft Robots (VSRs), aggregations of mechanically identical elastic blocks. We use the same neural controller inside each voxel, but without any inter-voxel communication, hence enabling ideal conditions for modularity: modules are all equal and interchangeable. We optimize the parameters of the neural controller---shared among the voxels---by evolutionary computation. Crucially, we use a local self-attention mechanism inside the controller to overcome the absence of inter-module communication channels, thus enabling our robots to truly be driven by the collective intelligence of their modules. We show experimentally that the evolved robots are effective in the task of locomotion: thanks to self-attention, instances of the same controller embodied in the same robot can focus on different inputs. We also find that the evolved controllers generalize to unseen morphologies after a short fine-tuning, suggesting that an inductive bias related to the task arises from true modularity.
We study the problem of efficiently generating high-quality and diverse content in games. Previous work on automated deckbuilding in Hearthstone shows that the quality diversity algorithm MAP-Elites can generate a collection of high-performing decks with diverse strategic gameplay. However, MAP-Elites requires a large number of expensive evaluations to discover a diverse collection of decks. We propose assisting MAP-Elites with a deep surrogate model trained online to predict game outcomes with respect to candidate decks. MAP-Elites discovers a diverse dataset to improve the surrogate model accuracy while the surrogate model helps guide MAP-Elites towards promising new content. In a Hearthstone deck-building case study, we show that our approach improves the sample efficiency of MAP-Elites and outperforms a model trained offline with random decks, as well as a linear surrogate model baseline, setting a new state-of-the-art for quality diversity approaches in automated Hearthstone deckbuilding. We include the source code for all the experiments at: https://github.com/icaros-usc/EvoStone2.
We study the effects of injecting human-generated designs into the initial population of an evolutionary robotics experiment, where subsequent populations of robots are optimised via a Genetic Algorithm and MAP-Elites. First, human participants interact via a graphical front-end to explore a directly-parameterised legged robot design space and attempt, through a combination of intuition and trial-and-error, to produce robots that perform well in a range of environments. Environments are generated whose corresponding high-performance robot designs range from intuitive to complex and hard to grasp. Once the human designs have been collected, their impact on the evolutionary process is assessed by replacing a varying number of designs in the initial population with human designs and subsequently running the evolutionary algorithm. Our results suggest that a balance of random and hand-designed initial solutions provides the best performance for the problems considered, and that human designs are most valuable when the problem is intuitive. The influence of human design in an evolutionary algorithm is a highly understudied area, and the insights in this paper may be valuable to the area of AI-based design more generally.
Parallel machine scheduling problems are among the most studied scheduling problems in the literature. We study the unrelated parallel machine scheduling problem with release dates and machine- and sequence-dependent setup times to minimize the makespan. We introduce three heuristics, five local search methods and three metaheuristics for the problem: Late Acceptance Hill Climbing and two variants of Simulated Annealing. Furthermore, we introduce a three-set, 1620-instance benchmark in which the number of jobs and machines, release dates, and processing and setup times are generated according to existing procedures. The proposed approaches are analyzed and compared on the proposed benchmark. We compare the proposed heuristics and derive the best heuristic to use for generating the initial solution of the metaheuristics. Moreover, we show that the different metaheuristics are efficient, each performing best on one of the sets.
Generating instances with different properties is key to algorithm selection methods that differentiate between the performance of different solvers for a given combinatorial optimization problem. A wide range of methods using evolutionary computation techniques has been introduced in recent years. With this paper, we contribute to this area of research by providing a new approach based on quality diversity (QD) that is able to explore the whole feature space. QD algorithms allow the creation of high-quality solutions within a given feature space by splitting it into boxes and improving solution quality within each box. We use our QD approach for the generation of TSP instances to visualize and analyze the variety of instances differentiating various TSP solvers, and compare it to instances generated by established approaches from the literature.
A graceful labeling of a graph G = (V, E) is an assignment of distinct labels from {0, 1, ..., |E|} to the vertices of G such that the induced edge labels, given by the absolute difference of the endpoint labels, are all distinct (and hence cover 1, ..., |E|). A graph is called graceful if it admits a graceful labeling. As a combinatorial problem, it has applications in coding theory, communications networks, and optimizing circuit layouts. Several different approaches, both heuristic and complete, for finding graceful labelings have been developed and analyzed empirically. Most such algorithms have been established in the context of verifying the conjecture that trees are graceful.
In this paper, we present the first rigorous running time analysis of a simple evolutionary algorithm applied to finding labelings of graceful graphs. We prove that an evolutionary algorithm can find a graceful labeling in polynomial time for all paths, stars, and complete bipartite graphs with a constant-sized partition. We also empirically compare the running time of a simple evolutionary algorithm against a complete constraint solver.
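To make the setup concrete, the following sketch checks gracefulness and runs a simple (1+1)-style EA over vertex labels. It assumes the graph is a tree (so |V| = |E| + 1 and the labels are exactly a permutation of 0..|E|), a swap mutation, and non-strict acceptance; none of these choices is prescribed by the paper, whose analysed algorithm may differ:

```python
import random

def graceful_fitness(labels, edges):
    """Number of distinct induced edge labels |f(u) - f(v)|; a labeling is
    graceful exactly when this equals |E| (edge labels then cover 1..|E|)."""
    return len({abs(labels[u] - labels[v]) for u, v in edges})

def one_plus_one_ea(edges, n, max_iters=100000, seed=0):
    """Toy (1+1) EA: start from a random permutation of 0..n-1, mutate by
    swapping two vertex labels, and accept if the number of distinct edge
    labels does not decrease."""
    rng = random.Random(seed)
    labels = list(range(n))
    rng.shuffle(labels)
    fit = graceful_fitness(labels, edges)
    for _ in range(max_iters):
        if fit == len(edges):
            return labels  # graceful labeling found
        i, j = rng.sample(range(n), 2)
        labels[i], labels[j] = labels[j], labels[i]
        new = graceful_fitness(labels, edges)
        if new >= fit:
            fit = new
        else:
            labels[i], labels[j] = labels[j], labels[i]  # revert the swap
    return None
```

On small trees such as paths and stars, the two graph classes covered by the analysis, this kind of EA finds graceful labelings quickly; the paper's contribution is proving polynomial-time bounds for such processes rather than observing them empirically.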
Designed for graybox optimization problems, the state-of-the-art Drils algorithm (Deterministic Recombination and Iterated Local Search) follows the framework of a hybrid iterated local search by combining the efficient identification of improving moves and the fast recombination of local optima. The Drils algorithm uses a perturbation mechanism in order to feed the graybox crossover with promising local optima. The perturbation is a key element in preventing the search from getting trapped. In this paper, we revisit the Drils algorithm by focusing on two main questions: (i) how the perturbation is performed, and (ii) how strong it should be. We propose two alternative designs of the perturbation within the framework of Drils. The resulting algorithms are shown to provide substantial improvements. This is demonstrated through extensive experiments using a diverse set of NKQ-landscapes with different degrees of ruggedness, as well as different dimensions ranging from relatively small to very large. Besides, we provide a comprehensive analysis of the impact of the proposed mechanisms, allowing us to highlight the guiding principles for an accurate design and configuration of the perturbation as a function of landscape characteristics.
The Large-Scale Vehicle Routing Problem is an NP-hard combinatorial optimisation problem whose number of possible solutions grows rapidly with instance size. To reduce the search space, limiting the neighbourhood size for neighbourhood search approaches is a commonly used strategy to reach a good balance between efficiency and effectiveness. However, it lacks generalisability, since a fixed neighbourhood limit might be suboptimal for certain instances. In this work, a heuristic method that automatically changes the neighbourhood size is proposed. The heuristic increases or decreases the search scope of the neighbourhood search operators to better match the search process, based on the moving trajectories of the previous iteration. We combine the proposed online neighbourhood-size adaptation heuristic with the highly efficient Knowledge-Guided Local Search (KGLS) and achieve up to almost 40% improvement in efficiency. Furthermore, the experimental results show that KGLS with the adaptive neighbourhood-size heuristic obtains statistically better solutions on 50 of the 110 instances compared, while being worse on only 25 instances.
Offline algorithm configuration methods search for fixed parameter values for a given set of problem instances. For each parameter, such methods perform the equivalent of a constant regression, since the parameter value remains constant for any problem instance. However, optimal parameter values may depend on instance features, such as the instance size. In this paper, we represent parameters by non-constant models, which set the parameter values according to the instance size. Instead of searching for parameter values directly, the configuration process calibrates such models. In particular, we propose a simple yet effective linear model, which approximates linear relations between instance size and optimal parameter values. For modeling nonlinear relations, we propose piecewise and log-log linear models. The evaluation of the proposed methods on four configuration scenarios shows good performance gains in comparison to traditional instance-independent algorithm configuration with comparable tuning effort.
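The size-dependent parameter models can be written down directly. This minimal sketch (function names are ours) shows the constant baseline, the linear variant, and the log-log variant whose coefficients a configurator would calibrate instead of a single value:

```python
import math

def constant_param(a, n):
    """Traditional configuration: one value for every instance size n."""
    return a

def linear_param(a, b, n):
    """Linear model: the parameter value grows linearly with size n."""
    return a + b * n

def loglog_param(a, b, n):
    """Log-log linear model: log v = a + b*log n, i.e. v = e^a * n^b,
    capturing nonlinear (power-law) relations between size and value."""
    return math.exp(a + b * math.log(n))
```

The configuration process then searches over (a, b) per parameter rather than over raw parameter values.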
The Bus Driver Scheduling Problem (BDSP) is a combinatorial optimisation problem that consists of assigning bus drivers to vehicles with predetermined routes. The objective is to optimise operating costs as well as the employees' work quality, based on criteria such as the number of vehicle changes. This problem is highly constrained due to the complex rules specified by collective agreements and the law. Hence, solving real-life instances with exact methods in a reasonable time is very challenging. In this work, we investigate and compare metaheuristics based on Tabu Search and Iterated Local Search for solving this problem. We analyse the impact of different solution components, including neighbourhoods, acceptance criteria, tabu lists, and perturbation moves. Further, we provide a new set of large real-life-based instances that extends the existing benchmark. We compare our methods with state-of-the-art approaches on the extended set of instances and show that our algorithms provide very good solutions for large instances.
The Target Set Selection (TSS) problem is an NP-hard combinatorial optimization problem with origins in the field of social networks. There are various problem variants, all dealing with finding a smallest subset of vertices of a graph such that their influence is propagated to all nodes of the graph under a specific diffusion model. Despite the practical relevance of the problem, most existing research efforts have focused on theoretical properties restricted to certain classes of graphs. The richness in terms of theoretical results is in contrast to the scarceness of research aiming at efficiently solving the TSS problem. In this work we propose a Biased Random Key Genetic Algorithm (BRKGA) for solving the TSS problem in large-scale social networks. We consider the problem in combination with the Linear Threshold diffusion model. The obtained results show that our approach outperforms a recent heuristic from the literature.
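To make the diffusion model concrete, here is a minimal sketch of Linear Threshold propagation with uniform edge weights (the helper name and the uniform-weight simplification are ours): a node activates once the fraction of its active neighbours reaches its threshold, and the TSS problem asks for a smallest seed set that activates everything.

```python
def lt_spread(neighbors, thresholds, seeds):
    """Linear Threshold model with uniform edge weights: a node
    activates once the fraction of its active neighbours reaches
    its threshold. Returns the final active set."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v, nbrs in neighbors.items():
            if v in active or not nbrs:
                continue
            if sum(u in active for u in nbrs) / len(nbrs) >= thresholds[v]:
                active.add(v)
                changed = True
    return active
```

On a path 1-2-3 with thresholds of 0.5, seeding node 1 activates node 2 (half its neighbours active) and then node 3.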
With a focus on both the fitness and cost of subset selection, we study stochastic local search (SLS) heuristics in this paper. In particular, we consider subset selection problems where the cost of fitness function evaluation needs to be accounted for; here, cost can be the computation time or energy cost of a fitness evaluation. We propose and study an SLS method, SLS4CFF ("SLS for costly fitness functions"), tailored to such problems, which is an amalgamation of several existing heuristics. We develop a homogeneous Markov chain model that explicitly represents both the fitness and cost of subset selection with SLS4CFF. This Markov chain, which can be lumped or compressed for certain fitness and cost functions, enables us to better understand and analyze hyperparameter optimization in a principled manner, via expected hitting times. Studies with synthetic and real-world problems improve the understanding of SLS and demonstrate the importance of cost-awareness.
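The hitting-time quantity behind this analysis can be illustrated in a few lines. This toy value-iteration solver is our own sketch (not the paper's lumped-chain machinery): it computes the expected number of steps to reach a target state by iterating h = 1 + P h with h fixed to 0 at the target.

```python
def hitting_times(P, target, iters=2000):
    """Expected number of steps to reach `target` from each state of a
    Markov chain with transition matrix P (list of rows), computed by
    value iteration on h = 1 + P h with h[target] = 0."""
    n = len(P)
    h = [0.0] * n
    for _ in range(iters):
        h = [0.0 if i == target else
             1.0 + sum(P[i][j] * h[j] for j in range(n))
             for i in range(n)]
    return h
```

For a two-state chain that stays put with probability 0.5 and moves to the absorbing target otherwise, the expected hitting time from the transient state converges to 2.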
In real-world optimisation, it is common to face several sub-problems that interact and together form the main problem. The inter-dependency between the sub-problems makes it impossible to solve such a problem by focusing on only one component. The traveling thief problem (TTP) belongs to this category and is formed by the integration of the traveling salesperson problem (TSP) and the knapsack problem (KP). In this paper, we investigate the inter-dependency of the TSP and the KP by means of quality diversity (QD) approaches. QD algorithms provide a powerful tool not only to obtain high-quality solutions but also to illustrate the distribution of high-performing solutions in the behavioural space. We introduce a MAP-Elites-based evolutionary algorithm using well-known TSP and KP search operators, taking the TSP and KP scores as behavioural descriptors. We then conduct comprehensive experimental studies that show the usefulness of the QD approach applied to the TTP. First, we provide insights regarding high-quality TTP solutions in the TSP/KP behavioural space. Afterwards, we show that better solutions for the TTP can be obtained using our QD approach, which improves the best-known solution for a wide range of TTP instances used for benchmarking in the literature.
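The MAP-Elites archive at the heart of such an approach is simple to sketch. The following is a generic 1-D toy (the paper uses the TSP and KP scores as a 2-D descriptor and problem-specific operators; all names here are ours): an archive indexed by a discretised behavioural descriptor, keeping the best solution per cell.

```python
import random

def map_elites(sample, mutate, fitness, descriptor, bins, iters=1000, seed=0):
    """Minimal MAP-Elites sketch: maintain one elite per descriptor
    cell; new candidates come from mutating a random elite (or from
    `sample` while the archive is empty)."""
    random.seed(seed)
    archive = {}
    for _ in range(iters):
        x = mutate(random.choice(list(archive.values()))) if archive else sample()
        cell = min(int(descriptor(x) * bins), bins - 1)
        if cell not in archive or fitness(x) > fitness(archive[cell]):
            archive[cell] = x
    return archive
```

Illuminating the behavioural space then amounts to reading the archive: each cell shows the best fitness attainable at that descriptor combination.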
Gray-box optimization exploits knowledge of the true direct gene dependencies, represented by the Variable Interaction Graph (VIG). This knowledge is utilized in many ways, e.g., for improving the efficiency of fitness computation and for proposing more effective operators for Genetic Algorithms (GAs). In black-box optimization, the underlying problem structure is not known. Therefore, linkage learning techniques were proposed to at least approximate gene relations and improve the evolutionary search. However, since these techniques are based on predictions, they were not suitable for building the VIG. The proposal of Empirical Linkage Learning (ELL) has changed this situation. In ELL, the prediction that two genes are dependent is replaced by certainty. Additionally, ELL techniques are proven never to mark two independent genes as dependent. Therefore, we use ELL to build the empirical VIG and propose the fusion of ELL and gray-box operators. On this basis, we propose two new mechanisms that detect (with certainty) missing linkage between two groups of genes and the fact that the population is stuck. We integrate these mechanisms with a gray-box operator (partition crossover) in the proposed Dark Gray Genetic Algorithm, which is shown to be highly competitive with other state-of-the-art GAs.
The Network Alignment problem is a hard Combinatorial Optimization problem with a wide range of applications, especially in computational biology. Given two (or more) networks, the goal is to find a mapping between their respective nodes that preserves the topological and functional structure of the networks. In this work we extend a novel ant colony optimization approach for network alignment by adding a recently proposed Negative Learning mechanism. In particular, information for Negative Learning is obtained by solving sub-instances of the tackled problem instances at each iteration by means of an Integer Linear Programming solver. The results show that the proposed algorithm not only outperforms the standard ant colony optimization approach but also current state-of-the-art methods from the literature.
We propose looking at the phenomenon of fitness landscape funnels in terms of their depth. In particular, we examine how the depth of funnels in Local Optima Networks (LONs) of benchmark Quadratic Assignment Problem instances relates to metaheuristic performance. Three distinct iterated local search (ILS) acceptance strategies are considered: better-or-equal (standard), annealing-like, and restart. Funnel measurements are analysed for their connection to ILS performance on the underlying combinatorial problems. We communicate the findings through hierarchical clustering of LONs, network visualisations, subgroup analysis, correlation analysis, and Random Forest regression models. The results show that funnel depth is associated with search difficulty, and that there is an interplay between funnel structure and acceptance strategy. Standard and annealing acceptance work better than restart on both deep-funnel and shallow-funnel problems; standard acceptance is the best strategy when the optimal funnel(s) are deep, while annealing acceptance is superior when they are shallow. Regression models including funnel depth measurements could explain up to 96% of ILS runtime variance (with annealing-like acceptance). The runtime of ILS with restarts was less explainable using funnel features.
Perturbing solutions is a key factor in iterated local search (ILS). The standard approach for perturbing a solution is to randomly change a fixed number of decision variables of the current local optimum. Finding suitable values of the perturbation strength is difficult: it is desirable that consecutive local optima generated by ILS be close to each other and correlated in fitness, but if the perturbation is too small, the search can get stuck in the same local optimum. We propose a new perturbation strategy for ILS applied to pseudo-Boolean optimization problems, in which decision variables that interact are perturbed together. These interactions are identified in a variable interaction graph (VIG), which is available in gray-box optimization. For black-box optimization, we propose a local search strategy that estimates an empirical VIG. Theoretical and experimental results show that perturbation based on the VIG is efficient in random and adjacent NK landscapes. Results also show that the proposed local search strategy was able to build empirical VIGs containing more than 97% of the edges of the true VIG.
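The VIG-guided perturbation idea can be sketched as follows (our own simplification; the paper's exact growth rule may differ): grow a connected set of interacting variables by walking the VIG from a start variable, then flip those bits together, so the perturbation respects the problem's interaction structure.

```python
import random

def vig_perturbation(x, vig, start, size, seed=None):
    """Flip a connected set of `size` interacting variables of the
    binary string x, grown by walking the variable interaction
    graph `vig` (dict: variable -> list of interacting variables)."""
    rng = random.Random(seed)
    chosen, frontier = {start}, set(vig[start])
    while len(chosen) < size and frontier:
        v = rng.choice(sorted(frontier))
        chosen.add(v)
        frontier |= set(vig[v])
        frontier -= chosen
    return [1 - b if i in chosen else b for i, b in enumerate(x)]
```

On a chain-structured VIG, starting at variable 0 with size 2 flips variables 0 and 1, i.e. two variables that actually interact.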
The Capacitated Arc Routing Problem (CARP) aims at assigning vehicles to serve tasks which are located at different arcs in a graph. However, the originally planned routes are easily affected by different dynamic events like newly added tasks. This gives rise to Dynamic CARP (DCARP) instances, which need to be efficiently optimized for new high-quality service plans in a short time. However, it is unknown which dynamic events make DCARP instances especially hard to solve. Therefore, in this paper, we provide an investigation of the influence of different dynamic events on DCARP instances from the perspective of fitness landscape analysis based on a recently proposed hybrid local search (HyLS) algorithm. We generate a large set of DCARP instances based on a variety of dynamic events and analyze the fitness landscape of these instances using several different measures such as fitness correlation length. From the empirical results we conclude that cost-related events have no significant impact on the difficulty of DCARP instances, but instances which require more new vehicles to serve the remaining tasks are harder to solve. These insights improve our understanding of the DCARP instances and pave the way for future work on improving the performance of DCARP algorithms.
The Uncertain Capacitated Arc Routing Problem (UCARP) is a well-known combinatorial optimisation problem with many real-world applications. Genetic Programming is usually utilised to handle UCARP by evolving effective routing policies, which can respond to the uncertain environment in real time. Previous studies mainly focus on the effectiveness of routing policies but ignore their interpretability. In this paper, we focus on post-hoc interpretability, which explains a pre-trained complex routing policy. Unlike the settings addressed by existing explanation methods for classification/regression models, the behaviour of a routing policy is characterised as a ranking process rather than the prediction of a single output. To address this issue, this paper proposes a Local Ranking Explanation (LRE) method for GP-evolved routing policies for UCARP. Given a UCARP decision situation, LRE trains a linear model that gives the same ranking of the candidate tasks as the explained routing policy. The experimental results demonstrate that LRE obtains interpretable linear models whose behaviour is highly correlated and consistent with the original routing policy in most decision situations. By analysing the coefficients and attribute importance of the linear model, we provide a local explanation of the original routing policy in a decision situation.
Interest in reinforcement learning (RL) has recently surged due to the application of deep learning techniques, but these connectionist approaches are opaque compared with symbolic systems. Learning Classifier Systems (LCSs) are evolutionary machine learning systems that can be categorised as eXplainable AI (XAI) due to their rule-based nature. Michigan LCSs are commonly used in RL domains because the alternative Pittsburgh systems (e.g. SAMUEL) suffer from complex algorithmic design and high computational requirements; however, Pittsburgh systems can produce more compact and interpretable solutions than Michigan systems. We develop two novel Pittsburgh LCSs to address RL domains: PPL-DL and PPL-ST. The former acts as a "zeroth-level" system, and the latter revisits SAMUEL's core Monte Carlo learning mechanism for estimating rule strength. We compare our two Pittsburgh systems to the Michigan system XCS across deterministic and stochastic FrozenLake environments. Results show that PPL-ST performs on par with or better than PPL-DL and outperforms XCS in the presence of high levels of environmental uncertainty. Rulesets evolved by PPL-ST can achieve higher performance than those evolved by XCS, in a more parsimonious and therefore more interpretable fashion, albeit with higher computational cost. This indicates that PPL-ST is an LCS well-suited to producing explainable policies in RL domains.
Coevolutionary algorithms have effectively trained multiagent teams to collectively solve complex problems. However, in many real-world applications, changes to the environment or agent functionality require agents to function well with multiple different teams. In this paper, we provide a counterfactual-state-based shaped fitness evaluation that produces an agent-specific signal promoting effective cooperation across a variety of teams. The key insight leading to this result is that the shaped fitnesses across multiple teams can be aggregated because those performances are independent of each other. As a result, this approach leads to a single signal that captures an agent's performance across multiple teams. We show that this method provides significant improvement over standard multiagent fitness-shaping methods in learning robust cooperative behavior.
DiBB (for Distributing Black-Box) is a meta-algorithm and framework that addresses the decades-old scalability issue of Black-Box Optimization (BBO), including Evolutionary Computation. Algorithmically, it does so by creating out-of-the-box a Partially Separable (PS) version of any existing black-box algorithm. This is done by leveraging expert knowledge about the task at hand to define blocks of parameters expected to have significant correlation, such as weights entering the same neuron or layer in a neuroevolution application. DiBB distributes the computation to a set of machines without further customization, while still retaining the advanced features of the underlying BBO algorithm, such as scale invariance and step-size adaptation, which are typically lost in recent distributed ES implementations. This is achieved by instantiating a separate instance of the underlying base algorithm for each block, running on a dedicated machine, with DiBB handling communication and constructing complete individuals for evaluation on the original task. DiBB's performance scales constantly with the number of parameter blocks defined, which should allow for unprecedented applications on large clusters. Our reference implementation (Python, on GitHub and PyPI) demonstrates a 5x speed-up on COCO/BBOB using our new PS-CMA-ES. We also showcase a neuroevolution application (11 590 weights) on the PyBullet Walker2D with our new PS-LM-MA-ES.
To achieve coordination in multiagent systems such as air traffic control or search and rescue, agents must not only evolve their policies, but also adapt to the behaviors of other agents. However, extending coevolutionary algorithms to complex domains is difficult because agents evolve in the dynamic environment created by the changing policies of other agents. This problem is exacerbated when the teams consist of diverse asymmetric agents (agents with different capabilities and objectives), making it difficult for agents to evolve complementary policies. Quality-Diversity methods solve part of the problem by allowing agents to discover not just optimal, but diverse behaviors, but are computationally intractable in multiagent settings. This paper introduces a multiagent learning framework to allow asymmetric agents to specialize and explore diverse behaviors needed for coordination in a shared environment. The key insight of this work is that a hierarchical decomposition of diversity search, fitness optimization, and team composition modeling allows the fitness on the team-wide objective to direct the diversity search in a dynamic environment. Experimental results in multiagent environments with temporal and spatial coupling requirements demonstrate the diversity of acquired agent synergies in response to a changing environment and team compositions.
Feature selection is an approach to selecting the best set of features from a feature pool. Its goal is to increase the performance of a machine learning model by providing sufficient information while avoiding redundant or irrelevant features. Due to the high dimensionality of data in practical problems, approaches ranging from genetic algorithms to reinforcement learning have recently been applied to this task. In this work, we propose a novel feature selection architecture that uses metaheuristic techniques combined with evolutionary algorithms and chaos theory to select the best features for a model. It uses the concept of evolution, which guides the algorithm along the best path, together with a chaotic map function to create new random subsets of features. The backbone of the algorithm, the mutation and crossover operators, is inspired by genetic algorithms; these methods strengthen both the exploration and exploitation of the search space. We tested the proposed method on 10 datasets using different machine learning models and achieved significant improvements on each dataset compared to other methods from the literature.
Medical image processing often suffers from a shortage of images for diagnosis. Generative Adversarial Networks (GANs) provide a method to train generative models for data augmentation, and synthesized images can be used to improve the robustness of computer-aided diagnosis systems. However, GANs are difficult to train due to unstable training dynamics that may arise during the learning process, e.g., mode collapse and vanishing gradients. This paper focuses on Lipizzaner, a GAN training framework that combines spatial coevolution with gradient-based learning and has been used to mitigate GAN training pathologies. Lipizzaner improves performance by taking advantage of its distributed nature and running at scale; thus, the Lipizzaner algorithm and implementation can be scaled robustly to high-performance computing (HPC) systems to provide more accurate generative models. We address medical imaging data augmentation to create chest X-ray images by using Lipizzaner on the HPC infrastructure provided by Oak Ridge National Laboratory's Summit supercomputer. The experimental analysis shows improved performance with increasing scale of the Lipizzaner GAN training. We also demonstrate that distributed coevolutionary learning improves performance even when using suboptimal neural network architectures due to hardware constraints.
Curriculum learning allows complex tasks to be mastered via incremental progression over 'stepping stone' goals towards a final desired behaviour. Typical implementations learn locomotion policies for challenging environments through gradual complexification of a terrain mesh generated through a parameterised noise function. To date, researchers have predominantly generated terrains from a limited range of noise functions, and the effect of the generator on the learning process is underrepresented in the literature. We compare popular noise-based terrain generators to two indirect encodings, CPPN and GAN. To allow direct comparison between both direct and indirect representations, we assess the impact of a range of representation-agnostic MAP-Elites feature descriptors that compute metrics directly from the generated terrain meshes. Next, performance and coverage are assessed when training a humanoid robot in a physics simulator using the PPO algorithm. Results describe key differences between the generators that inform their use in curriculum learning, and present a range of useful feature descriptors for uptake by the community.
Structural design of neural networks is crucial for the success of deep learning. While most prior works in evolutionary learning aim at directly searching the structure of a network, few attempts have been made on another promising track, channel pruning, which has recently made major headway in designing efficient deep learning models. In fact, prior pruning methods adopt hand-crafted pruning functions to score a channel's importance, which requires domain knowledge and can be sub-optimal. To this end, we pioneer the use of genetic programming (GP) to discover strong pruning metrics automatically. Specifically, we craft a novel design space to express high-quality and transferable pruning functions, ensuring an end-to-end evolution process in which no manual modification of the evolved functions is needed for their transferability after evolution. Unlike prior methods, our approach provides both compact pruned networks for efficient inference and novel closed-form pruning metrics that are mathematically explainable and thus generalizable to different pruning tasks. While the evolution is conducted on small datasets, our functions show promising results when applied to more challenging datasets, different from those used in the evolution process. For example, on ILSVRC-2012, an evolved function achieves state-of-the-art pruning results.
The uncertainty associated with predictions is vital for decision making in financial time series forecasting. To enrich forecasts, quantile interval predictions are generated and their quality is evaluated using two conflicting indicators: quantile coverage error and quantile estimation error, which are then optimized using multi-objective evolutionary algorithms (MOEAs). High-performance quantile multi-horizon predictions are computed by an attention-based deep learning model built on transformers, and the weights of its last layer are optimized using NSGA-II and NSGA-III. The Pareto fronts obtained in this work show that evolutionary algorithms can find a wide range of solutions, allowing the decision maker to efficiently fine-tune the quantile forecasts without retraining the neural network. The results show that the decision maker can choose solutions whose risk-range variation is as high as 169% with only a 5% increase in the original loss function. The dataset used in this work consists of S&P 500 Futures from Jan-2015 to Jun-2021 at one-hour frequency.
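The two indicators can be made concrete. Below is a minimal sketch (helper names are ours) of the pinball loss that underlies quantile estimation error and of an interval coverage error measured against a nominal coverage level:

```python
def pinball_loss(y, q_hat, tau):
    """Quantile (pinball) loss of prediction q_hat for the
    tau-quantile of outcome y; the basis of quantile estimation error."""
    diff = y - q_hat
    return max(tau * diff, (tau - 1) * diff)

def coverage_error(ys, lows, highs, nominal):
    """Quantile coverage error: absolute gap between the empirical
    coverage of the prediction intervals [low, high] and the
    nominal coverage level."""
    covered = sum(l <= y <= h for y, l, h in zip(ys, lows, highs))
    return abs(covered / len(ys) - nominal)
```

Tightening the intervals lowers estimation error but raises coverage error, which is exactly the conflict the MOEAs trade off.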
Echo State Networks represent a type of recurrent neural network with a large randomly generated reservoir and a small number of readout connections trained via linear regression. The most common topology of the reservoir is a fully connected network of up to thousands of neurons. Over the years, researchers have introduced a variety of alternative reservoir topologies, such as a circular network or a linear path of connections. When comparing the performance of different topologies or other architectural changes, it is necessary to tune the hyperparameters for each of the topologies separately since their properties may significantly differ. The hyperparameter tuning is usually carried out manually by selecting the best performing set of parameters from a sparse grid of predefined combinations. Unfortunately, this approach may lead to underperforming configurations, especially for sensitive topologies. We propose an alternative approach of hyperparameter tuning based on the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Using this approach, we have improved multiple topology comparison results by orders of magnitude, suggesting that topology alone does not play as important a role as properly tuned hyperparameters.
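As a toy illustration of self-adaptive continuous search, the kind of mechanism CMA-ES generalises to a full covariance matrix, consider a (1+1)-ES with the classic 1/5th-success-rule step-size adaptation. This sketch is ours and is far simpler than CMA-ES itself; it only shows why an adaptive strategy beats a fixed sparse grid:

```python
import math
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=500, seed=0):
    """(1+1)-ES minimising f, with 1/5th-success-rule step-size
    adaptation: widen sigma after a successful step, shrink it after
    a failed one, so roughly 1 in 5 steps succeeds at equilibrium."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= math.exp(1 / 3)    # success: widen the step
        else:
            sigma *= math.exp(-1 / 12)  # failure: shrink the step
    return x, fx
```

Hyperparameter tuning then means calling such a strategy with f set to, e.g., validation error as a function of reservoir hyperparameters.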
Learning Classifier Systems (LCSs) are a family of versatile rule-based machine learning algorithms. Despite their long research history, the foundations of most LCSs are still informal because they were developed in an ad-hoc manner. An exception is the fully Bayesian LCS described by Drugowitsch in his 2008 book. In the present paper we briefly reiterate the central points of that system and then showcase our Python implementation of it, reporting on our attempt to replicate Drugowitsch's empirical results as well as on a preliminary comparison study with the well-known LCS XCSF on the same learning tasks. We are able to replicate parts of Drugowitsch's results and explain the remaining differences. The comparison results lead us to conclude that the system may be competitive and exhibits unique features. Finally, we identify its current greatest shortcomings and, based on those, the next steps towards making it more universally usable.
This paper focuses on the concept of "absumption", which restrains over-general rules by decomposing them into several more specific rules. We propose a novel absumption mechanism for continuous spaces, improving the conventional absumption to achieve high performance (e.g., in acquired reward) in noisy environments, and integrate it into the XCS for real-valued inputs (XCSR) to evaluate it through a comparison with the conventional absumption. Concretely, the proposed mechanism, based on Overgenerality and Condition-clustering based specialization (Absumption-OC), manipulates the balance between the overgenerality of rules and the specialization of their conditions. Intensive experiments on three different types of continuous-space problems reveal the following: (1) XCSR with Absumption-OC achieves statistically significantly better performance than XCSR with the conventional absumption in terms of acquired reward, system error, and population size; and (2) the effectiveness of Absumption-OC becomes clearer in noisy reward environments than in noiseless ones.
This paper focuses on rule representation in Learning Classifier Systems (LCSs) and proposes a flexible representation mechanism that can generate a variety of matching-area shapes with a single rule condition of a classifier. Concretely, the proposed representation mechanism changes the shape of the matching area according to the product of the values of probability distributions across multiple dimensions. As one implementation, this paper introduces the beta distribution into XCS for continuous spaces. Intensive experiments on different types of continuous-space problems reveal the following: XCS based on the beta distribution (1) can match linear and curved shapes using the same classifier; (2) can obtain higher reward values with the same number of or fewer classifiers than conventional representations (i.e., the hyperrectangular and hyperellipsoidal representations); and (3) is robust to variations in the shape of the class boundary, contributing to stable classification performance.
Explainable artificial intelligence (XAI) is an important and rapidly expanding research topic. The goal of XAI is to gain trust in a machine learning (ML) model through clear insights into how the model arrives at its predictions. Genetic programming (GP) is often cited as being uniquely well-suited to contribute to XAI because of its capacity to learn (small) symbolic models that have the potential to be interpreted. Nevertheless, like many ML algorithms, GP typically results in a single best model. However, in practice, the best model in terms of training error may well not be the most suitable one as judged by a domain expert, for various reasons: overfitting, the existence of multiple different models with similar accuracy, and unwanted errors on particular data points that typical accuracy measures like mean squared error hide. Hence, to increase the chances that domain experts deem a resulting model plausible, it becomes important to be able to explicitly search for multiple, diverse, high-quality models that trade off different notions of accuracy. In this paper, we achieve exactly this with a novel multi-modal, multi-tree, multi-objective GP approach that extends GP-GOMEA, a modern model-based GP algorithm already known to be effective at searching for small expressions.
AutoML tackles the problem of automatically configuring machine learning pipelines for specific data analysis problems. These pipelines may include methods for preprocessing and classifying data. There are many strategies for finding the best pipeline configuration, and most address the problem from an optimization perspective, using Bayesian optimization or evolutionary algorithms. However, little is known about the shape of the search space these methods work on. What is known is that these spaces include categorical, continuous, and conditional hyperparameters, and dealing with them may not be trivial. In this direction, this paper analyses AutoML search spaces using local optima networks (LONs) to better understand the global properties of the search space. Knowing these properties allows us to improve current optimization methods. By analysing the metrics extracted from the LONs, we gained many insights into the difficulty of the problem.
Dimensionality reduction (DR) is an important technique for data exploration and knowledge discovery. However, most of the main DR methods are either linear (e.g., PCA), do not provide an explicit mapping between the original data and its lower-dimensional representation (e.g., MDS, t-SNE, isomap), or produce mappings that cannot be easily interpreted (e.g., kernel PCA, neural-based autoencoders). Recently, genetic programming (GP) has been used to evolve interpretable DR mappings in the form of symbolic expressions. There are a number of ways in which GP can be used to this end, yet no study exists that compares them. In this paper, we fill this gap by comparing existing GP methods as well as devising new ones. We evaluate our methods on several benchmark datasets based on predictive accuracy and on how well the original features can be reconstructed using only the lower-dimensional representation. Finally, we qualitatively assess the resulting expressions and their complexity. We find that various GP methods can be competitive with state-of-the-art DR algorithms and that they have the potential to produce interpretable DR mappings.
Quantum and quantum-inspired optimisation algorithms are designed to solve problems represented in binary, quadratic and unconstrained form. Combinatorial optimisation problems are therefore often formulated as Quadratic Unconstrained Binary Optimisation Problems (QUBO) to solve them with these algorithms. Moreover, these QUBO solvers are often implemented using specialised hardware to achieve enormous speedups, e.g. Fujitsu's Digital Annealer (DA) and D-Wave's Quantum Annealer. However, these are single-objective solvers, while many real-world problems feature multiple conflicting objectives. Thus, a common practice when using these QUBO solvers is to scalarise such multi-objective problems into a sequence of single-objective problems. Due to design trade-offs of these solvers, formulating each scalarisation may require more time than finding a local optimum. We present the first attempt to extend the algorithm supporting a commercial QUBO solver as a multi-objective solver that is not based on scalarisation. The proposed multi-objective DA algorithm is validated on the bi-objective Quadratic Assignment Problem. We observe that algorithm performance significantly depends on the archiving strategy adopted, and that combining DA with non-scalarisation methods to optimise multiple objectives outperforms the current scalarised version of the DA in terms of final solution quality.
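The scalarisation practice the paper moves beyond can be sketched in a few lines (helper names are ours): each weight w yields one single-objective QUBO matrix to hand to the solver, so sampling the Pareto front requires a whole sequence of solver calls.

```python
def scalarise_qubo(Q1, Q2, w):
    """Weighted-sum scalarisation of a bi-objective QUBO: the matrix
    w*Q1 + (1-w)*Q2 defines a single-objective QUBO a solver such
    as the DA can handle."""
    n = len(Q1)
    return [[w * Q1[i][j] + (1 - w) * Q2[i][j] for j in range(n)]
            for i in range(n)]

def qubo_value(Q, x):
    """QUBO energy x^T Q x of a binary vector x."""
    n = len(Q)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
```

A non-scalarised multi-objective solver instead keeps an archive of trade-off solutions within a single run, avoiding the per-weight formulation cost noted above.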
Evolutionary algorithms have been regularly used for solving multi- and many-objective optimization problems. The effectiveness of such methods is generally determined by their ability to generate a well-distributed front (diversity) that is as close as possible to the optimal Pareto front (proximity). Analysis of current multi-objective evolutionary frameworks shows that they are still sub-optimal and exhibit poor versatility across different geometries and dimensionalities. To address this, in this paper we present AGE-MOEA++, a new multi- and many-objective evolutionary algorithm that: (1) incorporates the principle of Pareto front (PF) shape fitting to enhance convergence in differently shaped, high-dimensional objective spaces, and (2) adapts K-means++ fundamentals in order to better manage diversity on non-uniformly distributed PFs. The empirical study shows that our proposal achieves better results than state-of-the-art approaches in terms of IGD and is competitive in terms of GD.
Due to advancements in hardware capabilities and computation power, multi-objective optimization has recently been used in many industrial problems. These problems usually have multiple objectives of a conflicting nature. To solve such problems, Multi-objective Evolutionary Algorithms (MOEAs) utilize Non-dominated Sorting (NDS) algorithms to rank the viability of the solutions efficiently. Researchers have long focused on improving the time complexity of NDS algorithms due to their wide range of applications. With the increase in the use of GPUs for general-purpose scientific computing, it is now becoming possible to reduce the computation cost of such algorithms. In this paper, we analyze one such NDS algorithm, Corner Sort, and highlight two areas within it with high potential for parallelism. We propose a highly efficient, parallelized version of Corner Sort, implemented using the CUDA framework. Utilizing the thousands of cores in a GPU, our algorithm is able to break the solution set into smaller chunks and process them simultaneously. Furthermore, the comparison between two solutions across all the objectives is done in parallel as well. On benchmark datasets, our algorithm performs up to 10x faster than the serial algorithm, and its performance improves for larger datasets, irrespective of the number of objectives.
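The core primitive that such NDS algorithms parallelise is the all-objective comparison deciding whether one solution dominates another. The following CPU sketch (a generic illustration, not the paper's CUDA Corner Sort) shows that primitive and the resulting first non-dominated front, assuming minimisation of all objectives:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(pop):
    """Return the non-dominated solutions of a population of objective vectors."""
    return [p for p in pop if not any(dominates(q, p) for q in pop)]

pop = [(1, 5), (2, 2), (4, 1), (3, 3)]
# (3, 3) is dominated by (2, 2); the remaining points are mutually non-dominated.
front = first_front(pop)
```

On a GPU, each pairwise `dominates` check, and each per-objective comparison within it, can be assigned to its own thread, which is precisely the parallelism the paper exploits.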
The design of effective features enabling the development of automated landscape-aware techniques requires addressing a number of inter-dependent issues. In this paper, we are interested in contrasting the amount of budget devoted to the computation of features with respect to: (i) the effectiveness of the features in grasping the characteristics of the landscape, and (ii) the gain in accuracy when solving an unknown problem instance by means of a feature-informed automated algorithm selection approach. We consider multi-objective combinatorial landscapes where, to the best of our knowledge, no in-depth investigations have been conducted so far. We study simple cost-adjustable sampling strategies for extracting different state-of-the-art features. Based on extensive experiments, we report a comprehensive analysis of the impact of sampling on landscape feature values and on the subsequent automated algorithm selection task. In particular, we identify different global trends of feature values leading to non-trivial cost-vs-accuracy trade-offs. Besides, we provide evidence that the sampling strategy can improve the prediction accuracy of automated algorithm selection. Interestingly, this holds independently of whether the sampling cost is taken into account or not in the overall solving budget.
So far, multi-objective NK landscapes have been investigated under the assumption of a homogeneous nature of the involved objectives in terms of difficulty. However, we argue that problems with heterogeneous objectives, e.g., in terms of multi-modality, can be challenging for multi-objective evolutionary algorithms and deserve further consideration. In this paper, we propose a model of multi-objective NK landscapes, where each objective has a different degree of variable interactions (K), as a benchmark to investigate heterogeneous multi-objective optimization problems. We show that the use of a rank-annotated neighborhood network with labeled locally optimal solutions, together with landscape metrics extracted from the heterogeneous objectives, thoroughly characterizes bi-objective NK landscapes with different levels of heterogeneity among the objectives.
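The heterogeneous model can be sketched compactly: each objective m sums n contribution tables, where bit i interacts with K[m] randomly chosen other bits, and the per-objective K differs. The snippet below is our own simplification (contribution values drawn lazily and memoised), not the paper's exact generator:

```python
import random

def make_mnk(n, Ks, seed=0):
    """Build one function per objective; objective m uses epistasis degree Ks[m]."""
    rng = random.Random(seed)
    objs = []
    for K in Ks:
        # For each bit, pick K distinct interaction partners.
        neigh = [rng.sample([j for j in range(n) if j != i], K) for i in range(n)]
        tables = [{} for _ in range(n)]
        def f(x, neigh=neigh, tables=tables, rng=rng):
            total = 0.0
            for i in range(n):
                key = (x[i],) + tuple(x[j] for j in neigh[i])
                if key not in tables[i]:          # lazily fill contribution table
                    tables[i][key] = rng.random()
                total += tables[i][key]
            return total / n                       # normalised to [0, 1]
        objs.append(f)
    return objs

f1, f2 = make_mnk(n=8, Ks=(1, 4))   # heterogeneous: smooth (K=1) vs rugged (K=4)
x = (0, 1, 1, 0, 1, 0, 0, 1)
```

Raising K[m] increases the ruggedness (multi-modality) of objective m only, which is exactly the heterogeneity the benchmark is designed to control.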
Bayesian optimization is often used to optimize expensive black-box optimization problems with long simulation times. Typically, Bayesian optimization algorithms propose one solution per iteration. The downside of this strategy is the sub-optimal use of available computing power. To efficiently use the available computing power (or a number of licenses, etc.), we introduce a multi-point acquisition function for parallel efficient multi-objective optimization algorithms. The multi-point acquisition function is based on the hypervolume contribution of multiple solutions simultaneously, leading to well-spread solutions along the Pareto frontier. By combining this acquisition function with a constraint handling technique, multiple feasible solutions can be proposed and evaluated in parallel in every iteration. The hypervolume and feasibility of the solutions can easily be estimated by using multiple cheap radial basis functions as surrogates with different configurations. The acquisition function can be used with different population sizes and even for one-shot optimization. The strength and generalizability of the new acquisition function are demonstrated by optimizing a set of constrained multi-objective black-box problem instances. The experiments show substantial time savings from using our novel multi-point acquisition function, while only marginally worsening the hypervolume after the same number of function evaluations.
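For the bi-objective case, the hypervolume and the contribution of each point (the quantity such an acquisition function trades off across a batch) can be computed in a few lines. This is a generic textbook sketch for minimisation problems, not the paper's implementation:

```python
def hypervolume_2d(points, ref):
    """Hypervolume of a mutually non-dominated 2-D front w.r.t. a reference point."""
    pts = sorted(points)          # ascending in f1 implies descending in f2 on a front
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # add the rectangle owned by this point
        prev_f2 = f2
    return hv

def contributions(points, ref):
    """Exclusive hypervolume contribution of each point (leave-one-out)."""
    total = hypervolume_2d(points, ref)
    return [total - hypervolume_2d(points[:i] + points[i + 1:], ref)
            for i in range(len(points))]

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
hv = hypervolume_2d(front, ref=(5.0, 5.0))
```

Proposing a batch that jointly maximises such contributions, rather than scoring points one at a time, is what yields the well-spread parallel proposals described above.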
The (1 + (λ, λ)) genetic algorithm is a recently proposed single-objective evolutionary algorithm with several interesting properties. We show that its main working principle, mutation with a high rate and crossover as a repair mechanism, can also be transferred to multi-objective evolutionary computation. We define the (1 + (λ, λ)) global SEMO algorithm, a variant of the classic global SEMO algorithm, and prove that it optimizes the OneMinMax benchmark asymptotically faster than the global SEMO. Following the single-objective example, we design a one-fifth-rule-inspired dynamic parameter setting (to the best of our knowledge, for the first time in discrete multi-objective optimization) and prove that it further improves the runtime to O(n²), whereas the best runtime guarantee for the global SEMO is only O(n² log n).
Dynamic multi-objective optimization problems (DMOPs) are widely accepted to be more challenging than stationary problems due to the time-dependent nature of the objective functions and/or constraints. Evaluation of purpose-built algorithms for DMOPs is often performed on narrow selections of dynamic instances with differing change magnitudes and frequencies, or on a limited selection of problems. In this paper, we focus on the reproducibility of simulation experiments for parameters of DMOPs. Our framework is based on an extension of PlatEMO, allowing for the reproduction of results and performance measurements across a range of dynamic settings and problems. A baseline schema for dynamic algorithm evaluation is introduced, which provides a mechanism to interrogate the performance and optimization behaviours of well-known evolutionary algorithms that were not designed specifically for DMOPs. Importantly, by determining the maximum capability of non-dynamic multi-objective evolutionary algorithms, we can establish the minimum capability required of purpose-built dynamic algorithms to be useful. The simplest modifications to manage dynamic changes introduce diversity: allowing non-dynamic algorithms to incorporate mutated/random solutions after change events determines the improvement possible with minor algorithm modifications. Future expansion to include current dynamic algorithms will enable the reproduction of their results and the verification of their abilities and performance across the DMOP benchmark space.
The performance of multiobjective algorithms varies across problems, making it hard to develop new algorithms or apply existing ones to new problems. To simplify the development and application of new multiobjective algorithms, there has been an increasing interest in their automatic design from component parts. These automatically designed metaheuristics can outperform their human-developed counterparts. However, it is still uncertain which components contribute most to their performance improvement. This study introduces a new methodology to investigate the effects of the components of the final configuration of an automatically designed algorithm. We apply this methodology to a well-performing Multiobjective Evolutionary Algorithm Based on Decomposition (MOEA/D) designed by the irace package on nine constrained problems. We then contrast the impact of the algorithm components in terms of their Search Trajectory Networks (STNs), the diversity of the population, and the hypervolume. Our results indicate that the most influential components were the restart and update strategies, with higher increments in performance and more distinct metric values. Moreover, their relative influence depends on the problem difficulty: not using the restart strategy was more influential on problems where MOEA/D performs better, while the update strategy was more influential on problems where MOEA/D performs worst.
One essential issue in surrogate-assisted evolutionary algorithms (SAEAs) is how to evaluate solutions and find candidates for evolution without resorting to the computationally expensive real objective function. While most studies use regression models as surrogate models for SAEAs, some recent works propose to use classification models for single-objective expensive optimization, as classification models are more stable and much easier to train with the limited available data. However, for multi-objective expensive optimization problems, it is more challenging to classify the solutions into different levels according to their quality, and thus the use of classification-based surrogate models for multi-objective SAEAs has not been explored in depth. To this end, in this paper we propose a classification-assisted level-based learning swarm optimizer for expensive multi-objective optimization. First, a preference relationship among individuals, taking both Pareto dominance and crowding distance into account, is defined to divide the whole population into different quality levels. Then, a classification model is trained as the surrogate. In addition, a selection strategy is devised in decision space to acquire solutions in the sparse parts of the Pareto front. Experimental results validate the superiority and efficiency of the proposed algorithm on benchmark test functions.
Evolutionary multi-objective optimization plays a vital role in solving many complex real-world optimization problems. Numerous approaches have been proposed over the years, and popular methods such as NSGA and its variants incorporate non-dominated sorting selection into evolutionary genetic algorithms to extract competing Pareto-optimal solutions from all over the objective space. However, in applications where the decision-maker is interested in a region of interest, a global optimization wastes effort finding irrelevant solutions outside the preferred region. In this work, we propose an approach named ROI-NSGA-II to limit the optimization effort to a region of interest defined by boundaries provided by the decision-maker. The ROI-NSGA-II guides the classical NSGA-II algorithm into the desired region using a modified dominance relation and concentrates solutions within this region using a modified crowding-distance-based selection. The effectiveness of our approach is demonstrated on a set of benchmark problems with up to ten objectives and on a real-world application, and the results are compared to the state-of-the-art R-NSGA-II.
A key idea in many-objective optimization is to approximate the optimal Pareto front using a set of representative non-dominated solutions. The produced solution set should be close to the optimal front (convergence) and well-diversified (diversity). Recent studies have shown that measuring both convergence and diversity depends on the shape (or curvature) of the Pareto front. In recent years, researchers have proposed evolutionary algorithms that model the shape of the non-dominated front to define environmental selection strategies that adapt to the underlying geometry. This paper first proposes a novel method for non-dominated front modeling based on the Newton-Raphson iterative method for root finding. Second, we compute the distance (diversity) between each pair of non-dominated solutions using geodesics, which are generalizations of distance on Riemannian manifolds (curved topological spaces). We introduce an evolutionary algorithm within the Adaptive Geometry Estimation based MOEA (AGE-MOEA) framework, which we call AGE-MOEA-II. Computational experiments with 17 problems from the WFG and SMOP benchmarks show that AGE-MOEA-II outperforms its predecessor AGE-MOEA as well as other state-of-the-art many-objective algorithms, i.e., NSGA-III, MOEA/D, VaEA, and LMEA.
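The geometry-estimation step can be sketched as follows: given a normalised non-dominated point x, find the exponent p with sum_i x_i^p = 1, so the front locally resembles the unit p-curve, via Newton-Raphson. The choice of the reference point and the safeguards below are our simplifications, not the paper's exact procedure:

```python
import math

def estimate_p(x, p0=1.0, iters=50, tol=1e-12):
    """Newton-Raphson on f(p) = sum_i x_i^p - 1 for a normalised point x."""
    p = p0
    for _ in range(iters):
        f = sum(v ** p for v in x) - 1.0
        df = sum((v ** p) * math.log(v) for v in x if v > 0)  # derivative w.r.t. p
        if abs(df) < 1e-300:
            break
        p_new = p - f / df
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    return p

# (0.6, 0.8) lies on the unit circle, so the estimated exponent is p = 2.
p = estimate_p((0.6, 0.8))
```

The estimated p then drives geometry-aware diversity measures, e.g. geodesic rather than Euclidean distances between solutions on the fitted curve.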
Existing extractive summarization systems suffer from the problem of identifying the correct aspects while generating a summary from multiple documents on a particular topic. Online access to documents allows readers to post comments on available documents/blogs, which may be helpful in correctly identifying the important aspects of a document in a crowd-sourcing fashion. However, comments can be unstructured and noisy, making them difficult to manage. The current paper builds an unsupervised summarization system utilizing documents and the corresponding comments by carefully selecting a subset of sentences from the document based on document-comment similarities. A binary multiobjective optimization technique exploring the search capability of differential evolution is utilized to simultaneously optimize different aspects of summarization, such as diversity between the sentences of the summary, user-attention and density-based scores giving importance to the comments with respect to the documents, and a syntactic score between the summary phrases. Extensive analysis is performed by varying the combinations of different objective functions, on a dataset containing 45 topics belonging to different news themes as well as on a new French dataset containing documents, summaries, and comments. The experimental results validate the importance of all the objective functions and the superiority over other methods.
One important issue in evolutionary multi-objective optimization (EMO) which still leaves room for improvement is the maintenance of the subset of the obtained candidate solutions that forms the approximation of the Pareto front (in short: selection or archiving). Existing archivers that are entirely based on the distances between the candidate solutions are known to reveal cyclic behavior and hence do not tap their full potential. On the other hand, there exist archivers based on ∊-dominance that guarantee monotonic behavior as well as certain approximation qualities in the limit. For such methods, however, the magnitude of the final archive size so far heavily depends on several design parameters and cannot be bounded a priori, which is, however, desired by many EMO researchers.
In this paper, we propose a new archiver, ArchiveUpdateBound, for bi-objective problems that is based on both distance and ∊-dominance. ArchiveUpdateBound aims for a uniform spread of the solutions along the Pareto front, and the archive sizes can be bounded from above. In particular, the use of ∊-dominance eliminates the occurrence of cyclic behavior. Numerical results using NSGA-II, MOEA/D, and SMS-EMOA on selected benchmark problems indicate that the use of the novel archiver significantly increases the performance of the base MOEAs.
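A generic textbook ∊-dominance archiver (not ArchiveUpdateBound itself) illustrates why the size bound and the monotonic behavior hold: the objective space is partitioned into ∊-boxes and at most one representative per box is kept, so accepted boxes can never cycle and the archive size is bounded by the number of non-dominated boxes:

```python
import math

def box(f, eps):
    """Map an objective vector to its epsilon-box index (minimisation)."""
    return tuple(math.floor(v / e) for v, e in zip(f, eps))

def update_archive(archive, f, eps):
    """archive: dict box -> objective vector. Returns the updated dict."""
    b = box(f, eps)
    # Reject if an archived box (other than the tie box) dominates this box.
    for b2 in archive:
        if b2 != b and all(c2 <= c for c2, c in zip(b2, b)):
            return archive
    # Remove boxes dominated by the new box; keep the tie box for now.
    archive = {b2: g for b2, g in archive.items()
               if b2 == b or not all(c <= c2 for c, c2 in zip(b, b2))}
    # Within the tie box, keep the point with the better objective sum.
    if b not in archive or sum(f) < sum(archive[b]):
        archive[b] = f
    return archive

A = {}
for f in [(0.9, 0.1), (0.55, 0.52), (0.1, 0.9), (0.5, 0.5)]:
    A = update_archive(A, f, eps=(0.2, 0.2))
```

Note that (0.5, 0.5) replaces (0.55, 0.52) inside the same box rather than growing the archive, which is the mechanism that keeps the size bounded.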
Recent advances in the visualization of continuous multimodal multi-objective optimization (MMMOO) landscapes brought a new perspective to their search dynamics. Locally efficient (LE) sets, often considered as traps for local search, are rarely isolated in the decision space. Rather, intersections of superposed attraction basins lead to further solution sets that at least partially contain better solutions. The Multi-Objective Gradient Sliding Algorithm (MOGSA) is an algorithmic concept developed to exploit these superpositions. While it shows promising performance on many MMMOO problems with linear LE sets, closer analysis of MOGSA revealed that it does not sufficiently generalize to a wider set of test problems. Based on a detailed analysis of the shortcomings of MOGSA, we propose a new algorithm, the Multi-Objective Landscape Explorer (MOLE). It is able to efficiently model and exploit LE sets in MMMOO problems. An implementation of MOLE is presented for the bi-objective case, and the practicality of the approach is shown in a benchmarking experiment on the Bi-Objective BBOB testbed.
This paper proposes a two-phase framework with a Bézier simplex-based interpolation method (TPB) for computationally expensive multi-objective optimization. The first phase in TPB aims to approximate a few Pareto optimal solutions by optimizing a sequence of single-objective scalar problems, and it can fully exploit a state-of-the-art single-objective derivative-free optimizer. The second phase in TPB utilizes a Bézier simplex model to interpolate the solutions obtained in the first phase, fully exploiting the fact that a Bézier simplex model can approximate the Pareto optimal solution set through its simplex structure when a given problem is simplicial. We investigate the performance of TPB on the 55 bi-objective BBOB problems. The results show that TPB performs significantly better than HMO-CMA-ES and some state-of-the-art meta-model-based optimizers.
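For two objectives, a Bézier simplex reduces to a Bézier curve, so the interpolation idea of the second phase can be sketched with De Casteljau's algorithm. The control points below are illustrative placeholders, not points fitted to phase-one solutions:

```python
def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve with control points ctrl (lists of coords) at t."""
    pts = [list(p) for p in ctrl]
    while len(pts) > 1:
        # Repeated linear interpolation between consecutive points.
        pts = [[(1 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Illustrative quadratic curve connecting two assumed endpoint solutions.
ctrl = [[0.0, 1.0], [0.0, 0.0], [1.0, 0.0]]
mid = de_casteljau(ctrl, 0.5)
```

Sampling t over [0, 1] then produces a dense approximation of the whole Pareto set from only the few expensive solutions obtained in the first phase.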
A recent runtime analysis (Zheng, Liu, Doerr (2022)) has shown that a variant of the NSGA-II algorithm can efficiently compute the full Pareto front of the OneMinMax problem when the population size is larger than the Pareto front size by a constant factor, but that this is not possible when the population size is only equal to the Pareto front size. In this work, we analyze how well the NSGA-II with a small population size approximates the Pareto front of OneMinMax. We observe experimentally and by mathematical means that already when the population size is half the Pareto front size, relatively large gaps in the Pareto front remain. The reason for this phenomenon is that the NSGA-II in the selection stage computes the crowding distance once and then repeatedly removes individuals with the smallest crowding distance without updating the crowding distance after each removal. We propose an efficient way to implement the NSGA-II using the current crowding distance. In our experiments, this algorithm approximates the Pareto front much better than the previous version. We also prove that the gaps in the Pareto front are at most a constant factor larger than the theoretical minimum.
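The remedied truncation scheme can be sketched for the bi-objective case as follows (our own simplified reimplementation, not the paper's code): the crowding distance is recomputed after every single removal, rather than computed once for the whole batch of removals:

```python
def crowding(pop):
    """Standard crowding distance for a list of bi-objective vectors."""
    n, dist = len(pop), [0.0] * len(pop)
    for m in range(2):
        order = sorted(range(n), key=lambda i: pop[i][m])
        span = pop[order[-1]][m] - pop[order[0]][m] or 1.0
        dist[order[0]] = dist[order[-1]] = float("inf")   # keep the extremes
        for k in range(1, n - 1):
            dist[order[k]] += (pop[order[k + 1]][m] - pop[order[k - 1]][m]) / span
    return dist

def truncate_with_update(pop, target):
    """Remove the least crowded individual, then RECOMPUTE, until target size."""
    pop = list(pop)
    while len(pop) > target:
        d = crowding(pop)
        pop.pop(d.index(min(d)))
    return pop

front = [(0, 5), (1, 4), (2, 3), (3, 2), (4, 1), (5, 0)]
kept = truncate_with_update(front, 4)
```

With the one-shot scheme, the two smallest initial distances would both be removed from adjacent positions, leaving a large gap; recomputing after each removal spreads the removals out, which is the behavioral difference the abstract analyses.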
Fair algorithm evaluation is conditioned on the existence of high-quality benchmark datasets that are non-redundant and representative of typical optimization scenarios. In this paper, we evaluate three heuristics for selecting diverse problem instances that should be involved in the comparison of optimization algorithms in order to ensure a robust statistical analysis of algorithm performance. The first approach employs clustering to identify similar groups of problem instances with subsequent sampling from each cluster to construct new benchmarks, while the other two approaches use graph algorithms for identifying dominating and maximal independent sets of nodes. We demonstrate the applicability of the proposed heuristics by performing a statistical performance analysis of five portfolios consisting of three optimization algorithms on five of the most commonly used optimization benchmarks.
The results indicate that the statistical analyses of the algorithms' performance, conducted on each benchmark separately, produce conflicting outcomes, which can be used to give a false indication of the superiority of one algorithm over another. On the other hand, when the analysis is conducted on the problem instances selected with the proposed heuristics, which uniformly cover the problem landscape, the statistical outcomes are robust and consistent.
In this paper, we investigate the effect of a learning rate for the mean in Evolution Strategies with recombination. We study the effect of a half-line search after the mean shift direction is established, hence the learning rate value is conditioned on the direction. We prove convergence and study convergence rates in different dimensions and for different population sizes on the sphere function with the step-size proportional to the distance to the optimum.
We empirically find that a perfect half-line search increases the maximal convergence rate on the sphere function by up to about 70%, assuming the line search imposes no additional costs. The speedup becomes less pronounced with increasing dimension. The line search reduces---however does not eliminate---the dependency of the convergence rate on the step-size. The optimal step-size assumes considerably smaller values with line search, which is consistent with previous results for different learning rate settings. The step-size difference is more pronounced in larger dimensions and with larger population sizes, thereby diminishing an important advantage of a large population.
This study targets the mixed-integer black-box optimization (MI-BBO) problem, where continuous and integer variables should be optimized simultaneously. The CMA-ES, our focus in this study, is a population-based stochastic search method that samples solution candidates from a multivariate Gaussian distribution (MGD) and shows excellent performance in continuous BBO. The parameters of the MGD, mean and (co)variance, are updated based on the evaluation values of candidate solutions in the CMA-ES. If the CMA-ES is applied to the MI-BBO with straightforward discretization, however, the variance corresponding to the integer variables becomes much smaller than the granularity of the discretization before reaching the optimal solution, which leads to stagnation of the optimization. In particular, when binary variables are included in the problem, this stagnation is more likely to occur because the granularity of the discretization is wider, and the existing modification to the CMA-ES does not address this stagnation. To overcome these limitations, we propose a simple modification of the CMA-ES based on lower-bounding the marginal probabilities associated with the generation of integer variables in the MGD. The numerical experiments on the MI-BBO benchmark problems demonstrate the efficiency and robustness of the proposed method.
Selecting the most suitable algorithm and determining its hyperparameters for a given optimization problem is a challenging task. Accurately predicting how well a certain algorithm could solve the problem is hence desirable. Recent studies in single-objective numerical optimization show that supervised machine learning methods can predict algorithm performance using landscape features extracted from the problem instances.
Existing approaches typically treat the algorithms as black boxes, without consideration of their characteristics. To investigate whether a selection of landscape features that depends on algorithms' properties could further improve regression accuracy, we consider the modular CMA-ES framework and estimate how much each landscape feature contributes to the best algorithm performance regression models. Exploratory data analysis performed on this data indicates that the set of most relevant features does not depend on the configuration of individual modules, but the influence that these features have on regression accuracy does. In addition, we show that by using classifiers that take into account the features' relevance to model accuracy, we are able to predict the status of individual modules in the CMA-ES configurations.
Exploratory Landscape Analysis is a powerful technique for numerically characterizing landscapes of single-objective continuous optimization problems. Landscape insights are crucial both for problem understanding as well as for assessing benchmark set diversity and composition. Despite the irrefutable usefulness of these features, they suffer from their own ailments and downsides. Hence, in this work we provide a collection of different approaches to characterize optimization landscapes. Similar to conventional landscape features, we require a small initial sample. However, instead of computing features based on that sample, we develop alternative representations of the original sample. These range from point clouds to 2D images and, therefore, are entirely feature-free. We demonstrate and validate our devised methods on the BBOB testbed and predict, with the help of Deep Learning, high-level, expert-based landscape properties such as the degree of multimodality and the existence of funnel structures. The quality of our approaches is on par with methods relying on the traditional landscape features. Thereby, we provide an exciting new perspective on every research area that utilizes problem information, such as problem understanding and algorithm design, as well as automated algorithm configuration and selection.
Many optimization problems tackled by evolutionary algorithms are not only computationally expensive, but also complicated with one or more sources of noise. One technique to deal with high computational overhead is parallelization. However, though the existing literature gives good insights about the expected behavior of parallelized evolutionary algorithms, we still lack an understanding of their performance in the presence of noise.
This paper considers how parallelization might be leveraged together with multi-parent crossover in order to handle noisy problems. We present a rigorous running time analysis of an island model with a weakly connected topology tasked with hill climbing in the presence of general additive noise. Our proofs yield insights into the relationship between the noise intensity and the number of required parents. We translate this into positive and negative results for two kinds of multi-parent crossover operators. We then empirically analyze and extend this framework to investigate the trade-offs between noise impact, optimization time, and the limits of computational power to deal with noise.
In a Gray-Box Optimization (GBO) setting that allows for partial evaluations, the fitness of an individual can be updated efficiently after a subset of its variables has been modified. This enables more efficient evolutionary optimization with the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) due to its key strength: Gene-pool Optimal Mixing (GOM). For each solution, GOM performs variation for many (small) sets of variables. To improve efficiency even further, parallel computing can be leveraged. For EAs, this typically comprises population-wise parallelization. However, unless population sizes are large, this offers limited gains. For large GBO problems, parallelizing GOM-based variation holds greater speed-up potential, regardless of population size. However, this potential cannot be directly exploited because of dependencies between variables. We show how graph coloring can be used to group sets of variables that can undergo variation in parallel without violating dependencies. We test the performance of a CUDA implementation of parallel GOM on a Graphics Processing Unit (GPU) for the Max-Cut problem, a well-known problem for which the dependency structure can be controlled. We find that, for sufficiently large graphs with limited connectivity, high-quality solutions can be found up to 100 times faster, showcasing the great potential of our approach.
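The scheduling idea can be sketched as follows: variable sets that share a dependency cannot undergo variation concurrently, so a conflict graph over the sets is colored, and all sets of one color form a batch that is safe to process in parallel. Greedy coloring is a stand-in here; the paper's exact grouping and conflict definition may differ:

```python
def conflict_graph(var_sets):
    """Edge between two sets iff they share a variable (a simple dependency proxy)."""
    n = len(var_sets)
    return {i: {j for j in range(n)
                if j != i and set(var_sets[i]) & set(var_sets[j])}
            for i in range(n)}

def greedy_colouring(adj):
    """Colour vertices largest-degree-first; adjacent vertices get distinct colours."""
    colour = {}
    for v in sorted(adj, key=lambda v: -len(adj[v])):
        used = {colour[u] for u in adj[v] if u in colour}
        colour[v] = next(c for c in range(len(adj)) if c not in used)
    return colour

var_sets = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 5)]   # illustrative variable sets
colour = greedy_colouring(conflict_graph(var_sets))
groups = {}
for i, c in colour.items():
    groups.setdefault(c, []).append(i)    # each group is one parallel batch
```

Each resulting batch can then be dispatched to GPU threads simultaneously, since no two sets in a batch touch the same variable.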
In this work, we consider the problem of finding a set of tours to a traveling salesperson problem (TSP) instance maximizing diversity, while satisfying a given cost constraint. This study aims to investigate the effectiveness of applying niching to maximize diversity rather than simply maintaining it. To this end, we introduce a 2-stage approach where a simple niching memetic algorithm (NMA), derived from a state-of-the-art algorithm for multi-solution TSP, is combined with a baseline diversifying algorithm. The most notable feature of the proposed NMA is the use of randomized improvement-first local search instead of 2-opt. Our experiment on TSPLIB instances shows that while the populations evolved by our NMA tend to contain clusters under tight quality constraints, they frequently occupy distant basins of attraction rather than close-by regions, improving on the baseline diversification in terms of sum-sum diversity. Compared to the original NMA, ours, despite its simplicity, finds more distant solutions of higher quality in less running time, by a large margin.
Combining Iterated Local Search with Partition Crossover (PX) has the potential to be a powerful hybrid search strategy for MAX-kSAT problems. The disadvantage of standard Partition Crossover is that it touches every variable and every clause. This paper borrows strategies from WalkSAT to improve Partition Crossover such that it only touches a fraction of clauses by focusing on unsatisfied clauses. On average, it is possible to speed up Partition Crossover by one or two orders of magnitude. The PX-preprocessor also simplifies the interface between Partition Crossover and local search. Partition Crossover is compared with and without the PX-preprocessor on 478 SAT instances from the 2014 SAT competition; PX is particularly effective on application problems and larger problem instances.
Model-Based Evolutionary Algorithms (MBEAs) can be highly scalable by virtue of linkage (or variable interaction) learning. This requires, however, that the linkage model can capture the exploitable structure of a problem. Usually, one attempts to capture a single type of linkage structure using models such as a linkage tree. However, in practice, problems may exhibit multiple linkage structures. This is for instance the case in multi-objective optimization when the objectives have different linkage structures. This cannot be modelled sufficiently well when using linkage models that aim at capturing a single type of linkage structure, deteriorating the advantages brought by MBEAs. Therefore, here, we introduce linkage kernels, whereby a linkage structure is learned for each solution over its local neighborhood. We implement linkage kernels into the MBEA known as GOMEA, which was previously found to be highly scalable when solving various problems. We further introduce a novel benchmark function called Best-of-Traps (BoT) that has an adjustable degree of different linkage structures. On both BoT and a worst-case scenario-based variant of the well-known MaxCut problem, we experimentally find a vast performance improvement of linkage-kernel GOMEA over GOMEA with a single linkage tree as well as over the MBEA known as DSMGA-II.
Deep learning has been widely adopted in many real-world applications, especially in image classification. However, research has shown that minor distortions imperceptible to humans may mislead classifiers. One way to improve robustness is to use adversarial attacks to obtain adversarial examples and re-train the classifier with those images. However, the connections between attacks and application scenarios are rarely discussed. This paper proposes a novel black-box adversarial attack that is specifically designed for real-world application scenarios: the transfer-based black-box adversarial attack with genetic algorithms (TAGA). TAGA adopts a genetic algorithm to generate the adversarial examples and reduces the ensuing query costs with a surrogate model based on the transferability of adversarial attacks. Empirical results show that perturbing embeddings in the latent space helps the attack algorithm quickly obtain adversarial examples and that the surrogate fitness function reduces the number of function evaluations. Compared with several state-of-the-art attacks, TAGA improves the classifiers more under the application scenario in terms of the sum of natural and defense accuracy.
Evolutionary algorithms are sensitive to the mutation rate (MR); no single value of this parameter works well across domains. Self-adaptive MR approaches have been proposed but they tend to be brittle: sometimes they decay the MR to zero, thus halting evolution. To make self-adaptive MR robust, this paper introduces the Group Elite Selection of Mutation Rates (GESMR) algorithm. GESMR co-evolves a population of solutions and a population of MRs, such that each MR is assigned to a group of solutions. The resulting best mutational change in the group, instead of the average mutational change, is used for MR selection during evolution, thus avoiding the vanishing MR problem. With the same number of function evaluations and with almost no overhead, GESMR converges faster and to better solutions than previous approaches on a wide range of continuous optimization test problems. GESMR also scales well to high-dimensional neuroevolution for supervised image-classification tasks and for reinforcement learning control tasks. Remarkably, GESMR produces MRs that are optimal in the long term, as demonstrated through a comprehensive look-ahead grid search. Thus, GESMR and its theoretical and empirical analysis demonstrate how self-adaptation can be harnessed to improve performance in several applications of evolutionary computation.
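The core mechanism can be illustrated with a short, self-contained sketch (not the authors' implementation): a population of mutation rates is co-evolved with the solutions, each rate is assigned to one group, and rates are selected by the best (not average) fitness change in their group. The `sphere` objective, the group sizes, and the MR perturbation scheme below are illustrative assumptions.

```python
import numpy as np

def sphere(x):                                     # toy objective (minimisation)
    return float(np.sum(x ** 2))

def gesmr_sketch(generations=100, dim=10, pop_size=40, k=8, seed=0):
    """Illustrative sketch of group elite selection of mutation rates."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, dim))
    mrs = 10.0 ** rng.uniform(-3, 0, size=k)       # log-uniform initial MRs
    for _ in range(generations):
        fitness = np.array([sphere(x) for x in pop])
        order = np.argsort(fitness)
        elite = pop[order[0]].copy()
        groups = np.array_split(order, k)           # one MR per group of solutions
        best_delta = np.empty(k)
        new_pop = np.empty_like(pop)
        for j, idx in enumerate(groups):
            children = pop[idx] + mrs[j] * rng.normal(size=(len(idx), dim))
            child_fit = np.array([sphere(c) for c in children])
            # best (most negative) fitness change in the group, not the average
            best_delta[j] = (child_fit - fitness[idx]).min()
            new_pop[idx] = children
        new_pop[order[0]] = elite                   # keep the elite solution
        pop = new_pop
        # the MR whose group produced the best improvement survives; perturb it
        mrs = mrs[np.argmin(best_delta)] * 2.0 ** rng.uniform(-1, 1, size=k)
    return min(sphere(x) for x in pop)
```

Because selection uses the best change in each group, a rate only needs to produce one good offspring to survive, which counteracts the pressure that drives self-adaptive MRs toward zero.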
Selection in genetic algorithms is difficult for stochastic problems due to noise in the fitness space. Common methods to deal with this fitness noise include sampling multiple fitness values, which can be expensive. We propose LUCIE, the Lower Upper Confidence Intervals Elitism method, which selects individuals based on confidence. By focusing evaluation on separating promising individuals from others, we demonstrate that LUCIE can be effectively used as an elitism mechanism in genetic algorithms. We provide a theoretical analysis of the convergence of LUCIE and demonstrate its ability to select fit individuals across multiple types of noise on the OneMax and LeadingOnes problems. We also evaluate LUCIE as a selection method for neuroevolution on control policies with stochastic fitness values.
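The confidence-interval idea can be sketched as a best-arm-identification loop (a simplified stand-in for LUCIE, not the paper's algorithm): maintain empirical means and confidence bounds per individual, and spend extra evaluations only on separating the current leader from its closest challenger. The bound formula and stopping rule below are illustrative assumptions.

```python
import numpy as np

def lucie_sketch(noisy_fitness, n_individuals, budget, delta=0.1, seed=0):
    """Illustrative confidence-interval elite selection (maximisation).

    noisy_fitness(i, rng) returns one noisy sample of individual i's fitness.
    """
    rng = np.random.default_rng(seed)
    sums = np.zeros(n_individuals)
    counts = np.zeros(n_individuals, dtype=int)
    for i in range(n_individuals):                # one initial sample each
        sums[i] += noisy_fitness(i, rng)
        counts[i] += 1
    spent = n_individuals
    while spent < budget:
        means = sums / counts
        radius = np.sqrt(np.log(2 * n_individuals / delta) / (2 * counts))
        ucb, lcb = means + radius, means - radius
        leader = int(np.argmax(lcb))
        # challenger: highest upper bound among the remaining individuals
        challenger = int(np.argmax(
            np.where(np.arange(n_individuals) == leader, -np.inf, ucb)))
        if lcb[leader] >= ucb[challenger]:
            break                                  # leader confidently separated
        for i in (leader, challenger):             # refine the two confusable arms
            sums[i] += noisy_fitness(i, rng)
            counts[i] += 1
            spent += 1
    return int(np.argmax(sums / counts))
```

For example, with four individuals whose true fitnesses are 0.0, 0.5, 1.0, and 3.0 under Gaussian noise, the sketch concentrates evaluations on the hard-to-distinguish pairs and returns the best index without resampling every individual equally.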
This paper characterizes the inherent power of evolutionary algorithms. This power depends on the computational properties of the genetic encoding. With some encodings, two parents recombined with a simple crossover operator can sample from an arbitrary distribution of child phenotypes. Such encodings are termed expressive encodings in this paper. Universal function approximators, including popular evolutionary substrates of genetic programming and neural networks, can be used to construct expressive encodings. Remarkably, this approach need not be applied only to domains where the phenotype is a function: Expressivity can be achieved even when optimizing static structures, such as binary vectors. Such simpler settings make it possible to characterize expressive encodings theoretically: Across a variety of test problems, expressive encodings are shown to achieve up to super-exponential convergence speed-ups over the standard direct encoding. The conclusion is that, across evolutionary computation areas as diverse as genetic programming, neuroevolution, genetic algorithms, and theory, expressive encodings can be a key to understanding and realizing the full power of evolution.
There has been growing interest in the evolutionary computation community in computing a diverse set of high-quality solutions for a given optimisation problem. This can provide practitioners with invaluable information about the solution space and robustness against imperfect modelling and minor changes to the problem. It also enables decision-makers to incorporate their preferences and choose between various solutions. In this study, we investigate for the first time a prominent multi-component optimisation problem, namely the Traveling Thief Problem (TTP), in the context of evolutionary diversity optimisation. We introduce a bi-level evolutionary algorithm to maximise the structural diversity of the set of solutions. Moreover, we examine the inter-dependency among the components of the problem in terms of structural diversity and empirically determine the best method to obtain diversity. We also conduct a comprehensive experimental investigation to examine the introduced algorithm and compare the results to another recently introduced framework based on the use of Quality Diversity (QD). Our experimental results show a significant improvement of the QD approach in terms of structural diversity for most TTP benchmark instances.
Local optima networks (LONs) model the global distribution and connectivity pattern of local optima under given search operators. Recent research has looked at how recombination operators can jump from a pair of parents that are locally optimal to a new child that is either a local optimum, or is guaranteed to be in a new basin of attraction. Recombination can therefore also induce a local optima network which maps how crossover moves between local optima. In this paper, we prove that recombination induces a LON which is actually a network of overlapping hypercube lattices. Given two or more samples from any lattice, we can also infer the existence of additional local optima that have not previously been reached by sampling. We prove that these lattices can be exponentially large. Finally, we prove that there exist TSP instances that can be solved in polynomial time by exploiting Partition Crossover, while these same instances are not solved by local search.
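The hypercube-lattice structure is easy to see for plain uniform crossover of bit strings (an illustration of the reachable-offspring set, not of Partition Crossover itself): the children reachable from two parents form a sub-hypercube spanned by the positions where the parents differ.

```python
import itertools

def crossover_hypercube(p1, p2):
    """Enumerate all offspring reachable by uniform recombination of two
    bit-string parents; they form a hypercube spanned by the differing bits."""
    diff = [i for i in range(len(p1)) if p1[i] != p2[i]]
    children = set()
    for mask in itertools.product([0, 1], repeat=len(diff)):
        child = list(p1)
        for take_p2, i in zip(mask, diff):
            if take_p2:
                child[i] = p2[i]        # inherit this differing bit from p2
        children.add(tuple(child))
    return children
```

Two parents differing in k positions span a lattice with 2**k vertices, which illustrates how such lattices can become exponentially large in the number of differing variables.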
It has long been observed that the performance of evolutionary algorithms and other randomized search heuristics can benefit from a non-static choice of the parameters that steer their optimization behavior. Mechanisms that identify suitable configurations on the fly ("parameter control") or via a dedicated training process ("dynamic algorithm configuration") are thus an important component of modern evolutionary computation frameworks. Several approaches to address the dynamic parameter setting problem exist, but we barely understand which ones to prefer for which applications. As in classical benchmarking, problem collections with a known ground truth can offer very meaningful insights in this context. Unfortunately, settings with well-understood control policies are very rare.
One of the few exceptions for which we know which parameter settings minimize the expected runtime is the LeadingOnes problem. We extend this benchmark by analyzing optimal control policies that can select the parameters only from a given portfolio of possible values. This also allows us to compute optimal parameter portfolios of a given size. We demonstrate the usefulness of our benchmarks by analyzing the behavior of the DDQN reinforcement learning approach for dynamic algorithm configuration.
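The ground truth this benchmark builds on can be made concrete: for the (1+1) EA with standard bit mutation on LeadingOnes, the runtime-optimal fitness-dependent mutation rate is known to be 1/(LO(x)+1). The sketch below (an illustration, not the paper's portfolio setup) runs the (1+1) EA under an arbitrary control policy; a portfolio-restricted policy would simply return the closest value from a fixed set of allowed rates.

```python
import random

def leading_ones(x):
    lo = 0
    for bit in x:
        if bit != 1:
            break
        lo += 1
    return lo

def one_plus_one_ea(n, mutation_rate_policy, seed=0, max_evals=200000):
    """(1+1) EA on LeadingOnes with a fitness-dependent mutation rate policy."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = leading_ones(x)
    evals = 1
    while fx < n and evals < max_evals:
        p = mutation_rate_policy(fx, n)
        y = [b ^ (rng.random() < p) for b in x]   # flip each bit with prob. p
        fy = leading_ones(y)
        evals += 1
        if fy >= fx:                               # elitist acceptance
            x, fx = y, fy
    return evals

# known optimal fitness-dependent rate vs. the standard static rate 1/n
optimal = lambda fx, n: 1.0 / (fx + 1)
static = lambda fx, n: 1.0 / n
```

Comparing `optimal` against `static` over many seeds reproduces the kind of ground-truth comparison that makes LeadingOnes useful for benchmarking parameter control.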
Optimisation problems often have multiple conflicting objectives that can be computationally and/or financially expensive. Mono-surrogate Bayesian optimisation (BO) is a popular model-based approach for optimising such black-box functions. It combines objective values via scalarisation and builds a Gaussian process (GP) surrogate of the scalarised values. The location which maximises a cheap-to-query acquisition function is chosen as the next location to expensively evaluate. While BO is an effective strategy, the use of GPs is limiting. Their performance decreases as the problem input dimensionality increases, and their computational complexity scales cubically with the amount of data. To address these limitations, we extend previous work on BO by density-ratio estimation (BORE) to the multi-objective setting. BORE links the computation of the probability of improvement acquisition function to that of probabilistic classification. This enables the use of state-of-the-art classifiers in a BO-like framework. In this work we present MBORE: multi-objective Bayesian optimisation by density-ratio estimation, and compare it to BO across a range of synthetic and real-world benchmarks. We find that MBORE performs as well as or better than BO on a wide variety of problems, and that it outperforms BO on high-dimensional and real-world problems.
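The density-ratio idea behind BORE can be sketched in a few lines (an illustration, not MBORE itself): label the best gamma-fraction of observed points as positive, train any probabilistic classifier, and score candidates by predicted class probability, which is monotone in the probability-of-improvement acquisition. Here a tiny hand-rolled logistic regression on quadratic features stands in for the state-of-the-art classifiers the paper refers to; the feature map and optimiser settings are illustrative assumptions.

```python
import numpy as np

def bore_acquisition(X, y, candidates, gamma=0.25, steps=2000, lr=0.5):
    """Score candidates by P(candidate is among the best gamma-fraction)."""
    tau = np.quantile(y, gamma)
    labels = (y <= tau).astype(float)            # minimisation: low f is "good"
    A = np.hstack([np.ones((len(X), 1)), X, X ** 2])
    w = np.zeros(A.shape[1])
    for _ in range(steps):                       # gradient ascent on log-likelihood
        p = 1.0 / (1.0 + np.exp(-A @ w))
        w += lr * A.T @ (labels - p) / len(A)
    C = np.hstack([np.ones((len(candidates), 1)), candidates, candidates ** 2])
    return 1.0 / (1.0 + np.exp(-C @ w))          # higher score = more promising
```

Because only a classifier is fitted, the cost scales like classifier training rather than cubically with the number of observations, which is the motivation for replacing the GP surrogate.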
We present a new algorithm for efficiently solving expensive multi- and many-objective, black-box optimisation problems by interactively incorporating the preferences of an external decision maker. We define a novel acquisition function which combines the maximin distance to the summary attainment front with a set of aspirational objective-space target points chosen by the decision maker. This drives an exploitative multi-surrogate model to quickly converge to solutions favourable to the decision maker, without relying on surrogate posterior uncertainty estimates or arbitrary objective weighting.
The performance of the algorithm is quantified by measuring the hypervolume of interest to the decision maker which is dominated by the set of found solutions. This measure is used to compare the performance of our algorithm to that of a similar algorithm which does not involve the decision maker. We demonstrate, over a range of Walking Fish Group test problems, that incorporating preference in this way improves convergence in the vast majority of cases. Empirical evaluation further shows that preference inclusion is of increasing importance as the number of objectives increases, and that the greatest benefit is obtained when the targets dominate or are dominated by a small region of the Pareto front.
Recent works showed that simple success-based rules for self-adjusting parameters in evolutionary algorithms (EAs) can match or outperform the best fixed parameters on discrete problems. Non-elitism in a (1, λ) EA combined with a self-adjusting offspring population size λ outperforms common EAs on the multimodal Cliff problem. However, it was shown that this only holds if the success rate s that governs self-adjustment is small enough. Otherwise, even on OneMax, the self-adjusting (1, λ) EA stagnates on an easy slope, where frequent successes drive down the offspring population size.
We show that self-adjustment works as intended in the absence of easy slopes. We define everywhere hard functions, for which successes are never easy to find, and show that the self-adjusting (1, λ) EA is robust with respect to the choice of the success rate s. We give a general fitness-level upper bound on the number of evaluations and show that the expected number of generations is at most [EQUATION], where d is the number of non-optimal fitness values and [EQUATION] is the smallest probability of finding an improvement from a non-optimal search point. We discuss implications for the everywhere hard function LeadingOnes and a new class OneMaxBlocks of everywhere hard functions with tunable difficulty.
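The success-based self-adjustment of λ can be sketched as follows (an illustrative implementation of the generic scheme, run here on OneMax; the update factor F and the success-rate parameter s are free parameters): on an improving generation λ is divided by F, otherwise it is multiplied by F**(1/s), so small s makes λ grow aggressively after failures.

```python
import random

def self_adjusting_one_comma_lambda(n, s=0.5, F=1.5, seed=0, max_evals=500000):
    """Sketch of a (1,λ) EA with success-based self-adjustment of λ on OneMax."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = sum(x)
    lam, evals = 1.0, 0
    while fx < n and evals < max_evals:
        offspring = []
        for _ in range(max(1, round(lam))):
            y = [b ^ (rng.random() < 1.0 / n) for b in x]  # standard bit mutation
            offspring.append((sum(y), y))
            evals += 1
        fy, y = max(offspring, key=lambda t: t[0])
        success = fy > fx
        x, fx = y, fy                        # comma selection: always take best child
        lam = max(1.0, lam / F) if success else lam * F ** (1 / s)
    return evals, fx
```

On an easy slope, frequent successes keep dividing λ, which is exactly the stagnation mode described above; on everywhere hard functions successes are rare enough that λ stays large.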
Optimization methods, including evolutionary algorithms, need increasing efficiency in order to solve ever more complex problems in the shortest possible time. Apart from tuning the structure of those algorithms and adapting them to the given problem, the best way to speed up computations is to introduce more parallelism, either at the hardware level (by using accelerators) or at the architecture level (by using multiple compute nodes). In this paper, we propose a method for expressing evolutionary computations based on the tensor computational model. The presented approach enables cross-platform hardware acceleration on CPUs, GPUs and TPUs. To validate the new method, we contribute an open, extensible evolutionary framework with support for distributed, accelerated execution in heterogeneous environments. Finally, we present the results of tests that confirm the efficiency of the proposed approach, also in comparison to other existing frameworks.
We consider a type of constrained optimization problem, where the violation of a constraint leads to an irrevocable loss, such as breakage of a valuable experimental resource/platform or loss of human life. Such problems are referred to as safe optimization problems (SafeOPs). While SafeOPs have received attention in the machine learning community in recent years, there has been little interest in the evolutionary computation (EC) community despite some early attempts between 2009 and 2011. Moreover, there is a lack of acceptable guidelines on how to benchmark different algorithms for SafeOPs, an area in which the EC community has significant experience. Driven by the need for more efficient algorithms and benchmark guidelines for SafeOPs, the objective of this paper is to reignite the interest of the EC community in this problem class. To achieve this we (i) provide a formal definition of SafeOPs and contrast it to other types of optimization problems that the EC community is familiar with, (ii) investigate the impact of key SafeOP parameters on the performance of selected safe optimization algorithms, (iii) benchmark EC against state-of-the-art safe optimization algorithms from the machine learning community, and (iv) provide an open-source Python framework to replicate and extend our work.
In this study, we investigate the problem of black-box min-max continuous optimization, min_x max_y f(x, y). A popular approach updates x and y simultaneously or alternatingly. However, two major limitations have been reported for existing approaches. (I) As the influence of the interaction term between x and y (e.g., x^T B y) on the Lipschitz smooth and strongly convex-concave function f increases, the approaches converge to an optimal solution at a slower rate. (II) The approaches fail to converge if f is not Lipschitz smooth and strongly convex-concave around the optimal solution. To address these difficulties, we propose minimizing the worst-case objective function F(x) = max_y f(x, y) directly using the covariance matrix adaptation evolution strategy, in which the rankings of solution candidates are approximated by our proposed worst-case ranking approximation (WRA) mechanism. Compared with existing approaches, numerical experiments show two important findings regarding our proposed method. (1) The proposed approach is efficient in terms of f-calls on a Lipschitz smooth and strongly convex-concave function with a large interaction term. (2) The proposed approach can converge on functions that are not Lipschitz smooth and strongly convex-concave around the optimal solution, whereas existing approaches fail.
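The worst-case reformulation can be sketched with a minimal example (illustrative only: a dense grid stands in for the WRA mechanism, and a simple (1+1)-ES stands in for CMA-ES). For the saddle f(x, y) = x^2 - y^2 + B*x*y, the worst case is F(x) = (1 + B^2/4) * x^2, minimised at x = 0.

```python
import numpy as np

def worst_case_minimise(f, x0, y_bounds, iters=200, inner_grid=101,
                        sigma=0.5, seed=0):
    """Minimise F(x) = max_y f(x, y) with a simple (1+1)-ES; the inner
    maximisation is approximated by a dense grid over y."""
    rng = np.random.default_rng(seed)
    ys = np.linspace(*y_bounds, inner_grid)
    F = lambda x: max(f(x, y) for y in ys)       # worst-case objective
    x, fx = x0, F(x0)
    for _ in range(iters):
        cand = x + sigma * rng.normal()
        fc = F(cand)
        if fc <= fx:                              # accept and grow step size
            x, fx, sigma = cand, fc, sigma * 1.1
        else:
            sigma *= 0.95                         # shrink step size on failure
    return x, fx

B = 4.0
f = lambda x, y: x ** 2 - y ** 2 + B * x * y      # large interaction term
```

By ranking candidates on F directly, the outer search never has to track y at all, which is the intuition behind replacing simultaneous updates with a worst-case ranking.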
Computing diverse sets of high-quality solutions for a given optimization problem has become an important topic in recent years. In this paper, we introduce a co-evolutionary Pareto Diversity Optimization approach which builds on the success of reformulating a constrained single-objective optimization problem as a bi-objective problem by turning the constraint into an additional objective. Our new Pareto Diversity Optimization approach uses this bi-objective formulation to optimize the problem while also maintaining an additional population of high-quality solutions for which diversity is optimized with respect to a given diversity measure. We show that our standard co-evolutionary Pareto Diversity Optimization approach outperforms the recently introduced DIVEA algorithm, which obtains its initial population by generalized diversifying greedy sampling and improves the diversity of the set of solutions afterwards. Furthermore, we study possible improvements of the Pareto Diversity Optimization approach. In particular, we show that the use of inter-population crossover further improves the diversity of the set of solutions.
We consider a new class of expensive, resource-constrained optimization problems (here arising from molecular discovery) where costs are associated with the experiments (or evaluations) to be carried out during the optimization process. In the molecular discovery problem, candidate compounds to be optimized must be synthesized in an iterative process that starts from a set of purchasable items and builds up to larger molecules. To produce a target molecule, the required resources are either drawn from already-synthesized items in storage or produced on demand at an additional cost. Any remaining resources from the production process are stored for reuse in subsequent evaluations. We model these resource dependencies with a directed acyclic production graph describing the development process from granular purchasable items to evaluable target compounds. Moreover, we develop several resource-efficient algorithms to address this problem. In particular, we develop resource-aware variants of Random Search heuristics and of Bayesian Optimization and analyze their performance in terms of anytime behavior. The experimental results were obtained from a real-world molecular optimization problem. Our results suggest that algorithms that encourage exploitation by reusing existing resources achieve satisfactory results while using fewer resources overall.
Genetic improvement (GI) systems start from an existing piece of program code and search for alternative versions with better performance according to a metric of interest. The search space of source code is a large, rough fitness landscape which can be extremely difficult to navigate. Most approaches to enhancing search capability in this domain involve either novelty search, where low-fitness areas are remembered and avoided, or formal analysis which attempts to find high-utility parameterizations for the GI process. In this paper we propose the use of phylogenetic analysis over genetic history to understand how different mutations and crossovers affect the fitness of a population over time for a particular problem; we use the results of that analysis to tune a GI process during its operation to enhance its ability to locate better program candidates. Using phylogenetic analysis on 600 runs of a genetic improver targeting a hash function, we demonstrate how the results of this analysis yield tuned mutation types over the course of a GI process (dynamically and continually set according to individuals' ancestors' ranks within the population) to give hash functions with over 20% improved fitness compared to a baseline GI process.
Understanding the landscape underlying NK models is of fundamental interest. Different representations have been proposed to better understand how the ruggedness of the landscape is influenced by the model parameters, such as the problem dimension, the degree of non-linearity and the structure of variable interactions. In this paper, we propose to use neural embedding, that is, a continuous vectorial representation obtained as a result of applying a neural network to a prediction task, in order to investigate the characteristics of NK landscapes. The main assumption is that neural embeddings are able to capture important features that reflect the difficulty of the landscape. We propose a method for constructing NK embeddings, together with metrics for evaluating to what extent this embedding space encodes valuable information from the original NK landscape. Furthermore, we study how the embedding dimensionality and the parameters of the NK model influence the characteristics of the NK embedding space. Finally, we evaluate the performance of optimizers that solve the continuous representations of NK models by searching for solutions in the embedding space.
The stochastic nature of iterative optimization heuristics leads to inherently noisy performance measurements. Since these measurements are often gathered once and then used repeatedly, the number of collected samples will have a significant impact on the reliability of algorithm comparisons. We show that care should be taken when making decisions based on limited data. Particularly, we show that the number of runs used in many benchmarking studies, e.g., the default value of 15 suggested by the COCO environment, can be insufficient to reliably rank algorithms on well-known numerical optimization benchmarks.
Additionally, methods for automated algorithm configuration are sensitive to insufficient sample sizes. This may result in the configurator choosing a "lucky" but poor-performing configuration despite exploring better ones. We show that relying on mean performance values, as many configurators do, can require a large number of runs to provide accurate comparisons between the considered configurations. Common statistical tests can greatly improve the situation in most cases but not always. We show examples of performance losses of more than 20%, even when using statistical races to dynamically adjust the number of runs, as done by irace. Our results underline the importance of appropriately considering the statistical distribution of performance values.
Various flavours of parameter setting, such as (static) parameter tuning and (dynamic) parameter control, receive a lot of attention both in applications and in theoretical investigations. It is widely acknowledged that the choice of parameter values considerably influences the efficiency of evolutionary algorithms (and other search heuristics), and a considerable amount of work has been dedicated to finding (near-)optimal choices of parameters, or of parameter control strategies.
It is perhaps surprising that all the recent theoretical attempts aim at reducing the time needed to reach the optimum, whereas in most practical settings we may only hope to reach certain realistic fitness targets. One of the ways to close this gap is to study static and dynamic parameter choices in fixed-target settings, and to understand how these choices differ from those tuned towards reaching the optimum. In this paper we investigate some of these settings, using a mixture of exact theory-driven computations and experimental evaluation, and find a few remarkably generic trends, some of which may explain a number of misconceptions found in evolutionary computation.
Evolutionary algorithms have proven to be highly effective in continuous optimization, especially when numerous fitness function evaluations (FFEs) are possible. In certain cases, however, an expensive optimization approach (i.e., one with a relatively low number of FFEs) must be taken, and such a setting is considered in this work. The paper introduces an extension to the well-known LSHADE algorithm in the form of a pre-screening mechanism (psLSHADE). The proposed pre-screening relies on three components: a specific initial sampling procedure, an archive of samples, and a global linear meta-model of the fitness function that consists of 6 independent transformations of variables. The pre-screening mechanism preliminarily assesses the trial vectors and designates the best of them for further evaluation with the fitness function. The performance of psLSHADE is evaluated using the CEC2021 benchmark in an expensive scenario with an optimization budget of 10^2-10^4 FFEs per dimension. We compare psLSHADE with the baseline LSHADE method and the MadDE algorithm. The results indicate that with restricted optimization budgets psLSHADE visibly outperforms both competing algorithms. In addition, the use of the pre-screening mechanism results in faster population convergence of psLSHADE compared to LSHADE.
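The pre-screening step can be sketched in a few lines (an illustration, not psLSHADE itself): fit a global linear meta-model on an archive of evaluated samples and use it to rank trial vectors, passing only the predicted-best one to the true fitness function. The feature map below is a small stand-in for the paper's six variable transformations.

```python
import numpy as np

def make_features(X):
    """Illustrative variable transformations for a global linear meta-model."""
    return np.hstack([np.ones((len(X), 1)), X, X ** 2, np.abs(X)])

def prescreen(archive_X, archive_f, trials):
    """Fit the meta-model on the archive (least squares) and return the index
    of the trial vector with the best (lowest) predicted fitness."""
    A = make_features(archive_X)
    w, *_ = np.linalg.lstsq(A, archive_f, rcond=None)
    preds = make_features(trials) @ w
    return int(np.argmin(preds))
```

Because only the single designated trial vector consumes a true FFE per ranking, the mechanism trades cheap model predictions for scarce evaluations, which is what makes it attractive in the expensive setting.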
Dynamic scheduling problems are important optimisation problems with many real-world applications. Since in dynamic scheduling not all information is available at the start, such problems are usually solved by dispatching rules (DRs), which create the schedule as the system executes. Recently, DRs have been successfully developed using genetic programming. However, a single DR may not efficiently solve different problem instances. Therefore, much research has focused on using DRs collaboratively by forming ensembles. In this paper, a novel ensemble collaboration method for dynamic scheduling is proposed. In this method, DRs are applied independently at each decision point to create a simulation of the schedule for all currently released jobs. Based on these simulations, it is determined which DR makes the best decision and that decision is applied. The results show that the ensembles easily outperform individual DRs for different ensemble sizes. Moreover, the results suggest that it is relatively easy to create good ensembles from a set of independently evolved DRs.
With the growing popularity of machine learning (ML), regression problems in many domains are becoming increasingly high-dimensional. Identifying relevant features from a high-dimensional dataset still remains a significant challenge for building highly accurate machine learning models.
Evolutionary feature selection has been used for high-dimensional symbolic regression using Genetic Programming (GP). While grammar based GP, especially Grammatical Evolution (GE), has been extensively used for symbolic regression, no systematic grammar-based feature selection approach exists. This work presents a grammar-based feature selection method, Production Ranking based Feature Selection (PRFS), and reports on the results of its application in symbolic regression.
The main contribution of our work is to demonstrate that the proposed method can not only consistently select the most relevant features, but also significantly improve the generalization performance of GE when compared with several state-of-the-art ML-based feature selection methods. Experimental results on benchmark symbolic regression problems show that the generalization performance of GE using PRFS was significantly better than that of a state-of-the-art Random Forest based feature selection in three out of four problems, while in the fourth problem the performance was the same.
The growing production of digital content and its dissemination across the worldwide web require efficient and precise management. In this context, image quality assessment measures (IQAMs) play a pivotal role in guiding the development of numerous image processing systems for compression, enhancement, and restoration. The structural similarity index (SSIM) is one of the most common IQAMs for estimating the similarity between a pristine reference image and its corrupted variant. The multi-scale SSIM is one of its most popular variants that allows assessing image quality at multiple spatial scales. This paper proposes a two-stage genetic programming (GP) approach to evolve novel multi-scale IQAMs that are simultaneously more effective and efficient. We use GP to perform feature selection in the first stage, while the second stage generates the final solutions. The experimental results show that the proposed approach outperforms the existing MS-SSIM. A comprehensive analysis of the feature selection indicates that, for extracting multi-scale similarities, spatially-varying convolutions are more effective than dilated convolutions. Moreover, we provide evidence that the IQAMs learned for one database can be successfully transferred to previously unseen databases. We conclude the paper by presenting a set of evolved multi-scale IQAMs and providing their interpretation.
Symbolic Regression searches for a function form that approximates a dataset, often using Genetic Programming. Since there is usually no restriction on what form the function can have, Genetic Programming may return a hard-to-understand model due to non-linear function chaining or long expressions. A novel representation called Interaction-Transformation was recently proposed to alleviate this problem. In this representation, the function form is restricted to an affine combination of terms generated as the application of a single univariate function to the interaction of selected variables. This representation obtained competitive solutions on standard benchmarks. Despite the initial success, a broader set of benchmarking functions revealed the limitations of the constrained representation. In this paper we propose an extension to this representation, called the Transformation-Interaction-Rational representation, which defines a new function form as the ratio of two Interaction-Transformation functions. Additionally, the target variable can also be transformed with a univariate function. The main goal is to improve the approximation power while still constraining the overall complexity of the expression. We tested this representation with standard Genetic Programming using crossover and mutation. The results show a great improvement when compared to its predecessor and state-of-the-art performance on a large benchmark.
Bloat, a well-known phenomenon in Evolutionary Computation, often slows down evolution and complicates the task of interpreting the results. We propose Lexi2, a new selection and bloat-control method, which extends the popular lexicase selection method by including a tie-breaking step which considers attributes related to the size of the individuals. This new step applies lexicographic parsimony pressure during the selection process and is able to reduce the number of random choices performed by lexicase selection (which happens when more than a single individual correctly solves the selected training cases).
Furthermore, we propose a new Grammatical Evolution-specific, low-cost diversity metric based on the remainders of the modulus operations used in the grammar mapping, which we then utilise with Lexi2.
We address four distinct problems, and the results show that Lexi2 significantly reduces the length, the number of nodes and the depth for all problems, maintains a high level of diversity in three of them, and significantly improves the fitness score in two of them. In no case does it adversely impact the fitness.
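The selection step can be sketched as follows (an illustrative simplification: the size-related tie-break attributes are collapsed into a single `size` field): candidates are filtered case by case in random order as in standard lexicase selection, and any remaining tie is broken by parsimony instead of a random draw.

```python
import random
from dataclasses import dataclass

@dataclass
class Individual:
    errors: list   # per-training-case error vector
    size: int      # e.g. number of nodes in the individual

def lexi2_select(population, cases, seed=0):
    """Lexicase selection with a size-based tie-break."""
    rng = random.Random(seed)
    candidates = list(population)
    for case in rng.sample(cases, len(cases)):      # random case order
        best = min(ind.errors[case] for ind in candidates)
        candidates = [ind for ind in candidates if ind.errors[case] == best]
        if len(candidates) == 1:
            return candidates[0]
    # tie-break on size instead of a purely random choice
    return min(candidates, key=lambda ind: ind.size)
```

When several individuals solve all selected cases equally well, the smallest one wins, which is where the lexicographic parsimony pressure enters.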
Shape-constrained Symbolic Regression integrates prior knowledge about the function shape into the symbolic regression model. This can be used to enforce that the model has desired properties such as monotonicity or convexity, among others. Shape-constrained Symbolic Regression can also help to create models with better extrapolation behavior and reduced sensitivity to noise. The constraint evaluation can be challenging because exact evaluation of constraints may require a search for the extrema of non-convex functions. Approximations via interval arithmetic make it possible to efficiently find bounds for the extrema of functions. However, interval arithmetic can lead to overly wide bounds and therefore produces a pessimistic estimation. Another possibility is to use sampling, which underestimates the true range. Sampling therefore produces an optimistic estimation. In this paper we evaluate both methods and compare them on different problem instances. In particular we evaluate the sensitivity to noise and the extrapolation capabilities in combination with noisy data. The results indicate that the optimistic approach works better for predicting out-of-domain points (extrapolation) and the pessimistic approach works better for high noise levels.
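The pessimistic/optimistic distinction is easy to demonstrate on a small example. For f(x) = x*x + x on [-1, 1] the true range is [-0.25, 2]: interval arithmetic returns the wider [-2, 2] (pessimistic, because it treats the two occurrences of x independently), while sampling returns a slightly narrower range than the truth (optimistic). The minimal interval class below is illustrative, supporting only the operations needed for this example.

```python
import numpy as np

class Interval:
    """Minimal interval arithmetic: the true range of an expression is always
    contained in the computed interval (a pessimistic bound)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))

def sample_range(f, lo, hi, n=51):
    """Optimistic bound: the sampled range underestimates the true range."""
    xs = np.linspace(lo, hi, n)
    vals = f(xs)
    return float(vals.min()), float(vals.max())

f = lambda x: x * x + x
x = Interval(-1.0, 1.0)
iv = x * x + x                     # pessimistic: [-2, 2], wider than the truth
sl, sh = sample_range(f, -1, 1)    # optimistic: slightly inside [-0.25, 2]
```

The nesting interval ⊇ true range ⊇ sampled range is exactly the trade-off the paper evaluates for constraint checking.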
Genetic programming (GP) is a commonly used approach to solve symbolic regression (SR) problems. Compared with machine learning or deep learning methods that depend on a pre-defined model and a training dataset for solving SR problems, GP is more focused on finding the solution in a search space. Although GP has good performance on large-scale benchmarks, it randomly transforms individuals during the search without taking advantage of the characteristics of the dataset. As a result, the search process of GP is usually slow, and the final results can be unstable. To guide GP by these characteristics, we propose a new method for SR, called Taylor genetic programming (TaylorGP). TaylorGP leverages a Taylor polynomial to approximate the symbolic equation that fits the dataset. It also utilizes the Taylor polynomial to extract features of the symbolic equation: low-order polynomial discrimination, variable separability, boundary, monotonicity, and parity. GP is enhanced by these Taylor polynomial techniques. Experiments are conducted on three kinds of benchmarks: classical SR, machine learning, and physics. The experimental results show that TaylorGP not only has higher accuracy than the nine baseline methods, but also finds stable results faster.
Linear genetic programming (LGP) has been successfully applied to various problems such as classification, symbolic regression, and hyper-heuristics for automatic heuristic design. In contrast with traditional tree-based genetic programming (TGP), LGP uses a sequence of instructions to represent an individual (program), and the data is carried by registers. A common issue of LGP is its susceptibility to introns (i.e., instructions with no effect on the program output), which limits the effectiveness of traditional genetic operators. To address this issue, we propose a new graph-based LGP system. Specifically, graph-based LGP uses graph-based crossover and graph-based mutation to produce offspring. The graph-based crossover operator first converts each LGP parent to a directed acyclic graph (DAG) and then swaps sub-graphs between the DAGs. The graph-based mutation selectively modifies the connections in DAGs based on the height of sub-graphs. To verify the effectiveness of the new graph-based genetic operators, we take dynamic job shop scheduling as a case study, which has been shown to be a challenging problem for LGP. The experimental results show that LGP with the new graph-based genetic operators obtains better scheduling heuristics than LGP with the traditional operators and TGP.
Information theory explains the robustness of deep GP trees: on average, up to 83.3% of crossover disruptions at run time fail to propagate to the root node, and so have no impact on fitness, leading to phenotypic convergence. Monte Carlo simulations of perturbations covering the whole tree support a model based on random synchronisation of the evaluations of parent and child, which causes parent and offspring evaluations to be identical. This model predicts that the effectiveness of fitness measurement grows only slowly, as O(log(n)), with the number n of test cases. The geometric distribution model is tested on genetic programming symbolic regression.
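A toy Monte Carlo of the geometric failed-propagation model described above (the absorption probability, depth, and trial count are illustrative values chosen here, not the paper's):

```python
import random

def disruption_reaches_root(depth, absorb_p, rng):
    """One trial: a crossover disruption injected `depth` levels below
    the root is absorbed (i.e., has no effect on the root's evaluation)
    independently at each ancestor with probability absorb_p."""
    return all(rng.random() >= absorb_p for _ in range(depth))

# Survival probability decays geometrically with depth, matching
# (1 - absorb_p) ** depth; here 0.7 ** 5 ~= 0.168.
rng = random.Random(0)
trials = 100_000
hits = sum(disruption_reaches_root(5, 0.3, rng) for _ in range(trials))
```

The empirical survival rate matches the closed-form geometric prediction, which is the mechanism the abstract uses to explain why most disruptions deep in the tree never affect fitness.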
Genetic programming (GP) is one of the best approaches today to discover symbolic regression models. To find models that trade off accuracy and complexity, the non-dominated sorting genetic algorithm II (NSGA-II) is widely used. Unfortunately, it has been shown that NSGA-II can be inefficient: in early generations, low-complexity models over-replicate and take over most of the population. Consequently, studies have proposed different approaches to promote diversity. Here, we study the root of this problem in order to design a superior approach. We find that the over-replication of low-complexity models is due to a lack of evolvability, i.e., the inability to produce offspring with improved accuracy. We therefore extend NSGA-II to track, over time, the evolvability of models at different levels of complexity. With this information, we limit how many models of each complexity level are allowed to survive the generation. We compare this new version of NSGA-II, evoNSGA-II, with seven existing multi-objective GP approaches on ten widely used data sets, and find that evoNSGA-II is equal or superior to these approaches in almost all comparisons. Furthermore, our results confirm that evoNSGA-II behaves as intended: models that are more evolvable form the majority of the population. Code: https://github.com/dzhliu/evoNSGA-II
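The survival-limiting idea can be sketched in a few lines (the quota rule, proportional allocation, and all names below are simplifying assumptions for illustration, not evoNSGA-II's exact mechanism):

```python
def capped_survival(population, evolvability, capacity):
    """Limit survivors per complexity level in proportion to that
    level's evolvability (e.g., the fraction of its offspring that
    improved on their parents). `population` maps complexity level ->
    list of accuracies; `evolvability` maps level -> rate in [0, 1]."""
    total = sum(evolvability.values()) or 1.0
    survivors = {}
    for level, models in population.items():
        quota = max(1, round(capacity * evolvability.get(level, 0.0) / total))
        # keep the most accurate models up to this level's quota
        survivors[level] = sorted(models, reverse=True)[:quota]
    return survivors
```

With evolvability skewed towards higher complexity, most survival slots go to the more complex (more evolvable) models, counteracting the early takeover by low-complexity models described above.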
Many recent studies focus on developing mechanisms to explain the black-box behaviors of neural networks (NNs). However, little work has been done to extract the potential hidden semantics (mathematical representation) of a neural network. A succinct and explicit mathematical representation of a NN model could improve the understanding and interpretation of its behaviors. To address this need, we propose a novel symbolic regression method for neural networks (called SRNet) to discover the mathematical expressions of a NN. SRNet creates a Cartesian genetic programming representation (NNCGP) to capture the hidden semantics of a single layer in a NN. It then leverages a multi-chromosome NNCGP to represent the hidden semantics of all layers of the NN. The method uses a (1+λ) evolutionary strategy (called MNNCGP-ES) to extract the final mathematical expressions of all layers in the NN. Experiments on 12 symbolic regression benchmarks and 5 classification benchmarks show that SRNet not only can reveal the complex relationships between the layers of a NN but also can extract the mathematical representation of the whole NN. Compared with LIME and MAPLE, SRNet has higher interpolation accuracy and tends to approximate the real model on the practical dataset.
This work proposes an extension to Structured Grammatical Evolution (SGE) called Co-evolutionary Probabilistic Structured Grammatical Evolution (Co-PSGE). In Co-PSGE each individual in the population is composed of a grammar and a genotype, which is a list of dynamic lists, each corresponding to a non-terminal of the grammar and containing real numbers that correspond to the probability of choosing a derivation rule. Each individual uses its own grammar to map the genotype into a program. During the evolutionary process, both the grammar and the genotype are subject to variation operators.
The performance of the proposed approach is compared to three different methods, namely Grammatical Evolution (GE), Probabilistic Grammatical Evolution (PGE), and SGE, on four different benchmark problems. The results show the effectiveness of the approach, since Co-PSGE is able to outperform all the other methods with statistically significant differences on the majority of the problems.
General program synthesis has become an important application area for genetic programming (GP), and for artificial intelligence more generally. Code Building Genetic Programming (CBGP) is a recently introduced GP method for general program synthesis that leverages reflection and first-class specifications to support the evolution of programs that may use arbitrary data types, polymorphism, and functions drawn from existing codebases. However, neither a formal description nor a thorough benchmarking of CBGP has yet been reported. In this work, we formalize the method of CBGP using algorithms from type theory. Specifically, we show that a functional programming language and a Hindley-Milner type system can be used to evolve type-safe programs using the process abstractly described in the original CBGP paper. Furthermore, we perform a comprehensive analysis of the search performance of this functional variant of CBGP compared to other contemporary GP program synthesis methods.
Solving the indefinite Helmholtz equation is not only crucial for the understanding of many physical phenomena but also represents an outstandingly difficult benchmark problem for the successful application of numerical methods. Here we introduce a new approach for evolving efficient preconditioned iterative solvers for Helmholtz problems with multi-objective grammar-guided genetic programming. Our approach is based on a novel context-free grammar, which enables the construction of multigrid preconditioners that employ a tailored sequence of operations on each discretization level. To find solvers that generalize well over the given domain, we propose a custom method of successive problem difficulty adaption, in which we evaluate a preconditioner's efficiency on increasingly ill-conditioned problem instances. We demonstrate our approach's effectiveness by evolving multigrid-based preconditioners for a two-dimensional indefinite Helmholtz problem that outperform several human-designed methods for different wavenumbers, up to systems of linear equations with more than a million unknowns.
GitHub Copilot, an extension for the Visual Studio Code development environment powered by the large-scale language model Codex, makes automatic program synthesis available to software developers. This model has been extensively studied in the field of deep learning; however, a comparison to genetic programming, which is also known for its performance in automatic program synthesis, has not yet been carried out. In this paper, we evaluate GitHub Copilot on standard program synthesis benchmark problems and compare the achieved results with those from the genetic programming literature. In addition, we discuss the performance of both approaches. We find that the performance of the two approaches on the benchmark problems is quite similar; however, in comparison to GitHub Copilot, the program synthesis approaches based on genetic programming are not yet mature enough to support programmers in practical software development. Genetic programming usually needs a huge number of expensive hand-labeled training cases and takes too much time to generate solutions. Furthermore, source code generated by genetic programming approaches is often bloated and difficult to understand. For future work on program synthesis with genetic programming, we suggest that researchers focus on improving execution time, readability, and usability.
Procedurally generated video game content has the potential to drastically reduce the content creation budget of game developers and large studios. However, adoption is hindered by limitations such as slow generation, as well as low quality and diversity of content. We introduce an evolutionary search-based approach for evolving level generators using novelty search to procedurally generate diverse levels in real time, without requiring training data or detailed domain-specific knowledge. We test our method on two domains, and our results show an order of magnitude speedup in generation time compared to existing methods while obtaining comparable metric scores. We further demonstrate the ability to generalise to arbitrary-sized levels without retraining.
To achieve excellent performance with modern neural networks, having the right network architecture is important. Neural Architecture Search (NAS) concerns the automatic discovery of task-specific network architectures. Modern NAS approaches leverage supernetworks whose subnetworks encode candidate neural network architectures. These subnetworks can be trained simultaneously, removing the need to train each network from scratch and thereby increasing the efficiency of NAS.
A recent method called Neural Architecture Transfer (NAT) further improves the efficiency of NAS for computer vision tasks by using a multi-objective evolutionary algorithm to find high-quality subnetworks of a supernetwork pretrained on ImageNet. Building upon NAT, we introduce ENCAS --- Evolutionary Neural Cascade Search. ENCAS can be used to search over multiple pretrained supernetworks to achieve a trade-off front of cascades of different neural network architectures, maximizing accuracy while minimizing FLOPs count.
We test ENCAS on common computer vision benchmarks (CIFAR-10, CIFAR-100, ImageNet) and achieve Pareto dominance over previous state-of-the-art NAS models up to 1.5 GFLOPs. Additionally, applying ENCAS to a pool of 518 publicly available ImageNet classifiers leads to Pareto dominance in all computation regimes and to increasing the maximum accuracy from 88.6% to 89.0%, accompanied by an 18% decrease in computation effort from 362 to 296 GFLOPs.
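The cascade idea central to ENCAS can be sketched as early-exit inference (the function names, confidence rule, and threshold values below are illustrative assumptions; which models and thresholds to use is exactly what the evolutionary search would decide):

```python
def cascade_predict(stages, x):
    """Run cheaper networks first; consult the next (larger) network
    only when the current one's top confidence falls below its exit
    threshold. `stages` is a list of (model, threshold) pairs ordered
    by increasing cost; the final model always answers."""
    for model, threshold in stages[:-1]:
        probs = model(x)
        if max(probs) >= threshold:
            return probs  # early exit: larger models are never evaluated
    return stages[-1][0](x)

# toy stand-ins for a small and a large classifier
small = lambda x: [0.90, 0.10] if x == "easy" else [0.55, 0.45]
large = lambda x: [0.20, 0.80]
stages = [(small, 0.8), (large, 0.0)]
```

Easy inputs are answered by the cheap model alone, so average FLOPs drop while hard inputs still reach the most accurate model, which is how a cascade can dominate single networks on the accuracy/FLOPs front.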
Though Neuroevolution (NE) and Neural Architecture Search (NAS) have emerged as techniques for automating the design of neural networks, they are expensive and time-consuming: they require training many neural networks and have largely resisted the benefits of surrogate-based optimization approaches, as it is difficult to model the performance of variable network architectures. We propose a novel and general framework for surrogate-assisted search of neural architectures consisting of two components: (1) an algorithm which leverages grammars to generate tensor representations of variable neural network topologies; and (2) an evolutionary algorithm which employs a surrogate model to expedite architecture search using active learning. We demonstrate that our model can produce accurate performance predictions for unseen architectures, realizing a 5x reduction in the total compute required for search while improving asymptotic performance. We also illustrate that the surrogate models are transferable to new domains via a real-world transfer learning case study using industrial time series data.
Mixed-precision quantization is a powerful tool to enable memory and compute savings of neural network workloads by deploying different sets of bit-width precisions on separate compute operations. In this work, we present a flexible and scalable framework for automated mixed-precision quantization that concurrently optimizes task performance, memory compression, and compute savings through multi-objective evolutionary computing. Our framework centers on Neuroevolution-Enhanced Multi-Objective Optimization (NEMO), a novel search method, which combines established search methods with the representational power of neural networks. Within NEMO, the population is divided into structurally distinct sub-populations, or species, which jointly create the Pareto frontier of solutions for the multi-objective problem. At each generation, species perform separate mutation and crossover operations, and are re-sized in proportion to the goodness of their contribution to the Pareto frontier. In our experiments, we define a graph-based representation to describe the underlying workload, enabling us to deploy graph neural networks trained by NEMO via neuroevolution, to find Pareto optimal configurations for MobileNet-V2, ResNet50 and ResNeXt-101-32×8d. Compared to the state-of-the-art, we achieve competitive results on memory compression and superior results for compute compression. Further analysis reveals that the graph representation and the species-based approach employed by NEMO are critical to finding optimal solutions.
Neural architecture search (NAS) aims to automate architecture engineering in neural networks. This often requires a high computational overhead to evaluate a number of candidate networks from the set of all possible networks in the search space during the search. Prediction of the networks' performance can alleviate this high computational overhead by mitigating the need for evaluating every candidate network. Developing such a predictor typically requires a large number of evaluated architectures which may be difficult to obtain. We address this challenge by proposing a novel evolutionary-based NAS strategy, Predictor-assisted E-NAS (PRE-NAS), which can perform well even with an extremely small number of evaluated architectures. PRE-NAS leverages new evolutionary search strategies and integrates high-fidelity weight inheritance over generations. Unlike one-shot strategies, which may suffer from bias in the evaluation due to weight sharing, offspring candidates in PRE-NAS are topologically homogeneous, which circumvents bias and leads to more accurate predictions. Extensive experiments on NAS-Bench-201 and DARTS search spaces show that PRE-NAS can outperform state-of-the-art NAS methods. With only a single GPU searching for 0.6 days, competitive architecture can be found by PRE-NAS which achieves 2.40% and 24% test error rates on CIFAR-10 and ImageNet respectively.
A fascinating aspect of nature lies in its ability to produce a large and diverse collection of organisms that are all high-performing in their niche. By contrast, most AI algorithms focus on finding a single efficient solution to a given problem. Aiming for diversity in addition to performance is a convenient way to deal with the exploration-exploitation trade-off that plays a central role in learning. It also allows for increased robustness when the returned collection contains several working solutions to the considered problem, making it well-suited for real applications such as robotics. Quality-Diversity (QD) methods are evolutionary algorithms designed for this purpose. This paper proposes a novel algorithm, qd-pg, which combines the strengths of Policy Gradient algorithms and Quality-Diversity approaches to produce a collection of diverse and high-performing neural policies in continuous control environments. The main contribution of this work is the introduction of a Diversity Policy Gradient (DPG) that exploits information at the time-step level to drive policies towards more diversity in a sample-efficient manner. Specifically, qd-pg selects neural controllers from a MAP-Elites grid and uses two gradient-based mutation operators to improve both quality and diversity. Our results demonstrate that qd-pg is significantly more sample-efficient than its evolutionary competitors.
Stochastic gradient descent (SGD) is a premium optimization method for training neural networks, especially for learning objectively defined labels such as image objects and events. When a neural network is instead faced with subjectively defined labels---such as human demonstrations or annotations---SGD may struggle to explore the deceptive and noisy loss landscapes caused by the inherent bias and subjectivity of humans. While neural networks are often trained via preference learning algorithms in an effort to eliminate such data noise, the de facto training methods rely on gradient descent. Motivated by the lack of empirical studies on the impact of evolutionary search on the training of preference learners, we introduce the RankNEAT algorithm, which learns to rank through neuroevolution of augmenting topologies. We test the hypothesis that RankNEAT outperforms traditional gradient-based preference learning within the affective computing domain, in particular predicting annotated player arousal from the game footage of three dissimilar games. RankNEAT yields superior performance compared to the gradient-based preference learner (RankNet) in the majority of experiments, since its architecture optimization capacity acts as an efficient feature selection mechanism, thereby eliminating overfitting. Results suggest that RankNEAT is a viable and highly efficient evolutionary alternative to preference learning.
Vanilla neural architecture search using evolutionary algorithms (EA) involves evaluating each architecture by training it from scratch, which is extremely time-consuming. This cost can be reduced by using a supernet, whose weight-sharing nature allows it to estimate the fitness of every architecture in the search space. However, the estimated fitness is very noisy due to the co-adaptation of the operations in the supernet. In this work, we propose a method called pEvoNAS, wherein the whole neural architecture search space is progressively reduced to smaller search space regions containing good architectures. This is achieved by using a trained supernet for architecture evaluation during the search with a genetic algorithm to find search space regions with good architectures. Upon reaching the final reduced search space, the supernet is then used to search for the best architecture in that search space using evolution. The search is further enhanced by weight inheritance, wherein the supernet for the smaller search space inherits its weights from the previously trained supernet for the bigger search space. Experimentally, pEvoNAS gives better results on CIFAR-10 and CIFAR-100 while using significantly fewer computational resources than previous EA-based methods. The code for our paper can be found here.
Consider the problem of training robustly capable agents. One approach is to generate a diverse collection of agent policies. Training can then be viewed as a quality diversity (QD) optimization problem, where we search for a collection of performant policies that are diverse with respect to quantified behavior. Recent work shows that differentiable quality diversity (DQD) algorithms greatly accelerate QD optimization when exact gradients are available. However, agent policies typically operate in environments that are not differentiable. To apply DQD algorithms to training agent policies, we must approximate gradients for performance and behavior. We propose two variants of the current state-of-the-art DQD algorithm that compute gradients via approximation methods common in reinforcement learning (RL). We evaluate our approach on four simulated locomotion tasks. One variant achieves results comparable to the current state-of-the-art in combining QD and RL, while the other performs comparably in two locomotion tasks. These results provide insight into the limitations of current DQD algorithms in domains where gradients must be approximated. Source code is available at https://github.com/icaros-usc/dqd-rl
Searching for solutions to tasks with sparse or deceptive rewards is a fundamental problem in Evolutionary Algorithms (EA) and Reinforcement Learning (RL). Existing RL methods enhance exploration by encouraging agents to reach novel states. However, solely seeking a single locally optimal solution can be insufficient for tasks with deceptive local optima. Novelty Search (NS) and Quality-Diversity (QD) have shown promising results in finding diverse solutions with different behavioral characteristics. However, manually defining a task-specific behavior description limits these methods to low-dimensional tasks. This paper presents Dynamics-aware Novelty Search with Behavior Repulsion (DANSBR), a hybrid algorithm that evolves high-performing solutions by introducing a generalized novelty measurement and a bidirectional gradient-based mutation operator based on the Quality-Diversity paradigm. The novelty of a single solution is defined as the prediction error of an approximate dynamics model in a task-agnostic behavior space. The mutation operator drives solutions to behave differently or to obtain better performance in a sample-efficient manner. As a result of better exploration, our approach outperforms several baselines on high-dimensional continuous control tasks with sparse rewards. Empirical results also demonstrate that DANSBR improves performance on tasks with deceptive rewards.
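The generalized novelty measure described above can be sketched in a few lines (the linear "learned" model and all names below are illustrative stand-ins, not DANSBR's implementation):

```python
import numpy as np

def dynamics_novelty(predict_next, state, action, next_state):
    """Novelty of one transition = prediction error of a learned
    forward-dynamics model in a task-agnostic behavior space: poorly
    modelled (rarely visited) transitions score high."""
    return float(np.linalg.norm(predict_next(state, action) - next_state))

# toy "learned" model that believes actions add directly to the state
predict_next = lambda s, a: s + a
s, a = np.array([0.0, 0.0]), np.array([1.0, 0.0])
familiar = dynamics_novelty(predict_next, s, a, np.array([1.0, 0.0]))  # well predicted
novel = dynamics_novelty(predict_next, s, a, np.array([1.0, 2.0]))     # surprising
```

Because the score comes from model error rather than a hand-designed behavior descriptor, no task-specific behavior space needs to be defined, which is what lifts the low-dimensional restriction mentioned in the abstract.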
Image reconstruction from ray projections is a common technique in medical imaging. In particular, the few-view scenario, in which the number of projections is very limited, is important for cases where the patient is vulnerable to potentially damaging radiation. This paper considers swarm-based reconstruction, where individuals, or particles, swarm in image space in an attempt to lower the reconstruction error. We compare several swarm algorithms with standard algebraic reconstruction techniques and filtered back-projection for five standard test phantoms viewed under reduced projections. We find that although swarm algorithms do not produce solutions with lower reconstruction errors, they generally find more accurate reconstructions; that is, swarm techniques furnish reconstructions that are more similar to the original phantom. A function profiling method suggests that the ability of the swarm to optimise these high-dimensional problems can be attributed to a broad funnel leading to complex structure close to the optima. This finding is further exploited by optimising the parameters of the best performing swarm technique, and the results are compared against three unconstrained and boxed local search methods. The tomographic reconstruction-optimised swarm technique is shown to be superior to prominent algebraic reconstructions and local search algorithms.
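For context, the classical algebraic reconstruction baseline (ART, i.e., the Kaczmarz method) that the swarm techniques are compared against can be sketched as follows (the tiny system below is illustrative; real projection matrices are large and sparse):

```python
import numpy as np

def art(A, b, sweeps=50, relax=1.0):
    """Algebraic Reconstruction Technique: for each ray equation
    a_i . x = b_i in turn, project the current image estimate onto
    that equation's hyperplane. A is the projection matrix, b the
    measured ray sums, relax the relaxation factor."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# consistent 3-ray, 2-pixel toy system with exact image (1, 2)
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
x = art(A, b)
```

On consistent systems ART converges to the exact image; the few-view setting studied in the paper makes the system underdetermined, which is where population-based search becomes attractive.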
Human lives are increasingly influenced by algorithms, which therefore need to meet higher standards not only in accuracy but also with respect to explainability. This is especially true for high-stakes areas such as real estate valuation. Unfortunately, the methods applied there often exhibit a trade-off between accuracy and explainability.
One explainable approach is case-based reasoning (CBR), where each decision is supported by specific previous cases. However, such methods can be wanting in accuracy. The unexplainable machine learning approaches are often observed to provide higher accuracy but are not scrutable in their decision-making.
In this paper, we apply evolutionary algorithms (EAs) to CBR predictors in order to improve their performance. In particular, we deploy EAs to the similarity functions (used in CBR to find comparable cases), which are fitted to the data set at hand. As a consequence, we achieve higher accuracy than state-of-the-art deep neural networks (DNNs), while keeping interpretability and explainability.
These results stem from our empirical evaluation on a large data set of real estate offers where we compare known similarity functions, their EA-improved counterparts, and DNNs. Surprisingly, DNNs are only on par with standard CBR techniques. However, using EA-learned similarity functions does yield an improved performance.
This article reports on the use of a bi-objective evolutionary algorithm, interacting with a traffic simulator and data exploration methods, to analyze the optimal capacity and location of charging infrastructure for electric vehicles. The focus of this study is the city of Cuenca, Ecuador. We configure a scenario with 20 candidate charging stations and 500 electric vehicles driving according to the mobility distribution observed in this city. We optimize the travel time of vehicles that require recharging and the number of charging stations distributed in the city. Quality of Service, defined as the ratio of charged vehicles to vehicles waiting for a charge, is considered as a constraint. The approximate Pareto set of solutions produced in our experiments includes a number of trade-off solutions to the formulated problem and shows that the evolutionary approach is a practical tool to find and study different layouts for the location and capacities of charging stations. In addition, we complement the analysis of results by considering Quality of Service, charging time, and energy to determine the best locations in the city. The proposed framework, which combines simulated scenarios with evolutionary algorithms, is a powerful tool to analyze and understand different charging station infrastructure designs.
Finding balanced, highly nonlinear Boolean functions is a difficult problem for which it is not known what nonlinearity values can be reached in general. At the same time, evolutionary computation is successfully used to evolve specific Boolean function instances, but the approach cannot easily scale to larger Boolean function sizes. Indeed, while evolving smaller Boolean functions is almost trivial, larger sizes become increasingly difficult, and evolutionary algorithms perform suboptimally.
In this work, we ask whether genetic programming (GP) can evolve constructions resulting in balanced Boolean functions with high nonlinearity. This question is especially interesting as there are only a few known such constructions. Our results show that GP can find constructions that generalize well, i.e., result in the required functions for multiple tested sizes. Further, we show that GP evolves many equivalent constructions under different syntactic representations. Interestingly, the simplest solution found by GP is a particular case of the well-known indirect sum construction.
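The nonlinearity objective underlying this search is standard and can be computed via the Walsh-Hadamard spectrum, NL(f) = 2^(n-1) - max|W_f|/2; the generic implementation below is a sketch of that textbook formula, not the paper's code:

```python
import numpy as np

def nonlinearity(truth_table):
    """Nonlinearity of an n-variable Boolean function given as a 0/1
    truth table of length 2**n: the distance to the closest affine
    function, computed from the Walsh-Hadamard spectrum."""
    w = 1 - 2 * np.asarray(truth_table, dtype=np.int64)  # (-1)**f(x)
    h = 1
    while h < len(w):  # in-place fast Walsh-Hadamard transform
        for i in range(0, len(w), 2 * h):
            a, b = w[i:i + h].copy(), w[i + h:i + 2 * h].copy()
            w[i:i + h], w[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return int((len(w) - np.max(np.abs(w))) // 2)
```

For example, the two-variable AND function (a bent function) has nonlinearity 1, while any affine function such as f(x1, x2) = x1 has nonlinearity 0; a construction-evolving GP would use such a fitness evaluated at multiple sizes.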
For in silico drug discovery, various representations have been established for storing and processing molecular data. The choice of representation has a great impact on the employed methods and algorithms. Molecular fingerprints in the form of fixed-size bit vectors are a widely used representation that captures structural features of a molecule and enables a straightforward way of estimating molecule similarities. However, since fingerprints are not invertible, they are rarely utilized for molecule generation tasks. This study presents an approach, based on genetic algorithms, to the reconstruction of molecules from their fingerprint representation. The algorithm assembles molecules from BRICS fragments and therefore only generates valid molecular structures. We demonstrate that the genetic algorithm is able to construct molecules similar to the specified target, or even reconstruct the original molecule. Furthermore, to illustrate how this genetic algorithm unlocks fingerprints as a representation for other in silico drug discovery methods, a novel Transformer neural language model trained on molecular fingerprints is introduced as a molecule generation model.
Discrete-variable gate-model quantum machines are promising quantum systems considering their wide applicability. Being in their noisy intermediate-scale era, they can execute only circuits of limited complexity that fit the machines' features. Thus, such systems implement a key and unavoidable tailoring process to produce the most compact and device-compliant circuit possible. Qubit initialisation is a primary and complex step that can ease or jeopardise the tailoring process and restrict or extend the machine's computational capacities. Ultimately, this bottleneck can be responsible for enabling quantum leaps like quantum supremacy. As a step in this direction, this work investigates how evolutionary algorithms can enhance qubit initialisation by tackling it as a single-objective problem using a genetic algorithm. The experiments used instances representing 19 real IBM quantum machines with 7 to 65 qubits and 9 different qubit topologies. Also, 76 GHZ circuits of sizes 7-65 qubits and 25%-100% entanglement were created and studied. Extensive standard and statistical comparisons were made against the IBM qubit initialiser currently used in real quantum machines. Results show that the proposal outperforms IBM's initialiser on 64 instances and matches it on 10, with an average circuit-compression gain of up to 46%.
The Multi-Objective Real-Valued Gene-pool Optimal Mixing Evolutionary Algorithm (MO-RV-GOMEA) has proven effective and efficient in solving real-world problems. A prime example is optimizing treatment plans for prostate cancer brachytherapy, an internal form of radiation treatment, for which equally important clinical aims from a base protocol are grouped into two objectives and bi-objectively optimized. This use of MO-RV-GOMEA was recently successfully introduced into clinical practice. Brachytherapy can also play an important role in treating cervical cancer. However, using the same approach to optimize treatment plans often does not immediately lead to clinically desirable results. Concordantly, medical experts indicate that they use additional aims beyond the cervix base protocol. Moreover, these aims have different priorities and can be adjusted patient-specifically. For this reason, we propose a novel adaptive objective configuration method to use with MO-RV-GOMEA so that we can accommodate additional aims of this nature. Based on results using only the base protocol, in consultation with medical experts, we configured key additional aims. We show how, for 10 patient cases, the new approach achieves the intended result, properly taking into account the additional aims. Consequently, plans resulting from the new approach are preferred by medical specialists in 8/10 cases.
An emerging optimisation problem from real-world applications, named the multi-point dynamic aggregation (MPDA) problem, has become an active research topic in multi-robot systems. This paper focuses on a multi-objective MPDA (MO-MPDA) problem, which is to design execution plans for robots that minimise the cost of the used robots and maximise the efficiency of task execution. The MO-MPDA problem has the issues of conflicting objectives, redundant representation, and variable-length encoding, which pose extra challenges to addressing it effectively. Combining the ∊-constraint method and decomposition mechanisms, a novel multi-objective evolutionary algorithm is proposed. The proposed algorithm takes the efficiency objective as the main objective and converts the cost objective into constraints. Thus, the multi-objective problem is decomposed into a series of scalar constrained optimisation subproblems by assigning each subproblem an upper-bound constraint. All the subproblems are optimised and evolved simultaneously, with knowledge transferred between subproblems, to solve the MO-MPDA problem in parallel and efficiently. Besides, considering the characteristics of parent individuals, this paper designs a hybrid reproduction mechanism to transmit effective information to offspring individuals, tackling the encoding redundancy and variable length. Experimental results show that the proposed algorithm significantly outperforms state-of-the-art algorithms in terms of the most commonly used metrics.
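The ∊-constraint decomposition described above can be sketched as a selection over a candidate pool (in the paper each subproblem is evolved rather than solved by selection; the function name and (cost, efficiency) tuple encoding below are illustrative assumptions):

```python
def epsilon_constraint_front(solutions, cost_bounds):
    """Keep efficiency as the main objective and turn cost into a
    constraint: subproblem k maximises efficiency subject to
    cost <= cost_bounds[k]. Together, the subproblem optima
    approximate the Pareto front. `solutions` is a pool of
    (cost, efficiency) pairs."""
    front = []
    for eps in cost_bounds:
        feasible = [s for s in solutions if s[0] <= eps]
        if feasible:
            front.append(max(feasible, key=lambda s: s[1]))
    return front
```

Sweeping the cost bound from tight to loose yields one best-efficiency plan per budget, i.e., a discretised trade-off front between robot cost and task-execution efficiency.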
Active Directory (AD) is the default security management system for Windows domain networks. We study a Stackelberg game model between one attacker and one defender on an AD attack graph. The attacker initially has access to a set of entry nodes and can expand this set by strategically exploring edges. Every edge has a detection rate and a failure rate. The attacker aims to maximize their chance of successfully reaching the destination before getting detected. The defender's task is to block a constant number of edges to decrease the attacker's chance of success. We show that the problem is #P-hard and, therefore, intractable to solve exactly. We convert the attacker's problem to an exponentially sized Dynamic Program that is approximated by a Neural Network (NN). Once trained, the NN provides an efficient fitness function for the defender's Evolutionary Diversity Optimisation (EDO). The diversity emphasis on the defender's solution provides a diverse set of training samples, which improves the training accuracy of our NN for modelling the attacker. We alternate between NN training and EDO. Experimental results show that, for the R500 graph, our proposed EDO-based defense is less than 1% away from the optimal defense.
We present EvoIsland, a scalable interactive evolutionary user interface framework inspired by the spatially isolated land masses seen on Earth. Our generalizable interaction system encourages creators to spatially explore a wide range of design possibilities through the combination, separation, and rearrangement of hexagonal tiles on a grid. As these tiles are grouped into island-like clusters, localized populations of designs form through an underlying evolutionary system. The interactions that take place within EvoIsland provide content creators with new ways to shape, display and assess populations in evolutionary systems that produce a wide range of solutions with visual phenotype outputs.
Database systems typically have many knobs that must be configured by database administrators to achieve high performance. RocksDB achieves fast data writing performance using a log-structured merge-tree. This database contains many knobs related to write and space amplification, which are important performance indicators in RocksDB. Prior work has shown that significant performance improvements can be achieved by tuning database knobs. However, tuning multiple knobs simultaneously is a laborious task owing to the large number of potential configuration combinations and trade-offs.
To address this problem, we built a tuning system for RocksDB. First, we generated a valuable RocksDB data repository for analysis and tuning. To find the workload that is most similar to a target workload, we created a new representation for workloads. We then applied the Mahalanobis distance to create a combined workload that is as close to the original target workload as possible. Subsequently, we trained a deep neural network model with the combined workload and used it as the fitness function of a genetic algorithm. Finally, we applied the genetic algorithm to find the best solution for the original target workload. The experimental results demonstrated that the proposed system achieved a significant performance improvement for various target workloads.
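The workload-matching step hinges on the Mahalanobis distance between workload feature vectors. As a hedged sketch, the version below assumes a diagonal covariance matrix (independent features with given variances); the paper's system uses its own workload representation and a full covariance estimate.

```python
import math

def mahalanobis_diag(x, y, variances):
    """Mahalanobis distance between two workload feature vectors,
    assuming a diagonal covariance matrix (per-feature variances)."""
    return math.sqrt(sum((a - b) ** 2 / v for a, b, v in zip(x, y, variances)))

def closest_workload(target, repository, variances):
    """Pick the repository workload whose features are nearest the target."""
    return min(repository, key=lambda w: mahalanobis_diag(target, w, variances))
```

With unit variances this reduces to the Euclidean distance; unequal variances down-weight features that naturally fluctuate more across workloads.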
Cooperative co-evolutionary approaches have shown the ability to evolve specialised agent behaviours in multi-agent systems. A key aspect of applying cooperative co-evolution as an optimisation technique is the decomposition of the overall problem into smaller, interacting sub-components. Typically, problem decomposition occurs a priori, with a common approach being to decompose by agent, where each agent's behaviour corresponds to a separate evolving individual. Such approaches have inherent scalability limitations as the number of agents increases. In this work, we demonstrate a novel dynamic decomposition scheme based on the hierarchical subdivision of the environment, rather than by agent. In our approach, decomposition is performed simultaneously with the evolution of behaviour, starting with behaviour for the whole environment, followed by environment sub-division and behaviour specialisation. The approach was evaluated by evolving the behaviour of a set of cooperating combat unit agents in a real-time strategy game. The game was set in an environment composed of areas, each requiring a different behaviour for optimal gameplay. Using the behaviour evolved by our approach, the agents consistently defeated a numerically superior opponent and demonstrated location-appropriate behaviour. This approach may benefit other multi-agent systems where distinct environment sections call for different specialisation.
Oftentimes the characteristics of real-world engineering optimization problems are not well understood. In this paper, we introduce an approach for characterizing highly nonlinear and Finite Element (FE) simulation-based engineering optimization problems, focusing on ten representative problem instances from automotive crashworthiness optimization. By computing characteristic Exploratory Landscape Analysis (ELA) features, we show that these ten crashworthiness problem instances exhibit landscape features different from classical optimization benchmark test suites, such as the widely-used Black-Box Optimization Benchmarking (BBOB) problem set. Using clustering approaches, we demonstrate that these ten problem instances are clearly distinct from the BBOB test functions. Further analysis of the crashworthiness problem instances reveals that, in terms of ELA features, they are most similar to a class of artificially generated functions. We identify such artificially generated functions and propose using them as scalable and fast-to-evaluate representatives of the real-world problems. Such artificially generated functions could be used for the automated design of optimization algorithms for specific real-world problem classes.
Identifying the primary case, denoted as the index case or commonly "patient zero", of an epidemic is a critical part of epidemiological investigation for understanding outbreak occurrence and how to prevent future outbreaks. In this paper, we devise a Particle Swarm Optimization algorithm to find solutions to the patient zero inverse problem, which aims to estimate the location and time origin of a disease from a single snapshot of the infection status of the post-outbreak population. We consider various compartmental models for disease spread on a regular lattice and also propose a new variant of the patient zero inverse problem where the disease is reintroduced into a population. Experimental results show that the proposed algorithm is effective at identifying the location and time of an epidemic's inception, and shows improvements over other algorithms in the literature. Specifically, the estimates for the location of an outbreak are 50% closer than those of similar algorithms, along with statistically significant improvements in identifying the onset time of an outbreak. Moreover, the proposed algorithm also performs consistently in partially observable environments and scales well when the search space is increased.
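For readers unfamiliar with the search backbone, a minimal particle swarm optimiser looks as follows. This is a textbook global-best PSO for continuous minimisation, not the paper's tailored variant; the inertia and acceleration parameters are conventional defaults, and in the patient zero setting the fitness would score candidate (location, time) origins against the observed infection snapshot.

```python
import random

def pso(fitness, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimiser (minimisation)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal best positions
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]       # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

On a smooth toy objective such as the sphere function, this sketch reliably converges to the optimum within a couple of hundred iterations.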
Bug reports are widely used by software engineers to understand the important components of individual defects and the adjustments made to resolve them. However, bug reports are typically long and require significant effort to comprehend. Summarization of bug reports is beneficial in this regard, covering essential and diverse information. We frame the bug report summarization problem as a clustering-based optimization problem and solve it using a multi-view multi-objective optimization framework. To represent the bug reports, semantic and syntactic representations, regarded as separate views, are taken into account. Several cluster quality measures computed on partitionings obtained using distinct views are optimized simultaneously using a multi-objective optimization approach known as archived multi-objective simulated annealing. To determine the consensus between the partitionings generated using different views, an agreement index is computed, which is also optimized simultaneously along with the other cluster quality measures. The proposed methodology automatically determines the number of clusters. The experiments are carried out on two benchmark datasets (SDS and ADS) and evaluated using the well-known ROUGE, Precision, Recall, and F-measure evaluation metrics. The obtained results show that the proposed methodology outperforms state-of-the-art methods.
It is well known that anti-malware scanners depend on malware signatures to identify malware. However, even minor modifications to malware code structure result in a change in the malware signature, thus enabling the variant to evade detection by scanners. Therefore, there is a need for a proactively generated malware variant dataset to aid detection of such diverse variants by automated antivirus scanners. This paper proposes and demonstrates a generic assembly source code based framework that facilitates any evolutionary algorithm to generate diverse and potential variants of an input malware, while retaining its maliciousness, yet capable of evading antivirus scanners. Generic code transformation functions and a novelty search supported quality metric have been proposed as components of the framework to be used respectively as variation operators and fitness function, for evolutionary algorithms. The results demonstrate the effectiveness of the framework in generating diverse variants and the generated variants have been shown to evade over 98% of popular antivirus scanners. The malware variants evolved by the framework can serve as antigens to assist malware analysis engines to improve their malware detection algorithms.
The configuration of radar networks is a complex problem that is often performed manually by experts with the help of a simulator. Different numbers and types of radars as well as different locations that the radars shall cover give rise to different instances of the radar configuration problem. The exact modeling of these instances is complex, as the quality of the configurations depends on a large number of parameters, on internal radar processing, and on the terrains on which the radars need to be placed. Classic optimization algorithms can therefore not be applied to this problem, and we rely on "trial-and-error" black-box approaches.
In this paper, we study the performance of 13 black-box optimization algorithms on 153 radar network configuration problem instances. The algorithms perform considerably better than human experts. Their ranking, however, depends on the budget of configurations that can be evaluated and on the elevation profile of the location. We therefore also investigate automated algorithm selection approaches. Our results demonstrate that a pipeline that extracts instance features from the elevation of the terrain performs on par with the classical, far more expensive approach that extracts features from the objective function.
In this paper, we compare the performance of a canonical genetic algorithm (CGA), the Self-Adaptive Genetic Algorithm (SAGA), and a feed-forward neural network (FFNN) on a predictive modeling problem with incomplete data. Predictive modeling involves learning relationships between the features and labels of the data points in a dataset. Datasets with missing input values may cause problems for some learning algorithms by biasing the learned models. Imputation refers to techniques for replacing missing data through methods such as statistical probabilities, multivariate analysis, machine learning, or K-nearest neighbors.
We study how imputed datasets impact the ability of CGA, SAGA, and FFNN to learn effective models. Results indicate that the imputation method has little effect on CGA and SAGA performance and a noticeable effect on FFNN performance. All three algorithms perform similarly when applied to data imputed by univariate strategies, but FFNN is noticeably worse on data imputed by trained multivariate strategies. With increased quantities of imputed data, test accuracy decreases for all three algorithms, while control accuracy remains surprisingly stable in all cases except for FFNN on trained multivariate imputation. Interestingly, CGA and SAGA identify the most relevant input values even when a large amount of the data is imputed.
Autonomous robot swarm systems address many inherent limitations of single-robot systems, such as scalability and reliability. As a consequence, they have found their way into numerous applications, including space and aerospace domains such as swarm-based asteroid observation and counter-drone systems. However, achieving stable formations around a point of interest with different numbers of robots and diverse initial conditions can be challenging. In this article, we propose a novel method for the self-organisation of autonomous robot swarms relying solely on the robots' relative positions (angle and distance). This work focuses on an evolutionary optimisation approach to calculating the parameters of the swarm, e.g. the inter-robot distance, to achieve a reliable formation under different initial conditions. Experiments are conducted using realistic simulations and considering four case studies. The results observed after testing the optimal configurations on 72 unseen scenarios per case study show the high robustness of our proposal, since the desired formation was always achieved. The ability to self-organise around a point of interest while maintaining a predefined fixed distance was also validated using real robots.
Cybersecurity simulations can offer deep insights into the behavior of agents in the battle to secure computer systems. We build on existing work modeling the competition between an attacker and defender on a network architecture in a zero-sum game using a graph database linking cybersecurity attack patterns, vulnerabilities, and software. We apply coevolution to this challenging environment, and in a novel modeling approach for this problem, interpret each population as a distribution over fixed strategies to form a mixed strategy Nash equilibrium. We compare the results to solutions generated by multi-agent reinforcement learning and show that evolutionary methods demonstrate a considerable degree of robustness to parameter misspecification in this environment.
Self-adaptive systems frequently use tactics to perform adaptations. Tactic examples include implementing additional security measures when an intrusion is detected, or activating a cooling mechanism when temperature thresholds are surpassed. Tactic volatility occurs in real-world systems and is defined as variable behavior in the attributes of a tactic, such as its latency or cost. A system's inability to effectively account for tactic volatility adversely impacts its efficiency and resiliency against the dynamics of real-world environments. To enable systems to operate efficiently despite tactic volatility, we propose a Tactic Volatility Aware (TVA-E) process utilizing evolved Recurrent Neural Networks (eRNN) to provide accurate tactic predictions. TVA-E is also the first known process to take advantage of uncertainty reduction tactics to provide additional information to the decision-making process and reduce uncertainty. TVA-E easily integrates into popular adaptation processes, enabling it to immediately benefit a large number of existing self-adaptive systems. Simulations using 52,106 tactic records demonstrate that: I) eRNN is an effective prediction mechanism, II) TVA-E represents an improvement over existing state-of-the-art processes in accounting for tactic volatility, and III) uncertainty reduction tactics are beneficial in accounting for tactic volatility. The developed dataset and tool can be found at https://tacticvolatility.github.io/
Recently, optimization has become an emerging tool for neuroscientists to study the neural code. In the visual system, neurons respond to images with graded and noisy responses. The image patterns eliciting the highest responses are diagnostic of the coding content of the neuron. To find these patterns, we have used black-box optimizers to search a 4096-dimensional image space, leading to the evolution of images that maximize neuronal responses. Although the genetic algorithm (GA) has been commonly used, there has been no systematic investigation to reveal the best-performing optimizer or the underlying principles needed to improve it.
Here, we conducted a large-scale in silico benchmark of optimizers for activation maximization and found that Covariance Matrix Adaptation (CMA) excelled in its achieved activation. We compared CMA against GA and found that CMA surpassed the maximal activation of GA by 66% in silico and 44% in vivo. We analyzed the structure of the evolution trajectories and found that the key to success was not covariance matrix adaptation, but local search towards informative dimensions and an effective step-size decay. Guided by these principles and the geometry of the image manifold, we developed the SphereCMA optimizer, which competed well against CMA, proving the validity of the identified principles.
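The identified principle of "local search plus step-size decay" can be illustrated with a deliberately simple evolution strategy. This is a hedged sketch, not SphereCMA or CMA-ES: the objective stands in for a neuron's activation, the schedule is a plain exponential decay, and all parameter values are illustrative.

```python
import random

def simple_es_maximise(f, x0, iters=300, lam=20, sigma0=1.0, decay=0.98, seed=0):
    """Elitist (1+lambda)-style evolution strategy with exponential
    step-size decay, maximising f (a stand-in for neuronal activation)."""
    rng = random.Random(seed)
    x, sigma = list(x0), sigma0
    for _ in range(iters):
        # Sample lambda Gaussian perturbations around the current point.
        offspring = [[xi + sigma * rng.gauss(0, 1) for xi in x] for _ in range(lam)]
        best = max(offspring, key=f)
        if f(best) >= f(x):      # keep the parent if no offspring improves
            x = best
        sigma *= decay           # effective step-size decay
    return x
```

Even without any covariance adaptation, the shrinking step size lets the search transition from coarse exploration to fine local refinement.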
Multi-objective test case selection techniques are widely investigated with the goal of devising novel solutions to increase the cost-effectiveness of verification processes. When evaluating such approaches, the entire Pareto front of the algorithm needs to be considered. To do so, several quality indicators exist. The hypervolume (HV) is one of the best-known and most widely applied quality indicators. However, in the context of test case selection, this metric has certain limitations. For instance, two different fitness function combinations are not comparable if this metric is used at the search algorithm's objective function level. Consequently, researchers proposed the revisited HV (rHV) indicator. To compute the rHV, each solution of the search algorithm is individually assessed through two external utility functions: the cost and the fault detection capability (FDC). However, this increases the risk of having dominated solutions, which in practice may lead a decision maker (DM) to select such a dominated solution. In this paper, we assess whether the rHV is an appropriate quality indicator for assessing multi-objective test case selection algorithms. To do so, we empirically assess whether the results between the rHV and the FDC of the different DM instances hold. In summary, our results confirm that the rHV is an appropriate quality indicator.
Deep Learning (DL) components are increasingly present in safety- and mission-critical software systems. To ensure the high dependability of DL systems, robust verification methods are required, for which automation is highly beneficial (e.g., more test cases can be executed). Metamorphic Testing (MT) is a technique that has been shown to alleviate the test oracle problem when testing DL systems, thereby increasing test automation. However, a drawback of this technique lies in the need for multiple test executions to obtain the test verdict (the source and follow-up test cases), requiring additional testing cost. In this paper, we propose an approach based on multi-objective search to select follow-up test cases. Our approach uses source test cases to measure the uncertainty provoked by such test inputs in the DL model and, based on that, selects failure-revealing follow-up test cases. We integrate our approach with the NSGA-II algorithm. An empirical evaluation on three DL models tackling the image classification problem, along with five different metamorphic relations, demonstrates that our approach outperformed the baseline algorithm by between 17.09% and 59.20% on average when considering the revisited hypervolume quality indicator.
Automatically improving and repairing software using search-based methods is an active research topic. Many current systems use existing source code as the ingredients of repairs, either through evolutionary computation derived random mutation or other heuristic operators. However, these code transformation operators are not always well-matched to the granularity of the source code on which they operate. This paper proposes a static source-to-source preprocessing step to produce code with more uniform granularity that exposes relevant program components to the repair process. This approach, called Program Repair Enhancement via Preprocessing (PREP), has been applied to three different repair tools, each of which uses different code transformation operators and search algorithms. In every case, applying PREP before the search allows the tool to repair software defects that were previously unattainable by that tool. PREP finds 88 unique previously-unreported correct repairs across these tools. This result is significant because it is applicable to most search-based software improvement methods, and it addresses the fundamental issue of how to match the granularity of the representation to the granularity of operators.
Mutation testing is often used for designing new tests, and involves changing a program in minor ways, which results in mutated versions of the program, i.e., mutants. An effective test suite should find faults (or kill mutants) with a minimum number of test cases, to save the resources required for executing them. In this paper, in the context of mutation testing for quantum programs, we present a multi-objective and search-based approach (MutTG) to generate the minimum number of test cases killing as many mutants as possible. MutTG tries to estimate the likelihood that a mutant is equivalent and uses this as a discount factor in the fitness definition to avoid repeatedly trying to kill mutants that cannot be killed. We employed NSGA-II as the multi-objective search algorithm. We then compared MutTG with a version of the approach that does not use the discount factor in its fitness definition, and with random search (RS), on a set of open-source quantum programs and their mutants of varying complexity. Results show that the discount factor does indeed help guide the test generation, as the approach with the discount factor performs better than the one without it.
A surrogate function is often employed to reduce the number of objective function evaluations for optimization. However, the effect of using a surrogate model in evolutionary approaches has not been theoretically investigated. This paper theoretically analyzes the information-geometric optimization framework using a surrogate function. The value of the expected objective function under the candidate sampling distribution is used as the measure of progress of the algorithm. We assume that the surrogate function is maintained so that the population version of the Kendall's rank correlation coefficient between the surrogate function and the objective function under the candidate sampling distribution is greater than or equal to a predefined threshold. We prove that information-geometric optimization using such a surrogate function leads to a monotonic decrease in the expected objective function value if the threshold is sufficiently close to one. The acceptable threshold value is analyzed for the case of the information-geometric optimization instantiated with Gaussian distributions, i.e., the rank-μ update CMA-ES, on a convex quadratic objective function. As an alternative to the Kendall's rank correlation coefficient, we investigate the use of the Pearson correlation coefficient between the weights assigned to candidate solutions based on the objective function and the surrogate function.
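The acceptance condition on the surrogate can be made concrete with a small sketch. The Kendall computation below is the standard pairwise definition (assuming no ties); the threshold-gating helper is an illustrative stand-in for how the analysed framework requires rank agreement before trusting the surrogate, not the paper's actual maintenance procedure.

```python
from itertools import combinations

def kendall_tau(a, b):
    """Kendall's rank correlation between two score lists (no ties assumed)."""
    n = len(a)
    concordant = sum(
        1 if (a[i] - a[j]) * (b[i] - b[j]) > 0 else -1
        for i, j in combinations(range(n), 2)
    )
    return concordant / (n * (n - 1) / 2)

def surrogate_acceptable(f_vals, s_vals, threshold=0.7):
    """Trust the surrogate only while its ranking of the sampled candidates
    agrees with the objective's ranking at least to the given threshold."""
    return kendall_tau(f_vals, s_vals) >= threshold
```

A threshold close to one demands near-perfect rank agreement on the sampled candidates, which is the regime in which the monotone-decrease guarantee applies.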
Combinatorial optimization problems are a prominent application area of evolutionary algorithms, where the (1+1) EA is one of the most investigated. We extend this algorithm by introducing some problem knowledge with a specialized mutation operator which works under the assumption that the number of 1s of a solution is critical, as frequently happens in combinatorial optimization. This slight modification increases the chance to correct wrongly placed bits while preserving the simplicity and problem independence of the (1+1) EA.
As an application of our algorithm, we examine the vertex cover problem on certain instances, where we show that it leads to asymptotically better runtimes and even finds optimal solutions with higher probability than the usual (1+1) EA. Precisely, we compare the performance of both algorithms on paths and on complete bipartite graphs of size n. Regarding the path, we prove that, for a particular initial configuration, the (1+1) EA takes in expectation Θ(n^4) iterations while the modification reduces this to Θ(n^3), and we present experimental evidence that such a configuration is reached. Concerning the complete bipartite graph, our modification finds the optimum in polynomial time with probability 1 - 2^(-Ω(n^ξ)) for every positive constant ξ < 1, which improves the known probability of 1 - 1/poly(n) for the (1+1) EA.
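One plausible way to bias mutation toward correcting wrongly placed bits while respecting the number of 1s is to mix standard bit-flip mutation with a swap of a random 1 and a random 0. This sketch is illustrative of the general idea only and is not necessarily the exact operator analysed in the paper; the mixing probability p_swap is an assumption.

```python
import random

def swap_biased_mutation(bits, rng, p_swap=0.5):
    """With probability p_swap, exchange a random 1 with a random 0
    (preserving the number of 1s); otherwise apply standard 1/n bit flips."""
    child = bits[:]
    n = len(child)
    ones = [i for i, b in enumerate(child) if b == 1]
    zeros = [i for i, b in enumerate(child) if b == 0]
    if ones and zeros and rng.random() < p_swap:
        i, j = rng.choice(ones), rng.choice(zeros)
        child[i], child[j] = 0, 1
    else:
        for i in range(n):
            if rng.random() < 1.0 / n:
                child[i] = 1 - child[i]
    return child
```

The swap branch lets the algorithm relocate a misplaced 1 in a single step, whereas plain bit-flip mutation must flip a 1 and a 0 in the same iteration, which happens only with probability Θ(1/n^2) for any specific pair.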
Theoretical evidence suggests that non-elitist evolutionary algorithms (EAs) with non-linear selection mechanisms can efficiently overcome broad classes of local optima where elitist EAs fail. However, the analysis assumes a weak selective pressure and mutation rates carefully chosen close to the "error threshold", above which they cease to be efficient. On problems easier for hill-climbing, the populations may slow down these algorithms, leading to worse runtime compared with variants of the elitist (1+1) EA.
Here, we show that a non-elitist EA with power-law ranking selection achieves fast runtimes on easy benchmark problems, while maintaining the capability of escaping certain local optima where elitist EAs spend exponential time in expectation.
We derive a variant of the level-based theorem which accounts for power-law distributions. For classical theoretical benchmarks, the expected runtime is stated with small leading constants. For complex, multi-modal fitness landscapes, we provide sufficient conditions for polynomial optimisation, formulated in terms of the sparsity of deceptive regions and the density of fitness valleys. We derive the error threshold and show extreme tolerance to high mutation rates. Experiments on NK-Landscape functions, generated according to Kauffman's model, show that the algorithm outperforms the (1+1) EA and the univariate marginal distribution algorithm (UMDA).
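Power-law ranking selection itself is easy to state: the k-th best individual is selected with probability proportional to k^(-β). The sketch below is a generic illustration; the exponent β = 1.5 and the tie handling are assumptions, not the paper's exact parameterisation.

```python
import random

def power_law_rank_select(population, fitness, beta=1.5, rng=None):
    """Select a parent by sampling a rank from a power-law distribution:
    rank k (1 = best) is chosen with probability proportional to k**(-beta)."""
    rng = rng or random.Random()
    ranked = sorted(population, key=fitness, reverse=True)  # best first
    weights = [k ** (-beta) for k in range(1, len(ranked) + 1)]
    return rng.choices(ranked, weights=weights, k=1)[0]
```

The heavy tail is the point: most selections go to the top ranks (exploitation), yet every rank retains non-negligible probability, so low-fitness individuals in fitness valleys are still occasionally reproduced (exploration).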
We prove that Simulated Annealing with an appropriate cooling schedule computes arbitrarily tight constant-factor approximations to the minimum spanning tree problem in polynomial time. This result was conjectured by Wegener (2005). More precisely, denoting by n, m, w_max, and w_min the number of vertices and edges as well as the maximum and minimum edge weight of the MST instance, we prove that simulated annealing with initial temperature T_0 ≥ w_max and multiplicative cooling schedule with factor 1 - 1/ℓ, where ℓ = ω(mn ln(m)), with probability at least 1 - 1/m computes in time O(ℓ(ln ln(ℓ) + ln(T_0/w_min))) a spanning tree with weight at most 1 + κ times the optimum weight, where [EQUATION]. Consequently, for any ε > 0, we can choose ℓ in such a way that a (1 + ε)-approximation is found in time O((mn ln(n))^(1+1/ε+o(1)) (ln ln n + ln(T_0/w_min))) with probability at least 1 - 1/m. In the special case of so-called (1 + ε)-separated weights, this algorithm computes an optimal solution (again in time O((mn ln(n))^(1+1/ε+o(1)) (ln ln n + ln(T_0/w_min)))), which is a significant speed-up over Wegener's runtime guarantee of O(m^(8+8/ε)).
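The analysed algorithm is standard simulated annealing with the multiplicative cooling schedule T ← T·(1 - 1/ℓ). As a hedged sketch, the generic minimiser below demonstrates the schedule on a toy one-dimensional objective rather than on spanning trees, whose neighbourhood structure (edge exchanges) is instance-specific.

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0, ell, steps, seed=0):
    """Simulated annealing with multiplicative cooling T <- T * (1 - 1/ell).
    Accepts worsening moves with the Metropolis probability exp(-delta/T)."""
    rng = random.Random(seed)
    x, t = x0, float(t0)
    for _ in range(steps):
        y = neighbour(x, rng)
        delta = cost(y) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y
        t *= 1.0 - 1.0 / ell   # multiplicative cooling
    return x
```

After s steps the temperature is T_0·(1 - 1/ℓ)^s ≈ T_0·e^(-s/ℓ), so choosing ℓ large makes the cooling slow enough for the theoretical guarantees above.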
While the theoretical analysis of evolutionary algorithms (EAs) has made significant progress for pseudo-Boolean optimization problems in the last 25 years, only sporadic theoretical results exist on how EAs solve permutation-based problems.
To overcome the lack of permutation-based benchmark problems, we propose a general way to transfer the classic pseudo-Boolean benchmarks into benchmarks defined on sets of permutations. We then conduct a rigorous runtime analysis of the permutation-based (1+1) EA proposed by Scharnow, Tinnefeld, and Wegener (2004) on the analogues of the LeadingOnes and Jump benchmarks. The latter shows that, different from bit-strings, it is not only the Hamming distance that determines how difficult it is to mutate a permutation σ into another one τ, but also the precise cycle structure of στ^(-1). For this reason, we also regard the more symmetric scramble mutation operator. We observe that it not only leads to simpler proofs, but also reduces the runtime on jump functions with odd jump size by a factor of Θ(n). Finally, we show that a heavy-tailed version of the scramble operator, as in the bit-string case, leads to a speed-up of order m^(Θ(m)) on jump functions with jump size m.
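A textbook form of scramble mutation picks some positions of the permutation and rearranges the elements there uniformly at random. The sketch below scrambles a random subset of positions; the exact way the paper chooses how many positions to scramble (and its heavy-tailed variant) may differ, so treat this as an illustration of the operator family.

```python
import random

def scramble_mutation(perm, rng):
    """Scramble mutation: pick a random subset of positions and apply a
    uniformly random rearrangement to the elements at those positions."""
    n = len(perm)
    positions = [i for i in range(n) if rng.random() < 0.5] or [rng.randrange(n)]
    values = [perm[i] for i in positions]
    rng.shuffle(values)
    child = perm[:]
    for i, v in zip(positions, values):
        child[i] = v
    return child
```

Because it rearranges elements within the chosen positions, the result is always a valid permutation of the parent.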
In order to understand better how and why crossover can benefit optimization, we consider pseudo-Boolean functions with an upper bound B on the number of 1s allowed in the bit string (cardinality constraint). We consider the natural translation of the OneMax test function, a linear function where B bits have a weight of 1 + ε and the remaining bits have a weight of 1. The literature gives a bound of Θ(n^2) for the (1+1) EA on this function.
Part of the difficulty when optimizing this problem lies in having to improve individuals meeting the cardinality constraint by flipping both a 1 and a 0. The experimental literature proposes balanced operators, preserving the number of 1s, as a remedy. We show that a balanced mutation operator optimizes the problem in O(n log n) if n - B = O(1). However, if n - B = Θ(n), we show a bound of Ω(n^2), just as for classic bit-flip mutation. Crossover and a simple island model give O(n^2/log n) (uniform crossover) and [EQUATION] (3-ary majority vote crossover). For balanced uniform crossover with Hamming distance maximization for diversity, we show a bound of O(n log n).
As an additional contribution we analyze and discuss different balanced crossover operators from the literature.
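To make the notion of a balanced operator concrete, here is one common construction of balanced uniform crossover: agreeing positions are copied, and the disagreeing positions receive exactly half 1s, chosen at random. This sketch assumes both parents carry the same number of 1s and is one instantiation from the operator family discussed above, not necessarily the variant analysed in the paper.

```python
import random

def balanced_uniform_crossover(p1, p2, rng):
    """Uniform crossover that preserves the number of 1s, assuming both
    parents have equally many 1s: copy agreeing positions, then assign 1s
    to a random half of the disagreeing positions."""
    child = [a if a == b else None for a, b in zip(p1, p2)]
    diff = [i for i, v in enumerate(child) if v is None]
    # Among disagreeing positions, exactly half hold a 1 in p1 (and the
    # other half in p2), so this many 1s keep the count balanced.
    ones_needed = sum(p1) - sum(v for v in child if v == 1)
    one_positions = rng.sample(diff, ones_needed)
    for i in diff:
        child[i] = 1 if i in one_positions else 0
    return child
```

Since the offspring always has the same number of 1s as its parents, a feasible population (with respect to the cardinality constraint) stays feasible, which is exactly why balanced operators sidestep the "flip a 1 and a 0 simultaneously" bottleneck.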
Co-evolutionary algorithms have a wide range of applications, such as in hardware design, evolution of strategies for board games, and patching software bugs. However, these algorithms are poorly understood and applications are often limited by pathological behaviour, such as loss of gradient, relative over-generalisation, and mediocre objective stasis. It is an open challenge to develop a theory that can predict when co-evolutionary algorithms find solutions efficiently and reliably.
This paper provides a first step in developing runtime analysis for population-based competitive co-evolutionary algorithms. We provide a mathematical framework for describing and reasoning about the performance of co-evolutionary processes. An example application of the framework shows a scenario where a simple co-evolutionary algorithm obtains a solution in polynomial expected time. Finally, we describe settings where the co-evolutionary algorithm needs exponential time with overwhelmingly high probability to obtain a solution.
The exploration vs. exploitation dilemma is to balance exploring new but potentially less fit regions of the fitness landscape with focusing on regions near the fittest individuals. For the tunable problem class SparseLocalOpt, a non-elitist EA with tournament selection can limit the percentage of "sparse" locally optimal individuals in the population using a sufficiently high mutation rate (Dang et al., 2021). However, the performance of the EA depends critically on choosing the "right" mutation rate, which is problem instance-specific. A promising approach is self-adaptation, where parameter settings are encoded in chromosomes and evolved.
We propose a new self-adaptive EA for single-objective optimisation, which treats parameter control from the perspective of multi-objective optimisation: the algorithm simultaneously maximises the fitness and the mutation rate. Since individuals in "dense" fitness valleys survive high mutation rates, and individuals on "sparse" local optima only survive with lower mutation rates, they can coexist on a non-dominated Pareto front.
Runtime analyses show that this new algorithm (MOSA-EA) can efficiently escape a local optimum with unknown sparsity, where some fixed mutation rate EAs become trapped. Complementary experimental results show that the MOSA-EA outperforms a range of EAs on random NK-Landscape and k-Sat instances.
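The coexistence argument rests on ordinary Pareto dominance over (fitness, mutation rate) pairs, both maximised. The helper below illustrates that concept on toy points (assuming no duplicates); it is a conceptual sketch, not MOSA-EA's actual survivor selection.

```python
def pareto_front(points):
    """Non-dominated (fitness, mutation_rate) pairs when maximising both
    coordinates; assumes no duplicate points."""
    front = []
    for p in points:
        dominated = any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in points)
        if not dominated:
            front.append(p)
    return front
```

A high-fitness individual with a low mutation rate and a low-fitness individual with a high mutation rate do not dominate each other, so both survive, which is how the algorithm keeps multiple mutation rates alive at once.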
The compact genetic algorithm (cGA) is a non-elitist estimation-of-distribution algorithm which has been shown to deal with difficult multimodal fitness landscapes that are hard to solve by elitist algorithms. In this paper, we investigate the cGA on the Cliff function, for which it has been shown recently that non-elitist evolutionary algorithms and artificial immune systems optimize it in expected polynomial time. We point out that the cGA faces major difficulties when solving the Cliff function and investigate its dynamics both experimentally and theoretically around the cliff. Our experimental results indicate that the cGA requires exponential time for all values of the update strength K. We show theoretically that, under sensible assumptions, there is a negative drift when sampling around the location of the cliff. Experiments further suggest that there is a phase transition for K where the expected optimization time drops from n^(Θ(n)) to 2^(Θ(n)).
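For reference, one iteration of the (simplified) cGA samples two individuals from a frequency vector and shifts each frequency by 1/K towards the winner's bit values. The sketch below includes the usual border restriction to [1/n, 1 - 1/n]; it illustrates the update rule in general, demonstrated here on OneMax rather than on the Cliff function from the paper.

```python
import random

def cga_step(p, fitness, K, rng):
    """One iteration of the compact GA: sample two individuals from the
    frequency vector p, then shift p by 1/K towards the winner."""
    n = len(p)
    x = [1 if rng.random() < pi else 0 for pi in p]
    y = [1 if rng.random() < pi else 0 for pi in p]
    winner, loser = (x, y) if fitness(x) >= fitness(y) else (y, x)
    for i in range(n):
        if winner[i] != loser[i]:
            p[i] += (1.0 / K) if winner[i] == 1 else -(1.0 / K)
            p[i] = min(1 - 1.0 / n, max(1.0 / n, p[i]))  # border restriction
    return p
```

On a function like OneMax the frequencies drift towards 1; the paper's point is that around the cliff this drift turns negative, so no choice of the update strength K rescues the algorithm.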