Introductory Tutorials

  • Introduction to Genetic Algorithms
    Erik Goodman (Michigan State University, USA)
  • Genetic Programming
    Una-May O’Reilly (MIT, USA)
  • Introduction to Evolution Strategies
    Thomas Bäck (Leiden University, The Netherlands)
  • Evolutionary Computation: A Unified Approach
    Kenneth De Jong (George Mason University, USA)
  • Evolutionary Multiobjective Optimization
    Dimo Brockhoff (INRIA Lille - Nord Europe, France)
  • Representations for Evolutionary Algorithms
    Franz Rothlauf (Gutenberg University Mainz, Germany)
  • Statistical Analysis for Experiments in Evolutionary Computation: An Introduction
    Mark Wineberg (University of Guelph, Canada)
  • Particle Swarm Optimization
    Andries Engelbrecht (University of Pretoria, South Africa)
  • Learning Classifier Systems: A Gentle Introduction
    Pier Luca Lanzi (Politecnico di Milano, Italy)
  • Model-Based Evolutionary Algorithms
    Dirk Thierens (Utrecht University, The Netherlands)
    Peter Bosman (Centre for Mathematics and Computer Science, The Netherlands)
  • Runtime Analysis of Evolutionary Algorithms: Basic Introduction
    Per Kristian Lehre (University of Nottingham, UK)
    Pietro S. Oliveto (University of Sheffield, UK)
  • Evolving Neural Networks
    Risto Miikkulainen (University of Texas at Austin, USA)


  • Evolution Strategies and CMA-ES (Covariance Matrix Adaptation)
    Anne Auger (INRIA, France)
    Nikolaus Hansen (INRIA, France)
  • Constraint-Handling Techniques used with Evolutionary Algorithms
    Carlos Coello-Coello (CINVESTAV-IPN, Mexico)
  • Blind No More: Constant Time Non-Random Improving Moves and Exponentially Powerful Recombination
    Darrell Whitley (Colorado State University, USA)
  • Expressive Genetic Programming
    Lee Spector (Hampshire College, USA)
  • Parameterized Complexity Analysis of Evolutionary Algorithms
    Frank Neumann (University of Adelaide, Australia)
    Andrew Sutton (University of Adelaide, Australia)
  • Bioinspired Computation in Combinatorial Optimization - Algorithms and Their Computational Complexity
    Carsten Witt (Technical University of Denmark)
  • Theory of Swarm Intelligence
    Dirk Sudholt (University of Sheffield, UK)
  • Information Geometry in Evolutionary Computation
    Luigi Malagò (Università degli Studi di Milano, Italy)
    Tobias Glasmachers (Institut für Neuroinformatik in Bochum, Germany)
  • Black-Box Complexity: From Complexity Theory to Playing Mastermind
    Benjamin Doerr (Ecole Polytechnique, France)
    Carola Doerr (CNRS and Université Pierre et Marie Curie - Paris 6, France)


  • Cellular Genetic Algorithms
    Enrique Alba (University of Málaga, Spain)
  • Artificial Immune Systems for Optimization
    Thomas Jansen (Aberystwyth University, UK)
    Christine Zarges (University of Birmingham, UK)
  • Generative and Developmental Systems
    Kenneth Stanley (University of Central Florida, USA)
  • Evolutionary Image Analysis and Signal Processing
    Stefano Cagnoni (University of Parma, Italy)
  • Decomposition and Cooperative Coevolution Techniques for Large Scale Global Optimization
    Xiaodong Li (RMIT, Australia)
  • Evolutionary Search Algorithms for Protein Modeling: From De-novo Structure Prediction to Comprehensive Maps of Functionally-relevant Structures of Protein Chains and Assemblies
    Amarda Shehu (George Mason University, USA)
    Kenneth De Jong (George Mason University, USA)
  • Evolutionary Bilevel Optimization (EBO)
    Kalyanmoy Deb (Michigan State University, USA)
    Ankur Sinha (Aalto University, Finland)
  • Introduction to Evolutionary Game Theory
    Marco Tomassini (University of Lausanne, Switzerland)
  • Medical Applications of Evolutionary Computation
    Stephen Smith (University of York, UK)
  • Automatic (Offline) Configuration of Algorithms
    Manuel López-Ibáñez (IRIDIA laboratory, ULB, Belgium)
    Thomas Stützle (IRIDIA laboratory, ULB, Belgium)
  • Self-Assembly
    Navneet Bhalla (Cornell University)
    Peter J. Bentley (University College London)
    Marco Dorigo (Université Libre de Bruxelles)

    Introductory Tutorials


    Introduction to Genetic Algorithms

    The Introduction to Genetic Algorithms tutorial is aimed at GECCO attendees with limited knowledge of genetic algorithms. It starts "at the beginning," first describing a "classical" genetic algorithm in terms of the biological principles on which it is loosely based, then presenting some of the fundamental results on its performance, expressed in terms of the schema concept. The tutorial also covers some variations on the classical model, some successful applications of genetic algorithms, and advances that are making genetic algorithms more useful.
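    To make the above concrete, here is a minimal sketch of such a "classical" genetic algorithm on bitstrings. This is an illustration of our own, not code from the tutorial; the OneMax problem (maximize the number of ones) and all parameter values are arbitrary choices.

```python
import random

random.seed(1)
POP, BITS, GENS = 30, 20, 60

def fitness(ind):
    return sum(ind)                          # OneMax: count the ones

def tournament(pop):
    a, b = random.sample(pop, 2)             # binary tournament selection
    return max(a, b, key=fitness)

def crossover(p1, p2):
    cut = random.randrange(1, BITS)          # one-point crossover
    return p1[:cut] + p2[cut:]

def mutate(ind, rate=1.0 / BITS):
    return [b ^ (random.random() < rate) for b in ind]   # bitwise mutation

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP)]

best = max(pop, key=fitness)
print(fitness(best))  # typically at or near the optimum of 20
```

Selection, crossover, and mutation are the three operators the tutorial discusses in biological terms; everything else here is scaffolding.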

    Erik Goodman
    He is a Professor of Electrical & Computer Engineering, of Mechanical Engineering, and of Computer Science & Engineering at Michigan State University. He is Director of the BEACON Center for the Study of Evolution in Action, an NSF Science and Technology Center funded at $25 million in 2010. He studied genetic algorithms under John Holland at the University of Michigan, before they had been named. His use of a genetic algorithm in 1971 to solve for 40 coefficients of a highly nonlinear model of a bacterial cell was the first known GA application on a real-world problem, and required nearly a year for one run on a dedicated computer. He has developed and used evolutionary algorithms ever since, including for parameterization of complex ecosystem models, for evolution of cooperative behavior in artificial life, for factory layout and scheduling, for protein folding and docking, for design of composite structures, and for data mining and pattern classification. His recent research has centered on sustainable evolutionary computation, design of mechatronic systems using genetic programming, and multi-objective evolutionary algorithms in support of multi-criterion decision making. He was co-founder and formerly VP Technology of Red Cedar Technology, which provides tools for automated engineering design based on evolutionary computation. He chaired ICGA-97 and GECCO-2001, chaired GECCO's sponsoring organization, ISGEC, from 2001-2004, and was the founding chair of ACM SIGEVO, 2005-2007.


    Genetic Programming

    Genetic programming emerged in the early 1990s as one of the most exciting new evolutionary algorithm paradigms, and it has rapidly grown into a thriving area of research and application. While it shares the evolution-inspired algorithmic principles of the genetic algorithm, it differs by exploiting an executable genome: genetic programming evolves a 'program' to solve a problem rather than a single solution. This tutorial introduces the basic genetic programming framework. It explains how the power of genetic programming derives from its modular algorithmic components: executable representations such as the abstract syntax tree, variation operators that preserve syntax and explore a variable-length, hierarchical solution space, appropriately chosen programming functions, and fitness function specification.
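    The executable-representation idea can be made concrete with a toy sketch (entirely our own illustration, not the tutorial's code): programs are nested tuples forming an abstract syntax tree, mutation replaces a random subtree (so syntax is preserved), and fitness is the error against a target function.

```python
import random, operator

random.seed(0)
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def rand_tree(depth=3):
    # grow a random expression over terminals {x, 0, 1, 2} and ops {+, -, *}
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', random.randint(0, 2)])
    op = random.choice(list(OPS))
    return (op, rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mutate(tree, depth=2):
    # syntax-preserving variation: replace a randomly chosen subtree
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return rand_tree(depth)
    op, l, r = tree
    return (op, mutate(l), r) if random.random() < 0.5 else (op, l, mutate(r))

def error(tree):
    # fitness: distance from the (hypothetical) target program x*x + x
    return sum(abs(evaluate(tree, x) - (x * x + x)) for x in range(-5, 6))

pop = [rand_tree() for _ in range(50)]
for _ in range(30):
    pop.sort(key=error)                      # truncation selection
    pop = pop[:25] + [mutate(random.choice(pop[:25])) for _ in range(25)]
best = min(pop, key=error)
print(error(best))
```

Note that, unlike a fixed-length genetic algorithm, the individuals here are variable-sized hierarchical structures, which is exactly the distinction the tutorial draws.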

    Una-May O'Reilly
    She is leader of the AnyScale Learning For All (ALFA) group at MIT's Computer Science and Artificial Intelligence Laboratory. ALFA focuses on scalable machine learning, evolutionary algorithms, and frameworks for large scale knowledge mining, prediction and analytics. The group has projects in clinical medicine knowledge discovery: arterial blood pressure forecasting and pattern recognition, diuretics in the ICU; wind energy: turbine layout optimization, resource prediction, cable layout; and MOOC Technology: MoocDB, student persistence and resource usage analysis.

    Her research is in the design of scalable Artificial Intelligence systems that execute on a range of hardware systems: GPUs, workstations, grids, clusters, clouds and volunteer compute networks. These systems include machine learning components such as evolutionary algorithms (e.g. genetic programming, genetic algorithms and learning classifiers), classification, non-linear regression, and forecasting algorithms. They span the interpretation and analysis of raw data, through inference on conditioned exemplar data, to the deployment and evaluation of learned "algorithmic machines" in the original application context.

    Una-May received the EvoStar Award for Outstanding Achievements in Evolutionary Computation in Europe in 2013. She is a Junior Fellow (elected before age 40) of the International Society for Genetic and Evolutionary Computation, now ACM SIGEVO, and currently serves as Vice-Chair of ACM SIGEVO. She served as chair of GECCO, the largest international Evolutionary Computation conference, in 2005. She has served on the GECCO business committee, co-led the 2006 and 2009 Genetic Programming: Theory to Practice Workshops, and co-chaired EuroGP, the largest conference devoted to genetic programming. In 2013 she inaugurated the Women in Evolutionary Computation group at GECCO. She is the area editor for Data Analytics and Knowledge Discovery for Genetic Programming and Evolvable Machines (Kluwer), editor for Evolutionary Computation (MIT Press), and action editor for the Journal of Machine Learning Research.

    Una-May holds a patent for an original genetic algorithm technique applicable to internet-based name suggestions. She holds a B.Sc. from the University of Calgary, and an M.C.S. and Ph.D. (1995) from Carleton University, Ottawa, Canada. She joined the Artificial Intelligence Laboratory at MIT as a Post-Doctoral Associate in 1996.


    Introduction to Evolution Strategies

    This tutorial gives a basic introduction to evolution strategies, a class of evolutionary algorithms. Key features such as mutation, recombination and selection operators are explained, and specifically the concept of self-adaptation of strategy parameters is introduced. All algorithmic concepts are explained to a level of detail such that an implementation of basic evolution strategies is possible. In addition, the tutorial also presents a brief taxonomy of contemporary evolution strategy variants, including e.g. the CMA-ES and variations thereof, and compares their performance for a small number of function evaluations – which represents many of today's practical application cases. Some guidelines for utilization as well as some application examples are also given.
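    As a rough illustration of the concepts mentioned above (a sketch of our own, not the tutorial's material), a basic (μ, λ) evolution strategy with one self-adapted step size per individual, minimizing the sphere function, can be written as:

```python
import random, math

random.seed(2)
MU, LAM, DIM = 5, 35, 10
TAU = 1.0 / math.sqrt(2 * DIM)               # a common learning-rate choice

def sphere(x):
    return sum(xi * xi for xi in x)

# each individual is (object variables, strategy parameter sigma)
parents = [([random.uniform(-5, 5) for _ in range(DIM)], 1.0)
           for _ in range(MU)]

for _ in range(200):
    offspring = []
    for _ in range(LAM):
        x, sigma = random.choice(parents)
        sigma *= math.exp(TAU * random.gauss(0, 1))   # self-adaptation of sigma
        x = [xi + sigma * random.gauss(0, 1) for xi in x]   # mutation
        offspring.append((x, sigma))
    # (mu, lambda) selection: new parents are taken from the offspring only
    parents = sorted(offspring, key=lambda p: sphere(p[0]))[:MU]

best = parents[0][0]
print(sphere(best))  # close to 0
```

The key point mirrored here is that the step size sigma is itself part of the genome and evolves along with the solution, which is the self-adaptation concept the tutorial introduces.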

    Thomas Bäck
    He is full professor of computer science at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands, where he has headed the Natural Computing group since 2002.

    He received his PhD (adviser: Hans-Paul Schwefel) in computer science from Dortmund University, Germany, in 1994, and then worked for the Informatik Centrum Dortmund (ICD) as department leader of the Center for Applied Systems Analysis. From 2000 - 2009, Thomas was Managing Director of NuTech Solutions GmbH and CTO of NuTech Solutions, Inc. He gained ample experience in solving real-life problems in optimization and data mining through working with global enterprises such as BMW, Beiersdorf, Daimler, Ford, Honda, and many others.

    Thomas Bäck has more than 200 publications on natural computing, as well as two books on evolutionary algorithms: Evolutionary Algorithms in Theory and Practice (1996) and Contemporary Evolution Strategies (2013). He is co-editor of the Handbook of Evolutionary Computation and, most recently, the Handbook of Natural Computing. He is also an editorial board member and associate editor of a number of journals on evolutionary and natural computing. Thomas received the best dissertation award from the German Society of Computer Science (Gesellschaft für Informatik, GI) in 1995 and is an elected fellow of the International Society for Genetic and Evolutionary Computation for his contributions to the field.


    Evolutionary Computation: A Unified Approach

    The field of Evolutionary Computation has experienced tremendous growth over the past 20 years, resulting in a wide variety of evolutionary algorithms and applications. This variety poses an interesting dilemma for many practitioners: with such a wide range of algorithms and approaches, it is often hard to see the relationships between them, assess strengths and weaknesses, and make good choices for new application areas.

    This tutorial is intended to give an overview of a general EC framework that can help compare and contrast approaches, encourages crossbreeding, and facilitates intelligent design choices. The use of this framework is then illustrated by showing how traditional EAs can be compared and contrasted with it, and how new EAs can be effectively designed using it.

    Finally, the framework is used to identify some important open issues that need further research.

    Kenneth A. De Jong
    He received his Ph.D. in computer science from the University of Michigan in 1975. He joined George Mason University in 1984 and is currently a Professor of Computer Science, head of the Evolutionary Computation Laboratory, and Associate Director of the Krasnow Institute. His research interests include genetic algorithms, evolutionary computation, machine learning, and adaptive systems. He is currently involved in research projects involving the development of new evolutionary algorithm (EA) theory, the use of EAs as heuristics for NP-hard problems, and the application of EAs to the problem of learning task programs in domains such as robotics, navigation and game playing. Support for these projects is provided by DARPA, ONR, NSF and NRL. He is an active member of the Evolutionary Computation research community and has been involved in organizing many of the workshops and conferences in this area. He is the founding editor-in-chief of the journal Evolutionary Computation (MIT Press), and a member of the board of ACM SIGEVO. He is the recipient of an IEEE Pioneer award in the field of Evolutionary Computation and a lifetime achievement award from the Evolutionary Programming Society.


    Evolutionary Multiobjective Optimization

    Many optimization problems are multiobjective in nature in the sense that multiple, conflicting criteria need to be optimized simultaneously. Due to the conflict between objectives, there is usually no single optimal solution. Instead, the optimum corresponds to a set of so-called Pareto-optimal solutions, for which no other solution has better function values in all objectives.

    Evolutionary Multiobjective Optimization (EMO) algorithms are widely used in practice for solving multiobjective optimization problems, for several reasons. As stochastic blackbox algorithms, EMO approaches can tackle problems with nonlinear, nondifferentiable, or noisy objective functions. As set-based algorithms, they can compute or approximate the full set of Pareto-optimal solutions in a single algorithm run, as opposed to classical solution-based techniques from the multicriteria decision making (MCDM) field. Using EMO approaches in practice has two further advantages: they make it possible to learn about a problem formulation, for example by automatically revealing common design principles among (Pareto-optimal) solutions (innovization), and it has been shown that certain single-objective problems become easier to solve with randomized search heuristics if the problem is reformulated as a multiobjective one (multiobjectivization).
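    The notion of Pareto dominance underlying all of this can be stated in a few lines of illustrative code (our own, assuming minimization in every objective):

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(points):
    # keep every point that no other point dominates (the Pareto set)
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

points = [(1, 5), (2, 3), (4, 4), (3, 2), (5, 1)]
print(non_dominated(points))  # (4, 4) is dominated by (2, 3) and by (3, 2)
```

Set-based EMO algorithms maintain and improve such a non-dominated set over time rather than a single best solution.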

    This tutorial aims at giving a broad introduction to the EMO field and at presenting some of its recent research results in more detail. More specifically, we are going to (i) introduce the basic principles of EMO algorithms in comparison to classical solution-based approaches, (ii) show a few practical examples which motivate the use of EMO in terms of the mentioned innovization and multiobjectivization principles, and (iii) present a general overview of state-of-the-art algorithms and techniques. Moreover, we will present some of the most important research results in areas such as indicator-based EMO, preference articulation, and performance assessment.

    Though classified as introductory, this tutorial is intended for both novices and regular users of EMO. Those without any knowledge will learn about the foundations of multiobjective optimization and the basic working principles of state-of-the-art EMO algorithms. Open questions, presented throughout the tutorial, can serve for all participants as a starting point for future research and/or discussions during the conference.

    Dimo Brockhoff
    After obtaining his diploma in computer science (Dipl.-Inform.) from University of Dortmund, Germany in 2005, Dimo Brockhoff received his PhD (Dr. sc. ETH) from ETH Zurich, Switzerland in 2009. Between June 2009 and October 2011 he held postdoctoral research positions --- first at INRIA Saclay Ile-de-France in Orsay and then at Ecole Polytechnique in Palaiseau, both in France. Since November 2011 he has been a permanent researcher (currently CR1) at INRIA Lille - Nord Europe in Villeneuve d'Ascq, France. His current research interests are focused on evolutionary multiobjective optimization (EMO), in particular on decomposition-based approaches, benchmarking, and theoretical aspects of indicator-based search.


    Representations for Evolutionary Algorithms

    Successful and efficient use of evolutionary algorithms (EA) depends on the choice of the genotype, the problem representation (mapping from genotype to phenotype) and on the choice of search operators that are applied to the genotypes. These choices cannot be made independently of each other. The question whether a certain representation leads to better performing EAs than an alternative representation can only be answered when the operators applied are taken into consideration. The reverse is also true: deciding between alternative operators is only meaningful for a given representation.

    In EA practice one can distinguish two complementary approaches. The first approach uses indirect representations where a solution is encoded in a standard data structure, such as strings, vectors, or discrete permutations, and standard off-the-shelf search operators are applied to these genotypes. This is for example the case in standard genetic algorithms, evolution strategies, and some genetic programming approaches like grammatical evolution or cartesian genetic programming. To evaluate the solution, the genotype needs to be mapped to the phenotype space. The proper choice of this genotype-phenotype mapping is important for the performance of the EA search process. The second approach, the direct representation, encodes solutions to the problem in its most 'natural' space and designs search operators to operate on this representation.

    Research in the last few years has identified a number of key concepts to analyse the influence of representation-operator combinations on EA performance. Relevant properties of representations are locality and redundancy.

    Locality is a result of the interplay between the search operator and the genotype-phenotype mapping. Representations are redundant if the number of possible genotypes exceeds the number of phenotypes, so that several genotypes map to the same phenotype. Redundant representations can lead to biased encodings if some phenotypes are on average represented by a larger number of genotypes, or if search operators favor some kinds of phenotypes.
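    A toy example (ours, not the tutorial's) of a redundant, biased encoding: map 4-bit genotypes to the number of ones they contain, so that 16 genotypes share only 5 phenotypes, and some phenotypes are represented far more often than others.

```python
from collections import Counter
from itertools import product

def phenotype(genotype):
    # redundant genotype-phenotype mapping: many bitstrings share one value
    return sum(genotype)

counts = Counter(phenotype(g) for g in product([0, 1], repeat=4))
print(sorted(counts.items()))
# [(0, 1), (1, 4), (2, 6), (3, 4), (4, 1)]
```

Phenotypes 1 to 3 are over-represented, so uniformly random genotypes (and unbiased mutation acting on them) favour the middle phenotypes; this is exactly the kind of encoding bias the tutorial analyses.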

    The tutorial gives a brief overview about existing guidelines for representation design, illustrates the different aspects of representations, gives a brief overview of models describing the different aspects, and illustrates the relevance of the aspects with practical examples.

    It is expected that the participants have a basic understanding of EA principles.

    Franz Rothlauf
    He received a Diploma in Electrical Engineering from the University of Erlangen, Germany, a Ph.D. in Information Systems from the University of Bayreuth, Germany, and a Habilitation from the University of Mannheim, Germany, in 1997, 2001, and 2007, respectively.

    Since 2007, he has been professor of Information Systems at the University of Mainz. He has published more than 90 technical papers on planning and optimization, evolutionary computation, e-business, and software engineering, has co-edited several conference proceedings and edited books, and is the author of the books "Representations for Genetic and Evolutionary Algorithms" and "Design of Modern Heuristics". Since 2013, he has been Academic Director of the Executive MBA program at the University of Mainz.

    His main research interests are the application of modern heuristics in planning and optimization systems. He is a member of the editorial boards of the Evolutionary Computation Journal (ECJ) and of Business & Information Systems Engineering (BISE). Since 2007, he has been a member of the Executive Committee of ACM SIGEVO, and he has served as its treasurer since 2011. He has organized many workshops and tracks on heuristic optimization, was chair of EvoWorkshops in 2005 and 2006, co-organizer of the European workshop series on "Evolutionary Computation in Communications, Networks, and Connected Systems" and on "Evolutionary Computation in Transportation and Logistics", and co-chair of the program committee of the GA track at GECCO 2006. He was conference chair of GECCO 2009.


    Statistical Analysis for Experiments in Evolutionary Computation: An Introduction

    Although the situation has been improving over the past few years, many researchers in the EC field still show an unfortunate lack of statistical rigor when performing and analyzing experiments. Experimental results that lack a proper statistical analysis should be considered anecdotal at best, and may even be wholly inaccurate. This tutorial aims to provide the basic statistical framework that all experimenters in the EC field should follow.

    The tutorial will cover introductory topics in statistics such as the t-test, confidence intervals, the Central Limit Theorem (its power and limitations), and non-parametric statistics, and will touch on multivariate and multifactorial analysis such as regression analysis, both linear and polynomial, as well as Analysis of Variance (ANOVA). The tutorial will be presented at an introductory level and is meant for any EC researcher who wants to compare a newly designed EC system with established EC systems to see whether there is an improvement, or who wants to determine which EC system performs better on a chosen problem; i.e. nearly everyone.
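    As a flavor of the kind of analysis covered, here is a small, self-contained example (our own illustration with made-up numbers, using only the Python standard library) of a non-parametric permutation test comparing final fitness values from ten runs each of two hypothetical EC configurations:

```python
import random
from statistics import mean

random.seed(3)
# fabricated final-fitness samples from two configurations (for illustration)
runs_a = [0.91, 0.88, 0.93, 0.90, 0.87, 0.92, 0.89, 0.94, 0.90, 0.91]
runs_b = [0.85, 0.83, 0.88, 0.84, 0.86, 0.82, 0.87, 0.85, 0.84, 0.86]

observed = mean(runs_a) - mean(runs_b)

# permutation test: if the configurations were equivalent, relabeling the
# runs at random should often produce a difference at least this large
pooled = runs_a + runs_b
N, count = 10_000, 0
for _ in range(N):
    random.shuffle(pooled)
    diff = mean(pooled[:10]) - mean(pooled[10:])
    if abs(diff) >= abs(observed):
        count += 1
p_value = count / N
print(f"observed difference: {observed:.3f}, p = {p_value:.4f}")
```

A small p-value suggests the observed difference is unlikely under the null hypothesis of no difference; as the tutorial stresses, such a test (together with effect sizes and confidence intervals) should accompany any claim that one system outperforms another.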

    It is vital, if our community is to be taken seriously, that we continue to educate ourselves (especially new graduate students) on the statistical techniques that are considered de rigueur for experiments performed in any field, such as ours, that is stochastic in nature.

    Mark Wineberg
    Prof. Wineberg has been actively researching the field of Evolutionary Computation (EC) and Genetic Algorithms (GA) since 1993 while he was still a graduate student at Carleton University, Ottawa, Canada. Over the years he has published on various topics including: the intersection of Genetic Algorithms and Genetic Programming, enhancing the GA for improved behavior in dynamic environments through specialized multiple populations, and exploring the concept of distances and diversity in GA populations. He is currently continuing his research in dynamic environments and studying the properties that allow one population to evolve more readily than another. Prof. Wineberg also teaches an undergraduate course at Guelph on computer simulation and modeling of discrete stochastic systems, which emphasizes proper statistical analysis, as well as a graduate course on experimental design and analysis for computer science, which is an outgrowth of the statistical analysis tutorial given at GECCO.


    Particle Swarm Optimization (PSO) [new tutorial]

    Since the first publication of Particle Swarm Optimization (PSO) in 1995, the number of research papers on PSO, and the number of researchers working on PSO, have exploded. Many variations of the PSO have been developed to improve its performance, studies have been done to understand the dynamics of particles, and adaptations have been developed to apply the PSO to different types of optimization problems.

    This tutorial is planned in two parts, one introductory and one addressing more advanced topics. The introductory part aims to provide the attendee with an overview of PSO, its behaviors, and its basic variations, and will show how the different PSO algorithms can easily be implemented using an open-source CI library called Cilib. This will give attendees hands-on experience in using and coding PSO algorithms. The advanced part of the tutorial will consider PSO models for solving difficult optimization problems, and will also provide a review of theoretical analyses of PSO.

    In more detail, the tutorial will cover the following topics, in the order listed below:

    1. Basic PSO: The philosophy of PSO will be discussed, and the basic (original) PSO algorithms will be explained and illustrated. The need for social network structures will be discussed, as well as the importance of PSO control parameters and basic variations (velocity clamping, inertia, constriction). The behavior of particles under different conditions will be discussed. An overview of performance criteria will be given.
    2. Single-solution PSO: A number of variations of the basic PSO to locate a single solution will be presented and illustrated. This will include approaches that use multiple swarms, hybrids with evolutionary algorithms, and many more.
    3. Particle trajectories: The behavior of PSO particles will be discussed, and formal heuristics derived to guide the selection of values for control parameters. Example particle trajectories will be illustrated, and one of the major pitfalls of PSO will be pointed out and corrected.
    4. Niching PSO: It will be shown how the PSO can be adapted to find multiple solutions.
    5. MOO PSO: This section will show how PSO can be used to solve multiobjective optimization problems.
    6. Constrained PSO: Here PSO variations to solve constrained problems will be discussed.
    7. Dynamic environments: It will be shown how PSO can be adapted to maintain and track single and multiple optima in dynamically changing environments.
    8. Discrete optimization: Changes to PSO to solve binary/discrete-valued problems will be discussed and illustrated.
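    The basic velocity and position update behind topic 1 can be sketched as follows (an illustrative global-best PSO of our own with commonly used inertia-weight parameter values, not material from the tutorial):

```python
import random

random.seed(4)
DIM, SWARM, ITERS = 5, 20, 200
W, C1, C2 = 0.729, 1.49445, 1.49445     # inertia and acceleration weights

def sphere(x):
    return sum(xi * xi for xi in x)     # toy objective to minimize

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]                      # personal best positions
gbest = min(pbest, key=sphere)[:]                # global best position

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            # inertia + cognitive (personal) + social (global) components
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]
            if sphere(pbest[i]) < sphere(gbest):
                gbest = pbest[i][:]

print(sphere(gbest))  # close to 0
```

Here every particle is attracted toward its own best position and the swarm's best position, which is the social-network structure (a star topology in this case) that topic 1 discusses.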

    Andries Engelbrecht
    He received the Masters and PhD degrees in Computer Science from the University of Stellenbosch, South Africa, in 1994 and 1999, respectively. He is a Professor in Computer Science at the University of Pretoria, and serves as Head of the department. He also holds the position of South African Research Chair in Artificial Intelligence, and leads the Computational Intelligence Research Group at the University of Pretoria, consisting of 40 Masters and PhD students. His research interests include swarm intelligence, evolutionary computation, artificial neural networks, artificial immune systems, and the application of these Computational Intelligence paradigms to data mining, games, bioinformatics, finance, and difficult optimization problems. He has published over 220 papers in these fields in journals and international conference proceedings, and is the author of two books, Computational Intelligence: An Introduction and Fundamentals of Computational Swarm Intelligence. Prof. Engelbrecht is very active in the international community, annually serving as reviewer for over 20 journals and 10 conferences. He is an Associate Editor of the IEEE Transactions on Evolutionary Computation, the Journal of Swarm Intelligence, the IEEE Transactions on Computational Intelligence and AI in Games, and Applied Soft Computing. Additionally, he serves on the editorial boards of three other international journals, and was co-guest editor of special issues of the IEEE Transactions on Evolutionary Computation and the Journal of Swarm Intelligence. He has served on the international program committees and organizing committees of a number of conferences, organized special sessions, presented tutorials, and taken part in panel discussions. He was the founding chair of the South African chapter of the IEEE Computational Intelligence Society. He is a member of the Evolutionary Computation Technical Committee, the Games Technical Committee, and the Evolutionary Computation in Dynamic and Uncertain Environments Task Force.


    Learning Classifier Systems: A Gentle Introduction [new tutorial]

    Learning Classifier Systems were introduced in the 1970s by John H. Holland as highly adaptive, cognitive systems. More than 40 years later, the introduction of Stewart W. Wilson's XCS, a highly engineered classifier system model, has transformed them into a state-of-the-art machine learning system. Learning classifier systems can effectively solve data-mining problems, reinforcement learning problems, and also cognitive, robotics control problems. In comparison to other, non-evolutionary machine learning techniques, their performance is competitive or superior, depending on the setup and problem. Learning classifier systems can work both online and offline, are extremely flexible, are applicable to a large range of problems, and are highly adaptive. Moreover, system knowledge can be easily extracted, visualized, or even used to focus the progressive search on particularly interesting subspaces.

    This tutorial provides a gentle introduction to learning classifier systems and their general functionality. It then surveys the current theoretical understanding of the systems. Finally, we provide a suite of current successful LCS applications and discuss the most promising areas for future applications and research directions.

    Pier Luca Lanzi
    He is associate professor at the Dipartimento di Elettronica, Informazione e Bioingegneria of the Politecnico di Milano. He has been working in the field of evolutionary computation and machine learning since 1996. He is interested in applications to data mining and computer games. He served as the general chair for ACM GECCO-2011 and IEEE CIG 2009. He is associate editor for the Evolutionary Computation Journal, the IEEE Transactions on Computational Intelligence and Artificial Intelligence in Games, and Applied Soft Computing.


    Model-Based Evolutionary Algorithms

    In model-building evolutionary algorithms the variation operators are guided by a model that conveys as much problem-specific information as possible, so as to best combine the currently available solutions and thereby construct new, improved solutions. Such models can be constructed beforehand for a specific problem, or they can be learnt during the optimization process. Well-known algorithms of the latter type are Estimation-of-Distribution Algorithms (EDAs), where probabilistic models of promising solutions are built and subsequently sampled to generate new solutions.

    Although EDAs are the best known example, other approaches also exist, such as the use of statistical models in the Linkage Tree Genetic Algorithm (LTGA). In general, replacing traditional crossover and mutation operators by building and using models enables the use of machine learning techniques for automatic discovery of problem regularities and exploitation of these regularities for effective exploration of the search space. Using machine learning in optimization enables the design of optimization techniques that can automatically adapt to the given problem. This is an especially useful feature when considering optimization in a black-box setting. Successful applications include Ising spin glasses in 2D and 3D, graph partitioning, MAXSAT, feature subset selection, forest management, groundwater remediation design, telecommunication network design, antenna design, and scheduling.
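    As a concrete (and deliberately simple) illustration of the EDA idea, here is a sketch of our own, not the tutorial's material: the univariate marginal distribution algorithm (UMDA) on the OneMax problem, where the "model" is just one independent probability per bit, estimated from selected individuals and then sampled.

```python
import random

random.seed(5)
BITS, POP, SEL, GENS = 20, 50, 25, 40

def sample(p):
    # draw a new candidate solution from the probabilistic model
    return [int(random.random() < pi) for pi in p]

probs = [0.5] * BITS                       # initial model: uniform
for _ in range(GENS):
    pop = [sample(probs) for _ in range(POP)]
    pop.sort(key=sum, reverse=True)        # fitness = number of ones
    selected = pop[:SEL]                   # truncation selection
    # re-estimate each bit's marginal from the selected individuals,
    # with margins keeping probabilities away from 0 and 1
    probs = [min(0.95, max(0.05, sum(ind[i] for ind in selected) / SEL))
             for i in range(BITS)]

best = max((sample(probs) for _ in range(10)), key=sum)
print(sum(best))  # typically near the optimum of 20
```

Richer model-based algorithms such as LTGA replace this independence assumption with learnt dependency structures, but the build-then-sample loop above is the common core.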

    This tutorial will provide an introduction and an overview of major research directions in this area.

    Dirk Thierens
    He is affiliated with the Department of Information and Computing Sciences at Utrecht University, the Netherlands, where he is teaching courses on Evolutionary Computation and Computational Intelligence. He received his PhD in Computer Science with a work on "Analysis and Design of Genetic Algorithms" (KU Leuven, 1995).

    Dirk has been involved in research on Evolutionary Computation for more than 20 years. His current research is mainly focused on the use of model building techniques to improve evolutionary search. At the GECCO conference, Dirk has co-chaired the GECCO workshop on Analysis and Design of Representations and Operators (2003), and the GECCO workshop on Optimization by Building and Using Probabilistic Models (2004). He was (co-)chair of the Genetic Algorithm track in GECCO-2004 and GECCO-2006, and Editor-in-Chief of GECCO-2007.

    Peter A.N. Bosman
    He is a senior researcher in the Intelligent Systems research group at the Centrum Wiskunde & Informatica (CWI) (Centre for Mathematics and Computer Science) in Amsterdam, the Netherlands. Peter was formerly affiliated with the Department of Information and Computing Sciences at Utrecht University, where he also obtained both his MSc and PhD degrees in Computer Science, the latter on the design and application of estimation-of-distribution algorithms (EDAs). His current research is mainly focused on fundamental EA research and on applications of EAs (in energy systems and the life sciences). Peter is best known as an active researcher in the area of EDAs since its emergence and has (co-)authored over 60 publications in the field of evolutionary computation. At the GECCO conference, Peter has previously been track (co-)chair (EDA track, 2006, 2009), late-breaking-papers chair (2007), (co-)workshop organizer (OBUPM workshop, 2006; EvoDOP workshop, 2007; GreenGEC workshop, 2012, 2013) and (co-)local chair (2013).


    Runtime Analysis of Evolutionary Algorithms: Basic Introduction

    Evolutionary algorithm theory has studied the time complexity of evolutionary algorithms for more than 20 years. Different aspects of this rich and diverse research field were presented in at least three different advanced or specialized tutorials at last year's GECCO. This tutorial presents the foundations of this field. We introduce the most important notions and definitions used in the field and consider different evolutionary algorithms on a number of well-known and important example problems. Through a careful and thorough introduction of important analytical tools and methods, including fitness-based partitions, typical events and runs, drift analysis and random family trees, by the end of the tutorial the attendees will be able to apply these techniques to derive relevant runtime results for non-trivial evolutionary algorithms. Moreover, the attendees will be fully prepared to follow the more advanced tutorials that cover more specialized aspects of the field. To ensure coverage of the topics required by the specialised tutorials, this introductory tutorial will be coordinated with the presenters of the more advanced ones. In addition to custom-tailored methods for the analysis of evolutionary algorithms, we also introduce the relevant tools and notions from probability theory in an accessible form. This makes the tutorial appropriate for everyone with an interest in the theory of evolutionary algorithms, without the need for prior knowledge of probability theory or the analysis of randomized algorithms. Last year's edition of this tutorial at GECCO 2013 attracted over 50 participants.

    Per Kristian Lehre
    He received MSc and PhD degrees in Computer Science from the Norwegian University of Science and Technology (NTNU) in Trondheim, Norway. He finished the PhD in 2006 under the supervision of Prof Pauline Haddow, and joined the School of Computer Science at The University of Birmingham, UK, as a Research Fellow in January 2007 with Prof Xin Yao. He was a Postdoctoral Fellow at DTU Informatics, Technical University of Denmark, in Lyngby, Denmark, from April 2010. Since September 2011, he has been a Lecturer in the School of Computer Science at the University of Nottingham, UK. Dr Lehre's research interests are in theoretical aspects of nature-inspired search heuristics, in particular runtime analysis of population-based evolutionary algorithms. His research has won numerous best paper awards, including GECCO (2013, 2010, 2009, 2006) and ICSTW (2008). He is vice-chair of the IEEE Task Force on Theoretical Foundations of Bio-inspired Computation, and a member of the editorial board of Evolutionary Computation. He has guest-edited special issues of Theoretical Computer Science and IEEE Transactions on Evolutionary Computation on theoretical foundations of evolutionary computation. Dr Lehre has given many tutorials on evolutionary computation in summer schools (UK: 2007, France: 2009, 2010, and 2011, Estonia: 2010), as well as major conferences and workshops (GECCO 2013 and 2014, CEC 2013, ThRaSH 2013). He is the main coordinator of the 2M euro EU-funded project SAGE, which brings together the theory of evolutionary computation and population genetics.

    Pietro S. Oliveto
    He is a Vice-Chancellor Fellow at The University of Sheffield. He received the Laurea degree in computer science from the University of Catania, Italy, in 2005 and the PhD degree from the University of Birmingham, UK, in 2009. From October 2007 to April 2008, he was a visiting researcher at the Efficient Algorithms and Complexity Theory Institute at the Department of Computer Science of the University of Dortmund, Germany, where he collaborated with Prof. Ingo Wegener's research group. From 2009 to 2013 he stayed at Birmingham as an EPSRC PhD+ Research Fellow (2009-2010) and as an EPSRC-funded Postdoctoral Fellow in Theoretical Computer Science (2010-2013).

    His main research interest is the time complexity analysis of randomised search heuristics. He has published several runtime analysis papers on Evolutionary Algorithms (EAs), Artificial Immune Systems (AIS) and Ant Colony Optimisation (ACO) algorithms for classical NP-hard combinatorial optimisation problems, together with a review paper and a book chapter on the topic. His work won best paper awards at GECCO 2008 and ICARIS 2011 and received best paper nominations at PPSN 2008, CEC 2009 and ECTA 2011. He is a member of the steering committee of the Theory of Randomised Search Heuristics (ThRaSH) workshop series, has co-organised ThRaSH 2009 and ThRaSH 2013, and chaired the "Randomized Search Heuristics: Theoretical Aspects and Real World Applications" minisymposium at the 2011 SIAM Conference on Optimization (SIAM-OPT 2011). Dr. Oliveto has presented introductory tutorials on the time complexity analysis of EAs at WCCI 2012, CEC 2013 and GECCO 2013. He is the Chair of the IEEE CIS Task Force on Theoretical Foundations of Bio-inspired Computation.


    Evolving Neural Networks

    Neuroevolution, i.e. the evolution of artificial neural networks, has recently emerged as a powerful technique for solving challenging reinforcement learning problems. Compared to traditional (e.g. value-function-based) methods, neuroevolution is especially strong in domains where the state of the world is not fully known: the state can be disambiguated through recurrent connections, and novel situations handled through pattern matching. In this tutorial, I will review (1) neuroevolution methods that evolve fixed-topology networks, network topologies, and network construction processes, (2) ways of combining traditional neural network learning algorithms with evolutionary methods, and (3) applications of neuroevolution to control, robotics, artificial life, and games.
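    As a toy illustration of the fixed-topology case mentioned above, the sketch below evolves only the weights of a single sigmoid neuron with a simple hill climber until it implements logical AND; the setup is illustrative and far simpler than the methods surveyed in the tutorial:

```python
import math
import random

def evolve_neuron(gens=3000, sigma=0.3, seed=0):
    """Hill-climbing neuroevolution of a fixed topology: mutate the
    three parameters (two weights plus a bias) of one sigmoid neuron
    and keep the child whenever it is no worse on logical AND."""
    rng = random.Random(seed)
    cases = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    def forward(w, x):
        s = w[0] * x[0] + w[1] * x[1] + w[2]
        s = max(-60.0, min(60.0, s))       # clamp to avoid overflow in exp
        return 1.0 / (1.0 + math.exp(-s))

    def error(w):
        return sum((forward(w, x) - y) ** 2 for x, y in cases)

    w = [rng.gauss(0, 1) for _ in range(3)]
    e = error(w)
    for _ in range(gens):
        child = [wi + rng.gauss(0, sigma) for wi in w]   # Gaussian weight mutation
        ec = error(child)
        if ec <= e:                                      # (1+1)-style selection
            w, e = child, ec
    return e
```

    Because AND is linearly separable, the squared error drops close to zero as the weights grow; richer tasks would require evolving hidden units or, as in the surveyed methods, the topology itself.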

    Risto Miikkulainen
    Risto is a Professor of Computer Sciences at the University of Texas at Austin. He received an M.S. in Engineering from the Helsinki University of Technology, Finland, in 1986, and a Ph.D. in Computer Science from UCLA in 1990. His recent research focuses on methods for evolving neural networks and applying these methods to game playing, robotics, and intelligent control. He is an author of over 300 articles on neuroevolution, connectionist natural language processing, and the computational neuroscience of the visual cortex. He is an associate editor of IEEE Transactions on Computational Intelligence and AI in Games, IEEE Transactions on Autonomous Mental Development, and Cognitive Systems Research, and a member of the Board of Governors of the International Neural Networks Society.



    Advanced Tutorials


    Evolution Strategies and CMA-ES (Covariance Matrix Adaptation)

    Evolution Strategies (ESs) and many continuous domain Estimation of Distribution Algorithms (EDAs) are stochastic optimization procedures that sample a multivariate normal (Gaussian) distribution in the continuous search space, R^n. Many of them can be formulated in a unified and comparatively simple framework.

    This introductory tutorial focuses on the most relevant algorithmic question: how should the parameters of the sample distribution be chosen and, in particular, updated over the generation sequence? First, two common approaches to step-size control are reviewed: the one-fifth success rule and path length control. Then, Covariance Matrix Adaptation (CMA) is discussed in depth: the rank-one update, the evolution path, and the rank-mu update. Invariance properties and the interpretation as natural gradient descent are touched upon.
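    The one-fifth success rule mentioned above can be sketched with a (1+1)-ES on the sphere function; the multiplicative factors below are one common illustrative choice whose equilibrium is a one-in-five success rate, not necessarily the ones used in the tutorial:

```python
import random

def one_plus_one_es(n=10, iters=4000, seed=3):
    """(1+1)-ES with one-fifth success rule step-size control on the
    sphere function f(x) = sum(x_i ** 2)."""
    rng = random.Random(seed)
    f = lambda x: sum(xi * xi for xi in x)
    x = [1.0] * n
    fx = f(x)
    sigma = 1.0
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fy = f(y)
        if fy <= fx:                  # success: accept and enlarge the step
            x, fx = y, fy
            sigma *= 1.5
        else:                         # failure: shrink; the 1:4 exponent ratio
            sigma *= 1.5 ** -0.25     # balances at a 1/5 success rate
    return fx
```

    The factors 1.5 and 1.5**(-1/4) cancel exactly when one in five offspring succeeds, which keeps the step size near the regime where progress on the sphere is fastest.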

    At the beginning of the tutorial, general difficulties in solving non-linear, non-convex optimization problems in the continuous domain are revealed, for example non-separability, ill-conditioning and ruggedness, and algorithmic design aspects are related to these difficulties. At the end, the performance of CMA-ES is compared to that of other well-known evolutionary and non-evolutionary optimization algorithms, namely BFGS, DE and PSO.

    Nikolaus Hansen
    He is a senior researcher at The French National Institute for Research in Computer Science and Control (INRIA). Educated in medicine and mathematics, he received a Ph.D. in civil engineering in 1998 from the Technical University Berlin under Ingo Rechenberg. Before he joined INRIA, he has been working in applied artificial intelligence and genomics, and he has done research in evolutionary computation and computational science at the Technical University Berlin and the ETH Zurich. His main research interests are learning and adaptation in evolutionary computation and the development of algorithms applicable in practice. He has been the main driving force behind the development of CMA-ES over many years.

    Anne Auger
    She is a permanent researcher at the French National Institute for Research in Computer Science and Control (INRIA). She received her diploma (2001) and PhD (2004) in mathematics from the Paris VI University. Before joining INRIA, she worked for two years (2004-2006) at ETH Zurich. Her main research interest is stochastic continuous optimization, including theoretical aspects and algorithm design. She is a member of the ACM-SIGEVO executive committee and of the editorial board of Evolutionary Computation. She co-organized the biannual Dagstuhl seminar "Theory of Evolutionary Algorithms" in 2008 and 2010 and served as chair of the theory and ES tracks at GECCO in 2011 and 2013. Together with Benjamin Doerr, she is editor of the book "Theory of Randomized Search Heuristics".


    Constraint-Handling Techniques used with Evolutionary Algorithms

    Evolutionary Algorithms (EAs), when used for global optimization, can be seen as unconstrained optimization techniques. Therefore, they require an additional mechanism to incorporate constraints of any kind (i.e., inequality, equality, linear, nonlinear) into their fitness function. Although the use of penalty functions (very popular with mathematical programming techniques) may seem an obvious choice, this sort of approach requires a careful fine-tuning of the penalty factors to be used. Otherwise, an EA may be unable to reach the feasible region (if the penalty is too low) or may quickly reach the feasible region but be unable to locate solutions that lie on the boundary with the infeasible region (if the penalty is too severe). This has motivated the development of a number of approaches to incorporate constraints into the fitness function of an EA. This tutorial will cover the main proposals in current use, including novel approaches such as the use of tournament rules based on feasibility, multiobjective optimization concepts, hybrids with mathematical programming techniques (e.g., Lagrange multipliers), cultural algorithms, and artificial immune systems, among others. Other topics, such as the importance of maintaining diversity, current benchmarks, and the use of alternative search engines (e.g., particle swarm optimization, differential evolution, evolution strategies, etc.), will also be discussed (as time allows).
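    The tournament rules based on feasibility mentioned above can be sketched as a pairwise comparison (often attributed to Deb): a feasible solution beats an infeasible one, two feasible solutions compare by fitness, and two infeasible ones by total constraint violation. The helper names below are illustrative:

```python
def better(a, b, fitness, violation):
    """Binary tournament under feasibility rules (minimization):
    feasible beats infeasible; two feasible solutions compare by
    fitness; two infeasible ones by total constraint violation."""
    va, vb = violation(a), violation(b)
    if va == 0 and vb == 0:
        return a if fitness(a) <= fitness(b) else b
    if va == 0:
        return a
    if vb == 0:
        return b
    return a if va <= vb else b
```

    Note that, unlike a penalty function, this rule needs no penalty factor at all: raw fitness and violation are never added together, so there is nothing to tune.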

    Carlos Artemio Coello Coello
    He received a BSc in Civil Engineering from the Universidad Autonoma de Chiapas in Mexico in 1991 (graduating Summa Cum Laude). Then, he was awarded a scholarship from the Mexican government to pursue graduate studies in Computer Science at Tulane University (in the USA), where he received a MSc and a PhD in Computer Science in 1993 and 1996, respectively. Dr. Coello has been a Senior Research Fellow in the Plymouth Engineering Design Centre (in England) and a Visiting Professor at DePauw University (in the USA). He is currently a full professor at CINVESTAV-IPN in Mexico City, Mexico. He has published over 350 papers in international peer-reviewed journals, book chapters, and conferences. He has also co-authored the book "Evolutionary Algorithms for Solving Multi-Objective Problems", which is now in its Second Edition (Springer, 2007), and has co-edited the book "Applications of Multi-Objective Evolutionary Algorithms" (World Scientific, 2004). His publications currently report over 20,000 citations, according to Google Scholar (his h-index is 57).

    He has delivered invited talks, keynote speeches and tutorials at international conferences held in Spain, USA, Canada, Switzerland, UK, Chile, Colombia, Brazil, Argentina, China, India, Uruguay and Mexico. Dr. Coello has served as a technical reviewer for over 50 international journals and for more than 100 international conferences, and currently serves as associate editor of the journals "Evolutionary Computation", "IEEE Transactions on Evolutionary Computation", "Computational Optimization and Applications", "Applied Soft Computing", and "Pattern Analysis and Applications". He is also a member of the editorial board of "Engineering Optimization". He received the 2007 National Research Award (granted by the Mexican Academy of Science) in the area of "exact sciences" and, since January 2011, he has been an IEEE Fellow for "contributions to multi-objective optimization and constraint-handling techniques."
    He is also the recipient of the prestigious "2013 IEEE Kiyo Tomiyasu Award" and of the "2012 National Medal of Science and Arts" in the area of "Physical, Mathematical and Natural Sciences" (the highest award a scientist can receive in Mexico). His current research interests are evolutionary multi-objective optimization and constraint-handling techniques for evolutionary algorithms.


    Blind No More: Constant Time Non-Random Improving Moves and Exponentially Powerful Recombination

    The field of Evolutionary Computation has long accepted that changes based on random mutation and random recombination are both necessary and fundamental to Evolutionary Algorithms. However, in a number of different domains it is possible to exactly compute the best or approximate best improving move in constant time. This includes all problems that directly map to k-bounded pseudo Boolean optimization. Such problems include weighted MAXSAT, NK-Landscapes, Spin Glass problems,  as well as other problems such as vertex coloring.

    All of these problems share the hypercube (or generalized hypercube) as the neighborhood adjacency graph. It can be shown that all k-bounded pseudo-Boolean optimization problems can be decomposed into k subfunctions, each of which is an eigenvector of the Graph Laplacian of the search space. Building on this decomposition, it is possible to 1) evaluate new moves in constant time, 2) identify improving moves in constant time, and 3) compute the average over neighborhoods in constant time. It is also possible to compute low-order statistical moments (variance and skew) in polynomial time over generalized Hamming spheres of the search space. This has made it possible to construct a new generation of extremely powerful search algorithms that scale to very large problems, and which can identify improving moves multiple steps ahead. The resulting methods yield improvements over the best MAXSAT solvers (such as AdaptG2WSAT and GSAT) and NK-Landscape solvers, particularly on large real-world problems of a million or more variables.
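    The flavor of constant-time move evaluation can be sketched for MAXSAT: with a cached count of satisfied literals per clause, the gain of flipping a variable touches only the clauses that mention it, which is a constant amount of work when each variable occurs in a bounded number of clauses. The data layout below is an illustrative simplification (it assumes each variable appears at most once per clause), not the decomposition-based machinery described in the abstract:

```python
def build_state(clauses, assign):
    """One-time O(formula) setup: per-variable occurrence lists and
    per-clause counts of currently satisfied literals.  Clauses are
    lists of signed ints (+v / -v); assign maps variable -> bool."""
    occurs, sat_count = {}, []
    for c, clause in enumerate(clauses):
        cnt = 0
        for lit in clause:
            occurs.setdefault(abs(lit), []).append(c)
            if (lit > 0) == assign[abs(lit)]:
                cnt += 1
        sat_count.append(cnt)
    return occurs, sat_count

def flip_gain(v, assign, clauses, occurs, sat_count):
    """Change in the number of satisfied clauses if v is flipped,
    computed from the clauses containing v only."""
    gain = 0
    for c in occurs.get(v, []):
        lit = next(l for l in clauses[c] if abs(l) == v)
        lit_true = (lit > 0) == assign[v]
        before = sat_count[c] > 0
        after = (sat_count[c] + (-1 if lit_true else 1)) > 0
        gain += int(after) - int(before)
    return gain
```

    Keeping such gains cached and updated incrementally after each flip is what turns a naive O(formula) evaluation into the constant-time improving-move identification the abstract refers to.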

    Decomposability can also be exploited by recombination. A new recombination operator for the Traveling Salesman Problem partitions the recombination graph into q partitions; in O(n) time it can then identify the best of 2^q possible offspring. If the parents are local optima, then the majority of offspring are also local optima. This allows search to rapidly surge forward after a small number of local optima are located; the resulting search improves on the best stochastic local search methods in the world, such as the Lin-Kernighan-Helsgaun algorithm.

    Darrell Whitley
    Prof. Darrell Whitley has been active in Evolutionary Computation since 1986, and has published more than 200 papers. These papers have garnered more than 15,000 citations. Dr. Whitley’s H-index is 51. He has served as Editor-in-Chief of the journal Evolutionary Computation, and recently served as Chair of the Governing Board of ACM Sigevo from 2007 to 2011. He currently serves as an Associate Editor of Artificial Intelligence.


    Expressive Genetic Programming

    The language in which evolving programs are expressed can have significant impacts on the problem-solving capabilities of a genetic programming system. These impacts stem both from the absolute computational power of the languages that are used, as elucidated by formal language theory, and from the ease with which various computational structures can be produced by random code generation and by the action of genetic operators. Highly expressive languages can facilitate the evolution of programs for any computable function using, when appropriate, multiple data types, evolved subroutines, evolved control structures, evolved data structures, and evolved modular program and data architectures. In some cases expressive languages can even support the evolution of programs that express methods for their own reproduction and variation (and hence for the evolution of their offspring).

    This tutorial will begin with a comparative survey of approaches to the evolution of programs in expressive programming languages ranging from machine code to graphical and grammatical representations. Within this context it will then provide a detailed introduction to the Push programming language, which was designed specifically for expressiveness and specifically for use in genetic programming systems. Push programs are syntactically unconstrained but can nonetheless make use of multiple data types and express arbitrary control structures, supporting the evolution of complex, modular programs in a particularly simple and flexible way. The Push language will be described and demonstrated, and ten years of Push-based research, including the production of human-competitive results, will be briefly surveyed. The tutorial will conclude with a discussion of recent enhancements to Push that are intended to support the evolution of complex and robust software systems.

    The tutorial will include live, audience-interactive demonstrations of a Push interpreter and of the PushGP genetic programming system. The demonstrations will use open source software that participants will be able to use for their own purposes after the tutorial.

    Lee Spector
    He is a Professor of Computer Science in the School of Cognitive Science at Hampshire College in Amherst, Massachusetts, and an adjunct professor in the Department of Computer Science at the University of Massachusetts, Amherst. He received a B.A. in Philosophy from Oberlin College in 1984 and a Ph.D. from the Department of Computer Science at the University of Maryland in 1992. His areas of teaching and research include genetic and evolutionary computation, quantum computation, and a variety of intersections between computer science, cognitive science, evolutionary biology, and the arts. He is the Editor-in-Chief of the journal Genetic Programming and Evolvable Machines (published by Springer) and a member of the editorial board of Evolutionary Computation (published by MIT Press). He is also a member of the SIGEVO executive committee and he was named a Fellow of the International Society for Genetic and Evolutionary Computation.


    Parameterized Complexity Analysis of Evolutionary Algorithms [new tutorial]

    In real applications, problem inputs are typically structured or restricted in some way. Evolutionary algorithms and other randomized search heuristics can sometimes exploit such extra structure, while in some cases it can be problematic. In any case, from a theoretical perspective, little is understood about how different structural parameters affect the running time of such algorithms.

    In this tutorial we present techniques from the new and thriving field of parameterized complexity theory. These techniques allow for a rigorous understanding of the influence of problem structure on the running time of evolutionary algorithms on NP-hard combinatorial optimization problems. We show how these techniques allow one to decompose algorithmic running time as a function of both problem size and additional parameters. In this way, one can attain a more detailed description of which structural aspects contribute to the exponential running time of EAs applied to hard problems.

    We will present detailed and thorough results for evolutionary algorithms on the traveling salesperson problem, maximum satisfiability, and makespan scheduling. We will also outline directions for future research and discuss some open questions.

    Frank Neumann
    He received his diploma and Ph.D. from the Christian-Albrechts-University of Kiel in 2002 and 2006, respectively. Currently, he is an associate professor and leader of the Optimisation and Logistics Group at the School of Computer Science, The University of Adelaide, Australia. From 2008 to 2010, he was the coordinator of the area "Bio-inspired Computation" at the Max Planck Institute for Informatics, Saarbruecken, Germany. With Kenneth De Jong he organised ACM FOGA 2013 in Adelaide, and together with Carsten Witt he has written the textbook "Bioinspired Computation in Combinatorial Optimization - Algorithms and Their Computational Complexity" published by Springer. He is vice-chair of the IEEE Task Force on Theoretical Foundations of Bio-inspired Computation, chair of the IEEE Task Force on Evolutionary Scheduling and Combinatorial Optimization, and has guest-edited special issues for the journals Algorithmica, Theoretical Computer Science, and Evolutionary Computation (MIT Press). Furthermore, he has organized several tracks and special sessions at IEEE CEC and GECCO. In his work, he considers algorithmic approaches, in particular for combinatorial and multi-objective optimization problems, and focuses on theoretical aspects of evolutionary computation as well as high-impact applications in the areas of renewable energy and sports.

    Andrew Sutton
    He completed his Ph.D. at Colorado State University in 2011. He has held a postdoctoral research fellowship at the University of Adelaide, and is currently a postdoctoral researcher at Colorado State University. He was recently a co-organizer of the Special Session on Theoretical Foundations of Bio-inspired Computation at the 2013 IEEE CEC. His work focuses on understanding the influence of problem structure on the performance of randomized search heuristics, with a particular focus on parameterized runtime analysis for combinatorial optimization problems.


    Bioinspired Computation in Combinatorial Optimization - Algorithms and Their Computational Complexity

    Bioinspired computation methods, such as evolutionary algorithms and ant colony optimization, are being applied successfully to complex engineering and combinatorial optimization problems, and it is very important that we understand the computational complexity of these algorithms. This tutorial explains the most important results achieved in this area.

    We present how runtime behavior can be analyzed in a rigorous way, in particular for combinatorial optimization. We consider well-known problems such as minimum spanning trees, shortest paths, maximum matching, and covering and scheduling problems. Classical single-objective optimization is examined first. Then we investigate the computational complexity of bioinspired computation applied to multi-objective variants of the considered combinatorial optimization problems, and in particular we show how multi-objective optimization can help to speed up bioinspired computation for single-objective optimization problems.

    Carsten Witt
    He is an associate professor at the Technical University of Denmark. He received his diploma and Ph.D. in Computer Science from the Technical University of Dortmund in 2000 and 2004, respectively. Carsten's main research interests are the theoretical aspects of randomized search heuristics, in particular evolutionary algorithms, ant colony optimization and particle swarm optimization. He co-organized the biannual Dagstuhl seminar "Theory of Evolutionary Algorithms" in 2008 and 2010 and has given tutorials on the computational complexity of bioinspired search heuristics in combinatorial optimization at several previous GECCOs and other venues. Carsten Witt is a member of the steering committee of the international Theory of Randomized Search Heuristics (ThRaSH) workshop, which he co-organized in 2011, and a member of the editorial boards of Evolutionary Computation and Theoretical Computer Science. Together with Frank Neumann, he has authored the textbook "Bioinspired Computation in Combinatorial Optimization – Algorithms and Their Computational Complexity", published by Springer.


    Theory of Swarm Intelligence [new tutorial]

    Social animals as found in fish schools, bird flocks, bee hives, and ant colonies are able to solve highly complex problems in nature. This includes foraging for food, constructing astonishingly complex nests, and evading or defending against predators. Remarkably, in many cases these animals use very simple, decentralized communication mechanisms that do not require a single leader, yet they perform surprisingly well, even in dynamically changing environments. The collective intelligence of such animals is known as swarm intelligence, and it has inspired popular and very powerful optimization paradigms, including ant colony optimization (ACO) and particle swarm optimization (PSO).

    The reasons behind their success are often elusive. We are just beginning to understand when and why swarm intelligence algorithms perform well, and how to use swarm intelligence most effectively. Understanding the fundamental working principles that determine their efficiency is a major challenge.

    This tutorial will give a comprehensive overview of recent theoretical results on swarm intelligence algorithms, with an emphasis on their efficiency (runtime/computational complexity). In particular, the tutorial will show how techniques for the analysis of evolutionary algorithms can be used to analyze swarm intelligence algorithms and how the performance of swarm intelligence algorithms compares to that of evolutionary algorithms. The results shed light on the working principles of swarm intelligence algorithms, identify the impact of parameters and other design choices on performance, and thus help to use swarm intelligence more effectively.

    The tutorial will be divided into a first, larger part on ACO and a second, smaller part on PSO. For ACO we will consider simple variants of the MAX-MIN ant system. Investigations of example functions in pseudo-Boolean optimization demonstrate that the choices of the pheromone update strategy and the evaporation rate have a drastic impact on the running time. We further consider the performance of ACO on illustrative problems from combinatorial optimization: constructing minimum spanning trees, solving shortest path problems with and without noise, and finding short tours for the TSP.  For particle swarm optimization, the tutorial will cover results on PSO for pseudo-Boolean optimization as well as a discussion of theoretical results in continuous spaces.
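    The pheromone update strategy and evaporation rate discussed above for the MAX-MIN ant system can be sketched as follows; all parameter names and values are illustrative:

```python
def mmas_update(tau, best_edges, rho=0.1, delta=1.0, tau_min=0.05, tau_max=5.0):
    """MAX-MIN ant system style pheromone update: evaporate all
    pheromone values, reinforce the edges of the best tour found,
    then clamp everything into [tau_min, tau_max]."""
    for e in tau:
        tau[e] *= 1.0 - rho                           # evaporation
    for e in best_edges:
        tau[e] += delta                               # reinforcement
    for e in tau:
        tau[e] = min(tau_max, max(tau_min, tau[e]))   # MAX-MIN clamping
    return tau
```

    The clamping into [tau_min, tau_max] is what distinguishes the MAX-MIN variant: it prevents pheromones from collapsing to zero or growing without bound, and, as the tutorial discusses, the choice of rho strongly affects the running time.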

    Dirk Sudholt
    He obtained his Diplom (Master's) degree in 2004 and his PhD in computer science in 2008 from the Technische Universitaet Dortmund, Germany, under the supervision of Prof. Ingo Wegener. He has held postdoc positions at the International Computer Science Institute in Berkeley, California, working in the Algorithms group led by Prof. Richard M. Karp, and at the University of Birmingham, UK, working with Prof. Xin Yao. Since January 2012, he has been a Lecturer at the University of Sheffield, UK.

    His research focuses on the computational complexity of randomized search heuristics such as evolutionary algorithms and swarm intelligence algorithms like ant colony optimization and particle swarm optimization. He is an editorial board member of the Evolutionary Computation journal and receives funding from the EU's Future and Emerging Technologies scheme (SAGE project). He has more than 50 refereed publications in international journals and conferences, including 6 best paper awards at leading conferences, GECCO and PPSN.


    Information Geometry in Evolutionary Computation [new tutorial]

    In recent years there have been independent developments in multiple branches of Evolutionary Computation (EC) that interpret population-based and model-based search algorithms in terms of information geometric concepts. This trend has resulted in the development of novel algorithms and in improved understanding of existing ones. This tutorial aims at making this new line of research accessible to a broader range of researchers.

    A statistical model, identified by a parametric family of distributions, is equipped with an intrinsic (Riemannian) geometry, the so-called information geometry. From this perspective, a statistical model is a manifold of distributions where the inner product is given by the Fisher information metric. Any evolutionary algorithm that implicitly or explicitly evolves the parameters of a search distribution defines a dynamic over the manifold. Taking into account the Riemannian geometry of the new search space given by the search distributions allows for the description and analysis of evolutionary operators in a new light.
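    For reference, for a parametric family p_theta(x) the Fisher information metric mentioned above, and the natural gradient it induces on an objective J(theta), are given by the standard definitions:

```latex
F_{ij}(\theta) = \mathbb{E}_{x \sim p_\theta}\!\left[
    \frac{\partial \log p_\theta(x)}{\partial \theta_i}\,
    \frac{\partial \log p_\theta(x)}{\partial \theta_j}
\right],
\qquad
\tilde{\nabla}_\theta J = F(\theta)^{-1}\, \nabla_\theta J .
```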

    Notably, this framework can be used for the study of optimization algorithms. A core idea of several recent and novel heuristics, both in the continuous and the discrete domain, such as Estimation of Distribution Algorithms (EDAs) and Natural Evolution Strategies (NESs), is to perform stochastic gradient descent directly on the space of search distributions. However, the definition of the gradient depends on the metric, which is why it becomes fundamental to consider the information geometry of the space of search distributions.

    Despite being equivalent to classical gradient-based methods for a stochastically relaxed problem, the approach performs randomized direct search on the original search space: the generation of an offspring population, as well as selection and strategy adaptation, turn out to implicitly sample a search distribution in a statistical model and to perform a stochastic gradient step in the direction of the natural gradient.
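    As a concrete illustration of this idea, the following sketch performs stochastic natural-gradient steps on the parameters of a one-dimensional Gaussian search distribution, in the spirit of a separable Natural Evolution Strategy. The learning rates, the rank-based utility scheme, and all function names are illustrative choices for this tutorial page, not the settings of any specific published algorithm:

```python
import math
import random

def snes_minimize(f, mu, sigma, iters=300, pop=10, seed=1):
    """1-D sketch of a separable NES: stochastic natural-gradient descent
    on the parameters (mu, log sigma) of a Gaussian search distribution.
    Learning rates and utilities are illustrative, not tuned."""
    rng = random.Random(seed)
    eta_mu, eta_sigma = 1.0, 0.2
    for _ in range(iters):
        zs = [rng.gauss(0.0, 1.0) for _ in range(pop)]   # natural coordinates
        fs = [f(mu + sigma * z) for z in zs]             # sample and evaluate
        # Rank-based utilities (zero-sum): the best sample gets weight +0.5.
        order = sorted(range(pop), key=lambda i: fs[i])
        u = [0.0] * pop
        for rank, i in enumerate(order):
            u[i] = (pop - 1 - rank) / (pop - 1) - 0.5
        # Natural-gradient estimates in the local coordinates z and z^2 - 1.
        g_mu = sum(u[i] * zs[i] for i in range(pop)) / pop
        g_ls = sum(u[i] * (zs[i] ** 2 - 1.0) for i in range(pop)) / pop
        mu += eta_mu * sigma * g_mu          # step along the natural gradient
        sigma *= math.exp(0.5 * eta_sigma * g_ls)
    return mu, sigma

best, spread = snes_minimize(lambda x: (x - 3.0) ** 2, mu=0.0, sigma=1.0)
```

Because the update is expressed in the local coordinates of the current distribution, it is invariant to reparametrization, which is exactly the invariance principle the tutorial discusses.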

    Particular strengths of the information geometric framework are its ability to unify optimization in discrete and continuous domains as well as the traditionally separate processes of optimization and strategy parameter adaptation. Respecting the intrinsic information geometry automatically results in powerful invariance principles. The framework can be seen as an analysis toolbox for existing methods, as well as a generic design principle for novel algorithms.

    This tutorial will introduce from scratch the mathematical concepts of information geometry to the EC community. It will convey not only rigorous definitions but also geometric intuition about Riemannian geometry, information geometry, the natural gradient, and stochastic gradient descent. Stochastic relaxations of EC problems will act as glue. The framework will be made accessible with applications to basic as well as state-of-the-art algorithms operating on discrete and continuous domains.

    Tobias Glasmachers
    Tobias received his diploma (2004) and his PhD (2008) from the department of mathematics of the Ruhr-University Bochum, Germany. He then joined the Swiss AI lab IDSIA for two years as a post-doc. Since 2012 he has been a junior professor at the Institut für Neuroinformatik in Bochum. His research interests are evolutionary algorithms for continuous search spaces as well as supervised machine learning. His research in the domain of evolutionary computation is centered on natural evolution strategies, a class of algorithms based on information geometric optimization.


    Luigi Malagò
    Luigi received a PhD in Information Technology from Politecnico di Milano in 2011. Previously, he obtained a Bachelor and a Master cum laude in Computer Science Engineering at Politecnico di Milano in 2004 and 2007, and a Diploma from Alta Scuola Politecnica in 2006. In 2012 he was awarded the Dimitris N. Chorafas Foundation Award for his PhD thesis. Since 2012 he has been a postdoctoral researcher at the Department of Computer Science of the Università degli Studi di Milano. His research interests include machine learning, in particular online classification with selective sampling in the presence of multiple annotators, and evolutionary computation, especially the application of methods of information geometry to the study of model-based search algorithms.


    Black-Box Complexity: From Complexity Theory to Playing Mastermind [new tutorial]

    As in classical algorithmics, the runtime analysis approach in evolutionary computation is most useful when it is complemented by a meaningful complexity theory. In the last few years, black-box complexity---a notion introduced by Droste, Jansen, and Wegener in their seminal 2006 Theory of Computing Systems paper---has gained more and more attention. Since then, several deep and surprising results have gained the attention of both the GECCO community (best paper awards for black-box complexity papers in 2010, 2012, and 2013) and the general algorithms and complexity community (papers at CSR 2011 and STACS 2012).

    The black-box complexity of a problem, in simple words, is the minimum number of fitness evaluations needed by any algorithm to solve it. Comparing the black-box complexity with the runtime of an evolutionary algorithm allows us to judge how good the algorithm is. If they are equal, no better evolutionary algorithm can exist. For many problems, however, we observe a significant gap between the black-box complexity and the runtime of the best known randomized search heuristic. For such problems, we need to analyze where this gap comes from. As demonstrated at GECCO 2013, this can lead to the design of better search heuristics.

    In this tutorial, we give a gentle introduction to black-box complexity, we demonstrate some surprisingly low black-box complexities found recently, and show how to derive better evolutionary algorithms from a deeper understanding of these results. Since one of the most basic black-box complexity problems turns out to be essentially the problem of playing the classic board game of Mastermind, this tutorial also has an entertaining side.
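    To make the black-box model concrete, the sketch below hides a target bit-string behind a fitness oracle that reveals only fitness values and counts every query, then runs a standard (1+1) EA against it. Comparing such query counts with the problem's black-box complexity is exactly the kind of comparison described above. The oracle construction and function names are illustrative, not taken from the literature:

```python
import random

def make_oracle(target):
    """Black-box fitness oracle for a hidden bit-string: it reveals only
    the number of matching positions and counts every query."""
    calls = [0]
    def fitness(x):
        calls[0] += 1
        return sum(a == b for a, b in zip(x, target))
    return fitness, calls

def one_plus_one_ea(fitness, n, rng):
    """Standard (1+1) EA operating purely inside the black-box model:
    the only information it ever receives is fitness values."""
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    while fx < n:
        y = [bit ^ (rng.random() < 1.0 / n) for bit in x]  # flip bits w.p. 1/n
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
    return x

rng = random.Random(7)
n = 16
hidden = [rng.randint(0, 1) for _ in range(n)]
f, calls = make_oracle(hidden)
found = one_plus_one_ea(f, n, rng)
queries = calls[0]   # fitness evaluations spent until the optimum was found
```

The hidden-string game played through such an oracle is essentially Mastermind with a single "black peg" count per guess, which is the connection the tutorial exploits.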


    Benjamin Doerr
    He is a full professor at the French Ecole Polytechnique. He is also a senior researcher at the Max Planck Institute for Informatics (Germany) and an adjunct professor at Saarland University. He received his diploma (1998), PhD (2000) and habilitation (2005) in mathematics from Kiel University. His research area is the theory of both problem-specific algorithms and randomized search heuristics such as evolutionary algorithms. Major contributions to the latter include runtime analyses for evolutionary algorithms and ant colony optimizers, as well as the further development of the drift analysis method, in particular multiplicative and adaptive drift. In the young area of black-box complexity, he proved several of the current best bounds.

    Together with Frank Neumann and Ingo Wegener, Benjamin Doerr founded the theory track at GECCO, served as its co-chair 2007-2009 and serves again in 2014. He is a member of the editorial boards of "Evolutionary Computation", "Natural Computing", "Theoretical Computer Science" and "Information Processing Letters". Together with Anne Auger, he edited the book "Theory of Randomized Search Heuristics".


    Carola Doerr
    She studied mathematics at Kiel University (Diploma in 2007) and computer science at the Max Planck Institute for Informatics and Saarland University (PhD in 2011). Her PhD studies were supported by a Google Europe Fellowship in Randomized Algorithms. From Dec. 2007 to Nov. 2009, Carola Doerr worked as a business consultant for McKinsey & Company, mainly in the area of network optimization. She was a post-doc at the Université Paris 7 and the Max Planck Institute for Informatics in Saarbrücken. Since October 2013, she has been a permanent researcher with CNRS and the Université Pierre et Marie Curie (Paris 6).

    Carola Doerr's main research interest is in the theory of randomized algorithms, both in the design of efficient algorithms as well as in randomized query complexities. She has published several papers about black-box complexities. She has contributed to the field of evolutionary computation also through results on the runtime analysis of evolutionary algorithms and drift analysis, as well as through the development of search heuristics for solving geometric discrepancy problems.



    Specialized Tutorials


    Cellular Genetic Algorithms [new tutorial]

    Cellular GAs are a class of optimization, search, and learning algorithms that allow a flexible balance between exploration and exploitation by using overlapped search neighborhoods. Their population is made up of small neighborhoods inside which solutions interact as in micro-GAs, while the sharing of a solution between several neighborhoods allows a smooth diffusion of the best points throughout the population. Cellular GAs foster methodological and fundamental research, and they can also be used for dynamic and multi-objective problems. Parallelism is a closely related issue in this domain, opening new research lines on many-core systems and especially on GPUs. The interactions and emergent behavior of this swarm intelligence technique have already provided competitive results in varied domains such as communications, bioinformatics, combinatorial optimization, logistics, and continuous optimization, among others. This tutorial will present the basics of cellular GAs and several illustrative examples of their application, to convey the actual power of this kind of structured-population technique.

    Enrique Alba
    He received his engineering degree and his PhD in Computer Science from the University of Málaga (Spain) in 1992 and 1999, respectively. He works as a Full Professor at this university, teaching data communications and evolutionary algorithms at the graduate and master levels, respectively. Prof. Alba leads a team of researchers in the field of complex optimization with applications in bioinformatics, software engineering, telecoms, smart cities, and other areas. In addition to organizing international events (ACM GECCO, IEEE IPDPS-NIDISC, IEEE MSWiM, IEEE DS-RT, …), Prof. Alba has offered dozens of doctorate courses and multiple seminars at more than 20 international institutions, and has directed several research projects (6 with national funds, 5 in Europe, and numerous bilateral actions). Prof. Alba has also directed 6 contracts for innovation and transfer to industry (OPTIMI, Tartessos, ACERINOX, ARELANCE), and at present he also works as an invited professor at INRIA, the Univ. of Luxembourg, and the Univ. of Ostrava. He is an editor of several international journals and book series of Springer-Verlag and Wiley, and often reviews articles for more than 30 impact journals. He has published more than 60 articles in journals indexed by Thomson ISI, 17 articles in other journals, 40 papers in LNCS, and more than 200 refereed conference papers. In addition, Prof. Alba has published 11 books and 39 book chapters, and his professional activities have merited 6 awards. Prof. Alba's h-index is 36, with more than 6,000 citations to his work.


    Artificial Immune Systems for Optimisation

    Artificial immune systems (AIS) are a class of biologically inspired algorithms built after different theories from immunology. While the field of AIS is a relatively young area of research (having its own GECCO track for the first time this year), it has achieved numerous promising results in different areas of application, e.g., learning, classification, anomaly detection, and optimisation. In this tutorial we focus in particular on AIS built for the purpose of optimisation. From an algorithmic point of view, AIS show high-level similarities to other biologically inspired algorithms, e.g., evolutionary algorithms. Due to their different origin, however, concrete AIS for optimisation are quite different from evolutionary algorithms. They constitute an interesting alternative approach to current methods.

    The tutorial gives an overview of different methods in the field of AIS. It addresses everyone who wants to broaden their area of research within this emerging field, both practitioners and theoreticians. It enables attendees without prior knowledge of AIS to learn about a novel kind of optimisation method that can be used as an alternative to other biologically inspired algorithms. Moreover, it gives researchers with prior knowledge of AIS the opportunity to deepen their understanding of the considered algorithms. It is an ideal companion to the novel GECCO track on Artificial Immune Systems. We start with an overview of the different areas of AIS, including different general approaches and some immunological background. Afterwards, we discuss several examples of AIS for optimisation. We introduce concrete algorithms and their implementations and point out similarities and differences to other biologically inspired algorithms. In the last part of the tutorial, we present an overview of recent theoretical results for this kind of algorithm. A web-based toolbox we provide allows attendees to see simple AIS `in action' on some example functions and to experiment with the effects of different parameter settings.


    Thomas Jansen
    He is Senior Lecturer at the Department of Computer Science at Aberystwyth University, Wales, UK (since January 2013). He studied Computer Science at the University of Dortmund, Germany, and received his diploma (1996, summa cum laude) and Ph.D. (2000, summa cum laude) there. From September 2001 to August 2002 he stayed as a Post-Doc at Kenneth De Jong's EClab at the George Mason University in Fairfax, VA. He was Junior professor for Computational Intelligence from September 2002 to February 2009 at the Technical University Dortmund. From March 2009 to December 2012 he was Stokes Lecturer at the Department of Computer Science at the University College Cork, Ireland. He has published 19 journal papers, 40 conference papers, contributed seven book chapters and authored one book on evolutionary algorithm theory. His research is centred around design and theoretical analysis of artificial immune systems, evolutionary algorithms and other randomised search heuristics. He is associate editor of Evolutionary Computation (MIT Press) and Artificial Intelligence (Elsevier), member of the steering committee of the Theory of Randomised Search Heuristics workshop series, co-track chair of the Genetic Algorithm track of GECCO 2013 and 2014, was program chair at PPSN 2008, co-organised FOGA 2009, organised a workshop on Bridging Theory and Practice (PPSN 2008), two GECCO workshops on Evolutionary Computation Techniques for Constraint Handling (2010 and 2011) and Dagstuhl workshops on Theory of Evolutionary Computation (2004 and 2006) and on Artificial Immune Systems (2011). In 2015 he will be co-organising FOGA 2015.


    Christine Zarges
    She has been a Birmingham Fellow in the School of Computer Science at the University of Birmingham, UK, since 2012. Before that she spent 12 months as a visiting post-doc at the University of Warwick, UK, supported by a scholarship of the German Academic Exchange Service (DAAD). She studied Computer Science at the TU Dortmund, Germany, and obtained her Diploma (2007) and PhD (2011, Theoretical Foundations of Artificial Immune Systems) there. Her dissertation received the dissertation award of the TU Dortmund. In 2010 she received a Google Anita Borg Memorial Scholarship. Her research focuses on the theoretical analysis of randomised algorithms and processes, in particular randomised search heuristics such as artificial immune systems and evolutionary algorithms. She is also interested in computational and theoretical aspects of immunology. Two of her papers on artificial immune systems received best paper awards at leading conferences (PPSN 2008 and ICARIS 2011). She is a member of the editorial board of Evolutionary Computation (MIT Press) and co-chair of the new Artificial Immune Systems track at GECCO 2014. In 2015 she will be co-organising FOGA 2015.


    Generative and Developmental Systems

    In evolutionary computation it is common for the genotype to map directly to the phenotype such that each gene in the genotype represents a single discrete component of the phenotype. While this convention is widespread, in nature the mapping is not direct. Instead, the genotype maps to the phenotype through a process of development, which means that each gene may be activated multiple times and in multiple places in the process of constructing the phenotype.

    This tutorial will examine why such indirect mapping may be critical to the continuing success of evolutionary computation. Rather than just an artifact of nature, indirect mapping means that vast phenotypic spaces (e.g., the 100-trillion-connection human brain) can be searched effectively in a space of far fewer genes (e.g., the 30,000-gene human genome). The tutorial will introduce this research area, called Generative and Developmental Systems (GDS), by surveying milestones in the field and introducing its most profound puzzles. Most importantly, what is the right abstraction of natural development to capture its essential advantages without introducing unnecessary overhead into the search?
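    A toy example of an indirect, developmental genotype-phenotype mapping (purely hypothetical, not drawn from any particular GDS system): a genotype consisting of just two rewrite rules is applied repeatedly, so each "gene" is activated many times during development and the resulting phenotype is far larger than the genotype that encodes it:

```python
def develop(genotype, steps=6):
    """Toy developmental mapping: a tiny genotype (a dict of rewrite
    rules) is applied repeatedly to a seed symbol, so each gene shapes
    many parts of a phenotype far larger than the genotype itself."""
    phenotype = "A"
    for _ in range(steps):
        # Every symbol is rewritten at every step, so each rule (gene)
        # is activated many times and in many places.
        phenotype = "".join(genotype.get(ch, ch) for ch in phenotype)
    return phenotype

# Two rules (an L-system-style genotype) generate a 2^6-symbol phenotype.
genotype = {"A": "AB", "B": "BA"}
body = develop(genotype)
```

The phenotype doubles in size at every developmental step, so a constant-size genotype specifies phenotypes of unbounded size, which is the compression argument made above.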

    Kenneth Stanley
    He is an associate professor in the Department of Electrical Engineering and Computer Science at the University of Central Florida. He received a B.S.E. from the University of Pennsylvania in 1997 and received a Ph.D. in 2004 from the University of Texas at Austin. He is an inventor of the Neuroevolution of Augmenting Topologies (NEAT), HyperNEAT, and novelty search algorithms for evolving complex artificial neural networks. His main research contributions are in neuroevolution (i.e. evolving neural networks), generative and developmental systems (GDS), coevolution, machine learning for video games, and interactive evolution. He has won best paper awards for his work on NEAT, NERO, NEAT Drummer, FSMC, HyperNEAT, novelty search, and Galactic Arms Race. He is an associate editor of IEEE Transactions on Computational Intelligence and AI in Games, on the editorial board of Evolutionary Computation journal, on the ACM SIGEVO Executive Committee, and a co-founder and former track co-chair of the GDS track at GECCO.  He is also a co-founder and the editor-in-chief of


    Evolutionary Image Analysis and Signal Processing [new tutorial]

    The intertwining disciplines of image analysis, signal processing and pattern recognition are major fields of computer science and engineering, with past and on-going research covering a full range of topics and tasks, from basic research to a huge number of real-world industrial applications.

    Among the techniques studied and applied within these research fields, evolutionary computation (EC) and swarm intelligence are playing an increasingly relevant role. The terms Evolutionary Image Analysis and Signal Processing and Evolutionary Computer Vision are more and more commonly accepted as descriptors of a clearly defined research area and family of techniques and applications. This trend has also been favored by the recent availability of environments for general-purpose programming of GPUs, whose architecture and computation paradigm fit EC algorithms extremely well, alleviating the intrinsically heavy computational burden imposed by such techniques and even allowing for real-time applications.

    The tutorial will introduce the general framework within which Evolutionary Image Analysis, Signal Processing and Pattern Recognition can be studied and applied, sketching a schematic taxonomy of the field and providing examples of successful real-world applications.

    Stefano Cagnoni
    He graduated in Electronic Engineering at the University of Florence, Italy, where he has been a PhD student and a post-doc until 1997. In 1994 he was a visiting scientist at the Whitaker College Biomedical Imaging and Computation Laboratory at the Massachusetts Institute of Technology. Since 1997 he has been with the University of Parma, where he has been Associate Professor since 2004.

    Recent research grants include: co-management of a project funded by the Italian Railway Network Society (RFI) aimed at developing an automatic inspection system for train pantographs; a "Marie Curie Initial Training Network" grant for a four-year research training project in medical imaging using bio-inspired and soft computing; and a grant from "Compagnia di S. Paolo" on "Bioinformatic and experimental dissection of the signalling pathways underlying dendritic spine function".

    He was Editor-in-chief of the "Journal of Artificial Evolution and Applications" from 2007 to 2010. Since 1999, he has been chair of EvoIASP, an event dedicated to evolutionary computation for image analysis and signal processing, now a track of the EvoApplications conference. Since 2005, he has co-chaired MedGEC, the workshop on medical applications of evolutionary computation at GECCO. He has co-edited special issues of journals dedicated to Evolutionary Computation for Image Analysis and Signal Processing, and is a member of the editorial boards of the journals "Evolutionary Computation" and "Genetic Programming and Evolvable Machines".

    He has been awarded the "Evostar 2009 Award", in recognition of the most outstanding contribution to Evolutionary Computation.


    Decomposition and Cooperative Coevolution Techniques for Large Scale Global Optimization [new tutorial]

    Many real-world optimization problems involve a large number of decision variables. For example, in shape optimization a large number of shape design variables are often used to represent complex shapes, such as turbine blades, aircraft wings, and heat exchangers. However, existing optimization methods are ill-equipped to deal with this sort of large-scale global optimization (LSGO) problem. A natural approach to tackling LSGO problems is to adopt a divide-and-conquer strategy. A good example is the early work on a cooperative coevolutionary (CC) algorithm by Potter and De Jong (1994), where a problem is decomposed into several subcomponents of smaller sizes, and each subcomponent is then “cooperatively coevolved” with the others.

    In this tutorial we will provide an overview of the recent development of CC algorithms for LSGO problems, in particular those extended from the original Potter and De Jong CC model. One key challenge in applying CC is how best to decompose a problem such that the inter-dependency between subcomponents is kept to a minimum. Another challenge is how best to allocate a fixed computational budget among different subcomponents when their contributions are imbalanced. Equally dividing the budget among the subcomponents and optimizing each in a round-robin fashion (as in the classic CC method) may not be a wise strategy, since it can waste a lot of computational resources. Many more research questions remain to be answered. In recent years, several interesting decomposition methods (or variable grouping methods) have been proposed. This tutorial will survey these methods and identify their strengths and weaknesses. The tutorial will also describe a contribution-based method for better allocating computation among the subcomponents. Finally we will present a newly designed variable grouping method, namely differential grouping, which outperforms the earlier surveyed decomposition methods. We will provide experimental results on the CEC’2010 LSGO benchmark functions to demonstrate the effectiveness of this method.
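    The classic round-robin CC scheme described above can be sketched in a few lines. The group size, mutation scale, and greedy acceptance rule below are illustrative simplifications chosen for this sketch, not the settings of any particular published method:

```python
import random

def cc_minimize(f, dim, group_size, iters=200, seed=3):
    """Cooperative-coevolution sketch: split the decision variables into
    subcomponents and improve each in a round-robin fashion with a simple
    (1+1)-style mutation, always evaluating a trial against the shared
    context vector (the current best full solution)."""
    rng = random.Random(seed)
    context = [rng.uniform(-5, 5) for _ in range(dim)]
    groups = [list(range(i, min(i + group_size, dim)))
              for i in range(0, dim, group_size)]
    best = f(context)
    for _ in range(iters):
        for g in groups:                      # round-robin over subcomponents
            trial = context[:]
            for j in g:                       # mutate only this group's genes
                trial[j] += rng.gauss(0.0, 0.3)
            ft = f(trial)
            if ft <= best:                    # greedy acceptance
                context, best = trial, ft
    return context, best

sol, val = cc_minimize(lambda v: sum(t * t for t in v), dim=8, group_size=2)
```

On a fully separable function like this sphere, the groups are independent and the scheme works well; the decomposition and budget-allocation challenges discussed above arise precisely when variables in different groups interact.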

    Xiaodong Li
    He received his B.Sc. degree from Xidian University, Xi'an, China, and his Ph.D. degree in information science from the University of Otago, Dunedin, New Zealand. Currently, he is an Associate Professor at the School of Computer Science and Information Technology, RMIT University, Melbourne, Australia. His research interests include evolutionary computation, machine learning, complex systems, multiobjective optimization, and swarm intelligence. He serves as an Associate Editor of the IEEE Transactions on Evolutionary Computation and the International Journal of Swarm Intelligence Research. He is a founding member and currently a Vice-chair of the IEEE CIS Task Force on Swarm Intelligence, and currently Chair of the IEEE CIS Task Force on Large Scale Global Optimization. He was the General Chair of SEAL'08, a Program Co-Chair of AI'09, and a Program Co-Chair of IEEE CEC’2012. He is the recipient of the 2013 SIGEVO Impact Award.


    Evolutionary Search Algorithms for Protein Modeling: From De-novo Structure Prediction to Comprehensive Maps of Functionally-relevant Structures of Protein Chains and Assemblies [new tutorial]

    In the last two decades, great progress has been made in molecular modelling through computational treatments of biological molecules grounded in evolutionary search techniques. Evolutionary search algorithms (EAs) are gaining popularity beyond exploring the relationship between sequence and function in biomolecules. In particular, recent work is showing the promise of EAs in exploring the structure spaces of protein chains and protein assemblies to address long-standing open problems in computational structural biology, such as de novo structure prediction and protein-protein docking.

    Exploring effective interleaving of global and local search has led to hybrid EAs that are now competitive with the Monte Carlo-based frameworks that have traditionally dominated de novo structure prediction. Similar advances have been made in protein-protein docking. Deeper understanding of the constraints posed by highly-coupled modular systems like proteins and integration of domain knowledge has resulted in effective reproductive operators. Multi-objective optimization has also shown promise in dealing with the conflicting terms that make up protein energy functions and effectively exploring protein energy surfaces. Combinations of these techniques have recently resulted in powerful stochastic search frameworks that go beyond de-novo structure prediction and are capable of yielding comprehensive maps of possible diverse functionally-relevant structures of proteins.

    The objective of this tutorial is to introduce the EC community to the rapid developments on EA-based frameworks for protein structure modeling through a concise but comprehensive review of developments in this direction over the last decade. The review will be accompanied with specific detailed highlights and interactive software demonstrations of representative methods. The tutorial will expand the view of EA-based frameworks beyond sequence-focused application settings. The tutorial will introduce EC researchers to open problems in computational structural biology and in the process spur the design of novel and powerful evolutionary search techniques.

    Kenneth De Jong
    He is a senior and well-known researcher in the EC community with a rich and diverse research profile. De Jong's research interests include genetic algorithms, evolutionary computation, machine learning, and adaptive systems. He is currently involved in research projects involving the development of new evolutionary algorithm (EA) theory, the use of EAs as heuristics for NP-hard problems, and the application of EAs to the problem of learning task programs in domains such as robotics, diagnostics, navigation and game playing. Support for these projects is provided by NSF, DARPA, ONR, and NRL. He is an active member of the Evolutionary Computation research community and has been involved in organizing many of the workshops and conferences in this area. He is the founding editor-in-chief of the journal Evolutionary Computation (MIT Press), and a member of the board of ACM SIGEVO.


    Amarda Shehu
    She is an energetic and prolific young researcher in computational structural biology. Shehu's research contributions are in computational structural biology, biophysics, and bioinformatics, with a focus on the relationship between sequence, structure, dynamics, and function in biological molecules. Shehu has unique expertise in the tight coupling of robotics-inspired and evolutionary probabilistic search and optimization with computational protein biophysics, and has made significant contributions to modeling native structures, equilibrium fluctuations and conformational ensembles, loop motions, large-scale motions connecting diverse functional states, and the assembly of proteins and peptides. This research is supported by the NSF, the Jeffress Trust Program in Interdisciplinary Sciences, and the Virginia Youth Tobacco Award. Shehu is also the recipient of an NSF CAREER award for a unifying framework for modeling protein molecules in the presence of constraints. She is an active member of the ACM and IEEE bioinformatics and computational biology communities and has been involved in organizing workshops, tutorials, and conferences in these communities.


    Evolutionary Bilevel Optimization (EBO)

    Many practical problem-solving tasks involve multiple hierarchical search processes, requiring one search and optimization task to solve another, lower-level search and optimization problem in a nested manner. In their simplest form, these problems are known as "bilevel optimization" problems, in which there are two levels of optimization tasks. A solution at the upper level is feasible only if the corresponding lower-level variable vector is optimal for the lower-level optimization problem. Problems in economics and business involving company CEOs and department heads, or governmental decision makers and NGOs, are standard bilevel optimization problems. In engineering, optimal control problems, structural design problems, transportation problems and other hierarchical problems fall into this category. Consider, for example, an inverted pendulum problem for which the motion of the platform relates to the upper-level optimization task of performing the balancing in a time-optimal manner. For a given motion of the platform, whether the pendulum can be balanced at all becomes a lower-level optimization problem of maximizing the stability margin. Hierarchical games with multi-level controls are better posed as bilevel problems. Such problems are also known as Stackelberg games in the operations research and computer science communities.

    Bilevel problems are too complex to be solved using classical optimization methods, simply due to the "nestedness" of one optimization task within another. Recent studies have shown that evolutionary optimization methodologies provide a distinct niche in solving such problems due to their flexibility, parallel processing, population approach, and their ability to handle constrained search spaces efficiently. In the recent past there has been a surge in research activity towards solving bilevel optimization problems using EAs, and it is an appropriate time to make more EC researchers aware of what bilevel problems are and how efficient EAs can be developed for solving them. GECCO-2014 provides a great platform to share the research challenges and the current state of the art in evolutionary bilevel optimization with EC researchers, so that more researchers become aware of and interested in this emerging and important area.
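    The "nestedness" can be made explicit with a brute-force sketch: every upper-level candidate must first be paired with an optimal lower-level response before it can even be evaluated, so each upper-level evaluation hides a full lower-level solve. The toy objective functions and the grid-search minimizer here are illustrative only:

```python
def argmin_scalar(g, lo, hi, steps=400):
    """Brute-force scalar minimizer on a uniform grid (sketch only)."""
    best_x, best_v = lo, g(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        v = g(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

def solve_bilevel(F, f, lo, hi):
    """Nested bilevel sketch: the upper level minimizes F(x, y*(x)),
    where y*(x) is the optimal response of the lower-level problem
    min_y f(x, y). One lower-level solve per upper-level evaluation
    is exactly what makes bilevel problems so expensive."""
    def upper_value(x):
        y_star = argmin_scalar(lambda y: f(x, y), lo, hi)
        return F(x, y_star)
    x_star = argmin_scalar(upper_value, lo, hi)
    y_star = argmin_scalar(lambda y: f(x_star, y), lo, hi)
    return x_star, y_star

# Toy instance: the lower level forces y ~ x, the upper level then
# minimizes x^2 + y^2, so the bilevel optimum sits at the origin.
x_opt, y_opt = solve_bilevel(lambda x, y: x * x + y * y,
                             lambda x, y: (y - x) ** 2, -2.0, 2.0)
```

Evolutionary approaches replace both brute-force loops with populations and, crucially, try to avoid solving the lower level from scratch for every upper-level candidate.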

    In this tutorial, we shall introduce the principles of bilevel optimization for single and multiple objectives and discuss the difficulties in solving such problems in general. After a brief survey of the existing literature, we shall present a few viable and efficient evolutionary algorithms for both single- and multi-objective bilevel optimization. Our recent studies on bilevel test problems and some application studies will be discussed. Finally, a number of immediate and future research ideas on evolutionary bilevel optimization (EBO) will be highlighted. Movies demonstrating the working principles of EBOs will be shown to make the tutorial easier to follow. More information about Evolutionary Bilevel Optimization can be found on the recently created website

    Kalyanmoy Deb
    He is Koenig Endowed Chair Professor at the Department of Electrical and Computer Engineering at Michigan State University (MSU), East Lansing, USA. Prof. Deb's main research interests are in genetic and evolutionary optimization algorithms and their application in optimization, modeling, and machine learning. He is largely known for his seminal research in developing and applying Evolutionary Multi-Criterion Optimization. Prof. Deb was awarded the prestigious `Infosys Prize' in 2012, the `TWAS Prize' in Engineering Sciences in 2012, the `CajAstur Mamdani Prize' in 2011, the `JC Bose National Fellowship' in 2011, the `Distinguished Alumni Award' from IIT Kharagpur in 2011, the `Edgeworth-Pareto' award in 2008, the Shanti Swarup Bhatnagar Prize in Engineering Sciences in 2005, and the `Thomson Citation Laureate Award' from Thomson Reuters. His 2002 IEEE-TEC NSGA-II paper has been judged a Most Highly Cited paper and a Current Classic by Thomson Reuters, with more than 4,200 citations. He is a fellow of IEEE and of the International Society for Genetic and Evolutionary Computation (ISGEC). He has written two textbooks on optimization and more than 350 international journal and conference research papers, with 55,000+ Google Scholar citations and an h-index of 77. He is on the editorial boards of 20 major international journals.

    Ankur Sinha
    He is currently a postdoctoral researcher at the Aalto University School of Business. He completed his PhD at the Aalto University School of Business, Helsinki, Finland, and his Bachelor of Technology at the Indian Institute of Technology Kanpur, India. His research interests are in the areas of bilevel optimization, multi-objective optimization, decision making, and evolutionary algorithms.


    Introduction to Evolutionary Game Theory

    Evolutionary game theory studies the dynamical properties of populations of agents that play a strategic game in pairs or within small groups. Introduced by biologists in the 1970s, it quickly spread into economics and sociology; today it is a main pillar of the whole edifice of game theory, widely used both in theory and in applications. This tutorial aims to present evolutionary game theory in an easy yet rigorous way and to relate it to other approaches in game theory. The material does not require prior acquaintance with standard game theory: these fundamentals are developed in the first part of the tutorial, which is self-contained. The second part introduces the main concepts of the evolutionary and dynamical approach, namely the evolutionarily stable strategy and the replicator dynamics. The analogies between Nash equilibria, evolutionarily stable strategies, and rest points of the dynamics will be explained. All concepts will be illustrated using simple, well-known games such as the Prisoner's Dilemma, Hawks and Doves, and coordination games. Finally, some recent exciting trends in evolutionary games on networks will be introduced and discussed.
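    The replicator dynamics can be illustrated in a few lines of code. The sketch below (plain Python, with an illustrative Hawk-Dove payoff matrix assuming resource value V = 2 and fight cost C = 4) integrates the replicator equation with simple Euler steps; the population converges to the mixed evolutionarily stable strategy, whose hawk share is V/C.

```python
def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamics x_i' = x_i * ((Ax)_i - x.Ax)."""
    n = len(x)
    fitness = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    avg = sum(x[i] * fitness[i] for i in range(n))
    return [x[i] + dt * x[i] * (fitness[i] - avg) for i in range(n)]

# Hawk-Dove payoffs with resource value V = 2 and fight cost C = 4:
V, C = 2.0, 4.0
A = [[(V - C) / 2, V],      # Hawk vs (Hawk, Dove)
     [0.0,         V / 2]]  # Dove vs (Hawk, Dove)

x = [0.1, 0.9]  # start with 10% hawks, 90% doves
for _ in range(5000):
    x = replicator_step(x, A)
# The dynamics converge to the interior rest point with hawk share V/C = 0.5,
# which is also the game's mixed ESS.
```

    The same loop illustrates the correspondence stressed in the tutorial: the interior rest point of the dynamics coincides with the mixed Nash equilibrium / ESS of the Hawk-Dove game.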

    Marco Tomassini
    Marco is a professor of Computer Science at the Information Systems Department of the University of Lausanne, Switzerland. He received a doctorate in theoretical chemistry from the University of Perugia, Italy, for work on computer simulations of condensed matter systems. His research centers on the application of biological ideas to artificial systems. His current main interests are the modeling of evolutionary games in networks, experimental games, complex networks and systems, and the structure of hard combinatorial search spaces. He is also active in evolutionary computation, especially spatially structured systems, genetic programming, and the structure of program search spaces. He has been Program Chairman of several international events and has published many scientific papers and several authored and edited books in these fields. He received the EvoStar 2010 Award in recognition of outstanding contributions to evolutionary computation.


    Medical Applications of Evolutionary Computation [new tutorial]

    My research has always been centered on the application of computational intelligence to problems in medicine and healthcare, and for the last 10 years has used EC intensively. I have co-chaired an annual workshop on Medical Applications of GEC at GECCO since 2005 and published a book on the subject in 2010.

    My current research is using GP (a special representation of CGP, to be precise) in the diagnosis and monitoring of Parkinson's disease, Alzheimer's disease, and other neurodegenerative conditions. It is also being used in the early detection of breast cancer through automated assessment of mammograms. I currently have multiple clinical studies in progress in the UK (Leeds General Infirmary), USA (UCSF), UAE (Dubai Rashid Hospital), and Australia (Monash Medical Center). The technology is protected through 3 patent applications, and a spinout company marketing 4 medical devices will be launched in the next 3 months.

    The proposed tutorial would be based on the following:

    1) Introduction to medical applications of EC and how these differ from other real-world applications
    2) Overview of past work from both a medical and an EC point of view
    3) Two case examples of medical applications in depth
    4) Practical advice on how to get started working on medical applications, including using existing medical databases, conducting new medical studies, and protecting intellectual property
    5) Summary, further reading and links

    Stephen L. Smith
    He received a BSc in Computer Science and then an MSc and PhD in Electronic Engineering from the University of Kent, UK. He is currently a senior lecturer in the Department of Electronics at the University of York, UK. Steve's main research interests are in developing novel representations for evolutionary algorithms, particularly with application to problems in medicine. His work is currently centered on the diagnosis of neurological dysfunction and the analysis of mammograms. Steve was program chair for the Euromicro Workshop on Medical Informatics, program chair and local organizer for the Sixth International Conference on Information Processing in Cells and Tissues (IPCAT) and guest editor for the subsequent special issue of the BioSystems journal. More recently he was tutorial chair for the IEEE Congress on Evolutionary Computation (CEC) in 2009, local organizer for the International Conference on Evolvable Systems (ICES) in 2010 and co-general chair for the Ninth International Conference on Information Processing in Cells and Tissues (IPCAT) in April 2012. Steve currently holds a Royal Academy of Engineering Enterprise Fellowship.

    Steve is co-founder and organizer of the MedGEC Workshop, which is now in its ninth year. He is also guest editor for a special issue of Genetic Programming and Evolvable Machines (Springer) on medical applications and co-editor of a book on the subject (John Wiley, November 2010).

    Steve is associate editor for the journal Genetic Programming and Evolvable Machines and a member of the editorial board for the International Journal of Computers in Healthcare and Neural Computing and Applications.

    Steve has some 75 refereed publications, is a Chartered Engineer and a fellow of the British Computer Society.


    Automatic (Offline) Configuration of Algorithms

    Most optimization algorithms, including evolutionary algorithms and metaheuristics, and general-purpose solvers for integer or constraint programming, have many parameters that need to be properly configured to obtain the best performance on a particular problem. (Offline) automatic algorithm configuration methods help algorithm users determine the parameter settings that optimize the performance of the algorithm before the algorithm is actually deployed. Moreover, automatic algorithm configuration methods have the potential to lead to a paradigm shift in algorithm design and configuration because they enable algorithm designers to explore much larger design spaces than is possible with traditional trial-and-error and experimental design procedures. Thus, algorithm designers can focus on inventing new algorithmic components, combine them in flexible algorithm frameworks, and let final algorithm design decisions be taken by automatic algorithm configuration techniques for specific application contexts.

    This tutorial will be divided into two parts. The first part will give an overview of the algorithm configuration problem, review recent methods for automatic algorithm configuration, and illustrate the potential of these techniques using recent, notable applications from the presenters' and other researchers' work. The second part of the tutorial will focus on a detailed discussion of more complex scenarios, including multi-objective problems, anytime algorithms, heterogeneous problem instances, and the automatic generation of algorithms from algorithm frameworks. The focus is, hence, on practical but challenging applications of automatic algorithm configuration. In this second part of the tutorial we demonstrate how to tackle these configuration tasks using our irace software, which implements an iterated racing procedure for automatic algorithm configuration. We will provide a practical step-by-step guide on using irace for the typical algorithm configuration scenario.
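    To give a flavor of the racing idea behind irace, the sketch below is a deliberately simplified toy in Python, not the irace implementation: every surviving candidate configuration is run on each new problem instance (a blocked design), and candidates whose mean cost is clearly worse than the current best are eliminated. The target algorithm, its best parameter value (0.3), and the elimination threshold are all hypothetical; irace uses proper statistical tests and samples new candidates between races.

```python
import random
import statistics

def run_algorithm(param, instance_seed):
    """Stand-in for one run of a target algorithm: returns a cost that is
    best for param near 0.3, plus instance-dependent noise (same noise for
    every candidate on the same instance, i.e. a blocked design)."""
    rng = random.Random(instance_seed)
    return (param - 0.3) ** 2 + rng.gauss(0, 0.05)

def race(candidates, n_instances=40, min_survivors=1):
    """Minimal racing loop over a stream of instances."""
    costs = {c: [] for c in candidates}
    survivors = list(candidates)
    for instance in range(n_instances):
        for c in survivors:
            costs[c].append(run_algorithm(c, instance))
        # Start eliminating only after a few instances have been seen
        if instance >= 4 and len(survivors) > min_survivors:
            means = {c: statistics.mean(costs[c]) for c in survivors}
            best = min(means.values())
            # Crude fixed-margin elimination rule (irace uses statistical tests)
            survivors = [c for c in survivors if means[c] <= best + 0.02]
    return min(survivors, key=lambda c: statistics.mean(costs[c]))

best = race([0.05, 0.2, 0.3, 0.5, 0.9])
# → 0.3: poor configurations are dropped early, so most of the run budget
#   is spent discriminating among the few good ones.
```

    The payoff of racing is visible even in this toy: clearly inferior configurations stop consuming instance evaluations after a handful of runs, which is what makes the approach affordable for expensive target algorithms.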

    Manuel López-Ibáñez
    He is a postdoctoral researcher (Chargé de recherche) of the Belgian F.R.S.-FNRS working at the IRIDIA laboratory of the Université libre de Bruxelles (ULB), Belgium. He received the M.S. degree in computer science from the University of Granada, Granada, Spain, in 2004, and the Ph.D. degree from Edinburgh Napier University, U.K., in 2009. He has published journal papers in diverse areas such as evolutionary algorithms, ant colony optimization, multi-objective optimization, pump scheduling, and various combinatorial optimization problems. His current research interests are algorithm engineering, experimental analysis, and automatic configuration of stochastic optimization algorithms, for single- and multi-objective optimization. He is the lead developer and current maintainer of the irace software package.


    Thomas Stützle
    He is a senior research associate of the Belgian F.R.S.-FNRS working at the IRIDIA laboratory of the Université libre de Bruxelles (ULB), Belgium. He received the Diplom (German equivalent of the M.S. degree) in business engineering from the Universität Karlsruhe (TH), Karlsruhe, Germany in 1994, and his PhD and habilitation in computer science, both from the Computer Science Department of Technische Universität Darmstadt, Germany, in 1998 and 2004, respectively. He is co-author of the books “Stochastic Local Search: Foundations and Applications” and “Ant Colony Optimization” and has published extensively in the wider area of metaheuristics, including 19 edited proceedings or books, 8 journal special issues, and more than 170 journal articles, conference articles, and book chapters, many of which are highly cited (for example, his h-index on Google Scholar is 54). He is associate editor of Computational Intelligence and Swarm Intelligence and on the editorial board of five other journals, including Evolutionary Computation and JAIR. His main research interests are in metaheuristics, swarm intelligence, methodologies for engineering stochastic local search algorithms, multi-objective optimization, and automatic algorithm configuration. He has been interested in automatic algorithm configuration and design methodologies for more than a decade and has contributed to several effective algorithm configuration techniques, such as F-race, Iterated F-race, and ParamILS. His 2002 GECCO paper on “A Racing Algorithm For Configuring Metaheuristics” (joint work with M. Birattari, L. Paquete, and K. Varrentrapp) received the 2012 SIGEVO impact award.


    Self-Assembly [new tutorial]

    Self-assembly, the autonomous organization of a set of building blocks in an environment into a desired entity, is a fundamental process in nature and is viewed as an enabling technology for the creation of artificial systems. Understanding this bottom-up construction process is an emerging, interdisciplinary research subject impacting a wide variety of fields including nanotechnology, material science, robotics, natural computing, artificial life, and evolutionary computing. The aim of this tutorial is to give a broad introduction to self-assembly and present recent research that is relevant to the evolutionary computing community.

    This introductory tutorial on self-assembly is targeted towards a general audience. The tutorial will start by discussing why self-assembly is worth investigating in the first place, and what the main challenges are in creating self-assembling systems. Next, we will present in detail the original, artificial, physical self-assembling system created by Lionel S. Penrose and Roger Penrose. This historical example will facilitate a discussion of the broad area of self-assembly research, including DNA nanotechnology and computing, and modular robotics. Following this background material, we will present current research in self-assembly that uses 3D printing to fabricate the building blocks and environments of self-assembling systems. We will also discuss current research that uses evolutionary computing to design self-assembling systems. The tutorial will conclude by discussing future work, such as unsolved challenges in self-assembly, new materials and fabrication techniques, and further applications of evolutionary computing.

    Interactive activities will be conducted during the tutorial to further explore the self-assembly principles and self-assembling systems presented. A reverse engineered replica of the original Penrose system and 3D printed building blocks and environments will be passed amongst the attendees for a hands-on opportunity with self-assembling systems. These interactive activities, along with the material covered in the tutorial, will be used to demonstrate how to design and fabricate something that puts itself together by means of self-assembly.

    Navneet Bhalla
    Dr. Navneet Bhalla has been conducting research in self-assembly for 10 years. His specialties include using evolutionary computing to design, and 3D printing to fabricate, self-assembling systems. His research interests include synthetic development: applying the principles of biological development to create ever more complex self-assembling systems. Dr. Bhalla is currently a Postdoctoral Fellow at Cornell University. He graduated with a B.Sc. in Computer Science from the University of Ottawa, an M.Sc. in Intelligent Systems from University College London, and a Ph.D. in Computer Science from the University of Calgary.

    Peter J. Bentley
    Dr. Peter J. Bentley is an Honorary Reader at the Department of Computer Science, University College London (UCL). Peter runs the Digital Biology Interest Group at UCL. His research investigates evolutionary algorithms, computational development, artificial immune systems, swarming systems and other complex systems, applied to diverse applications including design, control, novel robotics, nanotechnology, fraud detection, mobile wireless devices, security, art and music composition. He has published over 200 scientific papers and is editor of the books "Evolutionary Design by Computers", "Creative Evolutionary Systems" and "On Growth, Form and Computers", and author of "The PhD Application Handbook" and the popular science books "Digital Biology", "The Book of Numbers", "The Undercover Scientist" and "Digitized."


    Marco Dorigo
    Prof. Marco Dorigo is the inventor of the ant colony optimization metaheuristic. His research interests include swarm intelligence, swarm robotics, and metaheuristics for discrete optimization. He is the Editor-in-Chief of Springer's journal "Swarm Intelligence". He is a Fellow of the IEEE and of the ECCAI, and was awarded the Marie Curie Excellence Award in 2003, the Dr. A. De Leeuw-Damry-Bourlart award in applied sciences in 2005, the CajAstur "Mamdani" International Prize for Soft Computing in 2007, and an ERC Advanced Grant in 2010.

    Prof. Marco Dorigo received his PhD in electronic engineering in 1992 from Politecnico di Milano, Milan, Italy. From 1992 to 1993, he was a research fellow at the International Computer Science Institute, Berkeley, CA. In 1993, he was a NATO-CNR Fellow, and from 1994 to 1996, a Marie Curie Fellow. Since 1996, he has been a tenured researcher of the FNRS, the Belgian National Funds for Scientific Research, and a research director of IRIDIA, the artificial intelligence lab of the Université Libre de Bruxelles. Since 2011 he has also been a full professor of Computer Science and Swarm Intelligence at the University of Paderborn, Germany.



