Introductory Tutorials

Genetic Programming
more info

- Una-May O’Reilly (MIT, USA)

Evolutionary Computation: A Unified Approach
more info

- Kenneth De Jong (George Mason University, USA)

Evolutionary Multiobjective Optimization
more info

- Dimo Brockhoff (INRIA Lille - Nord Europe, France)
- Tobias Wagner (Technische Universität Dortmund)

Particle Swarm Optimization
more info

- Andries Engelbrecht (University Of Pretoria, South Africa)

Model-Based Evolutionary Algorithms
more info

- Dirk Thierens (Utrecht University, The Netherlands)
- Peter Bosman (Centre for Mathematics and Computer Science, The Netherlands)

Runtime Analysis of Evolutionary Algorithms: Basic Introduction
more info

- Per Kristian Lehre (University of Nottingham, UK)
- Pietro S. Oliveto (University of Sheffield, UK)

Evolving Neural Networks
more info

- Risto Miikkulainen (University of Texas at Austin, USA)

Complex Networks
more info

- Marco Tomassini (University of Lausanne, Switzerland)

Cartesian Genetic Programming
more info

- Julian Miller (University of York)
- Andrew Turner (University of York)

Hyper-heuristics
more info

- John R. Woodward (University of Stirling)
- Daniel Tauritz (Missouri University of Science and Technology)

Evolutionary Robotics
more info

- Nicolas Bredeche (Université Pierre et Marie Curie)
- Stéphane Doncieux (Université Pierre et Marie Curie)
- Jean-Baptiste Mouret (Institute for Intelligent Systems and Robotics -UPMC)

Introducing Rule-based Machine Learning - A Practical Guide
more info

- Ryan Urbanowicz (University of Pennsylvania, USA)
- Will Browne (Victoria University of Wellington)

Multimodal Optimization
more info

- Mike Preuss (University of Münster, Germany)

Continuous Optimization and CMA-ES
more info

- Nikolaus Hansen (Inria, France)
- Anne Auger (Inria, France)
- Youhei Akimoto (Shinshu University, Japan)

Representations for Evolutionary Algorithms
more info

- Franz Rothlauf


Constraint-Handling Techniques used with Evolutionary Algorithms
more info

- Carlos Coello-Coello (CINVESTAV-IPN, Mexico)

Blind No More: Constant Time Non-Random Improving Moves and Exponentially Powerful Recombination
more info

- Darrell Whitley (Colorado State University, USA)

Expressive Genetic Programming
more info

- Lee Spector (Hampshire College, USA)

Parameterized Complexity Analysis of Evolutionary Algorithms
more info

- Frank Neumann (University of Adelaide, Australia)
- Andrew Sutton (University of Adelaide, Australia)

Theory of Swarm Intelligence
more info

- Dirk Sudholt (University of Sheffield, UK)

Evolutionary Image Analysis and Signal Processing
more info

- Mengjie Zhang (Victoria University of Wellington)
- Stefano Cagnoni (University of Parma)

Generative and Developmental Systems
more info

- Kenneth Stanley (University of Central Florida)

Evolutionary Algorithms for Protein Structure Modeling
more info

- Amarda Shehu (George Mason University, Fairfax, VA)
- Kenneth De Jong (George Mason University, Fairfax, VA)

Solving complex problems with coevolutionary algorithms
more info

- Malcolm Heywood (Dalhousie University)
- Krzysztof Krawiec (Poznan University of Technology)

Theory of Evolution Strategies and Related Algorithms
more info

- Anne Auger (Inria Saclay-Ile-de-France )

Gene Regulatory Network
more info

- Sylvain Cussat-Blanc (University of Toulouse, France)
- Wolfgang Banzhaf (Memorial University of Newfoundland, Canada)

Semantic Genetic Programming
more info

- Krzysztof Krawiec (Poznan University of Technology, Poznań, Poland)
- Alberto Moraglio (University of Exeter, UK)

Evolutionary Computation for Dynamic Optimization Problems
more info

- Shengxiang Yang (De Montfort University, UK)



Medical Applications of Evolutionary Computation
more info

- Stephen Smith (University of York, UK)

Automatic (Offline) Configuration of Algorithms
more info

- Manuel López-Ibáñez (IRIDIA laboratory, ULB, Belgium)
- Thomas Stützle (IRIDIA laboratory, ULB, Belgium)

Low or no cost distributed evolutionary computation
more info

- JJ Merelo (CITIC U. Granada)

Intelligent Systems for Smart Cities
more info

- Enrique Alba (University of Málaga, Spain)

Computational Intelligence in Games
more info

- Pier Luca Lanzi (Politecnico di Milano)

Synergies between Evolutionary Algorithms and Reinforcement Learning
more info

- Madalina M. Drugan (Vrije Universiteit Brussel, Belgium)


Introductory Tutorials


Genetic Programming

Genetic programming emerged in the early 1990s as one of the most exciting new evolutionary algorithm paradigms, and it has rapidly grown into a thriving area of research and application. While it shares the evolution-inspired algorithmic principles of the genetic algorithm, it differs by exploiting an executable genome: genetic programming evolves a 'program' that solves a problem rather than a single fixed solution. This tutorial introduces the basic genetic programming framework. It explains how the power of genetic programming derives from its modular algorithmic components: executable representations such as the abstract syntax tree, variation operators that preserve syntax and explore a variable-length, hierarchical solution space, an appropriately chosen set of programming functions, and a well-specified fitness function.
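The loop described above (executable tree representation, syntax-preserving variation, fitness evaluation) can be sketched in a few dozen lines. The function set, target function, and parameter values below are illustrative assumptions, not part of the tutorial:

```python
import random

# Illustrative function and terminal sets.
FUNCS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
         '*': lambda a, b: a * b}
TERMS = ['x', 1.0]

def random_tree(depth=3):
    """Grow a random expression tree as nested (op, left, right) tuples."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    op = random.choice(list(FUNCS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Execute the genome: interpret the tree for input value x."""
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return FUNCS[op](evaluate(left, x), evaluate(right, x))

def error(tree):
    """Fitness: squared error against the target f(x) = x*x + x."""
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree, depth=2):
    """Syntax-preserving variation: replace a random subtree."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth), right)
    return (op, left, mutate(right, depth))

def run_gp(pop_size=20, generations=10, seed=1):
    random.seed(seed)
    pop = [random_tree() for _ in range(pop_size)]
    best = min(pop, key=error)
    for _ in range(generations):
        children = [best]                                   # elitism
        while len(children) < pop_size:
            parent = min(random.sample(pop, 3), key=error)  # tournament
            children.append(mutate(parent))
        pop = children
        best = min(pop, key=error)
    return best
```

Real GP systems add subtree crossover, size limits, and richer function sets; the sketch keeps only mutation for brevity.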

Una-May O'Reilly
is leader of the AnyScale Learning For All (ALFA) group at CSAIL. ALFA focuses on scalable machine learning, evolutionary algorithms, and frameworks for large-scale knowledge mining, prediction, and analytics. The group has projects in clinical medicine knowledge discovery (arterial blood pressure forecasting and pattern recognition, diuretics in the ICU), wind energy (turbine layout optimization, resource prediction, cable layout), and MOOC technology (MoocDB, student persistence and resource usage analysis).
Her research is in the design of scalable Artificial Intelligence systems that execute on a range of hardware systems: GPUs, workstations, grids, clusters, clouds and volunteer compute networks. These systems include machine learning components such as evolutionary algorithms (e.g. genetic programming, genetic algorithms and learning classifiers), classification, non-linear regression, and forecasting algorithms. They span the interpretation and analysis of raw data, through inference on conditioned exemplar data, to the deployment and evaluation of learned “algorithmic machines” in the original application context.

Una-May received the EvoStar Award for Outstanding Achievements in Evolutionary Computation in Europe in 2013. She is a Junior Fellow (elected before age 40) of the International Society of Genetic and Evolutionary Computation, now ACM SIGEVO, and currently serves as Vice-Chair of ACM SIGEVO. She served as chair of the largest international Evolutionary Computation conference, GECCO, in 2005. She has served on the GECCO business committee, co-led the 2006 and 2009 Genetic Programming: Theory to Practice Workshops, and co-chaired EuroGP, the largest conference devoted to Genetic Programming. In 2013 she inaugurated the Women in Evolutionary Computation group at GECCO. She is the area editor for Data Analytics and Knowledge Discovery for Genetic Programming and Evolvable Machines (Kluwer), an editor for Evolutionary Computation (MIT Press), and an action editor for the Journal of Machine Learning Research.

Una-May has a patent for an original genetic algorithm technique applicable to internet-based name suggestions. She holds a B.Sc. from the University of Calgary, and a M.C.S. and Ph.D. (1995) from Carleton University, Ottawa, Canada. She joined the Artificial Intelligence Laboratory, MIT as a Post-Doctoral Associate in 1996.

Evolutionary Computation: A Unified Approach

The field of Evolutionary Computation has experienced tremendous growth over the past 20 years, resulting in a wide variety of evolutionary algorithms and applications. This variety poses an interesting dilemma for many practitioners: with such a wide range of algorithms and approaches, it is often hard to see the relationships between them, assess their strengths and weaknesses, and make good choices for new application areas. This tutorial is intended to give an overview of a general EC framework that can help compare and contrast approaches, encourage crossbreeding, and facilitate intelligent design choices. The use of this framework is then illustrated by showing how traditional EAs can be compared and contrasted with it, and how new EAs can be effectively designed using it.

Finally, the framework is used to identify some important open issues that need further research.

Kenneth A. De Jong
He received his Ph.D. in computer science from the University of Michigan in 1975. He joined George Mason University in 1984 and is currently a University Professor, a Professor of Computer Science, head of the Evolutionary Computation Laboratory, and Associate Director of the Krasnow Institute. His research interests include genetic algorithms, evolutionary computation, machine learning, and adaptive systems. He is currently involved in research projects involving the development of new evolutionary algorithm (EA) theory, the use of EAs as heuristics for NP-hard problems, and the application of EAs to the problem of learning task programs in domains such as robotics, navigation and game playing. He is an active member of the Evolutionary Computation research community and has been involved in organizing many of the workshops and conferences in this area. He is the founding editor-in-chief of the journal Evolutionary Computation (MIT Press), and a member of the board of ACM SIGEVO. He is the recipient of an IEEE Pioneer award in the field of Evolutionary Computation and a lifetime achievement award from the Evolutionary Programming Society.


Evolutionary Multiobjective Optimization

Many optimization problems are multiobjective in the sense that multiple, conflicting criteria need to be optimized simultaneously. Due to the conflict between objectives, usually no single optimal solution exists. Instead, the optimum corresponds to a set of so-called Pareto-optimal solutions, for which no other solution has better function values in all objectives.
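The Pareto-dominance relation just described is easy to state in code. This minimal sketch (minimization, with made-up example vectors) is an illustration of the concept, not any particular EMO algorithm:

```python
def dominates(a, b):
    """True iff objective vector a Pareto-dominates b (minimization):
    no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, among the bi-objective vectors (1, 5), (2, 2), (5, 1), (3, 3), (4, 4), the first three are mutually non-dominated and form the Pareto front, while (3, 3) and (4, 4) are dominated by (2, 2).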

Evolutionary Multiobjective Optimization (EMO) algorithms are widely used in practice for solving multiobjective optimization problems, for several reasons. As randomized blackbox algorithms, EMO approaches can tackle problems with nonlinear, nondifferentiable, or noisy objective functions. As set-based algorithms, they can compute or approximate the full set of Pareto-optimal solutions in a single run, as opposed to the classical solution-based techniques of the multicriteria decision making (MCDM) field. Using EMO approaches in practice has two further advantages: they make it possible to learn about a problem formulation, for example by automatically revealing common design principles among (Pareto-optimal) solutions (innovization), and it has been shown that certain single-objective problems become easier to solve with randomized search heuristics if the problem is reformulated as a multiobjective one (multiobjectivization).

This tutorial aims at giving a broad introduction to the EMO field and at presenting some of its recent research results in more detail. More specifically, we are going to (i) introduce the basic principles of EMO algorithms in comparison to classical solution-based approaches, (ii) show a few practical examples which motivate the use of EMO in terms of the mentioned innovization and multiobjectivization principles, and (iii) present a general overview of state-of-the-art algorithms and techniques. Moreover, we will present some of the most important research results in areas such as indicator-based EMO, preference articulation, surrogate-assisted EMO, and performance assessment.

Though classified as introductory, this tutorial is intended for both novices and regular users of EMO. Those without any knowledge will learn about the foundations of multiobjective optimization and the basic working principles of state-of-the-art EMO algorithms. Open questions, presented throughout the tutorial, can serve for all participants as a starting point for future research and/or discussions during the conference.

Dimo Brockhoff
After obtaining his diploma in computer science (Dipl.-Inform.) from University of Dortmund, Germany in 2005, Dimo Brockhoff received his PhD (Dr. sc. ETH) from ETH Zurich, Switzerland in 2009. Between June 2009 and October 2011 he held postdoctoral research positions, first at INRIA Saclay Ile-de-France in Orsay and then at Ecole Polytechnique in Palaiseau, both in France. Since November 2011 he has been a junior researcher (now CR1) at INRIA Lille - Nord Europe in Villeneuve d'Ascq, France.

His most recent research interests are focused on evolutionary multiobjective optimization (EMO) and other (single-objective) blackbox optimization techniques, in particular with respect to benchmarking, theoretical aspects, and expensive optimization.

Tobias Wagner
After obtaining his diploma in computer science (Dipl.-Inform.) from the University of Dortmund, Germany in 2006, Tobias Wagner received his PhD in mechanical engineering (Dr.-Ing.) from the Technische Universität Dortmund, Germany in 2013. Between June 2006 and September 2013 he held a scientific assistant position at the Institute of Machining Technology (ISF); since October 2013 he has worked as a non-permanent academic councilor at the ISF. His research is focused on surrogate-assisted single- and multi-objective optimization and sequential design techniques. With regard to EMO, he is particularly interested in the use of performance indicators and preference information within sequential design techniques.


Particle Swarm Optimization (PSO)

This tutorial will provide a short introduction to particle swarm optimization (PSO) and will then focus on a number of aspects of PSO that are misunderstood by many PSO researchers and practitioners. The talk will first show that the standard PSO is not even a local minimizer, and explain why this is the case. It will then show how the standard PSO has been adapted to produce PSO algorithms with proven convergence to local minima, as well as algorithms with proven convergence to global optima. The sensitivity of PSO to the values of its control parameters will be discussed in relation to particle trajectories, together with convergence conditions which, if satisfied, guarantee convergence to an equilibrium state. The frequently published opinion that fully connected neighborhood topologies should not be used, due to early stagnation and bad results, will be shown to be false; the choice of neighborhood topology is in fact strongly problem dependent. Synchronous and asynchronous updates of particle and best positions will be discussed, and it will be shown that a number of opinions in favor of asynchronous updates are also ill-founded, and that synchronous updates actually perform very well. Different approaches to velocity initialization will be compared, showing that random initial velocities are not a good idea. The last part of the tutorial will be devoted to the empirical analysis of PSO, including the incorrect use of fitness function evaluations as a stopping condition, the need to tune control parameters for the algorithms used in empirical analysis, and a formal statistical approach to analyzing performance.
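As a point of reference for the discussion of control parameters and velocity initialization, here is a minimal global-best PSO sketch on the sphere function. The inertia and acceleration values are common textbook settings, and the zero velocity initialization follows the advice above; none of this is prescribed by the tutorial itself:

```python
import random

def pso_sphere(dim=4, swarm=10, iters=50, w=0.72, c1=1.49, c2=1.49, seed=3):
    """Global-best PSO minimizing the sphere function sum(x_i^2)."""
    random.seed(seed)
    f = lambda x: sum(v * v for v in x)
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]     # zero initial velocities
    P = [x[:] for x in X]                       # personal best positions
    g = min(P, key=f)[:]                        # global best position
    init_val = f(g)
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Canonical velocity update: inertia + cognitive + social.
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):               # update personal best
                P[i] = X[i][:]
                if f(P[i]) < f(g):              # update global best
                    g = P[i][:]
    return g, f(g), init_val
```

Because personal and global bests are only replaced by strictly better positions, the returned best value can never exceed the initial best, illustrating the equilibrium-oriented behavior discussed above.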

As a result of this tutorial, researchers and practitioners will gain a better understanding of these very important issues about PSO, and will receive guidelines on the use of PSO, and on empirical analysis of PSO algorithms.

Andries Engelbrecht
He is a Full Professor in Computer Science at the Department of Computer Science, University of Pretoria, and South African Research Chair in AI. He manages a research group of 40 Masters and PhD students, most of whom do research in swarm intelligence. He has recently authored a book, "Fundamentals of Computational Swarm Intelligence", published by Wiley, and is also the author of "Computational Intelligence: An Introduction", also published by Wiley. He presented tutorials on PSO and on coevolutionary methods for evolving game agents at IEEE CEC 2005 and IEEE CIG 2005 respectively, co-presented a tutorial on PSO and DE at IEEE CEC 2007 and on PSO at GECCO 2007, and presented PSO tutorials at ACISS and ACAL 2010, CEC 2009, CEC 2012, CEC 2013, GECCO 2013, and at a number of universities. He has published approximately 200 papers in the last decade, serves as a reviewer for a number of conferences and journals, is an associate editor of IEEE TEC, IEEE TCIAIG, and Swarm Intelligence, and serves on the editorial boards of three other journals. He has served on a large number of conference program committees and is on the organizing committees of several conferences.


Model-Based Evolutionary Algorithms

In model-based evolutionary algorithms the variation operators are guided by a model that conveys as much problem-specific information as possible, so as to best combine the currently available solutions and thereby construct new, improved solutions. Such models can be constructed beforehand for a specific problem, or they can be learnt during the optimization process. Well-known algorithms of the latter type are Estimation-of-Distribution Algorithms (EDAs), in which probabilistic models of promising solutions are built and subsequently sampled to generate new solutions.
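A univariate EDA illustrates the build-model/sample-model loop just described. The sketch below (a UMDA-style algorithm on OneMax, with illustrative parameter values) deliberately ignores the dependency-learning machinery of more powerful model-based EAs:

```python
import random

def umda_onemax(n=30, pop=100, select=50, iters=60, seed=7):
    """Univariate EDA on OneMax: estimate a per-bit probability model
    from the selected solutions, then sample it for the next population."""
    random.seed(seed)
    p = [0.5] * n                                  # initial model: uniform
    best = [0] * n
    for _ in range(iters):
        # Sample a population from the current probabilistic model.
        population = [[1 if random.random() < p[i] else 0 for i in range(n)]
                      for _ in range(pop)]
        population.sort(key=sum, reverse=True)     # OneMax fitness = sum
        if sum(population[0]) > sum(best):
            best = population[0]
        chosen = population[:select]               # truncation selection
        # Re-estimate marginal probabilities, clamped to keep diversity.
        p = [min(0.95, max(0.05, sum(ind[i] for ind in chosen) / select))
             for i in range(n)]
    return best
```

The clamping of the marginals is one simple guard against premature convergence of the model; real EDA implementations use a variety of such mechanisms.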

Although EDAs are the best known example, other approaches also exist, such as the use of statistical models in the Linkage Tree Genetic Algorithm (LTGA). In general, replacing traditional crossover and mutation operators by building and using models enables the use of machine learning techniques for automatic discovery of problem regularities and exploitation of these regularities for effective exploration of the search space. Using machine learning in optimization enables the design of optimization techniques that can automatically adapt to the given problem. This is an especially useful feature when considering optimization in a black-box setting. Successful applications include Ising spin glasses in 2D and 3D, graph partitioning, MAXSAT, feature subset selection, forest management, groundwater remediation design, telecommunication network design, antenna design, and scheduling.

This tutorial will provide an introduction and an overview of major research directions in this area.

Dirk Thierens
He is affiliated with the Department of Information and Computing Sciences at Utrecht University, the Netherlands, where he teaches courses on Evolutionary Computation and on Computational Intelligence. He has been involved in genetic algorithm research since 1990. His current research interests are mainly focused on the design and application of model-learning techniques to improve evolutionary search. Dirk is, or has been, a member of the Editorial Boards of the journals Evolutionary Computation, Evolutionary Intelligence, and IEEE Transactions on Evolutionary Computation, and a member of the program committees of the major international conferences on evolutionary computation. He was an elected member of the ACM SIGEVO board and has contributed to the organization of several GECCO conferences: workshop co-chair (2003, 2004), track (co-)chair (2004, 2006, 2014), and Editor-in-Chief (2007).

Peter A. N. Bosman
He is a senior researcher in the Intelligent Systems research group at the Centrum Wiskunde & Informatica (CWI) (Centre for Mathematics and Computer Science) in Amsterdam, the Netherlands. Peter was formerly affiliated with the Department of Information and Computing Sciences at Utrecht University, where he also obtained his MSc and PhD degrees in Computer Science, more specifically on the design and application of estimation-of-distribution algorithms (EDAs). His current research is mainly focused on fundamental EA research and on applications of EAs (in energy systems and the life sciences). Peter is best known as an active researcher in the area of EDAs since its emergence and has (co-)authored over 60 publications in the field of evolutionary computation. At the GECCO conference, Peter has previously been track (co-)chair (EDA track, 2006, 2009), late-breaking-papers chair (2007), (co-)workshop organizer (OBUPM workshop, 2006; EvoDOP workshop, 2007; GreenGEC workshop, 2012, 2013) and (co-)local chair (2013).


Runtime Analysis of Evolutionary Algorithms: Basic Introduction

Evolutionary algorithm theory has studied the time complexity of evolutionary algorithms for more than 20 years. Different aspects of this rich and diverse research field were presented in four different advanced or specialized tutorials at last year's GECCO. This tutorial presents the foundations of the field. We introduce the most important notions and definitions and consider different evolutionary algorithms on a number of well-known and important example problems. Through a careful and thorough introduction of important analytical tools and methods, including fitness-based partitions, typical events and runs, and drift analysis, by the end of the tutorial the attendees will be able to apply these techniques to derive relevant runtime results for non-trivial evolutionary algorithms. Moreover, the attendees will be fully prepared to follow the more advanced tutorials that cover more specialized aspects of the field, including the new advanced runtime analysis tutorial on realistic population-based EAs. To ensure coverage of the topics required in the specialised tutorials, this introductory tutorial will be coordinated with the presenters of the more advanced ones. In addition to custom-tailored methods for the analysis of evolutionary algorithms, we also introduce the relevant tools and notions from probability theory in an accessible form. This makes the tutorial appropriate for everyone with an interest in the theory of evolutionary algorithms, without prior knowledge of probability theory or the analysis of randomized algorithms. The last two editions of this tutorial, at GECCO 2013 and GECCO 2014, attracted over 50 participants each.

Demos: Making theory concrete and tangible is a difficult challenge. We answer this challenge by including a large number of computer simulations of evolutionary algorithms and other random processes, visualized for a more intuitive understanding.
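In the same spirit, a runtime experiment can be simulated in a few lines. This hypothetical sketch runs the (1+1) EA on OneMax with the standard 1/n bit-flip mutation and returns the number of evaluations until the optimum is sampled, a quantity whose expectation is known to be of order n log n:

```python
import random

def one_plus_one_ea(n=20, seed=11, max_evals=10**6):
    """(1+1) EA on OneMax: flip each bit independently with probability
    1/n and accept the offspring if it is at least as good. Returns the
    number of fitness evaluations used and the final fitness."""
    random.seed(seed)
    x = [random.randint(0, 1) for _ in range(n)]
    fx, evals = sum(x), 1
    while fx < n and evals < max_evals:
        y = [1 - b if random.random() < 1.0 / n else b for b in x]
        fy = sum(y)
        evals += 1
        if fy >= fx:                    # elitist acceptance
            x, fx = y, fy
    return evals, fx
```

Averaging the returned evaluation counts over many seeds and several values of n empirically reproduces the Theta(n log n) behavior that fitness-based partitions prove.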

Per Kristian Lehre
He received MSc and PhD degrees in Computer Science from the Norwegian University of Science and Technology (NTNU) in Trondheim, Norway. He finished the PhD in 2006 under the supervision of Prof Pauline Haddow, and joined the School of Computer Science at the University of Birmingham, UK, as a Research Fellow in January 2007, working with Prof Xin Yao. He was a Post-Doctoral Fellow at DTU Informatics, Technical University of Denmark in Lyngby, Denmark from April 2010, and since September 2011 has been a Lecturer in the School of Computer Science at the University of Nottingham, UK. Dr Lehre's research interests are in theoretical aspects of nature-inspired search heuristics, in particular runtime analysis of population-based evolutionary algorithms. His research has won numerous best paper awards, including at GECCO (2013, 2010, 2009, 2006) and ICSTW (2008). He is vice-chair of the IEEE Task Force on Theoretical Foundations of Bio-inspired Computation and a member of the editorial board of Evolutionary Computation. He has guest-edited special issues of Theoretical Computer Science and IEEE Transactions on Evolutionary Computation on theoretical foundations of evolutionary computation. Dr Lehre has given many tutorials on evolutionary computation at summer schools (UK: 2007; France: 2009, 2010, and 2011; Estonia: 2010), as well as at major conferences and workshops (GECCO 2013 and 2014, CEC 2013, ThRaSH 2013). He is the main coordinator of the 2M-euro EU-funded project SAGE, which brings together the theory of evolutionary computation and population genetics.

Pietro S. Oliveto
He is currently a Vice-Chancellor Fellow at the University of Sheffield, UK, and has recently been awarded an EPSRC Early Career Fellowship, which he will start in March 2015. He received the Laurea degree and PhD degree in computer science respectively from the University of Catania, Italy in 2005 and from the University of Birmingham, UK in 2009. From October 2007 to April 2008 he was a visiting researcher of the Efficient Algorithms and Complexity Theory Institute at the Department of Computer Science of the University of Dortmund, where he collaborated with Prof. Ingo Wegener's research group.
His main research interest is the time complexity analysis of randomized search heuristics for combinatorial optimization problems. He has published several runtime analysis papers on Evolutionary Algorithms (EAs), Artificial Immune Systems (AIS) and Ant Colony Optimization (ACO) algorithms for classical NP-hard combinatorial optimization problems, a review paper on the field of time complexity analysis of EAs for combinatorial optimization problems, and a book chapter containing a tutorial on the runtime analysis of EAs. He has won best paper awards at the GECCO08, ICARIS11 and GECCO14 conferences and received best paper nominations at CEC09 and ECTA11.
Dr. Oliveto has given tutorials on the runtime complexity analysis of EAs at WCCI 2012, CEC 2013, GECCO 2013, WCCI 2014 and GECCO 2014. He is part of the Steering Committee of the annual workshop on Theory of Randomized Search Heuristics (ThRaSH), an IEEE member, and Chair of the IEEE CIS Task Force on Theoretical Foundations of Bio-inspired Computation.


Evolving Neural Networks

Neuroevolution, i.e. evolution of artificial neural networks, has recently emerged as a powerful technique for solving challenging reinforcement learning problems. Compared to traditional (e.g. value-function based) methods, neuroevolution is especially strong in domains where the state of the world is not fully known: The state can be disambiguated through recurrency, and novel situations handled through pattern matching. In this tutorial, I will review (1) neuroevolution methods that evolve fixed-topology networks, network topologies, and network construction processes, (2) ways of combining traditional neural network learning algorithms with evolutionary methods, and (3) applications of neuroevolution to control, robotics, artificial life, and games.
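The simplest of the methods listed under (1), evolving the weights of a fixed-topology network, can be sketched as a (1+1) evolution strategy. The XOR task, network shape, and mutation strength below are illustrative choices, not the tutorial's own setup:

```python
import math
import random

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def net(w, x):
    """Fixed 2-2-1 feedforward network with tanh units; w packs the
    9 weights and biases of the fixed topology."""
    h = [math.tanh(w[2 * i] * x[0] + w[2 * i + 1] * x[1] + w[4 + i])
         for i in range(2)]
    return math.tanh(w[6] * h[0] + w[7] * h[1] + w[8])

def loss(w):
    """Squared error of the network over the XOR truth table."""
    return sum((net(w, x) - t) ** 2 for x, t in XOR)

def evolve(gens=500, sigma=0.5, seed=5):
    """(1+1) evolution of the weight vector: Gaussian mutation,
    keep the child only if it is at least as good (elitism)."""
    random.seed(seed)
    w = [random.uniform(-1, 1) for _ in range(9)]
    e = loss(w)
    for _ in range(gens):
        child = [wi + random.gauss(0, sigma) for wi in w]
        ce = loss(child)
        if ce <= e:
            w, e = child, ce
    return w, e
```

Topology-evolving methods such as NEAT extend this idea by also mutating the network structure, which is where much of neuroevolution's power for reinforcement learning comes from.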

Demos: The tutorial includes over 30 videos illustrating methods and applications of neuroevolution, including rocket control, multilegged walking, predator-prey coevolution, evolving morphology and behavior for virtual creatures, the NERO machine-learning video game, and a Turing Test for game bots.

Risto Miikkulainen
Risto is a Professor of Computer Sciences at the University of Texas at Austin. He received an M.S. in Engineering from the Helsinki University of Technology, Finland, in 1986, and a Ph.D. in Computer Science from UCLA in 1990. His recent research focuses on methods for evolving neural networks and applying these methods to game playing, robotics, and intelligent control. He is an author of over 350 articles on neuroevolution, connectionist natural language processing, and the computational neuroscience of the visual cortex. He is an associate editor of IEEE Transactions on Computational Intelligence and AI in Games and Cognitive Systems Research, and a member of the Board of Governors of the International Neural Networks Society.


Complex Networks
Complex networks are everywhere around us. They play a central role in many settings, from internet-mediated social networks to transportation, biological processes, and the transmission of information, among others. This tutorial will provide an overview and synthesis of state-of-the-art knowledge on complex networks, allowing researchers to better understand their structure, function, and dynamics.

The following subjects will be treated.

  • What are complex networks? Examples from several fields and fundamental structural features.
  • Characterization of complex networks: network statistics.
  • Models of complex networks: Erdos-Renyi random graphs, Watts-Strogatz small worlds, preferential attachment and Barabasi-Albert growing networks. 
  • Networks embedded in physical space, random geometric graphs.
  • Network evolution: Dynamical complex networks.
  • Dynamics on networks: epidemic spreading.
  • Applications of complex network theory in population-based metaheuristics and to the fitness landscapes of hard problems.
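Two of the models listed above can be generated in a few lines of code. The sketch below (pure Python, with illustrative parameters) builds Erdos-Renyi and Barabasi-Albert graphs as edge lists:

```python
import random

def erdos_renyi(n, p, seed=0):
    """G(n, p) random graph: each possible edge exists independently
    with probability p; returns an edge list."""
    random.seed(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if random.random() < p]

def barabasi_albert(n, m, seed=0):
    """Preferential-attachment growth: each new node attaches to m
    existing nodes chosen with probability proportional to degree."""
    random.seed(seed)
    # Start from a small clique of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    targets = [v for e in edges for v in e]   # degree-weighted pool
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:                # m distinct neighbours
            chosen.add(random.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])
    return edges
```

Measuring the degree distributions of the two edge lists reproduces the contrast the tutorial covers: binomial degrees for G(n, p) versus the heavy-tailed degrees of preferential attachment.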

Marco Tomassini
Marco is a professor of Computer Science at the Information Systems Department of the University of Lausanne, Switzerland. He received a doctorate in theoretical chemistry from the University of Perugia, Italy, working on computer simulations of condensed matter systems. His research centers on the application of biological ideas to artificial systems, with current interests in the modeling of evolutionary games in networks, experimental games, complex networks and systems, and the structure of hard combinatorial search spaces. He is also active in evolutionary computation, especially spatially structured systems, genetic programming, and the structure of program search spaces. He has been Program Chairman of several international events and has published many scientific papers and several authored and edited books in these fields. He received the EvoStar 2010 Award in recognition of outstanding contributions to evolutionary computation.


Cartesian Genetic Programming (CGP)

Cartesian Genetic Programming (CGP) is a well-known form of Genetic Programming developed by Julian Miller in 1999-2000. In its classic form, it uses a very simple integer address-based genetic representation of a program in the form of a directed graph. Graphs are very useful program representations and can be applied to many domains (e.g. electronic circuits, neural networks). It can handle cyclic or acyclic graphs.
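The integer address-based representation can be sketched as follows. The function set and example genome are illustrative, and real CGP implementations add rows, columns, and levels-back constraints that are omitted here:

```python
# Illustrative function set: addition, subtraction, multiplication.
FUNCS = [lambda a, b: a + b, lambda a, b: a - b, lambda a, b: a * b]

def evaluate_cgp(genome, inputs):
    """Decode an acyclic CGP genome feed-forward. Each node gene is a
    (function index, input address, input address) triple; addresses
    refer to program inputs or to previously computed nodes. The last
    gene is the address of the program output."""
    values = list(inputs)             # addresses 0..len(inputs)-1
    *nodes, out = genome
    for f, a, b in nodes:
        values.append(FUNCS[f](values[a], values[b]))
    return values[out]

# Example genome computing (x + y) * x for inputs (x, y):
# the node at address 2 is x + y, the node at address 3 is node2 * x.
genome = [(0, 0, 1), (2, 2, 0), 3]
```

Because nodes may go unreferenced by the output, parts of the genome can be inactive, which is one of the distinctive properties of the CGP representation.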

In a number of studies, CGP has been shown to be comparatively efficient in relation to other GP techniques. It is also very simple to program. The classical form of CGP has undergone a number of developments which have made it more useful, efficient and flexible in various ways. These include self-modifying CGP (SMCGP), cyclic connections (recurrent CGP), encoding artificial neural networks, and automatically defined functions (modular CGP).

SMCGP uses functions that cause the evolved programs to change themselves as a function of time. This makes it possible to find general solutions to classes of problems and mathematical algorithms (e.g. arbitrary parity, n-bit binary addition, sequences that provably compute pi and e to arbitrary precision, and so on).

Recurrent-CGP allows evolution to create programs which contain cyclic, as well as acyclic, connections. This enables application to tasks which require internal states or memory. It also allows CGP to create recursive equations.

CGP-encoded artificial neural networks represent a powerful training method for neural networks, because CGP is able to simultaneously evolve a network's connection weights, topology, and neuron transfer functions. It is also compatible with recurrent CGP, enabling the evolution of recurrent neural networks.

The tutorial will cover the basic technique, advanced developments and applications to a variety of problem domains. It will present some live demos of CGP in action and interactive activities including evolutionary art and music.

Julian Miller
He is a Reader in the Department of Electronics at the University of York. His main research interests are genetic programming (GP) and computational development. He has published over 200 refereed papers on evolutionary computation, genetic programming, evolvable hardware, computational development and other topics, attracting over 5,500 citations.

He has been chair or co-chair of sixteen conferences or workshops in genetic programming, computational development, evolvable hardware and evolutionary techniques.

Dr. Miller chaired the Evolvable Hardware tracks at the Genetic and Evolutionary Computation Conference in 2002-2003 and was Genetic Programming track chair in 2008. He co-chaired the Generative and Developmental Systems (GDS) track in 2007, 2009 and 2010. He was program co-chair of the European Conference on Genetic Programming in 2001.

He is an associate editor of Genetic Programming and Evolvable Machines and of Natural Computing, and a former associate editor of IEEE Transactions on Evolutionary Computation. He is an editorial board member of the journals Evolutionary Computation and Unconventional Computing.

Andrew Turner
He obtained his first-class honours Master's degree in Electronic Engineering from the University of York in 2012. His PhD applies evolutionary computation to the training of artificial neural networks, a field referred to as neuroevolution. Now in his third year of study, Andy has nine publications demonstrating the advantages of neuroevolution, including two in the journals Genetic Programming and Evolvable Machines and Evolutionary Intelligence.

Andy helped organise the 2013 and 2014 York Doctoral Symposium as a committee member and also helped organise the York 2014 NASA Space Apps Challenge; he now sits on the 2015 committee.

Andy currently maintains the Cartesian Genetic Programming Library as an open-source, cross-platform resource to help encourage the use of a reliable and efficient implementation of CGP.


Hyper-heuristics

The automatic design of algorithms has long been an aim of both machine learning and AI, but has proved elusive. The aim of this tutorial is to introduce hyper-heuristics as a principled approach to the automatic design of algorithms. Hyper-heuristics are metaheuristics applied to a space of algorithms, i.e., any general heuristic method of sampling a set of candidate algorithms. In particular, this tutorial will demonstrate how to mine existing algorithms to obtain algorithmic primitives from which the hyper-heuristic can compose new algorithmic solutions, and how to employ various types of genetic programming to execute the composition process, i.e., the search of program space.

This tutorial will place hyper-heuristics in the context of genetic programming - which differs in that it constructs solutions from scratch using atomic primitives - as well as genetic improvement - which takes a program as a starting point and improves on it (a recent direction introduced by William Langdon).

The approach proceeds from the observation that it is possible to define an invariant framework for the core of any class of algorithms (often by examining existing human-written algorithms for inspiration). The variant components of the algorithm can then be generated by genetic programming, so each instance of the framework defines a family of algorithms. While this allows searches in constrained search spaces based on problem knowledge, it does not in any way limit the generality of the approach, as the template can be chosen to be any executable program and the primitive set can be selected to be Turing-complete. Typically, however, the initial algorithmic primitive set is composed of primitive components of existing high-performing algorithms for the problems being targeted; this more targeted approach significantly reduces the initial search space, resulting in a practical approach rather than a mere theoretical curiosity. Iterative refinement of the primitives allows for a gradual and directed enlarging of the search space until convergence.

This leads to a technique for mass-producing algorithms that can be customised to the context of end-use. This is perhaps best illustrated as follows: typically, a researcher might create a travelling salesperson problem (TSP) algorithm by hand. When executed, this algorithm returns a solution to a specific instance of the TSP. We will describe a method that instead generates TSP algorithms tuned to representative instances of interest to the end-user. This method has been applied to a growing number of domains including data mining/machine learning, combinatorial problems such as bin packing (on- and off-line), Boolean satisfiability, job shop scheduling, exam timetabling, image recognition, black-box function optimization, wind-farm layout, and the automated design of metaheuristics themselves (from selection and mutation operators to the overall metaheuristic architecture).
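The template idea can be sketched in a few lines. The following toy example is our own illustration, not the presenters' system: the invariant framework is a fixed hill-climbing loop over bit strings, the variant component is the mutation heuristic composed from primitives (the primitive names and the OneMax training problem are assumptions made for the example), and the "hyper-heuristic" is reduced to exhaustive search over the tiny space of candidate compositions.

```python
# Toy sketch of an invariant framework with a pluggable variant component.
import random

random.seed(0)

# Primitive move operators, as might be mined from existing bit-string
# algorithms (hypothetical names, chosen for illustration).
def flip_one(s):
    i = random.randrange(len(s))
    return s[:i] + [1 - s[i]] + s[i + 1:]

def flip_two(s):
    return flip_one(flip_one(s))

PRIMITIVES = [flip_one, flip_two]

def framework(mutate, fitness, n=20, steps=200):
    """Invariant template: every generated algorithm instantiates this loop."""
    s = [random.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        c = mutate(s)
        if fitness(c) >= fitness(s):
            s = c
    return fitness(s)

onemax = sum  # training problem: maximise the number of ones

# The "hyper-heuristic": search the (tiny) space of variant components.
best = max(PRIMITIVES, key=lambda m: framework(m, onemax))
print(best.__name__)
```

In a real system the variant component would be a genetic-programming tree over many primitives, evaluated on representative training instances rather than a single problem.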
This tutorial will provide a step-by-step guide which takes the novice through the distinct stages of automatic design. Examples, including live demos, will illustrate and reinforce the issues of practical application. This technique has continually produced results which outperform their manually designed counterparts, and a theoretical underpinning will be given to demonstrate why this is the case. Automatic design will become an increasingly attractive proposition: the cost of human design will only increase in line with inflation, while the speed of processors increases in line with Moore's law, making automatic design attractive for industrial application. Knowledge of genetic programming will be assumed.

REFERENCES

Edmund K. Burke, Matthew R. Hyde, Graham Kendall, Gabriela Ochoa, Ender Ozcan, and John Woodward. Exploring hyper-heuristic methodologies with genetic programming. In Computational Intelligence, pages 177-201. Springer Berlin Heidelberg, 2009.

Matthew A. Martin and Daniel R. Tauritz. 2014. A problem configuration study of the robustness of a black-box search algorithm hyper-heuristic. In Proceedings of the 2014 conference companion on Genetic and evolutionary computation companion (GECCO Comp '14). ACM, New York, NY, USA, 1389-1396.

Matthew A. Martin and Daniel R. Tauritz. 2013. Evolving black-box search algorithms employing genetic programming. In Proceedings of the 15th annual conference companion on Genetic and evolutionary computation (GECCO '13 Companion), Christian Blum (Ed.). ACM, New York, NY, USA, 1497-1504.

Su Nguyen, Mengjie Zhang, Mark Johnston, and Kay Chen Tan. Automatic design of scheduling policies for dynamic multi-objective job shop scheduling via cooperative coevolution genetic programming. IEEE Transactions on Evolutionary Computation, 18(2):193-208, 2014.

Su Nguyen, Mengjie Zhang, Mark Johnston, and Kay Chen Tan. Genetic programming for evolving due-date assignment models in job shop environments. Evolutionary Computation, 22(1):105-138, 2014.

John Woodward and Jerry Swan. Automatically designing selection heuristics. In Proceedings of the 13th Annual Conference Companion on Genetic and Evolutionary Computation, pages 583-590. ACM, 2011.

John R. Woodward
He is a lecturer at the University of Stirling, within the CHORDS group (http://chords.cs.stir.ac.uk/), and is employed on the DAASE project (http://daase.cs.ucl.ac.uk/); for the previous four years he was a lecturer at the University of Nottingham. He holds a BSc in Theoretical Physics, an MSc in Cognitive Science and a PhD in Computer Science, all from the University of Birmingham. His research interests include Automated Software Engineering, particularly Search Based Software Engineering, Artificial Intelligence/Machine Learning and, in particular, Genetic Programming. He has over 50 publications in Computer Science, Operations Research and Engineering, including both theoretical and empirical contributions, and has given over 50 talks at international conferences and as an invited speaker at universities. He has worked in industrial, military, educational and academic settings, and has been employed by EDS, CERN, the RAF and three UK universities. He is currently ranked 45th out of over 8,000 authors on the Genetic Programming Bibliography (http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/index.html).

Daniel R. Tauritz
He is an Associate Professor in the Department of Computer Science at the Missouri University of Science and Technology (S&T), on sabbatical at Sandia National Laboratories for the 2014-2015 academic year, a former Guest Scientist at Los Alamos National Laboratory (LANL), the founding director of S&T's Natural Computation Laboratory (http://web.mst.edu/~tauritzd/nc-lab/), and founding academic director of the LANL/S&T Cyber Security Sciences Institute. He received his Ph.D. in 2002 from Leiden University for Adaptive Information Filtering employing a novel type of evolutionary algorithm. He served previously as GECCO 2010 Late Breaking Papers Chair, COMPSAC 2011 Doctoral Symposium Chair, GECCO 2012 GA Track Co-Chair, and GECCO 2013 GA Track Co-Chair. For several years he has served on the GECCO GA track program committee, the IEEE CEC program committee, and a variety of other international conference program committees. His research interests include the design of hyper-heuristics and self-configuring evolutionary algorithms and the application of computational intelligence techniques in cyber security, critical infrastructure protection, and search-based software engineering. He was granted a US patent for an artificially intelligent rule-based system to assist teams in becoming more effective by improving the communication process between team members.



Evolutionary Robotics

In the same way that evolution shapes animals, is it possible to use artificial evolution to automatically design robots? This attractive vision gave birth to Evolutionary Robotics (ER), an interdisciplinary field that incorporates ideas from Biology, Robotics and Artificial Intelligence. Over the last 20 years, Evolutionary Robotics has given rise to a large variety of scientific questions, from understanding natural evolution through ER models to addressing the autonomous design of robots' bodies and brains.

This tutorial will first give an overview of ER and its motivations. It will then present the main features of ER to introduce the main topics of the tutorial, which are:

• How to make it work? Fitness function design and the influence of selection pressure;
• ER for real robots: the reality gap and how to cross it;
• ER and collective robotic systems: how to build such systems, and how to use these tools to understand collective behaviours in nature.

The questions related to the representation and the principles of neuro-evolution will be only briefly mentioned, as they are described in another tutorial.

Nicolas Bredeche
He is Professeur des Universités (Professor) at Université Pierre et Marie Curie (UPMC, Paris, France). His research is done in the Architectures and Models for Adaptation and Cognition team, within the Institute of Intelligent Systems and Robotics (ISIR, CNRS). His field of research is mostly concerned with Evolutionary Computation and Complex Systems (self-adaptive collective robotic systems, generative and developmental systems, evolutionary robotics). In particular, part of his work is about (a) the evolutionary design and/or adaptation of groups of embodied agents and (b) artificial ontogeny, such as multi-cellular developmental systems and self-regulated networks.

Previously, he graduated in Computer Science from UPMC (1998) and obtained a Master's degree in Cognitive Science from Université Paris-Sud in 1999. He pursued and defended a PhD in Computer Science in 2002, on Machine Learning and Autonomous Robotics. Then, he worked as a post-doctoral researcher at ICT/CAS in 2003 (Beijing, China). From end 2003 to mid-2012, he was Assistant (and then Associate) Professor at Université Paris-Sud and conducted his research in the Machine Learning and Optimization team (Univ. Paris-Sud, INRIA, CNRS) within the University's Computer Science Lab.

Nicolas Bredeche is the author of more than 30 papers in journals and major conferences in the field. He has been (or currently is) advisor or co-advisor of six PhD students and has served on program committees and/or as track chair of the major conferences in the field of Evolutionary Computation (incl. GECCO track chair for the Artificial Life and Robotics track in 2012/2013). He is also a member of the French Evolution Artificielle association and has regularly co-organized the French Artificial Evolution one-day seminar (JET) since 2009. He has also organized several international workshops on evolution, robotics, and the development of artificial neural networks.

Stéphane Doncieux
He is Professeur des Universités (Professor) in Computer Science at Université Pierre et Marie Curie (UPMC, Paris, France). His research is mainly concerned with the use of evolutionary algorithms for the optimization or synthesis of robot controllers. He has worked in a robotics context to design, for instance, controllers for flying robots, but also in a modeling context, where he used multi-objective evolutionary algorithms to optimize and study computational models. More recently, he has focused on the use of multi-objective approaches to tackle learning problems such as premature convergence or generalization.

He is a graduate engineer of ENSEA, a French electronics engineering school. He obtained a Master's degree in Artificial Intelligence and Pattern Recognition in 1999 and defended a PhD in Computer Science in 2003. Together with Bruno Gas, he was responsible for the SIMA research team from its creation in 2007 until 2011. Since then, he has been head of the AMAC (Architecture and Models of Adaptation and Cognition) research team, with 11 permanent researchers, 3 post-docs and 9 PhD students. Researchers in the team work on different aspects of learning in the context of motion control and cognition, both from a computational neuroscience perspective and a robotics perspective. He has published 10 journal papers and more than 30 articles in international conferences. He has organized several workshops on ER at conferences like GECCO or IEEE-IROS and has edited 2 books.

Jean-Baptiste Mouret
He is an assistant professor at ISIR (Institute for Intelligent Systems and Robotics), part of Université Pierre et Marie Curie-Paris 6 (UPMC). He mainly conducts research that combines neuro-evolution and multi-objective evolutionary algorithms to improve the adaptive abilities of robots. In his recent work, his main contributions deal with behavioral diversity, synaptic plasticity, generative encodings and resilient robots.

He obtained an M.S. in computer science from EPITA in 2004, an M.S. in artificial intelligence from UPMC in 2005 and a Ph.D. in computer science from UPMC in 2008. He has edited two books and participated in the organization of several conferences and workshops on bio-inspired robotics and evolutionary robotics.


Introducing Rule-based machine learning - A Practical Guide

Learning classifier systems (LCSs) are an often overlooked class of rule-based machine learning algorithms with a unique and flexible set of features setting them apart from other strategies. Learning classifier systems combine the global search of evolutionary algorithms with the local optimization of reinforcement or supervised learning. Two major genres of LCS algorithms exist: Michigan-style and Pittsburgh-style systems. Michigan-style LCS algorithms are the more traditional of the two architectures and are the focus of the present work. They uniquely distribute learned patterns over a collaborative population of individually interpretable (IF:THEN) rules. This allows the algorithm to flexibly and effectively describe complex and diverse problem spaces found in behavior modeling, function approximation, classification, and data mining.
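The core of the Michigan-style idea can be sketched very compactly. The example below is our own minimal illustration (an assumed ternary rule encoding, not ExSTraCS itself): rules use '#' as a "don't care" wildcard in their conditions, and the match set is the subset of the rule population whose conditions fit a given instance.

```python
# Minimal sketch of Michigan-style IF:THEN rules with '#' wildcards.

def matches(condition, instance):
    """A rule matches if every non-wildcard attribute equals the instance's."""
    return all(c == '#' or c == x for c, x in zip(condition, instance))

# A tiny rule population: (condition, predicted class).
population = [
    ('1#0', 0),
    ('#11', 1),
    ('0##', 0),
]

instance = '011'
match_set = [(cond, cls) for cond, cls in population if matches(cond, instance)]
print(match_set)  # rules '#11' and '0##' match '011'
```

A full LCS then forms prediction and correct sets from the match set, updates rule statistics, and applies a genetic algorithm within the population; each rule stays individually interpretable.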

This tutorial will offer a gentle introduction to learning classifier systems: how they work, and how they can be applied to a vast assortment of problems in a very powerful and flexible way. We will provide hands-on demonstrations of how to get ExSTraCS up and running easily, with examples of how it works and how to interpret the results it outputs. This will include discussion of statistical analysis and data visualization strategies. While ExSTraCS will be the focus of the demonstration, the tutorial will also review and contrast many of the major learning classifier systems in the literature, and identify the advantages and disadvantages of learning classifier systems compared to other major machine learning strategies. Additionally, we will tie the upcoming publication of our co-authored introductory textbook to the lecture and demonstrations in this tutorial. Specifically, we have previously coded and posted an 'educational' learning classifier system (eLCS), freely available online. We will pair code and analysis demonstrations directly with eLCS and ExSTraCS, as two clear and available examples of learning classifier systems.

Ryan Urbanowicz
Dr. Urbanowicz's research is focused on bioinformatics, machine learning, epidemiology, data mining, and the development of a new learning classifier system that is maximally functional, accessible, and easier to use and interpret. He has written one of the most cited and regarded reviews of the learning classifier system research field as well as 12 additional peer-reviewed LCS research papers, has co-chaired the International Workshop on Learning Classifier Systems for the past 4 years, and has recently published a new open-source learning classifier system algorithm implemented in Python, called ExSTraCS. He has also given several invited introductory lectures on LCS algorithms in addition to co-presenting this tutorial in 2013. Dr. Urbanowicz received a Bachelor's and Master's degree in Biological Engineering from Cornell University, as well as a PhD in Genetics from Dartmouth College. Currently he is a post-doctoral researcher at the Geisel School of Medicine, about to transition to a new research position at the University of Pennsylvania, USA.

Will Browne
Dr. Browne's research focuses on applied cognitive systems, specifically how to use inspiration from natural intelligence to enable computers/machines/robots to behave usefully. This includes cognitive robotics, learning classifier systems, and modern heuristics for industrial application. Dr. Browne has been co-track chair of the Genetics-Based Machine Learning (GBML) track, has chaired the International Workshop on Learning Classifier Systems, and has lectured for a graduate course on LCS for two years. He completed his Bachelor's in Mechanical Engineering at the University of Bath, a Master's and EngD at Cardiff, and post-doctoral work at Leicester. Currently he is a senior lecturer at the Victoria University of Wellington.


Multimodal Optimization

Multimodal optimization is becoming established as a research direction that collects approaches from various domains of evolutionary computation striving to deliver multiple very good solutions at once. We start by discussing why this is actually useful and provide some real-world examples. From there, we set up several scenarios and list currently employed and potentially available performance measures. This part also calls for user interaction: it is currently very much an open question what the actual targets of multimodal optimization should be and how the algorithms should be compared experimentally.

In-tutorial discussion of this topic will be encouraged.

As there has been little work on theory in the area (not runtime complexity, but rather the limits of different mechanisms), we present a high-level modelling approach that provides some insight into how niching can actually improve optimization methods if it fulfils certain conditions.

While the algorithmic ideas for multimodal optimization (such as niching) originally stem from biology and have been introduced into evolutionary algorithms since the 1970s, we only now see the consolidation of the field. The vast number of available approaches is being sorted into collections, and taxonomies are starting to emerge. We present our version of a taxonomy, also taking older but surprisingly modern global optimization approaches into account. We highlight some individual mechanisms, such as clustering, multiobjectivization and archives, that can be used as additions to existing algorithms or as building blocks of new ones.
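One classic niching mechanism, clearing, gives a feel for how such building blocks work. The sketch below is our own toy example (not necessarily one covered in the tutorial) on a one-dimensional population: within each niche of radius sigma, only the best individual keeps its fitness and the rest are "cleared", so selection pressure is spread across multiple optima.

```python
# Toy sketch of the clearing niching mechanism on a 1-D population.

def clearing(population, fitness, sigma):
    """Return cleared fitness values; population is a list of 1-D points."""
    ranked = sorted(range(len(population)), key=lambda i: -fitness[i])
    cleared = list(fitness)
    winners = []
    for i in ranked:
        if any(abs(population[i] - population[w]) < sigma for w in winners):
            cleared[i] = 0.0          # dominated within an existing niche
        else:
            winners.append(i)         # i becomes the niche's representative
    return cleared

pop = [0.1, 0.15, 0.9, 0.95]          # two clusters of individuals
fit = [1.0, 0.8, 0.9, 0.7]
print(clearing(pop, fit, sigma=0.2))  # [1.0, 0.0, 0.9, 0.0]
```

Both clusters retain their best member, so a subsequent selection step preserves two distinct very good solutions instead of collapsing onto the single global peak.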

We also discuss recent relevant competitions and their results, point to available software and outline the possible future developments in this area.

Mike Preuss
Mike is a Research Associate at ERCIS, the European Research Center for Information Systems, at the University of Muenster, Germany. Previously, he was with the Chair of Algorithm Engineering at the Computer Science Department, TU Dortmund, Germany, where he received his Diploma degree in 1998 and his PhD in 2013.

His research interests focus on the field of evolutionary algorithms for real-valued problems, namely on multimodal and multiobjective optimization and the experimental methodology for (non-deterministic) optimization algorithms. He is currently working on the adaptability and applicability of computational intelligence techniques for computer games and various engineering domains, pushing forward modern approaches to experimental analysis, such as Exploratory Landscape Analysis (ELA), and innovative uses of surrogate models. He was involved in founding the EvoGames track at Evo* and the Digital Entertainment Technologies and Arts (DETA) track at GECCO. Within the games field, he is mainly interested in AI for realtime strategy (RTS) games and procedural content generation (PCG).


Continuous Optimization and CMA-ES

When solving non-linear optimization problems in the continuous domain (continuous optimization), we generally face difficulties such as non-convexity, non-separability, ill-conditioning and ruggedness. These difficulties need to be addressed when designing an efficient algorithm for continuous optimization. Evolution Strategies and many continuous-domain Estimation of Distribution Algorithms (EDAs) are stochastic algorithms that maintain a multivariate normal distribution as their sample distribution. Many of them can be formulated in a unified and comparatively simple two-step framework, where candidate solutions are first sampled from the sample distribution and then the parameters of the distribution are updated.
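The two-step framework can be sketched in a few lines. The example below is a deliberately simplified illustration of ours, not CMA-ES itself: it samples from an isotropic normal distribution and updates only the mean from the best samples, with a crude fixed step-size decay in place of the adaptive step-size and covariance updates that the tutorial covers.

```python
# Simplified sample-then-update loop (not real CMA-ES: no covariance
# adaptation, and a fixed step-size decay instead of CSA).
import random

random.seed(1)

def sphere(x):                                # toy objective to minimise
    return sum(xi * xi for xi in x)

dim, lam, mu, sigma = 5, 20, 5, 1.0
mean = [5.0] * dim                            # initial distribution mean

for _ in range(100):
    # Step 1: sample candidate solutions from the current distribution.
    pop = [[random.gauss(m, sigma) for m in mean] for _ in range(lam)]
    # Step 2: update the distribution parameters from the best mu samples.
    pop.sort(key=sphere)
    mean = [sum(x[i] for x in pop[:mu]) / mu for i in range(dim)]
    sigma *= 0.95                             # crude fixed decay, not real CSA

print(sphere(mean))                           # small: mean is near the optimum
```

Real CMA-ES replaces the fixed decay with cumulative step-size adaptation and adapts a full covariance matrix (rank-one and rank-mu updates), which is exactly what makes it robust on ill-conditioned, non-separable problems.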

This tutorial focuses on the most relevant algorithmic question: how should the parameters of the sample distribution be chosen and, in particular, be updated in the generation sequence?

We will start the tutorial by revealing the general difficulties of continuous optimization and discussing how evolutionary algorithms address these difficulties. In evolution strategies, invariance to transformations of the objective function and the search space is an essential concept in designing the parameter update. Then, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is considered in more detail, with each of its components: step-size control, and the rank-one and rank-mu updates of the covariance matrix. Finally, the performance of CMA-ES is related to other well-known evolutionary and non-evolutionary optimization algorithms, namely BFGS, DE, PSO, etc.

Nikolaus Hansen
Nikolaus (https://www.lri.fr/~hansen) is a senior researcher at Inria, France. He received his PhD in 1998 in civil engineering from the Technical University of Berlin, and his habilitation in 2010 in computer science from the University Paris-Sud XI. Before joining Inria, he worked on evolutionary computation, applied artificial intelligence and genomics in Berlin, and on computational science at ETH Zurich. His main interests are evolutionary computation, learning and adaptation in optimization, and the development and benchmarking of general-purpose continuous black-box optimization algorithms, where his main focus is always on algorithms applicable in practice. His works have received more than 6500 citations over the last five years (Google Scholar).

Anne Auger
She is a permanent researcher at the French National Institute for Research in Computer Science and Control (INRIA). She received her diploma (2001) and PhD (2004) in mathematics from Paris VI University. Before joining INRIA, she worked for two years (2004-2006) at ETH Zurich. Her main research interest is stochastic continuous optimization, including theoretical aspects and algorithm design. She is a member of the ACM SIGEVO executive committee and of the editorial board of Evolutionary Computation. She organized the biannual Dagstuhl seminar "Theory of Evolutionary Algorithms" in 2008 and 2010 and served as track chair for the theory and ES tracks in 2011, 2013 and 2014. Together with Benjamin Doerr, she is editor of the book "Theory of Randomized Search Heuristics".

Youhei Akimoto
He is an assistant professor at Shinshu University, Japan. He received his diploma (2007) in computer science and his master's degree (2008) and PhD (2011) in computational intelligence and systems science from the Tokyo Institute of Technology, Japan. He was a research fellow of the Japan Society for the Promotion of Science for one year (2010-2011) at the Tokyo Institute of Technology. Afterwards, he was a post-doctoral research fellow at INRIA's Saclay research centre in France (2011-2013) and started working at Shinshu University in April 2013. His research interests include the design principles and theoretical analysis of stochastic search heuristics, in particular the Covariance Matrix Adaptation Evolution Strategy.


Representations for Evolutionary Algorithms

Successful and efficient use of evolutionary algorithms (EA) depends on the choice of the genotype, the problem representation (mapping from genotype to phenotype) and on the choice of search operators that are applied to the genotypes. These choices cannot be made independently of each other. The question whether a certain representation leads to better performing EAs than an alternative representation can only be answered when the operators applied are taken into consideration. The reverse is also true: deciding between alternative operators is only meaningful for a given representation.

In EA practice one can distinguish two complementary approaches. The first approach uses indirect representations where a solution is encoded in a standard data structure, such as strings, vectors, or discrete permutations, and standard off-the-shelf search operators are applied to these genotypes. This is for example the case in standard genetic algorithms, evolution strategies, and some genetic programming approaches like grammatical evolution or Cartesian genetic programming. To evaluate the solution, the genotype needs to be mapped to the phenotype space. The proper choice of this genotype-phenotype mapping is important for the performance of the EA search process. The second approach, the direct representation, encodes solutions to the problem in its most 'natural' space and designs search operators to operate on this representation.

Research in the last few years has identified a number of key concepts to analyse the influence of representation-operator combinations on EA performance. Relevant properties of representations are locality and redundancy.

Locality is a result of the interplay between the search operator and the genotype-phenotype mapping. Representations are redundant if the number of genotypes exceeds the number of phenotypes, i.e., several genotypes encode the same phenotype. Redundant representations can lead to biased encodings if some phenotypes are on average represented by a larger number of genotypes, or if search operators favor some kinds of phenotypes.
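A redundant, biased mapping is easy to exhibit concretely. In the small example below (our own construction, not from the tutorial), 4-bit genotypes are mapped to one of three phenotypes via a modulo operation; since 16 is not divisible by 3, the genotypes cannot cover the phenotypes uniformly, and phenotype 0 is over-represented.

```python
# Toy demonstration of a redundant, biased genotype-phenotype mapping.
from collections import Counter
from itertools import product

def gp_map(genotype):                  # indirect representation: bits -> int
    value = int(''.join(map(str, genotype)), 2)
    return value % 3                   # phenotype space: {0, 1, 2}

# Count how many of the 16 genotypes map to each of the 3 phenotypes.
counts = Counter(gp_map(g) for g in product([0, 1], repeat=4))
print(counts)                          # phenotype 0 gets 6 genotypes, 1 and 2 get 5
```

Under uniform initialization and unbiased mutation of the genotype, the search therefore samples phenotype 0 more often than the others, which is exactly the kind of encoding bias the tutorial analyses.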

The tutorial gives a brief overview of existing guidelines for representation design, illustrates the different aspects of representations, briefly surveys models describing these aspects, and illustrates their relevance with practical examples.

It is expected that the participants have a basic understanding of EA principles.

Franz Rothlauf
He received a Diploma in Electrical Engineering from the University of Erlangen, Germany, a Ph.D. in Information Systems from the University of Bayreuth, Germany, and a Habilitation from the University of Mannheim, Germany, in 1997, 2001, and 2007, respectively.

Since 2007, he has been professor of Information Systems at the University of Mainz. He has published more than 90 technical papers in the context of planning and optimization, evolutionary computation, e-business, and software engineering, co-edited several conference proceedings and edited books, and is author of the books "Representations for Genetic and Evolutionary Algorithms" and "Design of Modern Heuristics". Since 2013, he has been Academic Director of the Executive MBA program at the University of Mainz.

His main research interests are the application of modern heuristics in planning and optimization systems. He is a member of the Editorial Board of the Evolutionary Computation Journal (ECJ) and Business & Information Systems Engineering (BISE). Since 2007, he has been a member of the Executive Committee of ACM SIGEVO, and he has served as its treasurer since 2011. He has been organizer of many workshops and tracks on heuristic optimization issues, chair of EvoWorkshops in 2005 and 2006, co-organizer of the European workshop series on "Evolutionary Computation in Communications, Networks, and Connected Systems", co-organizer of the European workshop series on "Evolutionary Computation in Transportation and Logistics", and co-chair of the program committee of the GA track at GECCO 2006. He was conference chair of GECCO 2009.


Advanced Tutorials


Constraint-Handling Techniques used with Evolutionary Algorithms

Evolutionary Algorithms (EAs), when used for global optimization, can be seen as unconstrained optimization techniques. They therefore require an additional mechanism to incorporate constraints of any kind (i.e., inequality, equality, linear, nonlinear) into their fitness function. Although the use of penalty functions (very popular with mathematical programming techniques) may seem an obvious choice, this sort of approach requires careful fine-tuning of the penalty factors. Otherwise, an EA may be unable to reach the feasible region (if the penalty is too low), or may reach the feasible region quickly but be unable to locate solutions that lie on the boundary with the infeasible region (if the penalty is too severe). This has motivated the development of a number of approaches to incorporating constraints into the fitness function of an EA. This tutorial will cover the main proposals in current use, including novel approaches such as the use of tournament rules based on feasibility, multiobjective optimization concepts, hybrids with mathematical programming techniques (e.g., Lagrange multipliers), cultural algorithms, and artificial immune systems, among others. Other topics, such as the importance of maintaining diversity, current benchmarks and the use of alternative search engines (e.g., particle swarm optimization, differential evolution, evolution strategies, etc.), will also be discussed (as time allows).
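Two of the mechanisms mentioned above can be sketched on a toy problem of our own choosing (minimise f(x) = x² subject to x ≥ 1): a static penalty function, whose factor must be tuned, and the feasibility-based tournament rule popularised by Deb, which needs no penalty factor at all.

```python
# Toy constraint-handling sketches: static penalty vs. feasibility tournament.

def f(x):                          # objective to minimise
    return x * x

def violation(x):                  # amount by which x >= 1 is broken
    return max(0.0, 1.0 - x)

# (1) Static penalty: fold the violation into the fitness. The factor must
# be tuned: too low and infeasible solutions win, too high and the search
# cannot approach the constraint boundary.
def penalized(x, factor=1000.0):
    return f(x) + factor * violation(x)

# (2) Feasibility-based tournament rule: feasible beats infeasible; two
# feasible solutions compare by f; two infeasible compare by violation.
def better(a, b):
    va, vb = violation(a), violation(b)
    if va == 0.0 and vb == 0.0:
        return a if f(a) <= f(b) else b
    if va == 0.0 or vb == 0.0:
        return a if va == 0.0 else b
    return a if va <= vb else b

print(better(1.2, 0.5))   # 1.2: feasible beats infeasible
print(better(1.2, 1.1))   # 1.1: both feasible, lower f wins
```

The tournament rule drives the population toward the boundary x = 1 (where the constrained optimum lies) without any penalty factor to tune, which is precisely the motivation for such rules over plain penalties.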

Carlos Artemio Coello Coello
He received a BSc in Civil Engineering from the Universidad Autonoma de Chiapas in Mexico in 1991 (graduating Summa Cum Laude). Then, he was awarded a scholarship from the Mexican government to pursue graduate studies in Computer Science at Tulane University (in the USA). He received a MSc and a PhD in Computer Science in 1993 and 1996, respectively. Dr. Coello has been a Senior Research Fellow in the Plymouth Engineering Design Centre (in England) and a Visiting Professor at DePauw University (in the USA). He is currently full professor at CINVESTAV-IPN in Mexico City, Mexico. He has published over 390 papers in international peer-reviewed journals, book chapters, and conferences. He has also co-authored the book "Evolutionary Algorithms for Solving Multi-Objective Problems", which is now in its Second Edition (Springer, 2007) and has co-edited the book "Applications of Multi-Objective Evolutionary Algorithms" (World Scientific, 2004). His publications currently report over 23,600 citations, according to Google Scholar (his h-index is 61). He has delivered invited talks, keynote speeches and tutorials at international conferences held in Spain, USA, Canada, Switzerland, UK, Chile, Colombia, Brazil, Argentina, China, India, Uruguay and Mexico. Dr. Coello has served as a technical reviewer for over 50 international journals and for more than 100 international conferences and currently serves as associate editor of the journals "Evolutionary Computation", "IEEE Transactions on Evolutionary Computation", "Computational Optimization and Applications", "Applied Soft Computing", and "Pattern Analysis and Applications". He is also a member of the editorial board of "Engineering Optimization". He received the 2007 National Research Award (granted by the Mexican Academy of Science) in the area of "exact sciences" and, since January 2011, he is an IEEE Fellow for "contributions to multi-objective optimization and constraint-handling techniques."
He is also the recipient of the prestigious "2013 IEEE Kiyo Tomiyasu Award" and of the "2012 National Medal of Science and Arts" in the area of "Physical, Mathematical and Natural Sciences" (this is the highest award that a scientist can receive in Mexico). His current research interests are: evolutionary multiobjective optimization and constraint-handling techniques for evolutionary algorithms.


Blind No More: Constant Time Non-Random Improving Moves and Exponentially Powerful Recombination

The field of Evolutionary Computation has long accepted that changes based on random mutation and random recombination are both necessary and fundamental to Evolutionary Algorithms. However, in a number of different domains it is possible to exactly compute the best (or an approximately best) improving move in constant time. We will generalize the concept of "NK-Landscapes" to "Mk-Landscapes", where M is the number of subfunctions and each subfunction is a function of at most k bits. This corresponds to the class of k-bounded pseudo-Boolean optimization problems. Such problems include weighted MAXSAT, NK-Landscapes, and Spin Glass problems. Furthermore, every problem with a bit representation and a compressible algebraic evaluation function can be expressed as an Mk-Landscape.
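To make the setting concrete, an Mk-Landscape can be sketched as M lookup-table subfunctions over at most k bits each; the function names and the random instance generator below are illustrative, not part of any published code:

```python
import random

def random_mk_landscape(n, M, k, seed=0):
    """Build a random Mk-Landscape: M subfunctions, each over at most k of n bits."""
    rng = random.Random(seed)
    subs = []
    for _ in range(M):
        mask = rng.sample(range(n), k)                        # bits this subfunction reads
        table = [rng.uniform(-1, 1) for _ in range(2 ** k)]   # value for each bit pattern
        subs.append((mask, table))
    return subs

def evaluate(subs, x):
    """f(x) = sum of M subfunction values; each lookup costs O(k), so O(M*k) total."""
    total = 0.0
    for mask, table in subs:
        idx = 0
        for bit in mask:
            idx = (idx << 1) | x[bit]
        total += table[idx]
    return total
```

Weighted MAXSAT instances fit this shape directly: each clause is a subfunction over at most k variables.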

All of these problems share the hypercube (or generalized hypercube) as the neighborhood adjacency graph. It can be shown that every Mk-Landscape can be decomposed into at most k elementary components, each of which is an eigenvector of the Graph Laplacian of the search space. Building on this decomposition, it is possible to 1) evaluate new moves in constant time, 2) identify improving moves in constant time, and 3) compute averages over neighborhoods in constant time. This has made it possible to construct a new generation of extremely powerful search algorithms that scale to very large problems and can identify improving moves multiple steps ahead. The resulting methods yield improvements over the best MAXSAT solvers (such as AdaptG2WSAT and GSAT) and NK-Landscape solvers, particularly on large real-world problems of a million or more variables.

Decomposability can also be exploited by recombination. New results show that "partition crossover" can be applied to Mk-Landscapes as well as to problems such as the Traveling Salesman Problem. Partition crossover decomposes the recombination graph into q subgraphs; in O(n) time it can then identify the best of the 2^q possible offspring. If the parents are local optima, the offspring are guaranteed to be locally optimal in a subspace, and offspring are often local optima in the full space. This allows partition crossover to directly "tunnel" between local optima, moving directly from local optimum to local optimum.
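A simplified sketch of the partition idea for k-bounded functions (a toy reimplementation, not the authors' code): bits where the parents differ are grouped into connected components of the variable-interaction graph; components do not interact, so each can be taken greedily from whichever parent scores better, selecting the best of the 2^q reachable offspring with q independent component evaluations:

```python
def evaluate(subs, x):
    """Sum of subfunction lookup-table values; subs = [(bit_mask, table), ...]."""
    total = 0.0
    for mask, table in subs:
        idx = 0
        for b in mask:
            idx = (idx << 1) | x[b]
        total += table[idx]
    return total

def partition_crossover(subs, p1, p2):
    """Greedy per-component recombination of two parent bit vectors."""
    diff = set(i for i in range(len(p1)) if p1[i] != p2[i])
    parent = {i: i for i in diff}              # union-find over differing bits
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for mask, _ in subs:                       # bits sharing a subfunction interact
        d = [i for i in mask if i in diff]
        for a, b in zip(d, d[1:]):
            parent[find(a)] = find(b)
    comps = {}
    for i in diff:
        comps.setdefault(find(i), []).append(i)
    child = list(p1)
    for comp in comps.values():
        # only subfunctions touching this component can change; score both choices
        touched = [s for s in subs if any(i in comp for i in s[0])]
        alt = list(child)
        for i in comp:
            alt[i] = p2[i]
        if evaluate(touched, alt) > evaluate(touched, child):
            child = alt
    return child
```

Because no subfunction spans two components' differing bits, each component's choice is independent, which is what makes the offspring provably best among all 2^q recombinations.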

Darrell Whitley
Prof. Darrell Whitley has been active in Evolutionary Computation since 1986, and has published more than 200 papers. These papers have garnered more than 16,000 citations. Dr. Whitley's H-index is 54. He has served as Editor-in-Chief of the journal Evolutionary Computation, and recently served as Chair of the Governing Board of ACM Sigevo from 2007 to 2011. He currently serves as an Associate Editor of Artificial Intelligence.


Expressive Genetic Programming

The language in which evolving programs are expressed can have significant impacts on the problem-solving capabilities of a genetic programming system. These impacts stem both from the absolute computational power of the languages that are used, as elucidated by formal language theory, and from the ease with which various computational structures can be produced by random code generation and by the action of genetic operators. Highly expressive languages can facilitate the evolution of programs for any computable function using, when appropriate, multiple data types, evolved subroutines, evolved control structures, evolved data structures, and evolved modular program and data architectures. In some cases expressive languages can even support the evolution of programs that express methods for their own reproduction and variation (and hence for the evolution of their offspring).

This tutorial will begin by presenting a range of approaches that have been taken for evolving programs in expressive programming languages. Within this context it will then provide a detailed introduction to the Push programming language, which was designed specifically for expressiveness and specifically for use in genetic programming systems. Push programs are syntactically unconstrained but can nonetheless make use of multiple data types and express arbitrary control structures, supporting the evolution of complex, modular programs in a particularly simple and flexible way. The Push language will be described and demonstrated, and over ten years of Push-based research, including the production of human-competitive results, will be briefly surveyed. The tutorial will conclude with a discussion of recent enhancements to Push that are intended to support the evolution of complex and robust software systems.
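As a loose illustration of the stack-per-type idea (a drastically simplified toy, not the actual Push language or its instruction set):

```python
def run_push(program):
    """Minimal Push-like interpreter: one stack per type. Instructions with too
    few arguments are ignored (NOOP), so every token sequence is a valid program."""
    stacks = {"int": [], "bool": []}
    for token in program:
        if isinstance(token, bool):            # check bool first: bool is an int in Python
            stacks["bool"].append(token)
        elif isinstance(token, int):
            stacks["int"].append(token)
        elif token == "int.+" and len(stacks["int"]) >= 2:
            b, a = stacks["int"].pop(), stacks["int"].pop()
            stacks["int"].append(a + b)
        elif token == "int.<" and len(stacks["int"]) >= 2:
            b, a = stacks["int"].pop(), stacks["int"].pop()
            stacks["bool"].append(a < b)
        # any other token: NOOP -- genetic operators can never produce a crash
    return stacks
```

The NOOP-on-underflow convention is what makes arbitrary token sequences syntactically safe under random generation and variation; the real language adds many more types, code-as-data stacks, and control structures.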

Lee Spector
He is a Professor of Computer Science in the School of Cognitive Science at Hampshire College in Amherst, Massachusetts, and an adjunct professor in the Department of Computer Science at the University of Massachusetts, Amherst. He received a B.A. in Philosophy from Oberlin College in 1984 and a Ph.D. from the Department of Computer Science at the University of Maryland in 1992. His areas of teaching and research include genetic and evolutionary computation, quantum computation, and a variety of intersections between computer science, cognitive science, evolutionary biology, and the arts. He is the Editor-in-Chief of the journal Genetic Programming and Evolvable Machines (published by Springer) and a member of the editorial board of Evolutionary Computation (published by MIT Press). He is also a member of the SIGEVO executive committee and he was named a Fellow of the International Society for Genetic and Evolutionary Computation.

More info: http://hampshire.edu/lspector


Parameterized Complexity Analysis of Evolutionary Algorithms

In real applications, problem inputs are typically structured or restricted in some way. Evolutionary algorithms and other randomized search heuristics can sometimes exploit such extra structure, while in some cases it can be problematic. In any case, from a theoretical perspective, little is understood about how different structural parameters affect the running time of such algorithms.

In this tutorial we present techniques from the new and thriving field of parameterized complexity theory. These techniques allow for a rigorous understanding of the influence of problem structure on the running time of evolutionary algorithms on NP-hard combinatorial optimization problems. We show how these techniques allow one to decompose algorithmic running time as a function of both problem size and additional parameters. In this way, one can attain a more detailed description of what structural aspects contribute to the exponential running time of EAs applied to solving hard problems.
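The decomposition referred to here is the standard fixed-parameter tractable (FPT) running-time form:

```latex
% A parameterized problem is fixed-parameter tractable (FPT) if it admits an
% algorithm with running time
T(n, k) \le f(k) \cdot n^{O(1)},
% where n is the instance size, k the structural parameter, and f an arbitrary
% computable function: the superpolynomial part (e.g. 2^k) is confined to the
% parameter, while the dependence on n remains polynomial.
```

For evolutionary algorithms, analogous statements bound the expected optimization time in the same factored form, isolating which structural parameter drives the exponential cost.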

We will present detailed and thorough results for evolutionary algorithms on the traveling salesperson problem, vertex cover, maximum satisfiability, and makespan scheduling. We will also outline directions for future research and discuss some open questions.

Frank Neumann
He received his diploma and Ph.D. from the Christian-Albrechts-University of Kiel in 2002 and 2006, respectively.

Currently, he is an associate professor and leader of the Optimisation and Logistics Group at the School of Computer Science, The University of Adelaide, Australia. From 2008 to 2010, he was the coordinator of the area "Bio-inspired Computation" at the Max Planck Institute for Informatics, Saarbruecken, Germany. With Kenneth De Jong he organised ACM FOGA 2013 in Adelaide, and together with Carsten Witt he has written the textbook "Bioinspired Computation in Combinatorial Optimization - Algorithms and Their Computational Complexity", published by Springer. He is vice-chair of the IEEE Task Force on Theoretical Foundations of Bio-inspired Computation, chair of the IEEE Task Force on Evolutionary Scheduling and Combinatorial Optimization, and has guest-edited special issues for the journals Algorithmica, Theoretical Computer Science, and Evolutionary Computation (MIT Press). Furthermore, he has organized several tracks and special sessions at IEEE CEC and GECCO. In his work, he considers algorithmic approaches, in particular for combinatorial and multi-objective optimization problems, and focuses on theoretical aspects of evolutionary computation as well as high-impact applications in the areas of renewable energy and sports.

Andrew Sutton
He completed his Ph.D. at Colorado State University in 2011. He has held postdoctoral research fellowships at the University of Adelaide and Colorado State University. He is currently a researcher in the Lehrstuhl für Informatik I at FSU Jena in Germany.

He has co-organized the Special Session on Theoretical Foundations of Bio-inspired Computation in 2013 at IEEE CEC and again in 2014 at WCCI. He is currently guest editing a special issue on "Theory of Evolutionary Algorithms 2014" in Evolutionary Computation Journal (MIT Press). His work focuses on understanding the influence of problem structure on the performance of randomized search heuristics with a particular focus on parameterized runtime analysis on combinatorial optimization problems.


Theory of Swarm Intelligence

Social animals as found in fish schools, bird flocks, bee hives, and ant colonies are able to solve highly complex problems in nature. This includes foraging for food, constructing astonishingly complex nests, and evading or defending against predators. Remarkably, in many cases these animals use very simple, decentralized communication mechanisms that do not require a single leader. Even so, the animals perform surprisingly well, even in dynamically changing environments. The collective intelligence of such animals is known as swarm intelligence, and it has inspired popular and very powerful optimization paradigms, including ant colony optimization (ACO) and particle swarm optimization (PSO).

The reasons behind their success are often elusive. We are just beginning to understand when and why swarm intelligence algorithms perform well, and how to use swarm intelligence most effectively. Understanding the fundamental working principles that determine their efficiency is a major challenge.

This tutorial will give a comprehensive overview of recent theoretical results on swarm intelligence algorithms, with an emphasis on their efficiency (runtime/computational complexity). In particular, the tutorial will show how techniques for the analysis of evolutionary algorithms can be used to analyze swarm intelligence algorithms and how the performance of swarm intelligence algorithms compares to that of evolutionary algorithms.
The results shed light on the working principles of swarm intelligence algorithms, identify the impact of parameters and other design choices on performance, and thus help to use swarm intelligence more effectively.

The tutorial will be divided into a first, larger part on ACO and a second, smaller part on PSO. For ACO we will consider simple variants of the MAX-MIN ant system. Investigations of example functions in pseudo-Boolean optimization demonstrate that the choices of the pheromone update strategy and the evaporation rate have a drastic impact on the running time. We further consider the performance of ACO on illustrative problems from combinatorial optimization: constructing minimum spanning trees, solving shortest path problems with and without noise, and finding short tours for the TSP.
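To make the ACO part concrete, here is a minimal MMAS-style sketch for pseudo-Boolean maximization (the parameter choices and the per-iteration best-so-far update are illustrative simplifications of the MAX-MIN ant system):

```python
import random

def mmas(f, n, rho=0.1, iters=2000, seed=0):
    """MMAS-style ACO for pseudo-Boolean maximization: pheromone tau[i] is the
    probability of setting bit i to 1, clamped to the interval [1/n, 1 - 1/n]."""
    rng = random.Random(seed)
    tau = [0.5] * n
    lo, hi = 1.0 / n, 1.0 - 1.0 / n
    best_x, best_f = None, float("-inf")
    for _ in range(iters):
        x = [1 if rng.random() < t else 0 for t in tau]   # one ant builds a solution
        if f(x) >= best_f:
            best_x, best_f = x, f(x)
        for i in range(n):
            # evaporate, reinforce the best-so-far solution, then clamp
            tau[i] = min(max((1 - rho) * tau[i] + rho * best_x[i], lo), hi)
    return best_x, best_f

best_x, best_f = mmas(sum, n=20)    # OneMax: maximize the number of ones
```

The evaporation rate `rho` and the choice of reinforcing the best-so-far (rather than the iteration-best) solution are exactly the design decisions whose runtime impact the tutorial analyzes.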

For particle swarm optimization, the tutorial will cover results on PSO for pseudo-Boolean optimization as well as a discussion of theoretical results in continuous spaces.

Dirk Sudholt
He obtained his Diplom (Master's) degree in 2004 and his PhD in computer science in 2008 from the Technische Universitaet Dortmund, Germany, under the supervision of Prof. Ingo Wegener. He has held postdoc positions at the International Computer Science Institute in Berkeley, California, working in the Algorithms group led by Prof. Richard M. Karp, and at the University of Birmingham, UK, working with Prof. Xin Yao. Since January 2012 he has been a Lecturer at the University of Sheffield, UK.

His research focuses on the computational complexity of randomized search heuristics such as evolutionary algorithms and swarm intelligence algorithms like ant colony optimization and particle swarm optimization. He is an editorial board member of the Evolutionary Computation journal and receives funding from the EU's Future and Emerging Technologies scheme (SAGE project). He has more than 60 refereed publications in international journals and conferences, including 8 best paper awards at leading conferences, GECCO and PPSN. He has given 6 tutorials at ThRaSH, WCCI/CEC, GECCO, and PPSN.


Evolutionary Image Analysis, Signal Processing and Pattern Recognition

The intertwining disciplines of image analysis, signal processing and pattern recognition are major fields of computer science, computer engineering and electrical and electronic engineering, with past and on-going research covering a full range of topics and tasks, from basic research to a huge number of real-world industrial applications.

Among the techniques studied and applied within these research fields, evolutionary computation (EC), including evolutionary algorithms, swarm intelligence and other paradigms, is playing an increasingly relevant role. The terms Evolutionary Image Analysis and Signal Processing and Evolutionary Computer Vision are more and more commonly accepted as descriptors of a clearly defined research area and family of techniques and applications. This has also been favoured by the recent availability of computing environments such as GPUs and grid/cloud/parallel computing systems, whose architectures and computation paradigms fit EC algorithms extremely well, alleviating the intrinsically heavy computational burden imposed by such techniques and even allowing for real-time applications.

The tutorial will introduce the general framework within which Evolutionary Image Analysis, Signal Processing and Pattern Recognition can be studied and applied, sketching a schematic taxonomy of the field and providing examples of successful real-world applications. The application areas to be covered will include edge detection, segmentation, feature extraction/selection/construction, object tracking, object recognition, motion detection, pattern classification, speech recognition, and biomarker detection. EC techniques to be covered will include genetic algorithms, genetic programming, particle swarm optimisation, learning classifier systems, evolutionary multi-objective optimisation, and memetic/hybrid paradigms. We will show how such EC techniques can be effectively applied to image analysis and signal processing problems and can provide promising results.

Proposed Interactive Activities/Demos

This tutorial will present three or four demonstrations: (1) genetic programming for object tracking; (2) genetic programming for motion detection; (3) particle swarm optimisation for edge detection; (4) learning classifier systems for online digit recognition.

Mengjie Zhang
He is currently Professor of Computer Science at Victoria University of Wellington, where he heads the interdisciplinary Evolutionary Computation Research Group. He is a member of the University Academic Board, a member of the University Postgraduate Scholarships Committee, a member of the Faculty of Graduate Research Board at the University, Associate Dean (Research and Innovation) in the Faculty of Engineering, Deputy Head of School of Engineering and Computer Science, and Chair of the Research Committee of the School.

His research is mainly focused on evolutionary computation, particularly genetic programming, particle swarm optimisation and learning classifier systems with application areas of computer vision and image processing, job shop scheduling, multi-objective optimisation, classification with unbalanced data, and feature selection and dimension reduction for classification with high dimensions.  He is also interested in data mining, machine learning, and web information extraction. Prof Zhang has published over 300 research papers in refereed international journals and conferences in these areas. 

He has been serving as an associate editor or editorial board member for five international journals, including IEEE Transactions on Evolutionary Computation, the Evolutionary Computation Journal (MIT Press) and Genetic Programming and Evolvable Machines (Springer), and as a reviewer for over 20 international journals. He has been a major chair for six international conferences. He has also been serving as a steering committee member and a program committee member for over 80 international conferences, including all major conferences in evolutionary computation. Since 2007, he has been listed as one of the top ten genetic programming researchers worldwide by the GP bibliography (http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/index.html).

He is the Tutorial Chair for GECCO 2014. Since 2012, he has been co-chairing EvoIASP, an event dedicated to evolutionary computation for image analysis and signal processing, now a track of the EvoApplications conference. Since 2006, he has been co-organising and co-chairing the special session on evolutionary computer vision at IEEE CEC.

Prof Zhang is the Chair of the IEEE CIS Evolutionary Computation Technical Committee, a vice-chair of the IEEE CIS Task Force on Evolutionary Computer Vision and Image Processing, and the founding chair of the IEEE Computational Intelligence Chapter in New Zealand.

Stefano Cagnoni
He graduated in Electronic Engineering at the University of Florence, Italy, where he was a PhD student and then a post-doc until 1997. In 1994 he was a visiting scientist at the Whitaker College Biomedical Imaging and Computation Laboratory at the Massachusetts Institute of Technology. Since 1997 he has been with the University of Parma, where he has been Associate Professor since 2004.

Recent research grants include: co-management of a project funded by the Italian Railway Network Society (RFI) aimed at developing an automatic inspection system for train pantographs; a "Marie Curie Initial Training Network" grant for a four-year research training project in Medical Imaging using Bio-Inspired and Soft Computing; and a grant from "Compagnia di S. Paolo" on "Bioinformatic and experimental dissection of the signalling pathways underlying dendritic spine function".

He was Editor-in-chief of the "Journal of Artificial Evolution and Applications" from 2007 to 2010. Since 1999, he has been chair of EvoIASP, an event dedicated to evolutionary computation for image analysis and signal processing, now a track of the EvoApplications conference. Since 2005, he has co-chaired MedGEC, the workshop on medical applications of evolutionary computation at GECCO. He has co-edited special issues of journals dedicated to Evolutionary Computation for Image Analysis and Signal Processing, and is a member of the Editorial Board of the journals “Evolutionary Computation” and “Genetic Programming and Evolvable Machines”.

He has been awarded the "Evostar 2009 Award", in recognition of the most outstanding contribution to Evolutionary Computation.


Generative and Developmental Systems

In evolutionary computation it is common for the genotype to map directly to the phenotype such that each gene in the genotype represents a single discrete component of the phenotype. While this convention is widespread, in nature the mapping is not direct. Instead, the genotype maps to the phenotype through a process of development, which means that each gene may be activated multiple times and in multiple places in the process of constructing the phenotype.

This tutorial will examine why such an indirect mapping may be critical to the continuing success of evolutionary computation. Far from being just an artifact of nature, indirect mapping means that vast phenotypic spaces (e.g. the 100-trillion-connection human brain) can be searched effectively in a space of far fewer genes (e.g. the 30,000-gene human genome). The tutorial will introduce this research area, called Generative and Developmental Systems (GDS; also a track at GECCO), by surveying milestones in the field and introducing its most profound puzzles. In particular, the first half of the tutorial will review the most historically high-impact algorithms and achievements in the field, while the second half will probe more deeply into cutting-edge research and its greatest current challenges.

Most importantly, what is the right abstraction of natural development to capture its essential advantages without introducing unnecessary overhead into the search?
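As a toy illustration of indirect encoding (loosely in the spirit of pattern-producing encodings, not any specific published model), a genotype of just three coefficients can generate a phenotype thousands of components large, with each gene reused at every cell:

```python
import math

def develop(genes, width, height):
    """Indirect genotype-to-phenotype mapping: the same few genes are queried at
    every cell, so each gene influences many phenotype components at once."""
    a, b, c = genes
    phenotype = []
    for y in range(height):
        row = []
        for x in range(width):
            v = math.sin(a * x) + math.cos(b * y) + c * x * y / (width * height)
            row.append(1 if v > 0 else 0)
        phenotype.append(row)
    return phenotype

# 3 genes generate a 64x64 = 4096-component pattern with regular structure
pattern = develop(genes=(0.3, 0.7, 2.0), width=64, height=64)
```

Because every gene is expressed at many locations, mutating one gene changes many phenotype cells coherently, producing the regularity and symmetry that direct encodings must discover gene by gene.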

Included Demo: The tutorial will include an exploration of how the genes within indirectly-encoded faces found on Picbreeder.org map to the final phenotypes. An interactive "DNA-explorer" application will be showcased for this purpose.

Kenneth O. Stanley
He is an associate professor in the Department of Electrical Engineering and Computer Science at the University of Central Florida and director of the Evolutionary Complexity Research Group. He received a B.S.E. from the University of Pennsylvania in 1997 and received a Ph.D. in 2004 from the University of Texas at Austin. He is an inventor of the Neuroevolution of Augmenting Topologies (NEAT), HyperNEAT, and novelty search algorithms for evolving complex artificial neural networks. His main research contributions are in neuroevolution (i.e. evolving neural networks), generative and developmental systems (GDS), coevolution, machine learning for video games, and interactive evolution. He has won best paper awards for his work on NEAT, NERO, NEAT Drummer, FSMC, HyperNEAT, ES-HyperNEAT, adaptive HyperNEAT, novelty search, Galactic Arms Race, and NA-IEC. A founder of the Generative and Developmental Systems track at GECCO, he is also an associate editor of IEEE Transactions on Computational Intelligence and AI in Games, on the editorial board of Evolutionary Computation journal, on the ACM SIGEVO Executive Committee, and a co-founder and the editor-in-chief of aigameresearch.org.



Evolutionary Algorithms for Protein Structure Modeling

In the last two decades, great progress has been made in molecular modeling through computational treatments of biological molecules grounded in evolutionary search techniques. Evolutionary algorithms (EAs) are gaining popularity beyond exploring the relationship between sequence and function in biomolecules. In particular, recent work is showing the promise of EAs in exploring structure spaces of protein chains to address open problems in computational structural biology, such as de novo structure prediction and other structure modeling problems.
Exploring effective interleaving of global and local search has led to hybrid EAs that are now competitive with the Monte Carlo-based frameworks that have traditionally dominated de novo structure prediction. Deeper understanding of the constraints posed by highly-coupled modular systems like proteins and integration of domain knowledge have resulted in effective reproductive operators. Multi-objective optimization has also shown promise in dealing with the conflicting terms that make up protein energy functions and effectively exploring protein energy surfaces. Combinations of these techniques have recently resulted in powerful stochastic search frameworks that go beyond de novo structure prediction and are capable of yielding comprehensive energy landscapes containing possible diverse functionally-relevant structures of proteins.
The objective of this tutorial is to introduce the EC community to the rapid developments in EA-based frameworks for protein structure modeling through a concise but comprehensive review of developments in this direction over the last decade. The review will be accompanied by specific detailed highlights and interactive software demonstrations of representative methods. Building on the success and feedback of a related tutorial presented by the organizers at GECCO 2014, highlights will focus on de novo structure prediction and then on energy landscape mapping of wildtype and disease-causing variants of proteins involved in proteinopathies, such as cancer. The tutorial will expand the view of EA-based frameworks beyond sequence-focused application settings. It will introduce EC researchers to open problems in computational structural biology and in the process spur the design of novel and powerful evolutionary search techniques.

Kenneth De Jong
He is a senior and well-known researcher in the EC community with a rich and diverse research profile. De Jong's research interests include genetic algorithms, evolutionary computation, machine learning, and adaptive systems. He is currently involved in research projects involving the development of new evolutionary algorithm (EA) theory, the use of EAs as heuristics for NP-hard problems, and the application of EAs to the problem of learning task programs in domains such as robotics, diagnostics, navigation and game playing. Support for these projects is provided by NSF, DARPA, ONR, and NRL. He is an active member of the Evolutionary Computation research community and has been involved in organizing many of the workshops and conferences in this area. He is the founding editor-in-chief of the journal Evolutionary Computation (MIT Press), and a member of the board of ACM SIGEVO.

Amarda Shehu
She is an energetic and prolific young researcher in computational structural biology. Shehu's research contributions are in computational structural biology, biophysics, and bioinformatics with a focus on issues concerning the relationship between sequence, structure, dynamics, and function in biological molecules. Shehu has unique expertise in tight coupling of robotics-inspired and evolutionary-based probabilistic search and optimization with computational protein biophysics, and has made significant contributions to modeling native structures, equilibrium fluctuations and conformational ensembles, loop motions, large-scale motions connecting diverse functional states, and assembly of proteins and peptides. This research is supported by NSF, Jeffress Trust Program in Interdisciplinary Sciences, and the Virginia Youth Tobacco Award. Shehu is also the recipient of an NSF CAREER award for a unifying framework for modeling protein molecules in the presence of constraints. She is also an active member of the Bioinformatics and Computational Biology ACM and IEEE Community and has been involved in organizing workshops, tutorials, and conferences in these communities.


Solving complex problems with coevolutionary algorithms

Coevolutionary algorithms (CoEAs) go beyond the conventional paradigm of evolutionary computation in having the potential to answer questions about what to evolve against (competition) and/or to establish how multi-agent behaviours can be discovered (cooperation). Competitive coevolution can be considered from the perspective of discovering tests that distinguish between the performance of candidate solutions. Cooperative coevolution implies that mechanisms are adopted for distributing fitness across more than one individual. In both variants, the evolving entities engage in interactions that affect all the engaged parties, resulting in search gradients that may be very different from those observed in conventional evolutionary algorithms, where fitness is defined externally. This allows CoEAs to model complex systems and solve problems that are difficult to address, or not naturally addressed, using conventional evolution.

This tutorial will begin by establishing basic frameworks for competitive and cooperative coevolution and noting the links to related formalisms (interactive domains and test-based problems). We will identify the pathologies that can appear under such coevolutionary formulations (disengagement, forgetting, mediocre stable states) and present the methods that address these issues. Compositional formulations will be considered in which hierarchies of development are explicitly formed, leading to the incremental complexification of solutions. The role of system dynamics will also be reviewed with regard to providing additional insight into how design decisions, say, the formulation assumed for cooperation, impact the development of effective solutions. We will also present the concept of coordinate systems with the goal of detecting underlying objectives, and demonstrate how these developments can make the search/learning process more effective. Selected other developments will be covered as well, including hybridization with local search and relationships to shaping.
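A schematic sketch of the competitive setup (an illustrative toy, not a specific published algorithm): two populations whose fitness is defined purely by their interactions, with no external objective:

```python
import random

def coevolve(interact, new_cand, new_test, mutate, pop=20, gens=50, seed=0):
    """Competitive coevolution: candidates are scored by the tests they pass,
    tests by the candidates they defeat."""
    rng = random.Random(seed)
    cands = [new_cand(rng) for _ in range(pop)]
    tests = [new_test(rng) for _ in range(pop)]
    def select(popn, fits):
        # binary tournament selection followed by mutation
        out = []
        for _ in popn:
            i, j = rng.randrange(len(popn)), rng.randrange(len(popn))
            out.append(mutate(popn[i] if fits[i] >= fits[j] else popn[j], rng))
        return out
    for _ in range(gens):
        c_fit = [sum(interact(c, t) for t in tests) for c in cands]
        t_fit = [sum(1 - interact(c, t) for c in cands) for t in tests]
        cands, tests = select(cands, c_fit), select(tests, t_fit)
    return cands, tests

# Toy arms race: candidates are numbers; a test is "passed" if the candidate
# reaches the test's threshold. Both populations escalate upward.
cands, tests = coevolve(
    interact=lambda c, t: 1 if c >= t else 0,
    new_cand=lambda r: r.random(),
    new_test=lambda r: r.random(),
    mutate=lambda v, r: max(0.0, v + r.gauss(0, 0.05)),
)
```

The same skeleton also exposes the pathologies the tutorial discusses: if tests escalate far beyond all candidates, every candidate fails every test and the gradient disappears (disengagement).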

Malcolm Heywood
He is a Professor of Computer Science at Dalhousie University, Canada. He has conducted research in genetic programming (GP) since 2000, with a particular interest in scaling up the tasks to which GP can potentially be applied. His current research attempts to appraise the utility of coevolutionary methods under non-stationary environments, as encountered in streaming data applications, and to coevolve agents for single- and multi-agent reinforcement learning tasks. In the latter case the goal is to coevolve behaviours for playing soccer in the RoboSoccer environment (a test bed for multi-agent reinforcement learning). Dr. Heywood is a member of the editorial board of Genetic Programming and Evolvable Machines (Springer). He was a track co-chair for the GECCO GP track in 2014 and a co-chair of the European Conference on Genetic Programming in 2015.

Krzysztof Krawiec
He is an Associate Professor in the Laboratory of Intelligent Decision Support Systems at Poznan University of Technology, Poznań, Poland. His primary research areas are genetic programming and coevolutionary algorithms, with applications in program synthesis, modeling, image analysis, and games. Dr. Krawiec co-chaired the European Conference on Genetic Programming in 2013 and 2014 and is an associate editor of the Genetic Programming and Evolvable Machines journal. His work in the area of CoEAs includes problem decomposition using cooperative coevolution (for machine learning and pattern recognition tasks), learning game strategies for Othello, Go, and other games using competitive CoEAs, and the discovery of underlying objectives in test-based problems.

Proposed Interactive Activities/Demos

Throughout the tutorial we will provide examples of the benefits of adopting coevolutionary mechanisms through case studies taken from applications in classification, gaming and optimization. Specific examples include classification under class imbalance and/or high cardinality, classification under non-stationary streaming data, and the hierarchical composition of GP individuals to solve reinforcement learning tasks such as 'Antwars' and half-field soccer. The tutorial will also feature live demonstrations of the capabilities of coevolved agents, possibly engaging the audience as game players.


Theory of Evolution Strategies and Related Algorithms

Evolution Strategies (ES) are continuous stochastic black-box algorithms widely applied to solve complex real-world problems. The most prominent ES, the so-called CMA-ES, is now a standard optimization tool recognized as the state-of-the-art stochastic black-box algorithm.

While the design of ESs has always gone hand in hand with theory (from the first progress-rate estimations used to derive the (1+1)-ES with one-fifth success rule to the invariance concepts behind CMA-ES), a posteriori theory sheds new light on ESs (for instance, how CMA-ES is connected to natural gradient optimization on manifolds) and supports a rigorous understanding of the algorithms.

In this context, this tutorial aims at:

1. teaching the participants basic theoretical notions;

2. giving an overview of state-of-the-art theoretical results;

3. explaining what theory contributes to the understanding of algorithms;

4. explaining how to use theoretical results to design new, efficient algorithms.

The tutorial will start with basic notions for the theoretical analysis of continuous stochastic algorithms (convergence, convergence rates, runtime) and then present quantitative optimal bounds on convergence rates for ES-type algorithms. Those bounds will be connected to progress-rate theory. The tutorial will then continue with a review of linear convergence results, and insights into the proof techniques (Markov chain analysis, …) will be given. The connection between ESs and information geometry will be presented, and invariance concepts will be explained.
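As a concrete illustration of one of the basic notions above, here is a minimal sketch of a (1+1)-ES with the one-fifth success rule (an illustrative toy, not material from the tutorial; the step-size adaptation factors are one common textbook choice among several):

```python
import math
import random

def one_plus_one_es(f, x, sigma=1.0, iters=2000, seed=0):
    """Minimal (1+1)-ES with the one-fifth success rule, minimizing f.

    Each generation samples one offspring by adding isotropic Gaussian
    noise of scale sigma to the parent; sigma grows after a success and
    shrinks after a failure, so that roughly 1/5 of offspring succeed.
    """
    rng = random.Random(seed)
    fx = f(x)
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:                  # success: accept and enlarge the step
            x, fx = y, fy
            sigma *= 1.5
        else:                         # failure: shrink the step
            sigma *= 1.5 ** -0.25     # at a 1/5 success rate the factors cancel
    return x, fx

# Example: minimize the sphere function in 5 dimensions.
sphere = lambda v: sum(vi * vi for vi in v)
best, fbest = one_plus_one_es(sphere, [5.0] * 5)
```

On the sphere function such an ES converges linearly in log-distance, which is exactly the kind of behavior the convergence-rate bounds discussed in the tutorial quantify.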

Anne Auger
She is a permanent researcher at the French National Institute for Research in Computer Science and Control (INRIA). She received her diploma (2001) and PhD (2004) in mathematics from Paris VI University. Before joining INRIA, she worked for two years (2004-2006) at ETH Zurich. Her main research interest is stochastic continuous optimization, including theoretical aspects and algorithm design. She is a member of the ACM SIGEVO executive committee and of the editorial board of Evolutionary Computation. She co-organized the biennial Dagstuhl seminar "Theory of Evolutionary Algorithms" in 2008 and 2010 and served as chair of the theory and ES tracks in 2011, 2013 and 2014. Together with Benjamin Doerr, she is editor of the book "Theory of Randomized Search Heuristics".


Gene Regulatory Networks

Gene regulatory networks are a central mechanism in the regulation of gene expression in the cells of all living organisms. Their functioning is nowadays well understood: they are based on the production of proteins being enhanced or inhibited by other proteins inside and outside the cells. This produces complex dynamics through which, with the exact same DNA, cells from the same organism can form very different types (through cell differentiation) and exhibit very different behaviors according to their types and positions. This tutorial will first review the biology of these networks, from the genetic aspect to the dynamics of gene regulation. Then, a biologically plausible model will be presented to highlight the complexity of the dynamics gene regulatory networks can produce. Using that model, we will show how a computational model can be designed so that a genetic algorithm can optimize the network efficiently. We will present a set of applications in which artificial gene regulatory networks are plugged into diverse virtual agents (artificial cells in an artificial embryogenesis context, direct connection to sensors and effectors, or high-level behavior regulation in others). Demos showing the results obtained with this system will be presented throughout the tutorial.
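To make the enhancement/inhibition dynamics concrete, here is a minimal, hypothetical sketch of an abstract artificial GRN (not the biologically plausible model presented in the tutorial): protein levels are updated by sigmoid-regulated production and constant decay.

```python
import math

def grn_step(concentrations, weights, decay=0.1, dt=0.1):
    """One Euler step of an abstract gene regulatory network.

    concentrations[i] is the level of protein i; weights[j][i] > 0 means
    protein j enhances the production of protein i, < 0 means it inhibits
    it. Production is a sigmoid of the summed regulatory input, and each
    protein decays at a constant rate, so levels stay bounded.
    """
    new = []
    for i, c in enumerate(concentrations):
        regulation = sum(w[i] * cj for w, cj in zip(weights, concentrations))
        production = 1.0 / (1.0 + math.exp(-regulation))  # sigmoid activation
        new.append(c + dt * (production - decay * c))
    return new

# A tiny two-protein loop: protein 0 enhances protein 1, which inhibits protein 0.
levels = [0.5, 0.5]
weights = [[0.0, 2.0],    # influence of protein 0 on proteins 0 and 1
           [-2.0, 0.0]]   # influence of protein 1 on proteins 0 and 1
for _ in range(100):
    levels = grn_step(levels, weights)
```

In an evolutionary setting, the weight matrix would be the genome that a genetic algorithm optimizes, while sensor values and effector commands would be wired to selected protein concentrations.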

Examples of potential demos: The following websites present some videos of agents controlled by a gene regulatory network. These kinds of applications will be presented during the tutorial.

Sylvain Cussat-Blanc
He holds a permanent research and teaching position at the University of Toulouse. He is a member of the Institute of Research in Computer Science of Toulouse (IRIT), a research unit of the French National Center for Scientific Research (CNRS). He is interested in developmental models, artificial embryogenesis, gene regulatory networks, evolutionary robotics, artificial life, bio-inspired optimization, and evolutionary computation in general. He obtained his Ph.D. in 2009; his work was about a cell-based developmental model to produce artificial creatures. During his postdoctoral fellowship with Jordan Pollack in 2011, he applied artificial embryogenesis to evolutionary robotics with the aim of automatically generating the morphologies of real modular robots.

Wolfgang Banzhaf
He is a professor of Computer Science at Memorial University of Newfoundland. He received a degree in Physics from the Ludwig-Maximilians-University in Munich and a PhD from the Department of Physics of the Technical University of Karlsruhe. He was a postdoctoral researcher at the University of Stuttgart and a Visiting/Senior Researcher at Mitsubishi Electric Corporation in Japan and the US. From 1993 to 2003 he was Associate Professor for Applied Computer Science at the Technical University of Dortmund. His research interests are in the field of bio-inspired computing, notably evolutionary computation and complex adaptive systems. Studies of self-organization and the field of Artificial Life are also of great interest to him. Recently he has become more involved with network research as it applies to natural and man-made systems.


Semantic Genetic Programming

Semantic genetic programming is a recent, rapidly growing trend in Genetic Programming (GP) that aims at opening the 'black box' of the evaluation function and making explicit use of richer information on program behavior in the search. In the most common scenario of evaluating a GP program on a set of input-output examples (fitness cases), the semantic approach characterizes a program with the vector of its outputs rather than a single scalar value (fitness). Past research on semantic GP has demonstrated that the additional information obtained in this way facilitates designing more effective search operators. In particular, exploiting the geometric properties of the resulting semantic space leads to search operators with attractive properties, which have provably better theoretical characteristics than conventional GP operators. This in turn leads to dramatic improvements in experimental comparisons.

The aim of the tutorial is to give a comprehensive overview of semantic methods in genetic programming, to illustrate in an accessible way a formal geometric framework for program semantics that allows designing provably good mutation and crossover operators for traditional GP problem domains, and to analyze their performance rigorously (runtime analysis). A number of real-world applications of this framework will also be presented, and other promising emerging approaches to semantics in GP will be reviewed. In particular, recent developments in behavioral programming, which aims at characterizing the entire program behavior (and not only program outputs), will be covered as well. Current challenges and future trends in semantic GP will be identified and discussed.

Selected methods and concepts will be accompanied by live software demonstrations. Efficient implementation of semantic search operators can be challenging; we will illustrate very efficient, concise, and elegant implementations of these operators, which are available for download from the web.
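To illustrate the core idea in a few lines (a simplified sketch, not the tutorial's implementation), the snippet below represents program semantics as an output vector and applies a geometric semantic crossover for real-valued programs, whose offspring semantics provably lies on the segment between the parents' semantics:

```python
import random

def semantics(program, fitness_cases):
    """The semantics of a program: its vector of outputs on the fitness cases."""
    return [program(x) for x in fitness_cases]

def geometric_semantic_crossover(p1, p2, rng=random.Random(0)):
    """Geometric semantic crossover for real-valued programs: the offspring
    computes a fixed convex combination of its parents' outputs, so its
    semantics lies between the parents' semantics on every fitness case."""
    r = rng.random()
    return lambda x: r * p1(x) + (1.0 - r) * p2(x)

cases = [0.0, 1.0, 2.0, 3.0]
parent1 = lambda x: x * x        # semantics [0.0, 1.0, 4.0, 9.0]
parent2 = lambda x: 2.0 * x      # semantics [0.0, 2.0, 4.0, 6.0]
child = geometric_semantic_crossover(parent1, parent2)
s1 = semantics(parent1, cases)
s2 = semantics(parent2, cases)
sc = semantics(child, cases)
# Each child output lies between the corresponding parent outputs.
```

Note that the naive composition above makes programs grow; the efficient implementations mentioned in the abstract address exactly this issue.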

Alberto Moraglio
He is a Lecturer in Computer Science in the College of Engineering, Mathematics and Physical Sciences at the University of Exeter, UK. He has been active in bio-inspired computing and genetic programming research for the last 10 years, with a substantial publication record in the area. He is the founder of the Geometric Theory of Evolutionary Algorithms, which unifies Evolutionary Algorithms across representations and has been used for the principled design of new successful search algorithms, including a new form of Genetic Programming based on semantics, and for their rigorous theoretical analysis. He was co-chair of the European Conference on Genetic Programming in 2012 and 2013, and regularly gives tutorials at GECCO and IEEE CEC. He is a member of the editorial board of Genetic Programming and Evolvable Machines (Springer).


Krzysztof Krawiec
He is an Associate Professor in the Laboratory of Intelligent Decision Support Systems at Poznan University of Technology, Poznań, Poland. His primary research areas are genetic programming and coevolutionary algorithms, with applications in program synthesis, modeling, image analysis, and games. Dr. Krawiec co-chaired the European Conference on Genetic Programming in 2013 and 2014 and is an associate editor of the Genetic Programming and Evolvable Machines journal. His work in GP includes the design of effective search operators (particularly crossovers), discovery of semantic modularity of programs, and exploitation of program execution traces for improving search performance (best paper in the GP track at GECCO'14). Within coevolutionary algorithms, he is interested in problem decomposition using cooperative coevolution, learning strategies for Othello and other games using competitive coevolution, and discovery of underlying objectives in test-based problems.


Evolutionary Computation for Dynamic Optimization Problems

Many real-world optimization problems are subject to dynamic environments, where changes may occur over time in the optimization objectives, decision variables, and/or constraints. Such dynamic optimization problems (DOPs) are challenging for researchers and practitioners due to their inherent difficulty, yet they are important problems that decision-makers in many domains need to face and solve. Evolutionary computation (EC) is a class of stochastic optimization methods that mimic principles of natural evolution to solve optimization and search problems. EC methods are good tools for addressing DOPs because they are inspired by natural and biological evolution, which has always been subject to changing environments. EC for DOPs has attracted a lot of research effort during the last twenty years, with some promising results. However, this research area is still quite young and far from well understood.

This tutorial aims to summarise the research area of EC for DOPs and attract potential young researchers into this important area. It will provide an introduction to EC for DOPs and an in-depth description of the state of the art in the field regarding five aspects: benchmark problems and generators, performance measures, algorithmic approaches, theoretical studies, and applications. Some future research issues and directions regarding EC for DOPs will also be presented. The purpose is to (i) provide a clear definition and classification of DOPs; (ii) review current approaches and provide detailed explanations of how they work; (iii) review the strengths and weaknesses of each approach; (iv) discuss the current assumptions and coverage of existing research on EC for DOPs; and (v) identify current gaps, challenges, and opportunities in EC for DOPs.

Demos planned within the tutorial presentation:
A few demos are planned to illustrate several DOP benchmark problems/generators and show the solving process of several EC methods on some DOP benchmark problems during the tutorial presentation.

Shengxiang Yang
He (http://www.tech.dmu.ac.uk/~syang/) is a Professor of Computational Intelligence (CI) and the Director of the Centre for Computational Intelligence (http://www.cci.dmu.ac.uk/) at De Montfort University (DMU), UK. Before joining DMU in July 2012, he worked in the UK at Brunel University, the University of Leicester, and King's College London between October 1999 and June 2012. He has worked extensively for over 15 years in the areas of CI methods, including EC and artificial neural networks, and their applications to real-world problems. He has over 180 publications in these domains, including over 50 SCI journal papers. His work has been supported by UK research councils (e.g., EPSRC, the Royal Society, and the Royal Academy of Engineering), the Chinese Ministry of Education, Hong Kong Polytechnic University research grants, and industry partners (e.g., BT, Honda, the Railway Safety and Standards Board, and Network Rail), with total funding of approximately £1M to him as Principal Investigator (PI); two of the EPSRC standard research projects have focused on EC for DOPs. He is an Associate Editor or Editorial Board Member of 5 international journals, including IEEE Transactions on Cybernetics and the MIT journal Evolutionary Computation. He is the founding chair of the Task Force on Intelligent Network Systems (TF-INS) and the chair of the Task Force on EC in Dynamic and Uncertain Environments (ECiDUEs) of the IEEE CI Society (CIS). He has chaired over 10 workshops and special sessions relevant to ECiDUEs at several major international conferences and is the founding co-chair of the IEEE Symposium on CI in Dynamic and Uncertain Environments. He has served on the program committees of over 60 conferences and has co-edited 11 books, proceedings, and journal special issues. He has given a number of invited keynote speeches at international conferences and workshops, and seminars in different countries.
Since 2003, Prof. Yang has played a world-leading role in the rapidly growing area of EC for DOPs. He has developed several DOP benchmark generators, devised several advanced schemes (e.g., associative memory, dualism, and guided immigrants) for EC for DOPs, and developed several state-of-the-art EC methods for DOPs. Both the benchmark generators and the state-of-the-art EC methods developed for DOPs are widely used by other researchers to test and compare their optimization algorithms for DOPs.




Specialized Tutorials


Medical Applications of Evolutionary Computation

The application of genetic and evolutionary computation to problems in medicine has increased rapidly over the past five years, but there are specific issues and challenges that distinguish it from other real-world applications.  Obtaining reliable and coherent patient data, establishing the clinical need and demonstrating value in the results obtained are all aspects that require careful and detailed consideration.

My research uses genetic programming (a variant of Cartesian Genetic Programming) in the diagnosis and monitoring of Parkinson's disease, Alzheimer's disease, and other neurodegenerative conditions. It is also being used in the early detection of breast cancer through automated assessment of mammograms. I currently have multiple clinical studies in progress in the UK (Leeds General Infirmary), USA (UCSF), UAE (Dubai Rashid Hospital), Australia (Monash Medical Center), and Singapore (National Neuroscience Institute). The technology is protected through 3 patent applications and a spinout company marketing 4 medical devices.

The tutorial considers the following topics:

  1. Introduction to medical applications of genetic and evolutionary computation and how these differ from other real-world applications
  2. Overview of past work from a medical and evolutionary computation point of view
  3. Three case examples of medical applications:
    i. diagnosis and monitoring of Parkinson's disease
    ii. detection of breast cancer from mammograms
    iii. cancer screening using Raman spectroscopy
  4. Practical advice on how to get started working on medical
    applications, including existing medical databases, conducting new
    medical studies, commercialization, and protecting intellectual property
  5. Summary, further reading and links


A unique practical element will be the demonstration of two prototype devices for the diagnosis and monitoring of Parkinson's disease.

Stephen L. Smith
He received a BSc in Computer Science and then an MSc and PhD in Electronic Engineering from the University of Kent, UK. He is currently a reader in the Department of Electronics at the University of York, UK.

Stephen's main research interests are in developing novel representations of evolutionary algorithms particularly with application to problems in medicine. His work is currently centered on the diagnosis of neurological dysfunction and analysis of mammograms. Stephen was program chair for the Euromicro Workshop on Medical Informatics, program chair and local organizer for the Sixth International Conference on Information Processing in Cells and Tissues (IPCAT) and guest editor for the subsequent special issue of BioSystems journal.  More recently he was tutorial chair for the IEEE Congress on Evolutionary Computation (CEC) in 2009, local organiser for the International Conference on Evolvable Systems (ICES) in 2010 and co-general chair for the Ninth International Conference on Information Processing in Cells and Tissues (IPCAT) in April 2012. Stephen currently holds a Royal Academy of Engineering Enterprise Fellowship.

Stephen is co-founder and organizer of the MedGEC Workshop, which is now in its tenth year. He is also guest editor for a special issue of Genetic Programming and Evolvable Machines (Springer) on medical applications and co-editor of a book on the subject (John Wiley, November 2010).

Stephen is associate editor for the journal Genetic Programming and Evolvable Machines and a member of the editorial board for the International Journal of Computers in Healthcare and Neural Computing and Applications.

Stephen has some 75 refereed publications, is a Chartered Engineer and a fellow of the British Computer Society.


Automatic (Offline) Configuration of Algorithms

Most optimization algorithms, including evolutionary algorithms, metaheuristics, and general-purpose solvers for integer or constraint programming, often have many parameters that need to be properly configured (i.e., tuned) to obtain the best results on a particular problem. Automatic (offline) algorithm configuration methods help algorithm users determine the parameter settings that optimize the performance of the algorithm before it is actually deployed. Moreover, automatic algorithm configuration methods may lead to a paradigm shift in algorithm design and configuration because they enable algorithm designers to explore much larger design spaces than traditional trial-and-error and experimental design procedures allow. Algorithm designers can thus focus on inventing new algorithmic components, combine them in flexible algorithm frameworks, and leave the final algorithm design decisions to automatic algorithm configuration techniques for specific application contexts.

This tutorial will be divided into two parts. The first part will give an overview of the algorithm configuration problem, review recent methods for automatic algorithm configuration, and illustrate the potential of these techniques using recent, notable applications from the presenters' and other researchers' work. The second part will focus on a detailed discussion of more complex scenarios, including multi-objective problems, anytime algorithms, heterogeneous problem instances, and the automatic generation of algorithms from algorithm frameworks. The focus of this second part is, hence, on practical but challenging applications of automatic algorithm configuration. It will demonstrate how to tackle these configuration tasks using our irace software (http://iridia.ulb.ac.be/irace), which implements the iterated racing procedure for automatic algorithm configuration, and will provide a practical step-by-step guide to using irace in a typical algorithm configuration scenario.


The second part of the tutorial will include, as a demo, a detailed explanation of how automatic algorithm configuration tools can be applied to challenging configuration tasks. The practical demonstration will use our irace software (http://iridia.ulb.ac.be/irace), which implements the iterated racing procedure for automatic algorithm configuration; nonetheless, the topics discussed are applicable to other automatic configuration methods. The demo will cover the following topics:

1. The irace software package: design and components.
2. Setup of a typical automatic configuration scenario.
3. Analyzing the output of irace.
4. Advanced features: parallel execution, MPI, computing clusters.
5. Complex scenarios I: heterogeneous benchmark instances.
6. Complex scenarios II: multi-objective optimization and anytime optimization.
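To give a flavour of the racing idea that underlies iterated racing (this sketch is not irace itself and does not use its actual interface; the cost function and elimination rule are purely illustrative), consider configurations raced against each other one instance at a time:

```python
import random

def race(configurations, evaluate, instances, threshold=1.5):
    """Bare-bones racing for algorithm configuration: run all surviving
    configurations on one instance at a time and drop those clearly worse
    than the incumbent. evaluate(config, instance) returns a cost (lower
    is better). irace uses proper statistical tests and resampling instead
    of the crude multiplicative threshold used here."""
    costs = {c: [] for c in configurations}
    survivors = list(configurations)
    for inst in instances:
        for c in survivors:
            costs[c].append(evaluate(c, inst))
        means = {c: sum(costs[c]) / len(costs[c]) for c in survivors}
        best = min(means.values())
        survivors = [c for c in survivors if means[c] <= best * threshold]
        if len(survivors) == 1:
            break
    return survivors

# Toy scenario: tune the "step size" of a fictitious solver whose
# (hypothetical) cost is best near step = 0.3, with noise across instances.
rng = random.Random(42)
def evaluate(step, instance):
    return (step - 0.3) ** 2 + 0.01 * rng.random()

survivors = race([0.05, 0.3, 0.9], evaluate, instances=range(10))
```

The benefit of racing is that clearly bad configurations are discarded after a few instances, so the evaluation budget concentrates on the promising ones.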

Manuel López-Ibáñez
Dr. López-Ibáñez is a postdoctoral researcher (Chargé de recherche) of the Belgian F.R.S.-FNRS working at the IRIDIA laboratory of the Université libre de Bruxelles, Belgium. He received the M.S. degree in computer science from the University of Granada, Granada, Spain, in 2004, and the Ph.D. degree from Edinburgh Napier University, U.K., in 2009. He has published 13 journal papers, 6 book chapters, and 33 papers in peer-reviewed proceedings of international conferences in areas as diverse as evolutionary algorithms, ant colony optimization, multi-objective optimization, pump scheduling, and various combinatorial optimization problems. His current research interests are the experimental analysis and the automatic configuration and design of stochastic optimization algorithms, for single- and multi-objective optimization. He is the lead developer and current maintainer of the irace software package (http://iridia.ulb.ac.be/irace).


Thomas Stützle
Dr. Stützle is a senior research associate of the Belgian F.R.S.-FNRS working at the IRIDIA laboratory of the Université libre de Bruxelles (ULB), Belgium. He received the Diplom (German equivalent of an M.S. degree) in business engineering from the Universität Karlsruhe (TH), Karlsruhe, Germany, in 1994, and his PhD and habilitation in computer science, both from the Computer Science Department of Technische Universität Darmstadt, Germany, in 1998 and 2004, respectively. He is the co-author of two books, "Stochastic Local Search: Foundations and Applications" and "Ant Colony Optimization", and has published extensively in the wider area of metaheuristics, including 20 edited proceedings or books, 8 journal special issues, and more than 190 journal and conference articles and book chapters, many of which are highly cited. He is associate editor of Computational Intelligence, Swarm Intelligence, and Applied Mathematics and Computation, and serves on the editorial boards of seven other journals, including Evolutionary Computation and the Journal of Artificial Intelligence Research. His main research interests are metaheuristics, swarm intelligence, methodologies for engineering stochastic local search algorithms, multi-objective optimization, and automatic algorithm configuration. He has been interested in automatic algorithm configuration and design methodologies for more than a decade and has contributed to effective algorithm configuration techniques such as F-race, iterated F-race, and ParamILS. His 2002 GECCO paper on "A Racing Algorithm For Configuring Metaheuristics" (joint work with M. Birattari, L. Paquete, and K. Varrentrapp) received the 2012 SIGEVO Impact Award.

Low- or No-Cost Distributed Evolutionary Computation

Having a grid, a cluster, or money to pay for vast amounts of cloud resources is great, but the need to do science and the performance attached to it are not always in sync with what your friendly science funding agency provides. At the same time, there are nowadays many resources connected to the Internet which you can tap when they are free or offered to you voluntarily.
In this tutorial we will first give a general review of volunteer/citizen grid and cloud computing resources. Then we will give a brief introduction to JavaScript, the language used in all browsers, which can also be used on the server, and explain how to create, very simply, a client-server application that uses free or low-cost cloud resources to build a fully distributed evolutionary computing system.

Along the way, we will also explain which evolutionary computing paradigms are better suited to this kind of environment, introducing pool-based evolutionary algorithms and a very simple implementation; we will also show the state of the art in this area, how researchers have dealt with this problem so far, and in which application areas it could be used.
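As a hypothetical illustration of the pool-based idea (a sketch in Python for readability, not the browser-side implementation presented in the tutorial), ephemeral, stateless workers share a common pool of solutions and each independently performs a few steps of evolution:

```python
import random

def worker(pool, fitness, rng, steps=50):
    """One ephemeral, stateless worker in a pool-based EA: it repeatedly
    samples two individuals from the shared pool, creates an offspring by
    crossover and mutation, and writes it back over the worse of the two.
    Workers need no coordination and can appear or vanish at any time."""
    for _ in range(steps):
        i, j = rng.randrange(len(pool)), rng.randrange(len(pool))
        a, b = pool[i], pool[j]
        cut = rng.randrange(len(a))
        child = a[:cut] + b[cut:]                   # one-point crossover
        k = rng.randrange(len(child))
        child[k] ^= 1                               # bit-flip mutation
        worst = i if fitness(pool[i]) < fitness(pool[j]) else j
        if fitness(child) >= fitness(pool[worst]):
            pool[worst] = child                     # asynchronous replacement

# OneMax with a shared pool and several simulated volunteer workers.
rng = random.Random(1)
onemax = sum
pool = [[rng.randrange(2) for _ in range(20)] for _ in range(30)]
for _ in range(40):                                 # workers come and go
    worker(pool, onemax, rng)
best = max(onemax(ind) for ind in pool)
```

Because all state lives in the pool, the algorithm tolerates workers joining and leaving at arbitrary times, which is exactly the property needed for volunteer-based, browser-hosted computation.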

This is the syllabus for the talk:

  1. Volunteer/stealth/freeriding computing: what it is, what is out there, what we can use, and what its advantages and disadvantages are
  2. Distributed evolutionary computation in an asynchronous and ephemeral environment: why what you know about DEAs is no longer true.
  3. Using these resources: a primer on free cloud technologies and what usually goes with them.
  4. A minimal implementation: using a scripting language to create a free, and potentially large-scale, DEA implementation
  5. Case studies: what some people have done in the past and how you can learn from them.

We will show a demonstration of a distributed evolutionary algorithm on the browser with the help of the audience.

JJ Merelo
He is a professor at the University of Granada. He has been involved in evolutionary computation for 20 years and has not missed a PPSN since 2000, including organising PPSN 2002 in Granada. He is the author of Algorithm::Evolutionary, a Perl evolutionary computation library, and has given tutorials at GECCO, PPSN, and CEC conferences. He has also been a plenary speaker at ECTA 2013 and IDC 2014.


Intelligent Systems for Smart Cities

The concept of Smart Cities can be understood as a holistic approach to improve the level of development and management of the city in a broad range of services by using information and communication technologies.

It is common to recognize six axes of work in them: i) Smart Economy, ii) Smart People, iii) Smart Governance, iv) Smart Mobility, v) Smart Environment, and vi) Smart Living. In this tutorial we first focus on a capital issue: smart mobility. European citizens and economic actors need a transport system which provides them with seamless, high-quality, door-to-door mobility. At the same time, the adverse effects of transport on the climate, the environment, and human health need to be reduced. We will show many new systems based on the use of bio-inspired techniques to ease road traffic flow in the city, as well as to allow a customized, smooth experience for travellers (private and public transport).

This tutorial will then discuss potential applications of intelligent systems for energy (like adaptive street lighting), environmental applications (like mobile sensors for air pollution), smart buildings (intelligent design), and several other applications linked to smart living, tourism, and smart municipal governance.

Enrique Alba
Prof. Enrique Alba received his degree in engineering and his PhD in Computer Science in 1992 and 1999, respectively, from the University of Málaga (Spain). He works as a Full Professor at this university with different teaching duties: data communications, distributed programming, software quality, and also evolutionary algorithms, bases for R+D+i, and smart cities at graduate and master's programs. Prof. Alba leads an international team of researchers in the field of complex optimization/learning with applications in smart cities, bioinformatics, software engineering, telecoms, and others. In addition to organizing international events (ACM GECCO, IEEE IPDPS-NIDISC, IEEE MSWiM, IEEE DS-RT, etc.), Prof. Alba has offered dozens of postgraduate courses and multiple seminars in more than 20 international institutions, and has directed several research projects (6 with national funds, 5 in Europe, and numerous bilateral actions). Also, Prof. Alba has directed 7 projects for innovation and technology transfer to industry (OPTIMI, Tartessos, ACERINOX, ARELANCE, TUO, INDRA, ZED), and he presently also works as an invited professor at INRIA, the University of Luxembourg, and the University of Ostrava. He is an editor of several international journals and book series of Springer-Verlag and Wiley, and he often reviews articles for more than 30 impact journals. He has published 77 articles in journals indexed by Thomson ISI, 17 articles in other journals, 40 papers in LNCS, and more than 250 refereed conference papers. Besides that, Prof. Alba has published 11 books and 39 book chapters, and has merited 6 awards for his professional activities. Prof. Alba's h-index is 39, with more than 7500 citations to his work.



Computational Intelligence in Games

Games are a rather young but fast-growing application domain where methods of computational intelligence, and thus evolutionary computation, are widely applied. This tutorial gives a brief overview of the area and highlights several research and application opportunities for computational intelligence and evolutionary computation methods. It also provides an in-depth perspective on specific classes of problems where evolutionary computation has proved very successful, in particular the automatic/assisted design of game intelligence (NPCs), the procedural generation of game content (PCG), and the interactive evolution of personalized content.

Pier Luca Lanzi
He is an associate professor at the Dipartimento di Elettronica, Informazione e Bioingegneria of the Politecnico di Milano. He has been working in the field of evolutionary computation and machine learning since 1996 and is interested in applications to data mining and computer games. He served as general chair for ACM GECCO 2011 and IEEE CIG 2009. He is an associate editor of the Evolutionary Computation Journal, the IEEE Transactions on Computational Intelligence and Artificial Intelligence in Games, and Applied Soft Computing.



Synergies between Evolutionary Algorithms and Reinforcement Learning

A recent trend in evolutionary algorithms (EAs) transfers expertise from and to other areas of machine learning. An interesting novel symbiosis involves: i) reinforcement learning (RL), which learns difficult, dynamic, elaborate tasks on-line and off-line, requiring lots of computational resources, and ii) EAs, whose main strengths are their elegance and computational efficiency. The two techniques address the same problem of maximizing an agent's reward in difficult stochastic environments that can include partial observations. They sometimes exchange techniques in order to improve their theoretical and empirical efficiency, such as computational speed for on-line learning and robust behaviour for off-line optimisation algorithms. For example, multi-objective RL uses tuples of rewards instead of a single reward value, and techniques from multi-objective EAs can be integrated to improve the exploration/exploitation trade-off in complex and large multi-objective environments. Conversely, the problem of selecting the best genetic operator is similar to the problem an agent faces when choosing between alternatives to maximise its cumulative expected reward; practical approaches select the RL method that best solves this online operator selection problem.
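A minimal sketch of the operator selection idea above, cast as a UCB-style bandit (an illustrative toy, not a specific method from the tutorial; the reward distributions are made up):

```python
import math
import random

def select_operator(counts, rewards, t, c=1.0):
    """UCB-style adaptive operator selection: treat each variation operator
    as a bandit arm and pick the one maximizing average reward plus an
    exploration bonus, mirroring the exploration/exploitation trade-off."""
    for op in range(len(counts)):
        if counts[op] == 0:
            return op                    # try every operator at least once
    scores = [rewards[op] / counts[op]
              + c * math.sqrt(math.log(t) / counts[op])
              for op in range(len(counts))]
    return scores.index(max(scores))

# Toy run: operator 1 yields larger (noisy) fitness improvements than operator 0.
rng = random.Random(0)
counts, rewards = [0, 0], [0.0, 0.0]
for t in range(1, 501):
    op = select_operator(counts, rewards, t)
    reward = rng.random() * (0.2 if op == 0 else 1.0)  # simulated improvement
    counts[op] += 1
    rewards[op] += reward
# Over time, the better operator dominates the selection counts.
```

Here the "reward" of an operator would in practice be the fitness improvement it produces, and the bandit policy replaces a fixed operator-probability schedule.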

Madalina M. Drugan
She is a researcher at the Artificial Intelligence Lab, Vrije Universiteit Brussel, Belgium. She received a Diploma Engineer degree in Computer Science from the Technical University of Cluj-Napoca, Romania, and, in 2006, a PhD from the Computer Science Department of Utrecht University, The Netherlands. For more than 13 years, she has done research in several fields of Machine Learning and Optimisation and related topics such as Evolutionary Computation, Bioinformatics, Multi-objective Optimisation, Meta-heuristics, Operational Research, and Reinforcement Learning. She has experience with research grants and reviewing services, and a strong publication record in international peer-reviewed journals and conferences. During the last three years, she has initiated and organised several special sessions, tutorials, and workshops on Reinforcement Learning and Optimisation at IEEE WCCI, IEEE SSCI, PPSN, ESANN, CEC, IJCNN, SIMCO, and EVOLVE.



