Planned Free Tutorials
Two days of free tutorials and workshops (included with conference registration) presented by some of the world's foremost experts in topics of interest to genetic and evolutionary computation researchers and practitioners.

Workshops and Tutorials Schedule

Introductory Tutorials

Genetic algorithms - Erik Goodman
Genetic programming - John R. Koza
Evolution strategies: Basic introduction - Thomas Bäck
Evolutionary computation: A unified view - Kenneth De Jong
Probabilistic model-building genetic algorithms - Martin Pelikan
Learning Classifier Systems - Martin V. Butz
Ant colony optimization - Manuel López-Ibáñez
Grammatical evolution - Conor Ryan
Statistical analysis for evolutionary computation: Introduction - Mark Wineberg, Steffen Christensen
Evolving neural networks - Risto Miikkulainen
Financial Evolutionary Computation - Christopher D. Clack


Advanced Tutorials

Genetic Programming Theory - Riccardo Poli
Bioinformatics - Jason Moore
Representations for evolutionary algorithms - Franz Rothlauf
Foundations of Evolutionary Multi-Objective Optimization - Tobias Friedrich, Frank Neumann
Evolutionary Multi-Criterion Optimization - Kalyanmoy Deb
Constraint handling techniques using EAs - Carlos Coello Coello
Tuning and Experimental Analysis in EC: What We Still Have Wrong - Thomas Bartz-Beielstein, Mike Preuss
Genetic Algorithms Theory - Michael D. Vose
Statistical analysis for evolutionary computation: Advanced Techniques - Mark Wineberg, Steffen Christensen
Computational Complexity and EC - Frank Neumann, Thomas Jansen
Evolutionary Computation on GPUs (with NVIDIA presentation) - Pierre Collet, Wolfgang Banzhaf, Simon Harding
Evolution Strategies and Covariance Matrix Adaptation - Anne Auger, Nikolaus Hansen


Specialized Techniques and Applications

Fitness Landscapes and Problem Hardness in Genetic Programming - Leonardo Vanneschi
Evolution of Quantum Algorithms - Lee Spector
Handling Bloat in GP - Sara Silva
Theory of Randomized Search Heuristics in Combinatorial Optimization - Carsten Witt
Synthetic Biology - Natalio Krasnogor
Generative and Developmental Systems - Kenneth O. Stanley
Real-World Data Modeling - Mark Kotanchek
Experimental Optimization by Evolutionary Algorithms - Ofer Shir, Thomas Bäck, Joshua Knowles
Evolutionary Computer Vision - Gustavo Olague
Digital Evolution with Avida - Benjamin Beckmann, Jeff Clune
Cartesian Genetic Programming - Julian Miller, Simon Harding



Introductory Tutorials


Introduction to Genetic Algorithms

The Introduction to Genetic Algorithms tutorial is aimed at GECCO attendees with limited knowledge of genetic algorithms. It starts "at the beginning," first describing a "classical" genetic algorithm in terms of the biological principles on which it is loosely based, then presenting some of the fundamental results that describe its performance, expressed using the schema concept. It will cover some variations on the classical model, some successful applications of genetic algorithms, and advances that are making genetic algorithms more useful.
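As a concrete illustration of the classical cycle of selection, crossover, and mutation described above, here is a minimal sketch of a generational GA. The one-max fitness function and all parameter values are our own illustrative choices, not part of the tutorial:

```python
import random

def one_max(bits):
    """Toy fitness function: the number of 1s in the bit string."""
    return sum(bits)

def tournament(pop, fits, k=2):
    """Pick k individuals at random, return the fittest of them."""
    picks = random.sample(range(len(pop)), k)
    return pop[max(picks, key=lambda i: fits[i])]

def classical_ga(n_bits=20, pop_size=30, gens=60, p_cross=0.9, p_mut=0.01):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        fits = [one_max(ind) for ind in pop]
        nxt = []
        while len(nxt) < pop_size:
            a, b = tournament(pop, fits), tournament(pop, fits)
            if random.random() < p_cross:            # one-point crossover
                cut = random.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            # bit-flip mutation on both offspring
            nxt += [[bit ^ (random.random() < p_mut) for bit in a],
                    [bit ^ (random.random() < p_mut) for bit in b]]
        pop = nxt[:pop_size]
    return max(pop, key=one_max)

best = classical_ga()
```

On this toy problem the population reliably converges to (near-)all-ones strings within a few dozen generations.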

Erik Goodman

He is a Professor of Electrical and Computer Engineering and of Mechanical Engineering at Michigan State University. He studied genetic algorithms under John Holland at the University of Michigan, before they had been named. His use of a genetic algorithm in 1971 to solve for 40 coefficients of a highly nonlinear model of a bacterial cell was the first known GA application on a real-world problem, and ran for nearly a year on a dedicated computer. He has used genetic algorithms in his research ever since, including for parameterization of complex ecosystem models, for evolution of cooperative behavior in artificial life, for factory layout and scheduling, for protein folding and ligand docking, for design of shape and layup of composite structures, and for data mining and pattern classification. Much of his recent research has been in development of new types of parallel genetic algorithms and in design methods for mechatronic systems using genetic programming. He is co-founder and VP Technology of Red Cedar Technology, which provides tools for automated engineering design based on evolutionary computation. He chaired ICGA-97 and GECCO-2001, chaired GECCO's sponsoring organization, ISGEC, from 2001-2004, and was the founding chair of ACM SIGEVO, its successor. He was named Michigan Distinguished Professor of the Year in 2009 by the Presidents Council, State Universities of Michigan.


Introduction to Genetic Programming: From the Basics to Human-Competitive Results

The tutorial will start with a description of the problem addressed by genetic programming, a description of the basic genetic programming algorithm, and examples of applications. The tutorial will also describe advanced topics, such as use of a developmental process within genetic programming; implementations of automatically defined functions (subroutines), memory, iterations, recursions; parallel processing; the connection between Moore's Law and the results produced by genetic programming; and a brief survey of over 80 examples of human-competitive results using genetic programming.

John R. Koza

He is a consulting professor in the Electrical Engineering Department at Stanford University. He is the author of four books and over 300 papers on genetic programming. In 1972, he received his PhD in Computer Science from the University of Michigan. Between 1973 and 1987, he was chairman and Chief Executive Officer of Scientific Games Inc., a company that provided rub-off instant lottery tickets and computer systems to state lotteries. He has taught a course in genetic algorithms and genetic programming at Stanford University in most years since 1988.


Evolutionary Computation: A Unified Approach

The field of Evolutionary Computation has experienced tremendous growth over the past 20 years, resulting in a wide variety of evolutionary algorithms and applications. The result poses an interesting dilemma for many practitioners in the sense that, with such a wide variety of algorithms and approaches, it is often hard to see the relationships between them, assess strengths and weaknesses, and make good choices for new application areas.

This tutorial is intended to give an overview of a general EC framework that can help compare and contrast approaches, encourages crossbreeding, and facilitates intelligent design choices. The use of this framework is then illustrated by showing how traditional EAs can be compared and contrasted with it, and how new EAs can be effectively designed using it.

Finally, the framework is used to identify some important open issues that need further research.

Kenneth A. De Jong

He received his Ph.D. in computer science from the University of Michigan in 1975. He joined George Mason University in 1984 and is currently a Professor of Computer Science, head of the Evolutionary Computation laboratory, and the Associate Director of the Krasnow Institute. His research interests include evolutionary computation, machine learning, and adaptive systems. He is currently involved in research projects involving the development of new evolutionary algorithm (EA) theory, the use of EAs as heuristics for NP-hard problems, and the application of EAs to the problem of learning task programs in domains such as robotics, diagnostics, navigation and game playing. He is also interested in experience-based learning in which systems must improve their performance while actually performing the desired tasks in environments not directly under their control or the control of a benevolent teacher. He is an active member of the Evolutionary Computation research community and has been involved in organizing many of the workshops and conferences in this area. He is the founding editor-in-chief of the journal Evolutionary Computation (MIT Press), and a member of the board of the ACM SIGEVO. He is the recipient of an IEEE Pioneer award in the field of Evolutionary Computation and a lifetime achievement award from the Evolutionary Programming Society.


Probabilistic Model-Building Genetic Algorithms (PMBGAs)

Probabilistic model-building genetic algorithms (PMBGAs) replace the traditional variation operators of genetic and evolutionary algorithms by (1) building a probabilistic model of promising solutions and (2) sampling the built model to generate new candidate solutions. PMBGAs are also known as estimation of distribution algorithms (EDAs) and iterated density-estimation evolutionary algorithms (IDEAs).

Replacing traditional crossover and mutation operators by building and sampling a probabilistic model of promising solutions enables the use of machine learning techniques for automatic discovery of problem regularities and exploitation of these regularities for effective exploration of the search space. Using machine learning in optimization enables the design of optimization techniques that can automatically adapt to the given problem. There are many successful applications of PMBGAs, for example, Ising spin glasses in 2D and 3D, graph partitioning, MAXSAT, feature subset selection, forest management, groundwater remediation design, telecommunication network design, antenna design, and scheduling.
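The build-and-sample loop described above can be sketched with the univariate marginal distribution algorithm (UMDA), one of the simplest PMBGAs. The one-max problem and all parameter values here are our own illustrative choices:

```python
import random

def umda(fitness, n_bits, pop_size=100, n_sel=50, gens=50):
    """Univariate marginal distribution algorithm: the 'model' is one
    independent probability per bit position."""
    probs = [0.5] * n_bits
    for _ in range(gens):
        # (2) sample the current model to create candidate solutions
        pop = [[int(random.random() < p) for p in probs] for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        sel = pop[:n_sel]                                # truncation selection
        # (1) rebuild the model from the promising solutions
        probs = [sum(ind[i] for ind in sel) / n_sel for i in range(n_bits)]
        probs = [min(max(p, 1 / n_bits), 1 - 1 / n_bits) for p in probs]  # avoid fixation
    return max(pop, key=fitness)

best = umda(sum, n_bits=16)   # one-max: fitness is simply the number of 1s
```

More sophisticated PMBGAs such as BOA replace the per-bit probabilities with a model (e.g. a Bayesian network) that can capture dependencies between variables.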

The tutorial Probabilistic Model-Building GAs will provide a gentle introduction to PMBGAs with an overview of major research directions in this area. Strengths and weaknesses of different PMBGAs will be discussed, and suggestions will be provided to help practitioners choose the best PMBGA for their problem.

Martin Pelikan

He received his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 2002. He is now an associate professor at the Department of Mathematics and Computer Science at the University of Missouri in St. Louis. In 2006 Pelikan founded the Missouri Estimation of Distribution Algorithms Laboratory (MEDAL). Prior to joining the University of Missouri, he worked at the Illinois Genetic Algorithms Laboratory (IlliGAL), the German National Center for Information Technology in Sankt Augustin, the Slovak University of Technology in Bratislava, and the Swiss Federal Institute of Technology (ETH) in Zurich.

Pelikan has worked as a researcher in genetic and evolutionary computation since 1995. His most important contributions to genetic and evolutionary computation are the Bayesian optimization algorithm (BOA), the hierarchical BOA (hBOA), the scalability theory for BOA and hBOA, and the efficiency enhancement techniques for BOA and hBOA. His current research focuses on extending BOA and hBOA to other problem domains, applying genetic and evolutionary algorithms to real-world problems with the focus on physics, bioinformatics and machine learning, and designing new efficiency enhancement techniques for BOA and other evolutionary algorithms.


Learning Classifier Systems

In the 1970s, John H. Holland designed Learning Classifier Systems (LCSs) as highly adaptive, cognitive systems. Since then, LCSs have come a long way. Interest decreased sharply in the late 80s and early 90s due to the complex interactions of several learning mechanisms. However, since the introduction of the accuracy-based XCS classifier system by Stewart W. Wilson in 1995, and the modular analysis of several LCSs thereafter, interest has regained momentum. Current research has shown that LCSs can effectively solve data-mining problems, reinforcement learning problems, other predictive problems, and cognitive control problems. Their performance is competitive with other machine learning methods, while learning takes place online and is often more flexible, applicable to a larger range of problems, and highly adaptive. Moreover, system knowledge can be easily extracted.
The Learning Classifier System tutorial provides a gentle introduction to LCSs and their general functioning. It then surveys the current theoretical understanding of the systems and their proper application to various problem domains. Finally, we provide a suite of current successful LCS applications and discuss the most promising areas for future applications.

Martin V. Butz

Dr. Butz received his PhD in computer science at the University of Illinois at Urbana-Champaign in October 2004 under the supervision of David E. Goldberg. His thesis "Rule-based evolutionary online learning systems: Learning Bounds, Classification, and Prediction" puts forward a modular, facet-wise system analysis for Learning Classifier Systems (LCSs) and analyzes and enhances the XCS classifier system. Until September 2007, Butz was working at the University of Würzburg at the Department of Cognitive Psychology III on the interdisciplinary cognitive systems project "MindRACES: From reactive to anticipatory cognitive embodied systems". In October 2007 he founded his own cognitive systems laboratory: "Cognitive Bodyspaces: Learning and Behavior" (COBOSLAB), funded by the German research foundation under the Emmy Noether framework.


Ant Colony Optimization

Ant colony optimization (ACO) is a metaheuristic for tackling combinatorial as well as continuous optimization problems. The biological inspiration of ACO is the foraging behaviour of some species of ant colonies. Since the first ACO algorithm was proposed at the beginning of the 1990s, there has been a continuous and growing interest in the technique, with many successful applications to both traditional combinatorial problems and real-world applications.

The tutorial is divided into two parts. The first part is a detailed introduction to what ant colony optimization is (and what it is not), with emphasis on its biological inspiration, modern ACO algorithms, and successful applications to various problems.

The second part of the tutorial will cover several ongoing research topics in ACO. First, hybridizations of ACO algorithms with other optimization techniques will be discussed: many successful applications of ACO algorithms are actually ACO hybrids, and the number and variety of such combinations is growing strongly. A second topic covered in this part of the tutorial will be parameter tuning and adaptation; recent experimental results will be presented and pointers for pursuing this research direction will be discussed. A third part of the tutorial will review the current state of the art in ACO algorithms for multi-objective optimization problems, with a focus on the different design choices that have been made in the literature. Finally, the tutorial will conclude with a review of recent advances in ACO theory, providing many references for the interested reader.
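The construct-evaporate-deposit cycle at the heart of ACO can be sketched as a basic Ant System for the travelling salesman problem. This is a simplified illustration with parameters of our own choosing, not the tutorial's reference implementation:

```python
import math
import random

def ant_tour(dist, tau, alpha=1.0, beta=2.0):
    """Build one tour city by city; the next city is chosen with probability
    proportional to pheromone^alpha * (1/distance)^beta."""
    n = len(dist)
    tour = [random.randrange(n)]
    while len(tour) < n:
        i = tour[-1]
        cand = [j for j in range(n) if j not in tour]
        weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in cand]
        tour.append(random.choices(cand, weights=weights)[0])
    return tour

def tour_len(dist, tour):
    return sum(dist[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

def aco_tsp(dist, n_ants=20, iters=50, rho=0.5, q=1.0):
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # uniform initial pheromone
    best = None
    for _ in range(iters):
        tours = [ant_tour(dist, tau) for _ in range(n_ants)]
        for i in range(n):                       # evaporation
            for j in range(n):
                tau[i][j] *= 1 - rho
        for t in tours:                          # deposit: shorter tours add more
            dep = q / tour_len(dist, t)
            for k in range(n):
                a, b = t[k], t[(k + 1) % n]
                tau[a][b] += dep
                tau[b][a] += dep
        it_best = min(tours, key=lambda t: tour_len(dist, t))
        if best is None or tour_len(dist, it_best) < tour_len(dist, best):
            best = it_best
    return best

# four cities on a unit square: the optimal tour is the perimeter, length 4
cities = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(a, b) for b in cities] for a in cities]
best = aco_tsp(dist)
```

The hybrids mentioned above typically add a local-search step (e.g. 2-opt) on each constructed tour before the pheromone update.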

Manuel López-Ibáñez

He is a post-doctoral researcher at the IRIDIA laboratory in the Université Libre de Bruxelles, Brussels, Belgium. He received a M.S. degree in Computer Science from the University of Granada (Spain) in 2004, and Ph.D. from Edinburgh Napier University, U.K. in 2009. His current research is on ant colony optimization, multi-objective optimization problems, and the experimental analysis and automatic configuration of stochastic optimization algorithms.




Evolving Neural Networks

Neuroevolution, i.e. evolution of artificial neural networks, has recently emerged as a powerful technique for solving challenging reinforcement learning problems. Compared to traditional (e.g. value-function based) methods, neuroevolution is especially strong in domains where the state of the world is not fully known: the state can be disambiguated through recurrency, and novel situations handled through pattern matching. In this tutorial, we will review (1) neuroevolution methods that evolve fixed-topology networks, network topologies, and network construction processes, (2) ways of combining traditional neural network learning algorithms with evolutionary methods, and (3) applications of neuroevolution to control, robotics, artificial life, and games.
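The simplest of the methods listed above, evolving the weights of a fixed-topology network, can be sketched on the XOR task. The network shape, the mutation scheme, and all parameter values are illustrative assumptions of ours:

```python
import math
import random

def forward(w, x):
    """Fixed topology: 2 inputs, 2 tanh hidden units, 1 tanh output (9 weights)."""
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(w):
    """Negative squared error over the four XOR cases."""
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=100, gens=300, sigma=0.5, n_parents=20):
    pop = [[random.gauss(0, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:n_parents]                # elitist truncation selection
        pop = parents + [[wi + random.gauss(0, sigma)
                          for wi in random.choice(parents)]
                         for _ in range(pop_size - n_parents)]
    return max(pop, key=fitness)

best = evolve()
```

Topology-evolving methods such as NEAT extend this idea by also mutating the network structure, not just the weights.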

Risto Miikkulainen
He is a Professor of Computer Sciences at the University of Texas at Austin. He received an M.S. in Engineering from the Helsinki University of Technology, Finland, in 1986, and a Ph.D. in Computer Science from UCLA in 1990. His recent research focuses on methods for evolving neural networks and applying these methods to game playing, robotics, and intelligent control. He is an author of over 250 articles on neuroevolution, connectionist natural language processing, and the computational neuroscience of the visual cortex. He is an editor of Machine Learning, Neural Networks, IEEE Transactions on Computational Intelligence and AI in Games, Cognitive Systems Research, and IEEE Transactions on Autonomous Mental Development.


Financial Evolutionary Computation

Financial investment and trading have been transformed through the application of mathematical analysis and computer technology. The research problems posed by financial computing are extremely challenging, taxing both mathematicians and computer scientists. While traditional computational techniques have yet to provide an efficient means for numerical evaluation of the complex equations produced by financial mathematicians, evolutionary computing has been remarkably effective and Financial Evolutionary Computing is currently a fertile area of research.

The Introduction to Financial Evolutionary Computing (FEC) Tutorial is aimed at GECCO attendees with limited knowledge of finance. The tutorial will introduce the area of FEC, provide a basic understanding of trading and investment, identify some of the main research challenges, describe who is working in the area, and explain how to get started on FEC research. Topics will include, for example, stock selection, calculation of value at risk, and modelling financial markets.

Christopher D. Clack
He is Director of Financial Computing at UCL. He has recently been awarded the Doctor of Science (ScD) by the University of Cambridge (their highest award in science). He founded UCL's Virtual Trading Floor and UCL's MSc in Financial Computing, and has attracted funds exceeding £4 million in the last three years. He is CEO and Founding Principal of the Centre for Financial Computing and Director of the new Financial Services Knowledge Transfer Network. He regularly chairs and presents at both industry and academic conferences and has previously presented a tutorial on Financial Evolutionary Computing at both GECCO 2008 and GECCO 2009.

His research team focuses on applying Genetic Computation and multi-agent systems to real-world problems in finance, and his work on GP robustness in highly volatile markets won a Best Paper award at GECCO 2007. He has twenty years' experience of consulting in Financial Computing, from settlement systems to portfolio optimization, and has established very close partnerships with Credit Suisse, Goldman Sachs, Bank of America Merrill Lynch, Morgan Stanley, Thomson Reuters, and the London Stock Exchange. This provides unrivalled exposure to the most pressing technology problems in finance, coupled with invaluable access to real data.


Advanced Tutorials


Representations for Evolutionary Algorithms

Successful and efficient use of evolutionary algorithms (EAs) depends on the choice of the genotype, the problem representation (mapping from genotype to phenotype) and on the choice of search operators that are applied to the genotypes. These choices cannot be made independently of each other. The question whether a certain representation leads to better performing EAs than an alternative representation can only be answered when the operators applied are taken into consideration. The reverse is also true: deciding between alternative operators is only meaningful for a given representation.

In EA practice one can distinguish two complementary approaches. The first approach uses indirect representations where a solution is encoded in a standard data structure, such as strings, vectors, or discrete permutations, and standard off-the-shelf search operators are applied to these genotypes. To evaluate the solution, the genotype needs to be mapped to the phenotype space. The proper choice of this genotype-phenotype mapping is important for the performance of the EA search process. The second approach, the direct representation, encodes solutions to the problem in its most 'natural' space and designs search operators to operate on this representation.

Research in the last few years has identified a number of key concepts to analyse the influence of representation-operator combinations on EA performance.

These concepts are
• locality and
• redundancy.

Locality is a result of the interplay between the search operator and the genotype-phenotype mapping. Representations are redundant if the number of possible genotypes exceeds the number of phenotypes. Furthermore, redundant representations can lead to biased encodings if some phenotypes are on average represented by a larger number of genotypes. Finally, a bias need not be the result of the representation but can also be caused by the search operator.
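The redundancy and bias notions can be made concrete with a toy genotype-phenotype mapping (an illustration of our own, not from the tutorial). Both mappings below are redundant, since eight genotypes encode only two phenotypes, but only the second is biased:

```python
from collections import Counter
from itertools import product

def majority(genotype):
    """Redundant mapping: 8 genotypes, only 2 phenotypes."""
    return int(sum(genotype) >= 2)

def any_one(genotype):
    """Redundant AND biased mapping: phenotype 1 is over-represented."""
    return int(sum(genotype) >= 1)

genotypes = list(product([0, 1], repeat=3))
print(Counter(majority(g) for g in genotypes))   # each phenotype: 4 genotypes
print(Counter(any_one(g) for g in genotypes))    # phenotype 1: 7, phenotype 0: 1
```

Under uniform initialization, a search using `any_one` starts with phenotype 1 seven times as often as phenotype 0, which is exactly the kind of encoding bias the tutorial analyzes.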

The tutorial gives a brief overview about existing guidelines for representation design, illustrates the different aspects of representations, gives a brief overview of theoretical models describing the different aspects, and illustrates the relevance of the aspects with practical examples.

It is expected that the participants have a basic understanding of EA principles.

Franz Rothlauf

He received a Diploma in Electrical Engineering from the University of Erlangen, Germany, a Ph.D. in Information Systems from the University of Bayreuth, Germany, and a Habilitation from the University of Mannheim, Germany, in 1997, 2001, and 2007, respectively.

He is currently chair of Information Systems at Mainz University, Germany. He has published more than 50 technical papers in the context of evolutionary computation, co-edited several conference proceedings, and is the author of the book "Representations for Genetic and Evolutionary Algorithms", which was published in a second edition in 2006.

His main research interests are problem representations for heuristic search approaches, especially evolutionary algorithms. He is a member of the Editorial Board of the Evolutionary Computation Journal and the Journal of Artificial Evolution and Applications. He has organized several workshops on representation issues, chaired EvoWorkshops in 2005 and 2006, co-organized the European workshop series on "Evolutionary Computation in Communications, Networks, and Connected Systems" and on "Evolutionary Computation in Transportation and Logistics", and co-chaired the program committee of the GA track at GECCO 2006. He was conference chair of GECCO 2009.


Foundations of Evolutionary Multi-Objective Optimization

Evolutionary algorithms have been shown to be especially successful when dealing with multi-objective problems. The area of evolutionary multi-objective optimization has grown rapidly in recent years.

In this advanced tutorial, we give an overview on the different results that have been achieved on the theoretical aspects of evolutionary multi-objective optimization.
The tutorial will complement the tutorial on "Evolutionary Multi-Objective Optimization" by Eckart Zitzler.

There has been a lot of progress on the foundational side of evolutionary multi-objective optimization. The first part of the tutorial will cover convergence and runtime analysis of evolutionary multi-objective optimization. We will also discuss how multi-objective models of single-objective optimization problems can provably lead to faster evolutionary algorithms for different combinatorial optimization problems.

How to achieve a good spread over the Pareto front is one of the big questions in evolutionary multi-objective optimization. Therefore the second part is dedicated to several diversity mechanisms, particularly the hypervolume indicator. This indicator plays an important role and has found many applications. We present recent results on complexity and approximability of the hypervolume and discuss what they imply for applications.
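For two minimization objectives, the hypervolume indicator mentioned above reduces to the area dominated by the front and bounded by a reference point, which a simple sweep computes exactly. This sketch assumes the input is a non-dominated front of 2-D points under minimization:

```python
def hypervolume_2d(front, ref):
    """Hypervolume for two minimization objectives: the area dominated by
    the front and bounded by the reference point ref."""
    pts = sorted(front)                # ascending in f1 -> descending in f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # one vertical slab per point
        prev_f2 = f2
    return hv

front = [(3, 1), (1, 3), (2, 2)]           # a non-dominated front
hv = hypervolume_2d(front, ref=(4, 4))     # slabs of area 3 + 2 + 1 = 6.0
```

Exact computation stays easy in two dimensions; the complexity and approximability questions the tutorial discusses arise as the number of objectives grows.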

Tobias Friedrich

He received his MSc in computer science from the University of Sheffield, UK, in 2002 and his diploma in mathematics from the University of Jena, Germany, in 2005. In 2007 he obtained a PhD in theoretical computer science from the Saarland University, Germany.

Currently, he is a member of the Algorithms Group at the International Computer Science Institute in Berkeley, USA.
The central topic of his work is randomized algorithms (both classical and evolutionary) and randomized methods in mathematics and computer science in general. For the last two years, Tobias has served on the program committee of GECCO. He also won best paper awards in the tracks "Genetic Algorithms" (GECCO 2008) and "Evolutionary Multiobjective Optimization" (GECCO 2009).



Frank Neumann

He studied Technical Computer Science in Dortmund and Kiel, Germany. He received his diploma and Ph.D. from the Christian-Albrechts-University of Kiel in 2002 and 2006, respectively. Currently, he is the coordinator of the research area "Bio-inspired Computation" within the Algorithms and Complexity Group at the Max Planck Institute for Informatics in Saarbruecken, Germany. In his work, he considers theoretical aspects of evolutionary computation methods, in particular for combinatorial and multi-objective optimization problems, and has received six best paper awards at leading international conferences such as GECCO and PPSN. Frank was co-chair of the track "Formal Theory" at GECCO 2007 and chaired the track "Evolutionary Combinatorial Optimization" at GECCO 2008. He has given tutorials on "Computational Complexity and Evolutionary Computation" (together with Thomas Jansen) at GECCO (2007-2009) and a tutorial on "Computational Complexity of Evolutionary Computation in Combinatorial Optimization" at PPSN 2008 (together with Carsten Witt).



Evolutionary Multi-Criterion Optimization

Many real-world search and optimization problems are naturally posed as non-linear programming problems having multiple conflicting objectives. Due to a lack of suitable solution techniques, such problems are usually artificially converted into a single-objective problem and solved. The difficulty arises because multi-objective optimization problems give rise to a set of Pareto-optimal solutions, each corresponding to a certain trade-off among the objectives. It then becomes important to find not just one Pareto-optimal solution but as many of them as possible. Classical a posteriori MCDM methods are inefficient because they require repeated application to find multiple Pareto-optimal solutions, and on some occasions even repeated applications do not guarantee finding distinct Pareto-optimal solutions. The population approach of evolutionary algorithms (EAs) offers an efficient way to find multiple Pareto-optimal solutions simultaneously in a single simulation run.
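The Pareto-dominance relation underlying the trade-off set described above can be sketched in a few lines (minimization is assumed; the sample points are our own):

```python
def dominates(a, b):
    """True if a Pareto-dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(points):
    """The non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
front = non_dominated(pts)   # (3, 4) and (5, 5) are dominated
```

EMO algorithms such as NSGA-II build on exactly this comparison, combined with a diversity mechanism to spread the population along the front.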

In this tutorial, we shall contrast the differences in philosophies between classical and evolutionary multi-objective methodologies and provide adequate fundamentals needed to understand and use both methodologies in practice. Particularly, major state-of-the-art evolutionary multi-objective optimization (EMO) methodologies will be discussed in detail in the context of their computer implementations.

Thereafter, three main application areas of EMO will be discussed with adequate case studies from practice --

(i) applications showing better decision-making abilities through EMO,
(ii) applications exploiting the multitude of trade-off solutions of EMO in extracting useful information in a problem, and
(iii) applications showing better problem-solving abilities in various other tasks (such as, reducing bloating, solving single-objective constraint handling, and others).

Clearly, EAs have a niche in solving multi-objective optimization problems compared to classical methods. This is why EMO methodologies have been receiving growing attention in recent years. Since this is a comparatively new field of research, the tutorial will also discuss a number of future challenges in the research and application of multi-objective optimization.

This tutorial is aimed at both novices and users of EMO. Those without any knowledge of EMO will gain an adequate idea of the procedures and their importance in computing and problem-solving tasks. Those who have been practicing EMO will also find enough ideas and material for future research, learn about state-of-the-art results and techniques, and be able to make a comparative evaluation of their own research.

Kalyanmoy Deb

He holds the Deva Raj Chair Professorship at the Indian Institute of Technology Kanpur in India. He is the recipient of the prestigious Edgeworth-Pareto Award of the Multiple Criterion Decision Making (MCDM) Society, one of the highest awards given in the field of multi-criterion optimization and decision making. He has also received the prestigious Shanti Swarup Bhatnagar Prize in Engineering Sciences for the year 2005 from the Government of India.

He has also received the 'Thomson Citation Laureate Award' from Thomson Scientific for having the highest number of citations in Computer Science in India during the past ten years. He is a fellow of the Indian National Academy of Engineering (INAE), the Indian National Academy of Sciences, and the International Society for Genetic and Evolutionary Computation (ISGEC). He received the Friedrich Wilhelm Bessel Research Award from the Alexander von Humboldt Foundation in 2003. His main research interests are in the areas of computational optimization, modeling and design, and evolutionary algorithms. He has written two textbooks on optimization and more than 240 international journal and conference research papers. He is a pioneer and leader in the field of evolutionary multi-objective optimization. He is an associate editor of two major international journals and an editorial board member of five major journals.



Constraint-Handling Techniques used with Evolutionary Algorithms

Evolutionary Algorithms (EAs), when used for global optimization, can be seen as unconstrained optimization techniques. Therefore, they require an additional mechanism to incorporate constraints of any kind (i.e., inequality, equality, linear, nonlinear) into their fitness function. Although the use of penalty functions (very popular with mathematical programming techniques) may seem an obvious choice, this sort of approach requires careful fine-tuning of the penalty factors. Otherwise, an EA may be unable to reach the feasible region (if the penalty is too low) or may quickly reach the feasible region but be unable to locate solutions that lie on the boundary with the infeasible region (if the penalty is too severe). This has motivated the development of a number of approaches to incorporate constraints into the fitness function of an EA. This tutorial will cover the main proposals in current use, including novel approaches such as the use of tournament rules based on feasibility, multiobjective optimization concepts, hybrids with mathematical programming techniques (e.g., Lagrange multipliers), cultural algorithms, and artificial immune systems, among others. Other topics, such as the importance of maintaining diversity, current benchmarks, and the use of alternative search engines (e.g., particle swarm optimization, differential evolution, evolution strategies, etc.), will also be discussed (as time allows).
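Two of the constraint-handling ideas mentioned above, a static penalty function and feasibility-based tournament rules, can be sketched as follows. The convention that a constraint g is satisfied when g(x) <= 0 and the fixed penalty factor are illustrative assumptions of ours:

```python
def penalized_fitness(x, f, constraints, penalty=1000.0):
    """Static penalty (minimization): each constraint g is satisfied when
    g(x) <= 0; any violation is added to the objective, scaled by a fixed
    penalty factor that must be tuned by hand."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return f(x) + penalty * violation

def feasibility_tournament(a, b, f, constraints):
    """Feasibility-rule comparison: feasible beats infeasible; two feasible
    solutions compare by objective value; two infeasible ones compare by
    total constraint violation. No penalty factor to tune."""
    va = sum(max(0.0, g(a)) for g in constraints)
    vb = sum(max(0.0, g(b)) for g in constraints)
    if va == 0 and vb == 0:
        return a if f(a) <= f(b) else b
    if va == 0 or vb == 0:
        return a if va == 0 else b
    return a if va <= vb else b
```

For example, when minimizing f(x) = x^2 subject to x >= 1 (written as g(x) = 1 - x <= 0), the penalized fitness of the infeasible point x = 0 is 1000, while the feasibility rules prefer any feasible point outright, with no penalty factor involved.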

Carlos Artemio Coello Coello :

He received a BSc in Civil Engineering from the Universidad Autonoma de Chiapas in Mexico in 1991 (graduating Summa Cum Laude). He was then awarded a scholarship from the Mexican government to pursue graduate studies in Computer Science at Tulane University (in the USA), where he received an MSc and a PhD in Computer Science in 1993 and 1996, respectively. Dr. Coello has been a Senior Research Fellow at the Plymouth Engineering Design Centre (in England) and a Visiting Professor at DePauw University (in the USA). He is currently a full professor at CINVESTAV-IPN in Mexico City, Mexico. He has published over 200 papers in international peer-reviewed journals, book chapters, and conferences. He has also co-authored the book "Evolutionary Algorithms for Solving Multi-Objective Problems", now in its Second Edition (Springer, 2007), and has co-edited the book "Applications of Multi-Objective Evolutionary Algorithms" (World Scientific, 2004). He has delivered invited talks, keynote speeches and tutorials at international conferences held in Spain, USA, Canada, Switzerland, UK, Chile, Colombia, Brazil, Argentina, India, Uruguay and Mexico. Dr. Coello has served as a technical reviewer for over 50 international journals and more than 60 international conferences, and currently serves as an associate editor of the journals "IEEE Transactions on Evolutionary Computation", "Evolutionary Computation", "Computational Optimization and Applications", "Pattern Analysis and Applications" and the "Journal of Heuristics". He is also a member of the editorial boards of the journals "Soft Computing", "Engineering Optimization", the "Journal of Memetic Computing" and the "International Journal of Computational Intelligence Research". He received the 2007 National Research Award (granted by the Mexican Academy of Science) in the area of "exact sciences". He is a member of the Mexican Academy of Science, a Senior Member of the IEEE, and a member of Sigma Xi, The Scientific Research Society.
His current research interests are: evolutionary multiobjective optimization, constraint-handling techniques for evolutionary algorithms and evolvable hardware.


Tuning and Experimental Analysis in EC: What We Still Have Wrong

In a nutshell, this tutorial aims at:

  1. Making your algorithms faster (and more reliable)
  2. Helping you to understand your algorithms (so that forthcoming versions will run even faster)

However, we will also discuss the drawbacks of the tuning procedure. In more detail, the tutorial will treat the following topics:

After a short overview of new experimental approaches in other fields, and especially in the philosophy of science ('New Experimentalism'), we discuss questions of experimental setup, reporting, visualization, and analysis of results in the context of non-deterministic optimization methods. In recent years, considerable developments (e.g. the BBOB competitions) have increased our knowledge of the strengths and weaknesses of specific algorithms. In this respect, methodological questions such as McGeoch's "What to measure?" have been taken up by a wider audience. However, in spite of Hooker's warning ("we have it all wrong") issued more than a decade ago, many current experimental investigations still provide much less insight than they easily could. We discuss these pitfalls and provide hints on how to circumvent many of them, also giving an overview of simple techniques of experimental design (e.g. space-filling designs).

Tuning techniques and their benefits are presented, with particular emphasis on sequential parameter optimization (SPO) but also a look at available alternatives. Based on several examples from theory and practice, we demonstrate how SPO significantly improves the performance of many search heuristics. However, this performance gain is not available for free; the costs of the tuning process are therefore discussed, as well as its limitations. Besides performance, one may also be interested in the adaptability of algorithms (how far they can be stretched to meet specific problem requirements), a largely open issue but also a step towards algorithm-supported design of optimization methods, which in turn requires a good tuning technique.
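To make the basic idea of tuning concrete, here is a naive random-search tuner, deliberately much simpler than SPO: it samples parameter configurations, averages several stochastic runs per configuration, and keeps the best. The `target_algorithm` below is a made-up stand-in for a real EA run, and all names are illustrative.

```python
import random
import statistics

def target_algorithm(mutation_rate, seed):
    """Stand-in for the optimizer being tuned: returns a performance
    value (lower is better). A toy response surface with its optimum
    at mutation_rate = 0.1, plus a little noise."""
    rng = random.Random(seed)
    return (mutation_rate - 0.1) ** 2 + rng.gauss(0, 0.001)

def tune(n_configs=20, repeats=5, seed=42):
    """Naive tuner: sample configurations uniformly at random and
    average `repeats` stochastic runs per configuration. SPO refines
    this idea with a regression model and sequential sampling."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(n_configs):
        rate = rng.uniform(0.0, 0.5)
        score = statistics.mean(target_algorithm(rate, rng.random())
                                for _ in range(repeats))
        if score < best_score:
            best_cfg, best_score = rate, score
    return best_cfg

print(round(tune(), 3))   # a mutation rate near 0.1
```

Even this crude loop illustrates the cost issue discussed above: tuning multiplies the number of algorithm runs by `n_configs * repeats`.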

Finally, as well-known benchmark problems can be employed to analyze existing algorithms, this may also be applied in the other direction, leading to the experimental analysis of problems/landscapes by means of optimization, tuning and visualization.
We conclude by summarizing the resolved issues as guidelines, and pointing to the unresolved issues which call for further attention. 

Thomas Bartz-Beielstein :

Dr. Thomas Bartz-Beielstein is a professor of Applied Mathematics at Cologne University of Applied Sciences. He has published several dozen research papers, presented tutorials about tuning, and has edited several books in the field of Computational Intelligence, e.g., "Experimental Research in EC". His research interests include optimization, simulation, and statistical analysis of complex real-world problems. Prof. Bartz-Beielstein and Mike Preuss invented sequential parameter optimization, which has been applied as a tuner for numerous optimization algorithms such as evolution strategies, differential evolution, and particle swarm optimization.

Mike Preuss:

He has been a Research Associate at the Computer Science Department, University of Dortmund, Germany, since 2000, and received his Diploma degree there in 1998.

His research interests focus on the field of evolutionary algorithms for real-valued problems, namely on multimodal and multiobjective niching and the experimental methodology for (non-deterministic) optimization algorithms.

He is currently working on the adaptability and applicability of computational intelligence techniques for various engineering domains and computer games.


Genetic Algorithms Theory

There are many interesting things we can study about genetic algorithms from a theoretical point of view. There are structural issues, such as how to design operators for different search spaces, and there are dynamical issues, such as working out the probability of future generations and where a GA will spend most of its time. We will touch on a number of these points from a general perspective, asking what can be said about how to design GAs and how they behave, independent of any particular optimization problem.


Michael D. Vose:

He has PhDs in Mathematics and Computer Science from the University of Texas at Austin. He is now a member of the Department of Electrical Engineering and Computer Science at the University of Tennessee, Knoxville. He has, amongst other things, contributed to the theory of evolutionary algorithms by publishing papers on the subject, and authoring the graduate-level text book "The Simple Genetic Algorithm: Foundations and Theory". He has also helped organize several workshops and conferences in the field.


Computational Complexity and Evolutionary Computation

Evolutionary algorithms and other nature-inspired search heuristics like ant colony optimization have been shown to be very successful when dealing with real-world applications or problems from combinatorial optimization. In recent years, analyses have shown that these general randomized search heuristics can be analyzed like "ordinary" randomized algorithms and that such analyses of the expected optimization time yield deeper insights into the functioning of evolutionary algorithms in the context of approximation and optimization. This is an important research area where a lot of interesting questions are still open.

The tutorial enables attendees to analyze the computational complexity of evolutionary algorithms and other search heuristics in a rigorous way. An overview of the tools and methods developed within the last 15 years is given and practical examples of the application of these analytical methods are presented.
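As a taste of the kind of object these analyses study, here is the (1+1) EA on the OneMax toy problem, whose expected optimization time is known to be Theta(n log n). The implementation is a common textbook formulation, not code from the tutorial.

```python
import random

def one_plus_one_ea(n, seed=0):
    """(1+1) EA on OneMax: flip each bit independently with probability
    1/n, accept the offspring if it is not worse. Returns the number of
    iterations until the all-ones optimum is found; the expected value
    of this count is Theta(n log n)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx, steps = sum(x), 0
    while fx < n:
        y = [1 - b if rng.random() < 1.0 / n else b for b in x]
        fy = sum(y)
        if fy >= fx:            # accept if not worse
            x, fx = y, fy
        steps += 1
    return steps

print(one_plus_one_ea(50))     # typically a few hundred iterations for n = 50
```

Averaging `steps` over many seeds and several values of n makes the n log n growth visible empirically, which is a useful companion to the rigorous proofs covered in the tutorial.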

Frank Neumann

He studied Technical Computer Science in Dortmund and Kiel, Germany. He received his diploma and Ph.D. from the Christian-Albrechts-University of Kiel in 2002 and 2006, respectively. Currently, he is the coordinator of the research area "Bio-inspired Computation" within the Algorithms and Complexity Group at the Max Planck Institute for Informatics in Saarbruecken, Germany. In his work, he considers theoretical aspects of evolutionary computation methods, in particular for combinatorial and multi-objective optimization problems, and has received six best paper awards at leading international conferences such as GECCO and PPSN. Frank was a co-chair of the track "Formal Theory" at GECCO 2007 and chaired the track "Evolutionary Combinatorial Optimization" at GECCO 2008. He has given tutorials on "Computational Complexity and Evolutionary Computation" (together with Thomas Jansen) at GECCO (2007-2009) and a tutorial on "Computational Complexity of Evolutionary Computation in Combinatorial Optimization" at PPSN 2008 (together with Carsten Witt).

More information can be found at:


Thomas Jansen

He was born in 1969 and studied Computer Science at the University of Dortmund, Germany, where he received his diploma (1996) and Ph.D. (2000). From September 2001 to August 2002 he was a Post-Doc at Kenneth De Jong's EClab at George Mason University in Fairfax, VA. He was Juniorprofessor for Computational Intelligence at the Technical University of Dortmund from September 2002 to February 2009. Since March 2009 he has been a Stokes Lecturer at the Department of Computer Science at University College Cork, Ireland. His research centers on the design and theoretical analysis of evolutionary algorithms and other randomized search heuristics. He has published 14 journal papers and 29 conference papers and contributed six book chapters. He is an associate editor of Evolutionary Computation (MIT Press), a member of the steering committee of the Theory of Randomized Search Heuristics workshop series, was program chair at PPSN 2008, and co-organized FOGA 2009, a workshop on Bridging Theory and Practice (PPSN 2008), and two Dagstuhl workshops on the Theory of Evolutionary Computation (2004 and 2006).


Evolutionary Computation on GPUs (with NVIDIA presentation)

GPGPU (General Purpose Graphics Processing Unit) cards are about to revolutionize evolutionary computation because of their monstrous computing power that evolutionary algorithms can directly use.

In order to obtain fluid games with realistic rendering, the games industry has a considerable need for powerful graphic processors, and enough money to create chips of their own.

Rendering algorithms have a particular property: they need to execute the same function on thousands of different pixels. This makes it possible to save on silicon space by grouping ALUs together so that many cores execute the same instruction at the same time on different data (Single Instruction Multiple Data). Where Intel's or AMD's general-purpose multi-core CPUs can only hold 4 to 8 totally independent cores on the same chip, a GPU chip with an equivalent number of transistors can host up to several hundred cores, with the restriction that they are more or less linked together, as they have been designed to execute the same function in parallel on different data.

Now, by pure chance, the flowchart of a rendering algorithm is virtually the same as that of an evolutionary algorithm, meaning that it is possible for EAs to run more or less directly on GPUs: an identical rendering algorithm (resp. evaluation function) must be executed on thousands of different pixels/vertices (resp. genomes).

The story is a bit different for Genetic Programming, where, in contrast to standard Evolutionary Algorithms, thousands of different functions must be evaluated on a small constant data set. Fortunately, not all cores on a GPU are linked together as in a pure Single Instruction Multiple Data parallel machine. In NVIDIA cards, cores are grouped in bunches of 32 virtual SIMD processors, but different bunches can run on different parts of the code, meaning that if at least 32 fitness cases are available as a training set for a GP problem, it is possible to get the 32 virtual cores of one bunch to evaluate one individual in parallel on the 32 different training cases.
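The two flavours of parallelism can be sketched in plain Python, with `map` standing in conceptually for a GPU kernel launch. Everything here is illustrative: the toy linear "program", the 32 fitness cases, and all names are invented for the example.

```python
def evaluate(genome, fitness_cases):
    """One individual evaluated on all of its fitness cases (GP-style
    parallelism: on NVIDIA hardware, 32 cases could be processed by the
    32 SIMD lanes of one bunch in lock-step)."""
    program = lambda x: genome[0] * x + genome[1]        # toy linear "program"
    return sum((program(x) - y) ** 2 for x, y in fitness_cases)

# EA-style parallelism: the *same* evaluation function mapped over many
# genomes at once -- exactly the SIMD pattern rendering pipelines use.
population = [(a, b) for a in range(-2, 3) for b in range(-2, 3)]
cases = [(x, 2 * x + 1) for x in range(32)]              # 32 fitness cases
fitnesses = list(map(lambda g: evaluate(g, cases), population))
best = population[fitnesses.index(min(fitnesses))]
print(best)   # (2, 1) recovers the target y = 2x + 1
```

On an actual GPU the outer `map` would be a kernel launch over the population, and the inner sum over fitness cases would be spread across the SIMD lanes of one bunch.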

This tutorial is divided into three parts:
• A representative from NVIDIA will present the different cards made available by the company, along with their characteristics.
• A second part will explain how NVIDIA cards can yield speedups of 100x and more on both EAs and GP on general problems.
• A third part will explain how Evolutionary Computation can be run on different machines hosting different GPU cards.

NVidia presenter: TBA

Pierre Collet

After obtaining a PhD from Orsay University in 1997 on computer-aided surgery through virtual reality, Pierre Collet discovered evolutionary computation in 1998 through a two-year post-doc at INRIA Rocquencourt in the Fractales project. From 2000 to 2003, he was a member of the DREAM (Distributed Resource Evolutionary Algorithm Machine) European project at the Center of Applied Maths of the French École Polytechnique. He became an assistant professor at the Université du Littoral Côte d'Opale in Calais in 2003, and was appointed full professor in Strasbourg in 2007.

There, he leads the Datamining and Theoretical Bioinformatics team of the Image Sciences, Computer Sciences, and Remote Sensing Laboratory of the University of Strasbourg where his current research is on adapting evolutionary computation to be run on GPU chips.

More info:


Wolfgang Banzhaf

He is a Professor of Computer Science at the Department of Computer Science at Memorial University of Newfoundland, St. John's, Canada, where he served as head of department from 2003 to 2009. He received a diploma in Physics from Ludwig-Maximilians University in Munich, Germany in 1982 and a Ph.D. in Physics from the Department of Physics at the University of Karlsruhe, Germany in 1985. His areas of teaching and research include genetic and evolutionary computation, artificial life, self-organization, and applications of computer science to economics and the natural and social sciences. He was named a Senior Fellow of the International Society for Genetic and Evolutionary Computation (2003), is a recipient of the EvoStar award for outstanding contributions to this field (2007), a member of the SIGEVO executive committee (since its foundation) and is SIGEVO's current treasurer.

He is an active author in the field of genetic programming, served as the first editor-in-chief of the journal Genetic Programming and Evolvable Machines, and is a member of a number of journal editorial and advisory boards. He is spending 2009-2010 on sabbatical in France, currently at the University of Strasbourg.

More info:


Simon Harding

He is a post doctoral research fellow in the Department of Computer Science at Memorial University, Canada. His research interests include genetic programming, unconventional computation and parallel computing on consumer grade hardware. He has been co-chair of the Computational Intelligence on Consumer Games and Graphics Hardware track at WCCI 2008 and GECCO 2009/2010. He is also co-organiser of the GECCO GPU programming competition.


Evolution Strategies and Covariance Matrix Adaptation

Evolution Strategies (ESs) and many continuous domain Estimation of Distribution Algorithms (EDAs) are stochastic optimization procedures based on sampling multivariate Gaussian (normal) distributions. Many of them can be formulated in a simple, unified framework.

This tutorial focuses on the most relevant question: how should the parameters of the sample distribution be chosen, and in particular, how should they be updated in the generation sequence? First, the most common and successful approaches for step-size control are reviewed, including self-adaptation, success rules, and path length control. Then, Covariance Matrix Adaptation (CMA) is discussed in depth and also compared with the EDA principle.

At the beginning of the tutorial, specific difficulties in solving continuous-domain non-linear, non-convex optimization problems are discussed, for example ill-conditioning and non-separability, and the methods are motivated with respect to these difficulties. At the end, the performance of state-of-the-art evolution strategies is compared with that of other well-known evolutionary and non-evolutionary optimization algorithms, such as BFGS, DE, and PSO.
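The simplest of the step-size control schemes mentioned above is the classic 1/5th success rule in a (1+1)-ES, sketched below. The adaptation constants are one common choice, not the tutorial's; the factors are chosen so that the step size is stationary when roughly one in five mutations succeeds.

```python
import random

def es_one_fifth(f, x0, sigma=1.0, iters=2000, seed=0):
    """(1+1)-ES with the 1/5th success rule: expand sigma after a
    successful mutation, contract it otherwise. With factors 1.5 and
    1.5**-0.25, sigma drifts neither up nor down when exactly 1/5 of
    mutations succeed."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fy = f(y)
        if fy < fx:                      # success: move and expand
            x, fx = y, fy
            sigma *= 1.5
        else:                            # failure: contract
            sigma *= 1.5 ** -0.25
    return x, fx

sphere = lambda v: sum(t * t for t in v)
x, fx = es_one_fifth(sphere, [5.0, -3.0, 4.0])
print(fx)   # very close to 0
```

On the isotropic sphere this is already near-optimal; the point of CMA, covered in depth in the tutorial, is to additionally adapt the full covariance matrix so that ill-conditioned, non-separable problems become as easy as the sphere.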

Anne Auger

Dr. Auger received her diploma in mathematics from the University of Paris VI, France, in 2001. She also obtained the highest French diploma for teaching mathematics, the "Agrégation de mathématiques". She received her doctoral degree from the University of Paris VI in 2004. Afterwards, she worked for two years (2004-2006) as a postdoctoral researcher at ETH Zurich in the Computational Laboratory (CoLab). Since October 2006, she has held a permanent research position at INRIA (the French National Research Institute in Computer Science and Applied Mathematics). Her research interests are in stochastic continuous optimization, including theoretical analyses of randomized search heuristics. She has published more than fifteen articles in top conferences and journals in this area. She organized the biannual Dagstuhl seminar "Theory of Evolutionary Computation" in 2008 and will organize it again in 2010.


Nikolaus Hansen

He is a senior researcher at the French National Institute for Research in Computer Science and Control (INRIA). Educated in medicine and mathematics, he received a Ph.D. in civil engineering in 1998 from the Technical University Berlin under Ingo Rechenberg. Before joining INRIA, he worked in applied artificial intelligence and genomics, and did research in evolutionary computation and computational science at the Technical University Berlin and ETH Zurich. His main research interests are learning and adaptation in evolutionary computation and the development of algorithms applicable in practice. He has been the main driving force behind the development of CMA-ES over many years.



Specialized Techniques and Applications




Fitness Landscapes and Problem Hardness in Genetic Programming

The performance of search agents, or metaheuristics, like evolutionary algorithms (genetic algorithms, genetic programming, etc.) or local search algorithms (simulated annealing, tabu search, etc.) depends on certain properties of the search space structure. One concept that allows us to analyse the search space is the fitness landscape. In the case of Genetic Programming, defining and handling fitness landscapes is a particularly hard task, given the complexity of the structures being evolved and of the genetic operators used. This tutorial presents some general definitions of fitness landscapes. Subsequently, we will try to instantiate the concept of fitness landscape for Genetic Programming, discussing the problems involved. The concept of landscape geometry will be introduced, and some of the most common landscape geometries and the dynamics of Genetic Programming on those landscapes will be discussed.

After that, the connection between fitness landscapes and problem difficulty will be discussed, and a set of measures that characterize the difficulty a metaheuristic faces in searching for solutions in a fitness landscape will be analysed. Among these measures, particular attention will be given to Fitness Distance Correlation (FDC), the Negative Slope Coefficient (NSC), a set of measures related to the concept of neutrality, and some distance metrics and/or similarity measures that are consistent with the most commonly used genetic operators (in particular the recently defined subtree-crossover-based distance). Finally, some open questions about fitness landscapes are discussed.
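Fitness Distance Correlation, for instance, is simply the Pearson correlation between sampled fitness values and distances to a known global optimum. Here is a self-contained sketch with a made-up OneMax-style sample (fitness = number of ones, distance = bits away from the all-ones optimum).

```python
import math

def fdc(fitnesses, distances):
    """Fitness-Distance Correlation: Pearson correlation between fitness
    and distance to the (known) global optimum over a sample of points."""
    n = len(fitnesses)
    mf = sum(fitnesses) / n
    md = sum(distances) / n
    cov = sum((f - mf) * (d - md) for f, d in zip(fitnesses, distances)) / n
    sf = math.sqrt(sum((f - mf) ** 2 for f in fitnesses) / n)
    sd = math.sqrt(sum((d - md) ** 2 for d in distances) / n)
    return cov / (sf * sd)

# OneMax-style sample on 5 bits: distance to the optimum is 5 - fitness,
# so fitness and distance are perfectly anti-correlated -- the FDC
# signature of an "easy" maximization problem.
sample = [(3, 2), (4, 1), (1, 4), (5, 0), (2, 3)]      # (fitness, distance)
print(round(fdc([f for f, _ in sample], [d for _, d in sample]), 6))   # -1.0
```

For GP the hard part, as the tutorial discusses, is not this formula but choosing a distance between trees that is consistent with the genetic operators.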

Web page:

Leonardo Vanneschi

Born in Florence (Italy) on October 3rd, 1970.

• Laurea (maitrise) "summa cum laude" in Computer Science at the University of Pisa, achieved on February 23rd, 1996.
• PhD in Computer Science at the University of Lausanne (Switzerland), achieved on September 06th, 2004.
• Current position: Assistant Professor (Ricercatore) at the Dipartimento di Informatica, Sistemistica e Comunicazione (D.I.S.Co.) of the Science Faculty of the University of Milano-Bicocca (Italy).

Main scientific contributions:

• Problem Hardness in Genetic Programming (GP). I have proposed some new hardness indicators for GP and other measures to characterize fitness landscapes.
• Parallel and Distributed GP. I have studied how the island GP model makes it possible to maintain higher diversity in the whole GP system, thus producing better-quality solutions for many applications.
• Other Techniques to Improve GP Performance. The techniques that I have studied are: (1) reducing the number of test cases to evaluate fitness by means of statistics, (2) using populations which dynamically adapt their size according to some events happening during the evolution, (3) coevolving GP terminal symbols by means of Genetic Algorithms (GAs).
• Drug Discovery by Genetic Programming. I have developed a GP framework to mine large datasets of molecules, aimed at discovering models that explain the relationship between molecular descriptors and important drug features, such as toxicity, bioavailability or docking energy.
• Traffic Control Applications using Machine Learning. Using image sequences taken by a fixed camera, I have developed a system to recognize, classify and track the vehicles traveling along a street. The techniques used are Neural Networks, GAs and the IPAN Tracker. The system has been embedded in a television camera system built by Comerson Srl and is presently in use at some city crossroads and on motorways.
• Genetic Algorithms for the Chemical Formulation Problem. This problem is very important for building new, high-performing chemical compounds and drugs. I have formalized it as an optimization problem and solved it efficiently by means of Evolutionary Algorithms.



Evolution of Quantum Algorithms

Computer science will be radically transformed if ongoing efforts to build large-scale quantum computers eventually succeed and if the properties of these computers meet optimistic expectations.
Nevertheless, computer scientists still lack a thorough understanding of the power of quantum computing, and it is not always clear how best to utilize the power that is understood. This dilemma exists because quantum algorithms are difficult to grasp and even more difficult to write. Despite large-scale international efforts, only a few important quantum algorithms are documented, leaving many essential questions about the potential of quantum algorithms unanswered.

These unsolved problems are ideal challenges for the application of automatic programming technologies. Genetic programming techniques, in particular, have already produced several new quantum algorithms and it is reasonable to expect further discoveries in the future. These methods will help researchers to discover how additional practical problems can be solved using quantum computers, and they will also help to guide theoretical work on both the power and limits of quantum computing.

This tutorial will provide an introduction to quantum computing and an introduction to the use of evolutionary computation for automatic quantum computer programming. No background in physics or in evolutionary computation will be assumed. While the primary focus of the tutorial will be on general concepts, specific results will also be presented, including human-competitive results produced by genetic programming. Follow-up material is available from the presenter's book, Automatic Quantum Computer Programming: A Genetic Programming Approach, published by Springer and Kluwer Academic Publishers.

More info:

Lee Spector

He is a Professor of Computer Science in the School of Cognitive Science at Hampshire College in Amherst, Massachusetts, and an adjunct professor in the Department of Computer Science at the University of Massachusetts, Amherst. He received a B.A. in Philosophy from Oberlin College in 1984 and a Ph.D. from the Department of Computer Science at the University of Maryland in 1992. His areas of teaching and research include genetic and evolutionary computation, quantum computation, and a variety of intersections between computer science, cognitive science, evolutionary biology, and the arts. He is the Editor-in-Chief of the journal Genetic Programming and Evolvable Machines (published by Springer) and a member of the editorial board of Evolutionary Computation (published by MIT Press). He is also a member of the SIGEVO executive committee and he was named a Fellow of the International Society for Genetic and Evolutionary Computation.


Handling Bloat in GP

Bloat can be defined as an excess of code growth without a corresponding improvement in fitness. This problem has been one of the most intensively studied subjects since the beginnings of Genetic Programming. After briefly addressing past bloat research, this tutorial concentrates on the new Crossover Bias theory and the bloat control method it inspired, Operator Equalisation. Although still recent and requiring improvements, Operator Equalisation has already proven to be more than just a bloat control method. It reveals novel evolutionary dynamics that allow a successful search without code growth, and shows great potential to be extended and integrated into several different elements of the evolutionary process. We will explain in detail how to implement the two variants of Operator Equalisation developed so far, discussing the pros and cons of each. We will also look at some of the evolutionary dynamics of Operator Equalisation, and discuss how they can help us solve the many open questions that still surround such a new technique. Finally, we will address the possible integration of the Operator Equalisation ideas into different elements of the evolution.
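A much-simplified sketch of the acceptance idea behind Operator Equalisation follows: each program-length bin has a capacity derived from a target length distribution, and an offspring is accepted only while its bin still has room. The bin width, capacities and helper names are illustrative; the real method also adapts the target distribution during the run and treats best-of-bin individuals specially.

```python
import random

def make_accepts(target_hist, bin_width=5):
    """Return an acceptance predicate: offspring of a given program
    length is admitted only while its length bin is below the capacity
    given by the target histogram."""
    filled = {b: 0 for b in target_hist}
    def accepts(length):
        b = length // bin_width
        if b in target_hist and filled[b] < target_hist[b]:
            filled[b] += 1
            return True
        return False
    return accepts

# Illustrative target: most capacity for short programs, little for long.
target = {0: 10, 1: 6, 2: 3, 3: 1}
accepts = make_accepts(target)
rng = random.Random(1)
offspring_lengths = (rng.randint(1, 19) for _ in range(60))
accepted = [n for n in offspring_lengths if accepts(n)]
print(len(accepted) <= sum(target.values()))   # True: capacities bound growth
```

Even this caricature shows the mechanism by which code growth is contained: oversized offspring are simply not admitted once their bin is full, so selection pressure cannot inflate the length distribution past the target.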

Sara Silva

She is a senior researcher in the Knowledge Discovery and Bioinformatics (KDBIO) group at INESC-ID Lisboa, Portugal, and an invited researcher in the Evolutionary and Complex Systems (ECOS) group at CISUC, Portugal. She has a BSc (5 years, finished in 1996) and an MSc (finished in 1999) in Informatics from the Faculty of Sciences of the University of Lisbon, Portugal, and a PhD (finished in 2008) in Informatics Engineering from the Faculty of Sciences and Technology of the University of Coimbra, Portugal. After graduation she used neural networks and genetic algorithms in several interdisciplinary projects ranging from remote sensing and forest science to epidemiology and medical informatics. She started her research on Genetic Programming (GP) in 2002, studying the bloat problem. Her main contributions to this field are the Dynamic Limits and Resource-Limited GP bloat control methods, and the developments that put the new Operator Equalisation method into practice. Her current main research interests are bloat and overfitting in GP, how they relate to each other, and the effective and efficient usage of GP in real-life problems within the earth sciences and bioinformatics domains. She is a member of the editorial board of Genetic Programming and Evolvable Machines, and the creator and developer of GPLAB - A Genetic Programming Toolbox for MATLAB.




Theory of Randomized Search Heuristics in Combinatorial Optimization

The rigorous mathematical analysis of randomized search heuristics
(RSHs) with respect to their expected runtime is a growing research area where many results have been obtained in recent years. This class of heuristics contains well-known approaches such as Randomized Local Search (RLS), the Metropolis Algorithm (MA), Simulated Annealing (SA), and Evolutionary Algorithms (EAs) as well as more recent approaches such as Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO).
Such heuristics are often applied to problems whose structure is not known or if there are not enough resources such as time, money, or knowledge to obtain good specific algorithms. It is widely acknowledged that a solid mathematical foundation for such heuristics is needed.

Most designers of RSHs, however, have focused on mimicking processes in nature (such as evolution) rather than on making the heuristics amenable to mathematical analysis. This is different from the classical design of (randomized) algorithms, which are developed with their theoretical runtime analysis (and proof of correctness) in mind. Despite these obstacles, research over the last 15 years or so has shown how to apply the methods for the probabilistic analysis of randomized algorithms to RSHs. Mostly, the expected runtime of RSHs on selected problems is analyzed. Thereby, we understand why and when RSHs are efficient optimizers and, conversely, when they cannot be efficient.

The tutorial will give an overview of the analysis of RSHs for solving combinatorial optimization problems. Starting from the first toy examples such as the OneMax function, we approach more realistic problems and arrive at analyses of the runtime and approximation quality of RSHs even for NP-hard problems. Our studies treat not only simple RLS algorithms and SA but also more complex population-based EAs and even ACO algorithms. The combinatorial optimization problems that we discuss include the maximum matching problem, the partition problem and, in particular, the minimum spanning tree problem as an example where Simulated Annealing beats the Metropolis algorithm in combinatorial optimization. Important concepts of the analyses will be described as well.

Carsten Witt

Carsten Witt studied Computer Science at the Technical University of Dortmund, Germany, where he received his diploma and Ph.D. in 2000 and 2004, respectively. Since spring 2009, he has been an assistant professor at the Technical University of Denmark. Carsten's main research interests are the theoretical aspects of randomized search heuristics, in particular evolutionary algorithms, ant colony optimization and particle swarm optimization. He co-organized the biannual Dagstuhl seminar "Theory of Evolutionary Algorithms" in 2008 and 2010.



Synthetic Biology

The ultimate goal of systems biology is the development of executable in silico models of cells and organisms. Systems biology attempts to provide an integrative methodology that can cope, on the one hand, with the data deluge generated by high-throughput experimental technologies and, on the other hand, with emerging technologies that produce scarce and often noisy data, while capturing novel biological knowledge in human-understandable models and simulations.

In its more modest instantiations, Systems Biology seeks to *clarify* current biological understandings by formalizing what the constitutive elements of a biological system are and how they interact with each other and also it seeks to aid in the *testing* of current understandings against experimental data. In its most ambitious incarnations, however, it aims at *predicting* the behavior of biological systems beyond current understanding and available data thus shedding light onto possible new experimental routes that could lead to better theoretical insights.

Synthetic biology, on the other hand, aims to implement, in vitro/vivo, organisms whose behavior is engineered. The field of synthetic biology holds a great promise for the design, construction and development of artificial (i.e. man-made) biological (sub)systems thus offering viable new routes to genetically modified organisms, smart drugs as well as model systems to examine artificial genomes and proteomes. The informed manipulation of such biological (sub)systems could have an enormous positive impact on our societies, with its effects being felt across a range of activities such as the provision of healthcare, environmental protection and remediation, etc. The basic premise of synthetic biology is that methods commonly used to design and construct non-biological systems, such as those employed in the computational sciences and the engineering disciplines, could also be used to model and program novel synthetic biosystems. Synthetic biology thus lies at the interface of a variety of disciplines ranging from biology through chemistry, physics, computer science, mathematics and engineering.

In this tutorial I will provide an entry-level understanding of systems and synthetic biology: their goals, methods and limitations. Furthermore, I will describe the many potential applications of evolutionary computation to these two fields. Indeed, I believe that the EC community has a beautiful new application domain in which its methods can be both valued and challenged.

More information is available at

Natalio Krasnogor

He is an Associate Professor in the School of Computer Science at the University of Nottingham. He is a member of the Automated Scheduling, Optimisation and Planning Research Group (ASAP) and also leads the Interdisciplinary Optimisation Laboratory (IOL). Krasnogor's research activities lie at the interface of computer science and the natural sciences, e.g. biology, physics and chemistry. In particular, he develops innovative and competitive search methodologies and intelligent decision-support systems for transdisciplinary optimisation, modelling of complex systems, and the processing of very large datasets.

He has applied his expertise to bioinformatics, systems biology, synthetic biology, nanoscience and chemistry. He is a member of the editorial boards of the journal Modelling and Simulation in Engineering and the journal Artificial Evolution and Applications. He is an associate editor of the Evolutionary Computation journal and founding technical editor-in-chief of the new journal Memetic Computing. Krasnogor has acted as grant reviewer for the EPSRC (UK), BBSRC (UK), European Science Foundation, European Union, The Israel Science Foundation, DOE Computational Biology Programme (USA), CNRS (France), etc. He co-chairs the 2nd European Conference on Synthetic Biology (ECSB II).

More details are available at


Generative and Developmental Systems

In evolutionary computation it is common for the genotype to map directly to the phenotype such that each gene in the genotype represents a single discrete component of the phenotype. While this convention is widespread, in nature the mapping is not direct. Instead, the genotype maps to the phenotype through a process of development, which means that each gene may be activated multiple times and in multiple places in the process of constructing the phenotype.

This tutorial will examine why such indirect mapping may be critical to the continuing success of evolutionary computation. Far from being a mere artifact of nature, indirect mapping means that vast phenotypic spaces (e.g. the 100-trillion-connection human brain) can be searched effectively within a space of far fewer genes (e.g. the roughly 30,000-gene human genome). The tutorial will introduce this new research area, called Generative and Developmental Systems (GDS), by surveying milestones in the field and introducing its most profound puzzles. Most importantly, what is the right abstraction of natural development to capture its essential advantages without introducing unnecessary overhead into the search?

Kenneth O. Stanley

He is an assistant professor in the School of Electrical Engineering and Computer Science at the University of Central Florida. He received a B.S.E. from the University of Pennsylvania in 1997 and received a Ph.D. in 2004 from the University of Texas at Austin. He is an inventor of the Neuroevolution of Augmenting Topologies (NEAT) and HyperNEAT algorithms for evolving complex artificial neural networks. His main research contributions are in neuroevolution (i.e. evolving neural networks), generative and developmental systems, coevolution, machine learning for video games, and interactive evolution. He has won best paper awards for his work on NEAT, NERO, NEAT Drummer, HyperNEAT, novelty search, and Galactic Arms Race. He is the chair of the IEEE Task Force on Computational Intelligence and Video Games, an associate editor of IEEE Transactions on Computational Intelligence and AI in Games, and chaired the Generative and Developmental Systems track at GECCO from 2007 to 2009.



Real-World Data Modeling

There are many industrial and corporate data sources: financial, manufacturing, experimental, processing, marketing, quality control, etc. Capturing the value in that data requires more than fitting trivial models or visually exploring the data. Rather, we must efficiently isolate driving variables, confirm or reject potential outliers, and build models that are both accurate and trustworthy. Fortunately, multi-objective genetic programming (a.k.a. ParetoGP) allows us to achieve this objective. ParetoGP will be the foundation technology in this tutorial; however, we will address the entire modeling process, including data balancing, outlier detection and model usage/exploitation, as well as model development itself.

Conventional data modeling assumes an a priori model form and then tweaks coefficients to get the best fit. In contrast, symbolic regression hypothesizes and assesses many different model forms. Many are rejected; some are deemed interesting, explored and refined. The net result is that the data defines the appropriate model structures (and coefficients).

"Appropriate" in this case means a balance between accuracy and simplicity. Considering both objectives avoids models that over-fit or under-fit the data. We can also extract meaningful results from fat data arrays (more variables than records) or data sets with correlated inputs, and we can gain insight by examining the selected variables, variable combinations and structures of the developed model expressions. Furthermore, because diverse model structures are explored during the model search, we can assemble ensembles of high-quality but diverse models; these enable building trustworthy models capable of detecting extrapolation into new operating regions or fundamental changes in the underlying system.
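The accuracy-versus-simplicity trade-off at the heart of this approach can be sketched with a toy Pareto filter. The candidate models, data set, and complexity scores below are hypothetical stand-ins (fixed lambdas rather than evolved expression trees), not ParetoGP itself:

```python
# Hedged sketch of a ParetoGP-style selection step: score each candidate
# model on error and on structural complexity, then keep only the
# Pareto-nondominated candidates. All candidates here are hand-written.

def mse(model, data):
    """Mean squared error of a model over (x, y) pairs."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

def pareto_front(scored):
    """Keep candidates not dominated on (error, complexity)."""
    front = []
    for name, err, size in scored:
        dominated = any(e <= err and s <= size and (e, s) != (err, size)
                        for _, e, s in scored)
        if not dominated:
            front.append((name, err, size))
    return front

# Toy target: y = x**2, sampled without noise.
data = [(x, x * x) for x in range(-5, 6)]

candidates = [                       # (name, model, complexity score)
    ("x",       lambda x: x,         1),
    ("x*x",     lambda x: x * x,     3),
    ("x*x + x", lambda x: x * x + x, 5),
]
scored = [(n, mse(m, data), c) for n, m, c in candidates]
for name, err, size in pareto_front(scored):
    print(name, round(err, 3), size)
```

The over-complicated candidate `x*x + x` is dominated (higher error and higher complexity than `x*x`) and dropped, while the crude but cheap `x` survives on the front alongside the exact model, mirroring how the front trades accuracy against simplicity.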

In addition to covering the basic theory of ParetoGP, we illustrate key points using real-world industrial data modeling case studies and review best practices of industrial data modeling. Current economic conditions demand maximum efficiency in developing and exploiting models of the highest quality; ParetoGP has been used for applications ranging from energy trading to active design-of-experiments, plant troubleshooting, patent litigation modeling and more.


Mark Kotanchek:

Mark's diverse academic background (B.S. in Engineering Science, M.Eng. in Acoustics, Ph.D. in Aerospace Engineering; Senior Member of IEEE) matches the diversity of his professional experience (ladder design, defense system design, signal processing research, technology trend analysis, IT project management, chemical plant troubleshooting, high-throughput biology, energy trading system development, etc.). He is recognized as a global leader in genetic programming theory and application. He founded Evolved Analytics in 2005 with the goal of developing tools and systems to address the data deluge of the modern world and the need to convert that data into actionable insight and understanding.


Experimental Optimization by Evolutionary Algorithms

This tutorial addresses applications of evolutionary algorithms to optimization tasks where the function evaluation cannot be done through a computer simulation, but requires the execution of an experiment in the real world (e.g., cosmetics, detergents, wind-tunnel experiments and taste experiments, to mention a few).

The use of EAs for experimental optimization is placed in its historical context with an overview of the landmark studies in this area carried out in the 1960s at the Technical University of Berlin. Statistical design of experiments (DoE) methods from the late 1950s are also reviewed, and it is shown how relatives of these approaches are converging in modern sequential DoE/EA hybrid methods.

The main characteristics of experimental optimization work, in comparison to optimization of simulated systems, are discussed, and practical guidelines for real-world experiments with EAs are given. For example, experimental problems can constrain the evolution due to overhead considerations, interruptions, changes of variables, and population sizes that are determined by the experimental platform.

A series of modern-day case studies shows the persistence of experimental optimization problems today. These cover experimental quantum control, real DNA and RNA evolution, combinatory drug discovery, coffee and chocolate processing, and others. These applications can push EA methods outside of their normal operating envelope, and raise research questions in a number of different areas ranging across constrained EAs, multiobjective EAs, robust and reliable methods for noisy problems, and metamodeling methods for expensive cost functions.

Ofer Shir:

PhD, is currently a Postdoctoral Research Associate at Princeton University, conducting research at the Rabitz Group, within the Department of Chemistry.

Ofer received his B.Sc. in Physics and Computer Science from the Hebrew University of Jerusalem in June 2003.
His graduate studies were at Leiden University, The Netherlands, where he studied niching in evolution strategies under Prof. Thomas Bäck. He also served as a Teaching Assistant in the Computer Science Department of Leiden University from 2005 to 2007.

His research activities included a joint project with Amolf-Amsterdam, where he collaborated with the XUV group of Prof. Marc Vrakking. His PhD in Computer Science from Leiden University was conferred in June 2008. His current topics of interest include Natural Computing, Evolutionary Algorithms, Experimental Optimization, Quantum Control and Quantum Information.


Thomas Bäck:

PhD, is Professor for Natural Computing at the Leiden Institute for Advanced Computer Science (LIACS) at Leiden University, The Netherlands, and Chief Scientist of the Qe3 group at Netezza Corporation. Thomas received his PhD in Computer Science from Dortmund University, Germany, in 1994.

Thomas Bäck has more than 150 publications on natural computing technologies, as well as a book on evolutionary algorithms, entitled Evolutionary Algorithms: Theory and Practice. He is editorial board member and associate editor of a number of journals on evolutionary and natural computation, and has served as program chair for all major conferences in evolutionary computation.

His expertise lies in adaptive technologies for optimization and data-driven modeling, predictive analytics, and bioinformatics. He received the best dissertation award from the Gesellschaft für Informatik (GI) in 1995 and is an elected fellow of the International Society for Genetic and Evolutionary Computation for his contributions to the field.


Joshua Knowles:

He is a Research Fellow at the School of Computer Science, University of Manchester, UK. He graduated with a PhD from the University of Reading, UK, in 2002, following degrees in Physics and Information Systems Engineering.

Joshua has approximately 70 scientific publications in conferences, journals and books, and he has twice won the Outstanding Paper Award of the IEEE Transactions on Evolutionary Computation. He is an editorial board member of Evolutionary Computation journal, an associate editor of Swarm Intelligence journal, and a member of the Steering Committee of the EMO series of conferences.

His main research themes are multiobjective optimization, algorithms for expensive cost functions, and applications in interdisciplinary biology. He has more than five years' experience using evolutionary algorithms for experimental optimization.


Evolutionary Computer Vision

Evolutionary computer vision is a recent research subject in genetic and evolutionary computation. This tutorial provides a survey of the main topics developed in the literature. The goal is to introduce the genetic and evolutionary computation community to the research activity of endowing a machine with visual abilities through the paradigm of evolutionary computing. The tutorial will focus on examples of real working systems and on how the paradigm can be implemented to solve problems such as recognition, feature extraction, 3D measurement, and segmentation.

The tutorial addresses all the above-mentioned issues and is organized in three parts. The introduction summarizes the idea of evolving structures that solve computer vision problems, framed as optimization within evolutionary computation. The second part focuses on the details of successfully implementing an evolutionary algorithm to solve computer vision problems. In particular, it illustrates the crucial roles of encoding the solution and defining the fitness function, and how the structures are evolved by the system. Examples are provided covering multimodal fitness functions, synthesis of feature detectors, and problem decomposition. Finally, a discussion of the methodological issues in evolutionary computer vision should give participants a clear picture that a successful application requires a balance between computer vision and evolutionary algorithms knowledge.

Scientists, engineers, mathematicians, programmers, and technical managers will all benefit from learning how to evaluate this new computational research field.

Further information:

Expected enrollment: 30 people

Gustavo Olague:

He is a research scientist at the CICESE Research Center (Centro de Investigación Científica y de Educación Superior de Ensenada, B.C., Mexico), working within the Computer Science Department of the Applied Physics Division, where he is presently a Professor of Computer Science. He holds a Bachelor's degree (Honors) in Electronics Engineering and a Master's degree in Computer Science from the Instituto Tecnológico de Chihuahua, Mexico. He received his Ph.D. (Diplôme de Doctorat en Imagerie, Vision et Robotique) in computer graphics, vision and robotics from the Institut National Polytechnique de Grenoble, France, working at INRIA Rhône-Alpes within the MOVI research team.

Olague's research focuses on the principles of computational intelligence applied to close-range photogrammetry and computer vision. He is a member of EvoNET, RSPSoc, ASPRS, SIGEVO, IEEE and the IEEE Computer Society, and is listed in Who's Who in Science and Engineering. Dr. Olague is a recipient of the 2003 First Honorable Mention for the Talbert Abrams Award, offered by the American Society for Photogrammetry and Remote Sensing. He was awarded the Bronze Medal at the Human Competitive Awards for his work on the synthesis of interest points and region detectors in 2006 and 2009.



Digital Evolution with Avida

Digital evolution enables researchers to observe and study evolution in action at time scales and at levels of detail not possible with natural organisms. This tutorial introduces and explains how to use Avida, the most widely used digital evolution platform. In Avida, a population of self-replicating computer programs, or digital organisms, compete for resources and evolve in a computational environment. Avida has been used to study many evolutionary topics, such as evolvability, historical contingency, altruism, cooperation, and sex, in journals such as Nature, Science, Proceedings of the National Academy of Sciences, and Evolution.

In addition to introducing participants to Avida, this tutorial will engage participants through hands-on experiments. After we describe research projects that have been conducted with Avida, participants will perform experiments from those projects. The experiments have been chosen to be both educational and fun.

Avida is a freely available, open-source platform. Participants are not required to download Avida before the tutorial; however, doing so will enhance the experience. We will provide precompiled executables for recent versions of Microsoft Windows and Apple OS X. Note that Avida is also compatible with Linux; however, we are unable to provide precompiled executables for the many different flavors of Linux. The hands-on exercises are designed to be collaborative, so please attend even if you cannot bring your own computer. Upon completion of the tutorial, participants will have first-hand knowledge of the capabilities of Avida along with practical experience in conducting experiments with it.

Benjamin Beckmann

He is a doctoral candidate in the Department of Computer Science and Engineering at Michigan State University in East Lansing, Michigan. As a graduate researcher in the Digital Evolution Laboratory and Software Engineering and Networked Systems Laboratory, Mr. Beckmann combines the study of evolution in biological systems with the application of evolutionary computation in computer science. Specifically, his work focuses on evolving collections of agents to perform distributed, cooperative, energy conserving behavior. Mr. Beckmann's work has been featured in IEEE Computer, New Scientist magazine, and World of Intelligence (a French language magazine). Mr. Beckmann holds M.S. and B.S. degrees in Computer Science from Western Michigan University.

Jeff Clune

He studies generative encodings, which enhance evolutionary algorithms by augmenting them with concepts from developmental biology. Such concepts enable the assembly of complex forms from compact genomes. Jeff combines these generative encodings with neuroevolution, a technology that harnesses the power of evolution to construct artificial neural networks (ANNs). Evolving ANNs with generative encodings creates large-scale, structurally organized ANNs that produce sophisticated, modular behaviors. Jeff demonstrates the efficacy of such ANNs in robotic control problems.

Jeff also develops evolutionary algorithms to investigate open questions in biology, and has published work on the evolution of altruism, phenotypic plasticity, and evolvability. This work contributes to biology, because it improves our knowledge of evolving systems, and enhances computer science, because it helps design better evolutionary algorithms.

Jeff is a Ph.D. candidate in the Digital Evolution Laboratory at Michigan State University. He is also the co-chair of the Generative and Developmental Systems track at GECCO 2010. Jeff has a master's degree in philosophy from Michigan State University and a bachelor's degree in philosophy from the University of Michigan.


Cartesian Genetic Programming

Cartesian Genetic Programming (CGP) is a form of genetic programming that is growing in popularity. It originated in work by Julian Miller and Peter Thomson in 1997, although the term Cartesian Genetic Programming first appeared at the inaugural GECCO in 1999.

In its classic form it uses a very simple integer-based genetic representation of a program in the form of a directed graph. In a number of studies, it has been shown to be competitive in efficiency with other GP techniques.
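As a rough illustration of that integer-based representation (a toy sketch, not Miller's implementation; the function set, genotype and addressing scheme below are invented for the example, and real CGP adds rows, columns and levels-back constraints):

```python
# Hedged sketch of the classic CGP genotype: a flat list of integers
# encodes a feed-forward graph, three integers per node (function index,
# first input address, second input address), where addresses refer to
# the program inputs or to earlier nodes. Output genes pick which node
# values the program returns.

FUNCTIONS = [lambda a, b: a + b,   # 0: add
             lambda a, b: a - b,   # 1: sub
             lambda a, b: a * b]   # 2: mul

def evaluate(genotype, outputs, inputs):
    """Decode and run a CGP genotype on the given program inputs."""
    values = list(inputs)                  # addresses 0..n_inputs-1
    for i in range(0, len(genotype), 3):
        f, a, b = genotype[i:i + 3]        # one node = 3 integers
        values.append(FUNCTIONS[f](values[a], values[b]))
    return [values[o] for o in outputs]

# Genotype: node 2 computes x0 + x1, node 3 computes node2 * node2;
# the single output reads node 3, so the program is (x0 + x1)**2.
genotype = [0, 0, 1,   2, 2, 2]
print(evaluate(genotype, [3], [2, 3]))  # (2 + 3)**2 -> [25]
```

Because the genotype is just integers, mutation is a matter of resampling a gene within its valid range, which is part of why the representation is so simple to work with.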

The classical form of CGP has been enhanced in various ways, for example to include automatically defined functions. Most recently it has been developed by Julian Miller, Wolfgang Banzhaf and Simon Harding to include self-modification operators, which has further increased its efficiency.

The tutorial will cover the basic technique, advanced developments and applications to a variety of problem domains.

For further info on Cartesian GP see:

Julian Miller

He is an academic in the Department of Electronics at the University of York. His main research interests are genetic programming (GP), and computational development. He has published over 140 refereed papers on evolutionary computation, genetic programming, evolvable hardware, and computational development. He has been chair or co-chair of fourteen conferences or workshops in genetic programming, computational development, evolvable hardware and evolutionary techniques.

Dr. Miller chaired the Evolvable Hardware tracks at the Genetic and Evolutionary Computation Conference in 2002-2003 and was Genetic Programming track chair in 2008. He co-chaired the Generative and Developmental Systems (GDS) track in 2007 and 2009 and is GDS track co-chair in 2010. He is an associate editor of Genetic Programming and Evolvable Machines and a former associate editor of IEEE Transactions on Evolutionary Computation. He is an editorial board member of the journals Evolutionary Computation and Unconventional Computing.

Dr. Miller’s book on Cartesian Genetic Programming is in preparation and is scheduled for publication in 2010.

Simon Harding

He is a post doctoral research fellow in the Department of Computer Science at Memorial University, Canada. His research interests include genetic programming, unconventional computation and parallel computing on consumer grade hardware. He has been co-chair of the Computational Intelligence on Consumer Games and Graphics Hardware track at WCCI 2008 and GECCO 2009/2010. He is also co-organiser of the GECCO GPU programming competition.


To Propose a Tutorial or Workshop
To propose a tutorial, contact Una-May O'Reilly at 
To propose a workshop, contact Jaume Bacardit at  
Please include “GECCO” in your subject line.

Genetic and Evolutionary Computation Conference (GECCO-2010)