Workshops and Tutorials Schedule
Planned Free Tutorials
Two days of free tutorials and workshops (included with conference registration) presented by some of
the world’s foremost experts in topics of interest to genetic and evolutionary computation researchers and practitioners.
Introductory Tutorials
Advanced Tutorials
Specialized Techniques and Applications
Introductory Tutorials

Introduction to Genetic Algorithms
The Introduction to Genetic Algorithms tutorial is aimed at GECCO attendees with limited knowledge of genetic algorithms. It will start "at the beginning," first describing a "classical" genetic algorithm in terms of the biological principles on which it is loosely based, then presenting some of the fundamental results that describe its performance, expressed in terms of the schema concept. It will cover some variations on the classical model, some successful applications of genetic algorithms, and advances that are making genetic algorithms more useful.
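For readers new to the area, the classical model described above can be sketched in a few lines of code. The following is an illustrative (not tutorial-supplied) generational genetic algorithm with tournament selection, one-point crossover, and bit-flip mutation, applied to the toy OneMax problem of maximizing the number of 1-bits; all parameter values are arbitrary illustrative choices.

```python
import random

def one_max(bits):
    """Fitness: the number of 1-bits in the string."""
    return sum(bits)

def tournament(pop, fits, k=2):
    """Return the best of k randomly chosen individuals."""
    best = random.randrange(len(pop))
    for _ in range(k - 1):
        rival = random.randrange(len(pop))
        if fits[rival] > fits[best]:
            best = rival
    return pop[best]

def crossover(a, b):
    """Classical one-point crossover."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(bits, rate):
    """Flip each bit independently with probability `rate`."""
    return [1 - b if random.random() < rate else b for b in bits]

def simple_ga(n_bits=20, pop_size=30, generations=60, mutation_rate=0.02):
    """One run of a classical generational GA on OneMax."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        fits = [one_max(ind) for ind in pop]
        pop = [mutate(crossover(tournament(pop, fits),
                                tournament(pop, fits)), mutation_rate)
               for _ in range(pop_size)]
    return max(pop, key=one_max)
```

On a 20-bit OneMax instance, a run of this sketch typically evolves a string that is all or nearly all ones.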

Erik Goodman
He is a Professor of Electrical and Computer Engineering and of Mechanical Engineering at Michigan State University. He studied genetic algorithms under John Holland at the University of Michigan, before they had been named. His use of a genetic algorithm in 1971 to solve for 40 coefficients of a highly nonlinear model of a bacterial cell was the first known GA application on a real-world problem, and ran for nearly a year on a dedicated computer. He has used genetic algorithms in his research ever since, including for parameterization of complex ecosystem models, for evolution of cooperative behavior in artificial life, for factory layout and scheduling, for protein folding and ligand docking, for design of shape and layup of composite structures, and for data mining and pattern classification. Much of his recent research has been in the development of new types of parallel genetic algorithms and in design methods for mechatronic systems using genetic programming. He is co-founder and VP Technology of Red Cedar Technology, which provides tools for automated engineering design based on evolutionary computation. He chaired ICGA-97 and GECCO-2001, chaired GECCO's sponsoring organization, ISGEC, from 2001 to 2004, and was the founding chair of ACM SIGEVO, its successor. He was named Michigan Distinguished Professor of the Year in 2009 by the Presidents Council, State Universities of Michigan.




Introduction to Genetic Programming
Genetic programming emerged in the early 1990s as one of the most exciting new evolutionary algorithm paradigms. It has rapidly grown into a thriving area of research and application. While sharing the evolution-inspired algorithmic principles of a genetic algorithm, it differs by exploiting an executable genome: genetic programming evolves a "program" to solve a problem rather than a single solution.
This tutorial introduces the basic genetic programming framework. It will explain how the powerful capability of genetic programming is derived from modular algorithmic components:
• executable representations, e.g. parse-tree, linear, and graph-based
• variation operators that preserve syntax and explore a variable-length, hierarchical solution space
• appropriately chosen programming functions
• fitness function specification
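To make the executable-genome idea concrete, here is a small illustrative sketch (not part of the tutorial) of the first two components: a parse-tree representation that can be executed on an input, and a subtree-mutation operator that preserves syntactic validity. The function and terminal sets are arbitrary examples.

```python
import random

# Function set (binary operators) and terminal set (input variable, constant).
FUNCTIONS = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}
TERMINALS = ['x', 1.0]

def random_tree(depth):
    """Grow a random parse tree: a nested tuple (op, left, right) or a terminal."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(FUNCTIONS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Execute the genome: interpret the tree for a given input x."""
    if tree == 'x':
        return x
    if not isinstance(tree, tuple):
        return tree          # a numeric constant
    op, left, right = tree
    return FUNCTIONS[op](evaluate(left, x), evaluate(right, x))

def subtree_mutation(tree, depth=2):
    """Replace a randomly chosen subtree with a freshly grown one,
    so the result is always a syntactically valid program."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, subtree_mutation(left, depth), right)
    return (op, left, subtree_mutation(right, depth))
```

For example, the tree `('+', 'x', ('*', 'x', 'x'))` encodes the program x + x*x, and evaluating it at x = 3 yields 12.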

Una-May O'Reilly
Dr. O'Reilly is a 10+ year veteran of the evolutionary algorithm community. Her doctoral thesis was the first comparative analysis of genetic programming (with respect to simulated annealing and hill climbing). She developed the first schema-based theoretical analysis of genetic programming. She now specializes in applications of evolutionary algorithms in tandem with machine learning and optimization. She leads the Evolutionary Design and Optimization group at the Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA. Her research currently focuses on evolutionary computation projects related to wind energy, sensory evaluation (symbolic regression), and computer systems (multicore compilers and cloud virtualization technology).
Dr. O'Reilly developed a genetic programming-based design system named Genr8 that is open-source and used throughout the world by researchers in architecture. As VP of Technology at a Cambridge consulting firm, she led the development of a web-based name advisor based upon (patented) genetic algorithm, statistical learning, and knowledge-based technology. She has collaborated with State Street Global Advisors in their development of GP portfolio models that are used in investment strategies. Her current research sponsors include the USA Dept of Energy (EXAScale project), DARPA (Angstrom project), VMware, Givaudan Flavors PLC, and Raytheon.
Dr. O'Reilly was a founding member of the International Society for Genetic and Evolutionary Computation executive board and is a founding member and secretary of the ACM SIGEVO Executive Board. She edited (with Peter Angeline) the first special issue on genetic programming in the journal Evolutionary Computation and is a founding editorial board member of the journal Genetic Programming and Evolvable Machines. She co-edited the second volume of Advances in Genetic Programming and the second and fifth volumes of Genetic Programming: From Theory to Practice. She chaired GECCO (in 2005), the world's premier conference on genetic and evolutionary computation. She serves on the editorial board of Evolutionary Computation and as an action editor for the Journal of Machine Learning Research.



Evolution Strategies: Basic Introduction
This tutorial introduces the basics of evolution strategies, a branch of evolutionary computation which focused very early on the self-adaptation of strategy parameters such as mutation variances. Based on the original (1+1)-evolution strategy (using the 1/5 success rule for step-size adaptation), the generalized (1,λ)- and (μ,λ)-strategies use the principle of self-adapting mutation variances extensively. With both mutation and recombination, the (μ,λ)-strategy is a well-known and well-understood algorithm for mixed-integer optimization (including the integer-only and the real-valued-only cases).
The tutorial will present all basic algorithmic variants as well as guidelines for their implementation. Moreover, extensions such as mixed-integer optimization and some derandomized variants of evolution strategies are also introduced briefly. Concerning the comparison to genetic algorithms and evolutionary programming, we will use the concept of a generalized evolutionary algorithm as introduced in [1], and outline the similarities and differences of those classes of algorithms. The tutorial will conclude with some examples of applications of evolution strategies.
References
[1] Th. Bäck (1996): Evolutionary Algorithms in Theory and Practice, Oxford University Press, NY.
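The original (1+1)-evolution strategy with the 1/5 success rule mentioned above can be sketched compactly. The following is an illustrative implementation (not from the tutorial) on the sphere function; the adaptation factor 0.85 and the 20-iteration observation window are common textbook choices, not prescribed values.

```python
import random

def sphere(x):
    """Simple test objective: minimize the sum of squares."""
    return sum(xi * xi for xi in x)

def one_plus_one_es(f, x, sigma=1.0, iterations=500):
    """(1+1)-ES with the 1/5 success rule: if more than 1/5 of recent
    mutations improved the parent, increase the step size sigma;
    otherwise decrease it."""
    fx = f(x)
    successes = 0
    for t in range(1, iterations + 1):
        # Offspring: add isotropic Gaussian mutation with step size sigma.
        y = [xi + sigma * random.gauss(0, 1) for xi in x]
        fy = f(y)
        if fy < fx:                  # plus selection: keep the better of the two
            x, fx = y, fy
            successes += 1
        if t % 20 == 0:              # adapt sigma every 20 iterations
            if successes / 20 > 0.2:
                sigma /= 0.85        # too many successes: enlarge steps
            else:
                sigma *= 0.85        # too few successes: shrink steps
            successes = 0
    return x, fx
```

Started from (5, ..., 5) in five dimensions, a couple of thousand iterations of this sketch drive the sphere function value very close to zero.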

Thomas Bäck
PhD, he is Professor of Natural Computing at the Leiden Institute for Advanced Computer Science (LIACS) at Leiden University, The Netherlands, and Chief Scientist of the Qe3 group at Netezza Corporation. Thomas received his PhD in Computer Science from Dortmund University, Germany, in 1994. Thomas Bäck has more than 150 publications on natural computing technologies, as well as a book on evolutionary algorithms, entitled Evolutionary Algorithms in Theory and Practice. He is an editorial board member and associate editor of a number of journals on evolutionary and natural computation, and has served as program chair for all major conferences in evolutionary computation. His expertise lies in adaptive technologies for optimization and data-driven modeling, predictive analytics, and bioinformatics. He received the best dissertation award from the Gesellschaft für Informatik (GI) in 1995 and is an elected fellow of the International Society for Genetic and Evolutionary Computation for his contributions to the field.

Evolutionary Computation: A Unified Approach
The field of Evolutionary Computation has experienced tremendous growth over the past 20 years, resulting in a wide variety of evolutionary algorithms and applications. This growth poses an interesting dilemma for many practitioners: with such a wide variety of algorithms and approaches, it is often hard to see the relationships between them, assess strengths and weaknesses, and make good choices for new application areas.
This tutorial is intended to give an overview of a general EC framework that can help compare and contrast approaches, encourages cross-breeding, and facilitates intelligent design choices. The use of this framework is then illustrated by showing how traditional EAs can be compared and contrasted with it, and how new EAs can be effectively designed using it.
Finally, the framework is used to identify some important open issues that need further research.

Kenneth A. De Jong
He received his Ph.D. in computer science from the University of Michigan in 1975. He joined George Mason University in 1984 and is currently a Professor of Computer Science, head of the Evolutionary Computation laboratory, and the Associate Director of the Krasnow Institute. His research interests include evolutionary computation, machine learning, and adaptive systems. He is currently involved in research projects involving the development of new evolutionary algorithm (EA) theory, the use of EAs as heuristics for NP-hard problems, and the application of EAs to the problem of learning task programs in domains such as robotics, diagnostics, navigation, and game playing. He is also interested in experience-based learning, in which systems must improve their performance while actually performing the desired tasks in environments not directly under their control or the control of a benevolent teacher. He is an active member of the Evolutionary Computation research community and has been involved in organizing many of the workshops and conferences in this area. He is the founding editor-in-chief of the journal Evolutionary Computation (MIT Press), and a member of the board of the ACM SIGEVO. He is the recipient of an IEEE Pioneer award in the field of Evolutionary Computation and a lifetime achievement award from the Evolutionary Programming Society.



Probabilistic Model-Building Genetic Algorithms (PMBGAs)
Probabilistic model-building genetic algorithms (PMBGAs) replace the traditional variation operators of genetic and evolutionary algorithms by (1) building a probabilistic model of promising solutions and (2) sampling the built model to generate new candidate solutions. PMBGAs are also known as estimation of distribution algorithms (EDAs) and iterated density-estimation algorithms (IDEAs).
Replacing traditional crossover and mutation operators by building and sampling a probabilistic model of promising solutions enables the use of machine learning techniques for the automatic discovery of problem regularities and the exploitation of these regularities for effective exploration of the search space. Using machine learning in optimization enables the design of optimization techniques that can automatically adapt to the given problem. There are many successful applications of PMBGAs, for example, Ising spin glasses in 2D and 3D, graph partitioning, MAXSAT, feature subset selection, forest management, groundwater remediation design, telecommunication network design, antenna design, and scheduling.
The tutorial Probabilistic Model-Building GAs will provide a gentle introduction to PMBGAs with an overview of major research directions in this area. Strengths and weaknesses of different PMBGAs will be discussed, and suggestions will be provided to help practitioners choose the best PMBGA for their problem.
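The build-and-sample loop described above can be illustrated with the simplest possible model. The sketch below (illustrative, in the style of the univariate UMDA/PBIL algorithms, not from the tutorial) maintains one probability per bit on the OneMax problem; such a univariate model deliberately ignores interactions between variables, which is exactly the limitation the more advanced PMBGAs discussed in the tutorial address.

```python
import random

def umda_onemax(n_bits=20, pop_size=50, n_selected=20, generations=40):
    """Univariate PMBGA on OneMax: repeatedly (1) sample a population from
    a per-bit probability model, (2) select the best individuals, and
    (3) re-estimate the model from the selected set."""
    probs = [0.5] * n_bits
    for _ in range(generations):
        # (1) sample new candidate solutions from the model
        pop = [[1 if random.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        # (2) truncation selection: keep the fittest individuals
        pop.sort(key=sum, reverse=True)
        selected = pop[:n_selected]
        # (3) rebuild the model: marginal frequency of each bit
        probs = [sum(ind[i] for ind in selected) / n_selected
                 for i in range(n_bits)]
        # keep probabilities away from 0/1 so the model cannot fix prematurely
        probs = [min(max(p, 1 / n_bits), 1 - 1 / n_bits) for p in probs]
    return probs
```

After a few dozen generations the model concentrates nearly all probability mass on the all-ones string.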

Martin Pelikan
He received his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 2002. He is now an associate professor in the Department of Mathematics and Computer Science at the University of Missouri in St. Louis. In 2006 Pelikan founded the Missouri Estimation of Distribution Algorithms Laboratory (MEDAL). Prior to joining the University of Missouri, he worked at the Illinois Genetic Algorithms Laboratory (IlliGAL), the German National Center for Information Technology in Sankt Augustin, the Slovak University of Technology in Bratislava, and the Swiss Federal Institute of Technology (ETH) in Zurich.
Pelikan has worked as a researcher in genetic and evolutionary computation since 1995. His most important contributions to genetic and evolutionary computation are the Bayesian optimization algorithm (BOA), the hierarchical BOA (hBOA), the scalability theory for BOA and hBOA, and the efficiency enhancement techniques for BOA and hBOA. His current research focuses on extending BOA and hBOA to other problem domains, applying genetic and evolutionary algorithms to real-world problems with a focus on physics, bioinformatics, and machine learning, and designing new efficiency enhancement techniques for BOA and other evolutionary algorithms.




Learning Classifier Systems
In the 1970s, John H. Holland designed Learning Classifier Systems (LCSs) as highly adaptive, cognitive systems. Since the introduction of the accuracy-based XCS classifier system by Stewart W. Wilson in 1995 and the modular analysis of several LCSs thereafter, LCSs have become state-of-the-art machine learning systems. Various publications have shown that LCSs can effectively solve data-mining problems, reinforcement learning problems, other predictive problems, and even cognitive, robotic control problems. In comparison to other, non-evolutionary machine learning techniques, their performance has been shown to be competitive or superior, depending on the setup and problem. Advantages are that LCSs learn online, are very plastic and flexible, are applicable to a large range of problems, and are highly adaptive. Moreover, system knowledge can be easily extracted, visualized, or even used to focus the progressive search on particularly interesting subspaces.
The Learning Classifier System tutorial provides a gentle introduction to LCSs and their general functionality. It then surveys the current theoretical understanding of the systems. Finally, we provide a suite of current successful LCS applications and discuss the most promising areas for future applications and research directions.

Martin V. Butz
PD Dr. Butz received his PhD in computer science at the University of Illinois at Urbana-Champaign in October 2004 under the supervision of David E. Goldberg. His thesis "Rule-based evolutionary online learning systems: Learning Bounds, Classification, and Prediction" puts forward a modular, facetwise system analysis for Learning Classifier Systems (LCSs) and analyzes and enhances the XCS classifier system. Until September 2007, Butz was working at the University of Würzburg at the Department of Cognitive Psychology III on the interdisciplinary cognitive systems project "MindRACES: From reactive to anticipatory cognitive embodied systems". In October 2007 he founded his own cognitive systems laboratory, "Cognitive Bodyspaces: Learning and Behavior" (COBOSLAB), which is funded by the German research foundation under the Emmy Noether framework.


Ant Colony Optimization
Ant colony optimization (ACO) is an approximate method for tackling combinatorial as well as continuous optimization problems. From an artificial intelligence point of view, ACO is a swarm intelligence method, while from the operations research point of view, ACO belongs to the class of algorithms known as metaheuristics. The inspiring source of ACO is the foraging behaviour of ant colonies. The aims of this tutorial are twofold. First, we explain the basics of ACO algorithms in order to satisfy participants who have no previous knowledge of this technique. Second, we also deal with more advanced features of ACO, with the aim of presenting useful material for participants with prior knowledge of ACO algorithms.
The tutorial starts off by introducing the swarm intelligence origins of ACO algorithms. Then we will show how to translate the biological inspiration into a technical algorithm. Some of the most successful ACO variants are presented in this context. More advanced features of ACO algorithms are presented in the context of appealing applications. The tutorial concludes with an overview of recent hybridizations of ACO algorithms with other optimization techniques.
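The translation from foraging behaviour to technical algorithm can be sketched as follows: a simplified Ant-System-style algorithm for the TSP (illustrative only, with arbitrary parameter values) in which artificial ants build tours probabilistically from pheromone and heuristic information, pheromone evaporates, and the edges of good tours are reinforced.

```python
import random

def tour_length(tour, dist):
    """Length of a closed tour over the distance matrix `dist`."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def ant_system(dist, n_ants=10, iterations=50, alpha=1.0, beta=2.0,
               evaporation=0.5, q=1.0):
    """Simplified Ant System for the TSP: edge attractiveness is
    pheromone^alpha times (1/distance)^beta."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone matrix
    best_tour, best_len = None, float('inf')
    for _ in range(iterations):
        for _ in range(n_ants):
            # each ant constructs a tour city by city
            tour = [random.randrange(n)]
            while len(tour) < n:
                current = tour[-1]
                candidates = [c for c in range(n) if c not in tour]
                weights = [tau[current][c] ** alpha
                           * (1.0 / dist[current][c]) ** beta
                           for c in candidates]
                tour.append(random.choices(candidates, weights=weights)[0])
            length = tour_length(tour, dist)
            if length < best_len:
                best_tour, best_len = tour, length
            # pheromone deposit: shorter tours deposit more
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                tau[a][b] += q / length
                tau[b][a] += q / length
        # evaporation keeps pheromone bounded and allows forgetting
        tau = [[(1 - evaporation) * t for t in row] for row in tau]
    return best_tour, best_len
```

On a tiny instance such as four cities at the corners of a unit square, the sketch quickly settles on the perimeter tour of length 4.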

Christian Blum
He received the doctoral degree in 2004 from the Free University of Brussels (Belgium). After spending one year at the Advanced Computation Laboratory of the Imperial Cancer Research Fund (ICRF) in London, he worked at IRIDIA at the Free University of Brussels under the supervision of Marco Dorigo. He currently is a "Ramón y Cajal" tenure-track research fellow at the ALBCOM research group of the Universitat Politècnica de Catalunya in Barcelona. The current subject of his research is the use of swarm intelligence techniques for the management of large-scale mobile ad-hoc and sensor networks, as well as the hybridization of metaheuristics with more classical artificial intelligence and operations research methods.
He is a member of the editorial boards of international journals such as Computers & Operations Research and Swarm Intelligence. As a cofounder, he initiated the series of International Workshops on Hybrid Metaheuristics (HM). He presented invited tutorials at conferences such as PPSN, GECCO, and HIS, and keynote talks at international conferences such as ANTS 2008 and BIOMA 2010.




Evolution Strategies and Covariance Matrix Adaptation
Evolution Strategies (ESs) and many continuous-domain Estimation of Distribution Algorithms (EDAs) are stochastic optimization procedures that sample a multivariate normal (Gaussian) distribution in the continuous search space, R^n. Many of them can be formulated in a unified and comparatively simple framework.
This introductory tutorial focuses on the most relevant algorithmic question: how should the parameters of the sample distribution be chosen and, in particular, updated in the generation sequence? First, two common approaches for step-size control are reviewed: the one-fifth success rule and path length control. Then, Covariance Matrix Adaptation (CMA) is discussed in depth: the rank-one update, the evolution path, and the rank-mu update. Invariance properties and the interpretation as natural gradient descent are touched upon.
In the beginning, general difficulties in solving nonlinear, non-convex optimization problems in the continuous domain are revealed, for example non-separability, ill-conditioning, and ruggedness. Algorithmic design aspects are related to these difficulties. In the end, the performance of the CMA-ES is related to other well-known evolutionary and non-evolutionary optimization algorithms, namely BFGS, DE, PSO, and others.
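As a small taste of the step-size question raised above, the sketch below implements path length control (cumulative step-size adaptation) for an isotropic (1,λ)-ES, i.e., the sigma-update idea of CMA-ES without the covariance matrix machinery. It is an illustration only: the constants `cs` and `damping` are simplified choices, not the values recommended for CMA-ES.

```python
import random, math

def sphere(x):
    """Test objective: minimize the sum of squares."""
    return sum(xi * xi for xi in x)

def csa_es(f, x, sigma=1.0, lam=10, iterations=300):
    """(1,lambda)-ES with cumulative step-size adaptation: an evolution
    path accumulates selected mutation steps; if the path is longer than
    that of random selection, sigma is increased, otherwise decreased."""
    n = len(x)
    cs = 0.3                     # smoothing constant for the evolution path
    damping = 1.0                # damping for the sigma update
    # expected length of an n-dimensional standard normal vector
    chi_n = math.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n * n))
    path = [0.0] * n
    for _ in range(iterations):
        samples = []
        for _ in range(lam):
            z = [random.gauss(0, 1) for _ in range(n)]
            y = [xi + sigma * zi for xi, zi in zip(x, z)]
            samples.append((f(y), y, z))
        _, x, z_best = min(samples, key=lambda s: s[0])   # comma selection
        # update the evolution path with the selected standardized step
        path = [(1 - cs) * p + math.sqrt(cs * (2 - cs)) * zi
                for p, zi in zip(path, z_best)]
        norm = math.sqrt(sum(p * p for p in path))
        # compare the path length with its expectation under random selection
        sigma *= math.exp((cs / damping) * (norm / chi_n - 1))
    return x, sigma
```

Started from (3, ..., 3) in five dimensions, a few hundred iterations of this sketch drive the sphere function value close to zero while sigma shrinks along with the distance to the optimum.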

Anne Auger
Dr. Auger received her diploma in mathematics from the University of Paris VI, France, in 2001. She also obtained the French highest diploma for teaching mathematics, the "Agrégation de mathématiques". She received her doctoral degree from the University of Paris VI in 2004. Afterwards, she worked for two years (2004-2006) as a postdoctoral researcher at ETH Zurich in the Computational Laboratory (CoLab). Since October 2006, she has held a permanent research position at INRIA (French National Research Institute in Computer Science and Applied Mathematics). Her research interests are in stochastic continuous optimization, including theoretical analyses of randomized search heuristics. She has published more than fifteen articles in top conferences and journals in this area. She organized the biennial Dagstuhl seminar "Theory of Evolutionary Computation" in 2008 and will organize it again in 2010.


Nikolaus Hansen
He is a senior researcher at the French National Institute for Research in Computer Science and Control (INRIA). Educated in medicine and mathematics, he received a Ph.D. in civil engineering in 1998 from the Technical University Berlin under Ingo Rechenberg. Before he joined INRIA, he worked in applied artificial intelligence and genomics, and he has done research in evolutionary computation and computational science at the Technical University Berlin and ETH Zurich. His main research interests are learning and adaptation in evolutionary computation and the development of algorithms applicable in practice. He has been the main driving force behind the development of CMA-ES over many years.

Evolving Neural Networks
Neuroevolution, i.e. the evolution of artificial neural networks, has recently emerged as a powerful technique for solving challenging reinforcement learning problems. Compared to traditional (e.g. value-function-based) methods, neuroevolution is especially strong in domains where the state of the world is not fully known: the state can be disambiguated through recurrency, and novel situations handled through pattern matching. In this tutorial, we will review (1) neuroevolution methods that evolve fixed-topology networks, network topologies, and network construction processes, (2) ways of combining traditional neural network learning algorithms with evolutionary methods, and (3) applications of neuroevolution to control, robotics, artificial life, and games.
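A minimal sketch of the first category, fixed-topology neuroevolution, may help fix ideas (illustrative only, not from the tutorial): the nine weights of a tiny 2-2-1 feed-forward network are evolved with truncation selection and Gaussian mutation to approximate the XOR function. Network shape, population size, and mutation strength are all arbitrary illustrative choices.

```python
import random, math

def forward(weights, x1, x2):
    """Fixed topology: 2 inputs, 2 hidden tanh units, 1 tanh output.
    `weights` is a flat list of 9 numbers (2*3 hidden + 3 output, incl. biases)."""
    w = weights
    h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
    h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

XOR_CASES = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def fitness(weights):
    """Negative squared error over the XOR truth table (higher is better)."""
    return -sum((forward(weights, a, b) - t) ** 2 for a, b, t in XOR_CASES)

def evolve_xor(pop_size=60, generations=150, sigma=0.4):
    """Evolve the weight vector only; the topology never changes."""
    pop = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 4]            # truncation selection
        # refill the population with mutated copies of the parents
        pop = parents + [[w + random.gauss(0, sigma)
                          for w in random.choice(parents)]
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)
```

A successful run produces a network whose outputs are close to the XOR targets; note that a network outputting a constant 0.5 scores exactly -1.0, so any fitness clearly above that indicates the hidden units are being exploited.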

Risto Miikkulainen
He is a Professor of Computer Sciences at the University of Texas at Austin. He received an M.S. in Engineering from the Helsinki University of Technology, Finland, in 1986, and a Ph.D. in Computer Science from UCLA in 1990. His recent research focuses on methods for evolving neural networks and applying these methods to game playing, robotics, and intelligent control. He is an author of over 250 articles on neuroevolution, connectionist natural language processing, and the computational neuroscience of the visual cortex. He is an editor of Machine Learning, Neural Networks, IEEE Transactions on Computational Intelligence and AI in Games, Cognitive Systems Research, and IEEE Transactions on Autonomous Mental Development.



Advanced Tutorials

Genetic Algorithms Theory
There are many interesting things we can study about genetic algorithms from a theoretical point of view. There are structural issues, such as how to design operators for different search spaces, and there are dynamical issues, such as working out the probability of future generations and where a GA will spend most of its time. We will touch on a number of these points from a general perspective, asking what can be said about how to design GAs and how they behave, independent of any particular optimization problem.

Jonathan Rowe
He is a Reader in Natural Computation in the School of Computer Science at the University of Birmingham. He received his PhD from the University of Exeter in 1991 in Artificial Intelligence, and then worked in industry for British Maritime Technology. Returning to academia to complete postdoctoral studies (studying the evolution of neural networks in parallel genetic algorithms), he then became a Senior Researcher at De Montfort University, specialising in the theory of evolutionary algorithms. He joined the University of Birmingham in 2000 as a lecturer in Computer Science. As well as contributing to the theory of evolutionary algorithms, Rowe has many interdisciplinary research interests including complex systems, biological and social modelling, medical image understanding, and quantum computation. He is on the editorial board of Evolutionary Computation (MIT Press) and Theoretical Computer Science (Elsevier).

Computational Complexity and Evolutionary Computation
Evolutionary algorithms and other nature-inspired search heuristics like ant colony optimization have been shown to be very successful when dealing with real-world applications or problems from combinatorial optimization. In recent years, analyses have shown that these general randomized search heuristics can be analyzed like "ordinary" randomized algorithms, and that such analyses of the expected optimization time yield deeper insights into the functioning of evolutionary algorithms in the context of approximation and optimization. This is an important research area where a lot of interesting questions are still open.
The tutorial enables attendees to analyze the computational complexity of evolutionary algorithms and other search heuristics in a rigorous way. An overview of the tools and methods developed within the last 15 years is given and practical examples of the application of these analytical methods are presented.
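To give a flavour of the kind of statement such analyses make: a classical result of this field is that the expected optimization time of the (1+1)-EA on the OneMax function is Theta(n log n). The sketch below (illustrative, not from the tutorial) implements the analyzed algorithm and measures its hitting time empirically.

```python
import random

def one_plus_one_ea(n, max_iters=100000):
    """(1+1)-EA on OneMax: flip each bit independently with probability 1/n
    and accept the offspring if it is not worse. Returns the number of
    iterations until the all-ones string is found; runtime analysis bounds
    the expectation of exactly this quantity by O(n log n)."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = sum(x)
    for t in range(1, max_iters + 1):
        y = [1 - b if random.random() < 1 / n else b for b in x]
        fy = sum(y)
        if fy >= fx:           # elitist acceptance: never get worse
            x, fx = y, fy
        if fx == n:            # optimum reached
            return t
    return max_iters
```

For n = 30, the theoretical expectation e·n·ln(n) is roughly 280 iterations, and individual runs land in that vicinity.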

Frank Neumann
He studied Technical Computer Science in Dortmund and Kiel, Germany. He received his diploma and Ph.D. from the Christian-Albrechts-University of Kiel in 2002 and 2006, respectively. Currently, he is a Senior Lecturer at the School of Computer Science, University of Adelaide, Australia. In his work, he considers evolutionary computation methods, in particular for combinatorial and multiobjective optimization problems. Together with Carsten Witt he has written the textbook "Bioinspired Computation in Combinatorial Optimization: Algorithms and Their Computational Complexity", published by Springer. Frank has given several tutorials at GECCO and PPSN during the last years.

Thomas Jansen
He was born in 1969 and studied Computer Science at the University of Dortmund, Germany, where he received his diploma (1996) and Ph.D. (2000). From September 2001 to August 2002 he was a postdoc at Kenneth De Jong's EClab at George Mason University in Fairfax, VA. He was Junior Professor for Computational Intelligence from September 2002 to February 2009 at the Technical University Dortmund. Since March 2009 he has been Stokes Lecturer at the Department of Computer Science at University College Cork, Ireland. His research is centered around the design and theoretical analysis of evolutionary algorithms and other randomized search heuristics. He has published 17 journal papers and 34 conference papers, and has contributed six book chapters. He is an associate editor of Evolutionary Computation (MIT Press) and a member of the steering committee of the Theory of Randomized Search Heuristics workshop series. He was program chair at PPSN 2008, and co-organized FOGA 2009, a workshop on Bridging Theory and Practice (PPSN 2008), two Dagstuhl workshops on Theory of Evolutionary Computation (2004 and 2006), and one on Artificial Immune Systems (AIS).



Evolving Quantum Computer Algorithms
Computer science will be radically transformed if ongoing efforts to build large-scale quantum computers eventually succeed and if the properties of these computers meet optimistic expectations. Nevertheless, computer scientists still lack a thorough understanding of the power of quantum computing, and it is not always clear how best to utilize the power that is understood. This dilemma exists because quantum algorithms are difficult to grasp and even more difficult to write. Despite large-scale international efforts, only a few important quantum algorithms are documented, leaving many essential questions about the potential of quantum algorithms unanswered.
These unsolved problems are ideal challenges for the application of automatic programming technologies. Genetic programming techniques, in particular, have already produced several new quantum algorithms and it is reasonable to expect further discoveries in the future. These methods will help researchers to discover how additional practical problems can be solved using quantum computers, and they will also help to guide theoretical work on both the power and limits of quantum computing.
This tutorial will provide an introduction to quantum computing and an introduction to the use of evolutionary computation for automatic quantum computer programming. No background in physics or in evolutionary computation will be assumed. While the primary focus of the tutorial will be on general concepts, specific results will also be presented, including human-competitive results produced by genetic programming. Follow-up material is available from the presenter's book, Automatic Quantum Computer Programming: A Genetic Programming Approach, published by Springer and Kluwer Academic Publishers.

Lee Spector
He is a Professor of Computer Science in the School of Cognitive Science at Hampshire College in Amherst, Massachusetts, and an adjunct professor in the Department of Computer Science at the University of Massachusetts, Amherst. He received a B.A. in Philosophy from Oberlin College in 1984 and a Ph.D. from the Department of Computer Science at the University of Maryland in 1992. His areas of teaching and research include genetic and evolutionary computation, quantum computation, and a variety of intersections between computer science, cognitive science, evolutionary biology, and the arts. He is the Editor-in-Chief of the journal Genetic Programming and Evolvable Machines (published by Springer) and a member of the editorial board of Evolutionary Computation (published by MIT Press). He is also a member of the SIGEVO executive committee, and he was named a Fellow of the International Society for Genetic and Evolutionary Computation.
More info: http://hampshire.edu/lspector 

Evolutionary Multiobjective Optimization
Many optimization problems are multiobjective in nature in the sense that multiple, conflicting criteria need to be optimized simultaneously. Due to the conflict between objectives, usually no single optimal solution exists. Instead, the optimum corresponds to a set of so-called Pareto-optimal solutions, for which no other solution has better function values in all objectives.
Evolutionary Multiobjective Optimization (EMO) algorithms are widely used in practice for solving multiobjective optimization problems for several reasons. As randomized black-box algorithms, EMO approaches make it possible to tackle problems with nonlinear, non-differentiable, or noisy objective functions. As set-based algorithms, they can compute or approximate the full set of Pareto-optimal solutions in a single algorithm run, as opposed to classical solution-based techniques from the multi-criteria decision making (MCDM) field. Using EMO approaches in practice has two other advantages: they make it possible to learn about a problem formulation, for example by automatically revealing common design principles among (Pareto-optimal) solutions (innovization), and it has been shown that certain single-objective problems become easier to solve with randomized search heuristics if the problem is reformulated as a multiobjective one (multiobjectivization).
This tutorial aims at giving a broad introduction to the EMO field and at presenting some of its recent research results in more detail. More specifically, I am going to (i) introduce the basic principles of EMO algorithms in comparison to classical solution-based approaches, (ii) show a few practical examples which motivate the use of EMO in terms of the mentioned innovization and multiobjectivization principles, and (iii) present a general overview of state-of-the-art algorithms and techniques. Moreover, I will present some of the most important research results in areas such as indicator-based EMO, preference articulation, and performance assessment.
Though classified as advanced, this tutorial is intended for both novices and regular users of EMO. Those without any knowledge will learn about the foundations of multiobjective optimization and the basic working principles of state-of-the-art EMO algorithms. Open questions, presented throughout the tutorial, can serve for all participants as a starting point for future research and/or discussions during the conference.
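The Pareto-dominance relation underlying all EMO algorithms can be stated in a few lines. The sketch below (illustrative, for minimization) defines dominance and extracts the nondominated subset of a set of objective vectors.

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): a is no worse than b in every
    objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated(points):
    """Return the objective vectors not dominated by any other vector;
    these are the candidates a set-based EMO algorithm tries to approximate."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, among the bi-objective vectors (1, 5), (2, 3), (3, 4), (4, 1), (5, 5), the vector (3, 4) is dominated by (2, 3) and (5, 5) is dominated by every other point, leaving (1, 5), (2, 3), and (4, 1) as the nondominated front.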

Dimo Brockhoff
After obtaining his diploma in computer science (Dipl.-Inform.) from the University of Dortmund, Germany in 2005, Dimo Brockhoff received his PhD (Dr. sc. ETH) from ETH Zurich, Switzerland in 2009. Between June 2009 and November 2010 he was a postdoctoral researcher at INRIA Saclay - Ile-de-France in Orsay, France. Since November 2010 he has been a postdoctoral researcher at LIX, Ecole Polytechnique, within the CNRS-Microsoft chair "Optimization for Sustainable Development (OSD)" in Palaiseau, France. His research interests are focused on evolutionary multiobjective optimization (EMO), in particular on many-objective optimization and theoretical aspects of indicator-based search.

Constraint-Handling Techniques used with Evolutionary Algorithms
Evolutionary Algorithms (EAs), when used for global optimization, can be seen as unconstrained optimization techniques. Therefore, they require an additional mechanism to incorporate constraints of any kind (i.e., inequality, equality, linear, nonlinear) into their fitness function. Although the use of penalty functions (very popular with mathematical programming techniques) may seem an obvious choice, this sort of approach requires careful fine-tuning of the penalty factors to be used. Otherwise, an EA may be unable to reach the feasible region (if the penalty is too low) or may quickly reach the feasible region but be unable to locate solutions that lie on the boundary with the infeasible region (if the penalty is too severe). This has motivated the development of a number of approaches to incorporate constraints into the fitness function of an EA. This tutorial will cover the main proposals in current use, including novel approaches such as the use of tournament rules based on feasibility, multiobjective optimization concepts, hybrids with mathematical programming techniques (e.g., Lagrange multipliers), cultural algorithms, and artificial immune systems, among others. Other topics such as the importance of maintaining diversity, current benchmarks, and the use of alternative search engines (e.g., particle swarm optimization, differential evolution, evolution strategies, etc.) will also be discussed (as time allows).
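As a toy illustration of the penalty-factor dilemma described above (a hedged sketch; the test problem and the penalty values are my own, not the tutorial's):

```python
import random

# Static exterior penalty for the constrained problem
#   minimize f(x) = x0^2 + x1^2   subject to g(x) = 1 - x0 - x1 <= 0,
# whose constrained optimum lies on the boundary at (0.5, 0.5).

def penalized_fitness(x, r):
    f = x[0] ** 2 + x[1] ** 2
    g = 1.0 - x[0] - x[1]            # constraint is violated when g > 0
    return f + r * max(0.0, g) ** 2  # quadratic exterior penalty, factor r

def best_of_random_search(r, n=2000, seed=1):
    """Tiny random search; only meant to expose the effect of r."""
    rng = random.Random(seed)
    pts = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(n)]
    return min(pts, key=lambda x: penalized_fitness(x, r))
```

With `r = 0` the search settles near the unconstrained optimum (0, 0), which is infeasible; with a large `r` it is pushed toward the feasible side of the boundary. This is exactly why the penalty factor needs the careful tuning mentioned above.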

Carlos Artemio Coello Coello
He received a BSc in Civil Engineering from the Universidad Autonoma de Chiapas in Mexico in 1991 (graduating Summa Cum Laude). Then, he was awarded a scholarship from the Mexican government to pursue graduate studies in Computer Science at Tulane University (in the USA). He received an MSc and a PhD in Computer Science in 1993 and 1996, respectively.
Dr. Coello has been a Senior Research Fellow in the Plymouth Engineering Design Centre (in England) and a Visiting Professor at DePauw University (in the USA). He is currently a full professor at CINVESTAV-IPN in Mexico City, Mexico.
He has published over 250 papers in international peer-reviewed journals, book chapters, and conferences. He has also co-authored the book "Evolutionary Algorithms for Solving Multi-Objective Problems", which is now in its Second Edition (Springer, 2007), and has co-edited the book "Applications of Multi-Objective Evolutionary Algorithms" (World Scientific, 2004).
He has delivered invited talks, keynote speeches and tutorials at international conferences held in Spain, USA, Canada, Switzerland, UK, Chile, Colombia, Brazil, Argentina, India, Uruguay and Mexico.
Dr. Coello has served as a technical reviewer for over 50 international journals and for more than 100 international conferences, and currently serves as associate editor of the journals "IEEE Transactions on Evolutionary Computation", "Evolutionary Computation", "Computational Optimization and Applications", "Pattern Analysis and Applications" and the "Journal of Heuristics". He is also a member of the editorial boards of the journals "Soft Computing", "Engineering Optimization", the "Journal of Memetic Computing" and the "International Journal of Computational Intelligence Research".
He received the 2007 National Research Award (granted by the Mexican Academy of Science) in the area of "exact sciences". He is a member of the Mexican Academy of Science, a member of Sigma Xi, The Scientific Research Society, and an IEEE Fellow. He is also a member of the Council of Authors of the "International Society for Genetic and Evolutionary Computation". His current research interests are evolutionary multiobjective optimization and constraint-handling techniques for evolutionary algorithms.



Representations for Evolutionary Algorithms
Successful and efficient use of evolutionary algorithms (EAs) depends on the choice of the genotype, the problem representation (the mapping from genotype to phenotype), and on the choice of search operators that are applied to the genotypes. These choices cannot be made independently of each other. The question of whether a certain representation leads to better-performing EAs than an alternative representation can only be answered when the operators applied are taken into consideration. The reverse is also true: deciding between alternative operators is only meaningful for a given representation.
In EA practice one can distinguish two complementary approaches. The first approach uses indirect representations, where a solution is encoded in a standard data structure, such as strings, vectors, or discrete permutations, and standard off-the-shelf search operators are applied to these genotypes. To evaluate the solution, the genotype needs to be mapped to the phenotype space. The proper choice of this genotype-phenotype mapping is important for the performance of the EA search process. The second approach, the direct representation, encodes solutions to the problem in its most 'natural' space and designs search operators to operate on this representation.
Research in the last few years has identified a number of key concepts for analysing the influence of representation-operator combinations on EA performance.
These concepts are
• locality and
• redundancy.
Locality is a result of the interplay between the search operator and the genotype-phenotype mapping. Representations are redundant if the number of possible genotypes exceeds the number of phenotypes. Furthermore, redundant representations can lead to biased encodings if some phenotypes are on average represented by a larger number of genotypes than others. Finally, a bias need not be the result of the representation but can also be caused by the search operator.
The tutorial gives a brief overview of existing guidelines for representation design, illustrates the different aspects of representations, briefly reviews theoretical models describing these aspects, and illustrates their relevance with practical examples.
It is expected that the participants have a basic understanding of EA principles.
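A tiny, self-contained illustration of redundancy and bias (my example, not from the tutorial): genotypes are bit strings of length 4, and the phenotype is simply the number of ones. There are 16 genotypes but only 5 phenotypes, so the representation is redundant, and it is biased because phenotype 2 is encoded by 6 genotypes while phenotypes 0 and 4 are encoded by 1 each.

```python
from collections import Counter
from itertools import product

def phenotype(genotype):
    """Redundant genotype-phenotype mapping: 4 bits -> number of ones."""
    return sum(genotype)

# Count how many genotypes encode each phenotype.
counts = Counter(phenotype(g) for g in product((0, 1), repeat=4))
# counts == {0: 1, 1: 4, 2: 6, 3: 4, 4: 1}
```

Under uniform initialization, such a bias makes over-represented phenotypes more likely to appear in the population, which is exactly the effect the tutorial's theoretical models quantify.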

Franz Rothlauf
He received a Diploma in Electrical Engineering from the University of Erlangen, Germany, a Ph.D. in Information Systems from the University of Bayreuth, Germany, and a Habilitation from the University of Mannheim, Germany, in 1997, 2001, and 2007, respectively.
He is currently chair of Information Systems, Mainz University, Germany. He has published more than 50 technical papers in the context of evolutionary computation, co-edited several conference proceedings, and is the author of the book "Representations for Genetic and Evolutionary Algorithms", which was published in a second edition in 2006.
His main research interests are problem representations for heuristic search approaches, especially evolutionary algorithms. He is a member of the editorial boards of the Evolutionary Computation Journal and the Journal of Artificial Evolution and Applications. He has been an organizer of several workshops on representation issues, chair of EvoWorkshops in 2005 and 2006, co-organizer of the European workshop series on "Evolutionary Computation in Communications, Networks, and Connected Systems", co-organizer of the European workshop series on "Evolutionary Computation in Transportation and Logistics", and co-chair of the program committee of the GA track at GECCO 2006. He was conference chair of GECCO 2009.



Foundations of Evolutionary Multi-Objective Optimization
Evolutionary algorithms have been shown to be especially successful when dealing with multiobjective problems. The area of evolutionary multiobjective optimization has grown rapidly in recent years. In this advanced tutorial, we give an overview of the different results that have been achieved on the theoretical aspects of evolutionary multiobjective optimization. The tutorial will complement the introductory tutorial on "Evolutionary Multi-Objective Optimization" by Eckart Zitzler.
There has been a lot of progress on the foundational side of evolutionary multiobjective optimization. The first part of the tutorial will cover convergence and runtime analysis of evolutionary multiobjective optimization. We will also discuss how multiobjective models of single-objective optimization problems can provably lead to faster evolutionary algorithms for different combinatorial optimization problems.
How to achieve a good spread over the Pareto front is one of the big questions in evolutionary multiobjective optimization. The second part is therefore dedicated to several diversity mechanisms, particularly the hypervolume indicator. This indicator plays an important role and has found many applications. We present recent results on the complexity and approximability of the hypervolume and discuss what they imply for applications.
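For intuition, the hypervolume indicator is easy to compute exactly in two dimensions (an illustrative sketch of mine, not the tutorial's material; the complexity results it discusses concern higher dimensions):

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-D non-dominated front (minimization),
    bounded above by the reference point ref. Sorting by the first
    objective makes each point contribute one rectangular strip."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# Front {(1,3), (2,2), (3,1)} with reference point (4,4) dominates
# three stacked rectangles of areas 3, 2 and 1.
hv = hypervolume_2d([(1, 3), (2, 2), (3, 1)], (4, 4))  # 6.0
```

In two or three dimensions such exact sweeps are cheap; the interesting question covered in the tutorial is what happens as the number of objectives grows.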

Tobias Friedrich
He received his MSc in computer science from the University of Sheffield, UK, in 2002 and his diploma in mathematics from the University of Jena, Germany, in 2005. In 2007 he obtained a PhD in theoretical computer science from Saarland University, Germany. In 2008/2009 he was a postdoc in the Algorithms Group of the International Computer Science Institute Berkeley, USA. Currently, he is a Research Associate at the Max-Planck-Institut für Informatik in Saarbrücken, Germany. The central topic of his work is randomized algorithms (both classical and evolutionary) and randomized methods in mathematics and computer science in general. Tobias has won best paper awards in the tracks "Genetic Algorithms" (GECCO 2008) and "Evolutionary Multiobjective Optimization" (GECCO 2009, 2010).
More information can be found at http://www.mpi-inf.mpg.de/~tfried

Frank Neumann
He studied Technical Computer Science in Dortmund and Kiel, Germany. He received his diploma and Ph.D. from the Christian-Albrechts-University of Kiel in 2002 and 2006, respectively. Currently, he is a Senior Lecturer at the School of Computer Science, University of Adelaide, Australia.
In his work, he considers evolutionary computation methods in particular for combinatorial and multiobjective optimization problems. Together with Carsten Witt he has written the textbook "Bioinspired Computation in Combinatorial Optimization - Algorithms and Their Computational Complexity", published by Springer. Frank has given several tutorials at GECCO and PPSN in recent years.
Expected enrollment: The tutorial is for people interested in evolutionary multiobjective optimization as well as for people interested in theoretical aspects of evolutionary computation in general. We expect approximately 50 participants.

An Introduction to Statistical Analysis for Evolutionary Computation
Although the situation has been improving over the past few years, many researchers in the EC field still show an unfortunate lack of statistical rigor when performing and analyzing experiments. Experimental results that lack a proper statistical analysis should be considered anecdotal at best, and may even be wholly inaccurate. This tutorial aims at providing the basic statistical framework that all experimenters in the EC field should follow.
The tutorial will cover introductory topics in statistics such as the Z test, the t test, confidence intervals, the Central Limit Theorem (its power and limitations), and nonparametric statistics, and will touch on multivariate and multifactorial analysis as well as proper experimental design.
The tutorial will be presented at an introductory level and is meant for any EC researchers who want to compare their newly designed EC system with established EC systems to see if there is an improvement, or who want to determine which EC system performs better on their chosen problem; i.e., nearly everyone. It is vital, if our community is to be taken seriously, for us to continue to educate ourselves, and especially new graduate students, on the statistical techniques that are considered de rigueur for experiments performed in any field that is stochastic in nature, as EC is.
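As a minimal example of the kind of analysis advocated here (a hedged sketch of mine; the sample data are invented), confidence intervals on mean performance can be computed with the standard library alone, using the normal approximation that the Central Limit Theorem justifies for reasonably large samples (say, 30 or more independent runs):

```python
import math
from statistics import mean, stdev

def ci95(samples):
    """95% confidence interval for the mean, normal (CLT) approximation."""
    m = mean(samples)
    half = 1.96 * stdev(samples) / math.sqrt(len(samples))
    return m - half, m + half

# Invented final fitness values of 30 runs of two EC systems.
runs_a = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3, 12.2, 11.7, 12.4, 12.0] * 3
runs_b = [13.0, 13.4, 12.9, 13.1, 13.3, 13.2, 12.8, 13.5, 13.0, 13.2] * 3
lo_a, hi_a = ci95(runs_a)
lo_b, hi_b = ci95(runs_b)
```

Non-overlapping intervals, as here, suggest a real difference; overlapping intervals call for a formal test (a t test, or a nonparametric one when normality is doubtful), which is precisely what the tutorial covers.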

Mark Wineberg
He has been actively researching the field of Evolutionary Computation (EC) and Genetic Algorithms (GA) since 1993, when he was still a graduate student at Carleton University, Ottawa, Canada. Over the years he has published on various topics including: the intersection of Genetic Algorithms and Genetic Programming, enhancing the GA for improved behavior in dynamic environments through specialized multiple populations, and exploring the concept of distances and diversity in GA populations. He is currently continuing his research in dynamic environments and studying the properties that allow one population to evolve more readily than another.
Prof. Wineberg also teaches an undergraduate course at Guelph on computer simulation and modeling of discrete stochastic systems, which emphasizes proper statistical analysis, as well as a graduate course on experimental design and analysis for computer science, which is an outgrowth of the statistical analysis tutorial given at GECCO.



Specialized Techniques and Applications

Theory of Randomized Search Heuristics in Combinatorial Optimization
The rigorous mathematical analysis of randomized search heuristics (RSHs) with respect to their expected runtime is a growing research area where many results have been obtained in recent years. This class of heuristics contains well-known approaches such as Randomized Local Search (RLS), the Metropolis Algorithm (MA), Simulated Annealing (SA), and Evolutionary Algorithms (EAs) as well as more recent approaches such as Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO). Such heuristics are often applied to problems whose structure is not known or when there are not enough resources such as time, money, or knowledge to obtain good specific algorithms. It is widely acknowledged that a solid mathematical foundation for such heuristics is needed.
Most designers of RSHs, however, focused on mimicking processes in nature (such as evolution) rather than making the heuristics amenable to a mathematical analysis. This is different from the classical design of (randomized) algorithms, which are developed with their theoretical analysis of runtime (and proof of correctness) in mind. Despite these obstacles, research over roughly the last 15 years has shown how to apply the methods for the probabilistic analysis of randomized algorithms to RSHs. Mostly, the expected runtime of RSHs on selected problems is analyzed. Thereby, we understand why and when RSHs are efficient optimizers and, conversely, when they cannot be efficient.
The tutorial will give an overview of the analysis of RSHs for solving combinatorial optimization problems. Starting from the first toy examples such as the OneMax function, we approach more realistic problems and arrive at analyses of the runtime and approximation quality of RSHs even for NP-hard problems. Our studies treat not only simple RLS algorithms and SA but also more complex population-based EAs. The combinatorial optimization problems that we discuss include the maximum matching problem, the partition problem and, in particular, the minimum spanning tree problem as an example where Simulated Annealing beats the Metropolis algorithm in combinatorial optimization. Important concepts of the analyses will be described as well.
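For concreteness, the simplest object of such analyses, the (1+1) EA on OneMax, fits in a few lines (an illustrative sketch, not the tutorial's code). Each bit flips independently with probability 1/n, and the offspring replaces the parent when it is at least as good; the expected runtime is Theta(n log n):

```python
import random

def one_plus_one_ea_onemax(n, seed=0):
    """Run the (1+1) EA on OneMax (maximize the number of ones)
    and return the number of iterations until the optimum is found."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    steps = 0
    while sum(x) < n:
        # Standard bit mutation: flip each bit with probability 1/n.
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        if sum(y) >= sum(x):  # accept if not worse
            x = y
        steps += 1
    return steps
```

Running this for several values of n and plotting steps against n log n is a nice empirical companion to the formal O(n log n) upper-bound proof presented in such tutorials.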

Carsten Witt
He studied Computer Science at the Technical University of Dortmund, Germany, where he received his diploma and Ph.D. in 2000 and 2004, respectively. In spring 2009, he moved to the Technical University of Denmark, where he now works as an associate professor in algorithms. Carsten's main research interests are the theoretical aspects of randomized search heuristics, in particular evolutionary algorithms, ant colony optimization and particle swarm optimization. He co-organized the biennial Dagstuhl seminar "Theory of Evolutionary Algorithms" in 2008 and 2010 and has given tutorials on the computational complexity of bio-inspired search heuristics in combinatorial optimization at GECCO 2008, PPSN 2008 (together with Frank Neumann), ThRaSH 2009 (together with Thomas Jansen) and at GECCO 2010. Together with Frank Neumann, he has authored the textbook "Bioinspired Computation in Combinatorial Optimization - Algorithms and Their Computational Complexity".

Cartesian Genetic Programming
Cartesian Genetic Programming (CGP) is an increasingly popular and efficient form of Genetic Programming.
Cartesian Genetic Programming is a highly cited technique that was developed by Julian Miller in 1999 and 2000 from earlier joint work of Julian Miller with Peter Thomson in 1997. In its classic form, it uses a very simple integer-based genetic representation of a program in the form of a directed graph. Graphs are very useful program representations and can be applied to many domains (e.g. electronic circuits, neural networks). In a number of studies, CGP has been shown to be comparable in efficiency to other GP techniques. It is also very simple to program.
Since then, the classical form of CGP has been developed and made more efficient in various ways, notably by including automatically defined functions (modular CGP) and self-modification operators (self-modifying CGP). SMCGP was developed by Julian Miller, Simon Harding and Wolfgang Banzhaf. It uses functions that cause the evolved programs to change themselves as a function of time. Using this technique it is possible to find general solutions to classes of problems and mathematical algorithms (e.g. arbitrary parity, n-bit binary addition, sequences that provably compute pi and e to arbitrary precision, and so on).
This tutorial will cover the basic technique, advanced developments and applications to a variety of problem domains. It is given by two of the leading researchers in CGP.
The first edited book on CGP is due to be published in the first half of 2011.
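The classic integer representation can be sketched as follows (a hedged illustration; the function set and genome layout are my own choices for exposition, not Miller's reference implementation). Each node is an (input1, input2, function) triple of integers, node inputs refer to program inputs or earlier nodes, and a final gene selects the output:

```python
import operator

# Illustrative function set: indices into this list are the "function" genes.
FUNCS = [operator.add, operator.sub, operator.mul, lambda a, b: a]

def evaluate_cgp(genome, inputs, n_nodes):
    """Decode and evaluate a single-row CGP genome of integer triples."""
    values = list(inputs)  # program inputs occupy indices 0..len(inputs)-1
    for i in range(n_nodes):
        a, b, f = genome[3 * i : 3 * i + 3]
        values.append(FUNCS[f](values[a], values[b]))
    return values[genome[-1]]  # last gene: index of the output node

# Genome encoding (x + y) * x with inputs x=values[0], y=values[1]:
# node 2 = add(0, 1), node 3 = mul(2, 0), output = node 3.
genome = [0, 1, 0,  2, 0, 2,  3]
result = evaluate_cgp(genome, [3, 4], n_nodes=2)  # (3 + 4) * 3 = 21
```

Because nodes may be ignored by the output gene, the representation naturally carries inactive ("junk") genes, one of the features often credited for CGP's search behavior.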

Julian Miller
He is an academic in the Department of Electronics at the University of York. His main research interests are genetic programming (GP) and computational development. He has published over 180 refereed papers on evolutionary computation, genetic programming, evolvable hardware, computational development and other topics. He has been chair or co-chair of sixteen conferences or workshops in genetic programming, computational development, evolvable hardware and evolutionary techniques. Dr. Miller chaired the Evolvable Hardware tracks at the Genetic and Evolutionary Computation Conference in 2002-2003 and was Genetic Programming track chair in 2008. He co-chaired the Generative and Developmental Systems (GDS) track in 2007, 2009 and 2010. He is an associate editor of Genetic Programming and Evolvable Machines and a former associate editor of IEEE Transactions on Evolutionary Computation. He is an editorial board member of the journals Evolutionary Computation and Unconventional Computing.

Simon Harding
He is a postdoctoral research fellow in the Department of Computer Science at Memorial University, Canada. His research interests include genetic programming, unconventional computation and parallel computing on consumer-grade hardware. He has been co-chair of the Computational Intelligence on Consumer Games and Graphics Hardware track at WCCI 2008 and GECCO 2009/2010. He is also co-organizer of the GECCO GPU programming competition.

Large-scale data mining using Genetics-Based Machine Learning
We are living in the petabyte era. We have larger and larger data to analyze, process and transform into useful answers for the domain experts. Robust data mining tools, able to cope with petascale volumes and/or high dimensionality while producing human-understandable solutions, are key in several domain areas.
Genetics-based machine learning (GBML) techniques are perfect candidates for this task. Recent advances in representations, learning paradigms, and theoretical modeling have shown the competitiveness of non-EC techniques in handling large-scale data analysis. If evolutionary learning techniques aspire to be a relevant player in this context, they need the capacity to process these vast amounts of data, and they need to process this data within a reasonable time. Moreover, massive computation cycles are getting cheaper every day, allowing researchers access to unprecedented computational resources on the edge of petascale computing. Several topics are interlaced in these two requirements: (1) having the proper learning paradigms and knowledge representations, (2) understanding them and knowing when they are suitable for the problem at hand, (3) using efficiency-enhancement techniques, and (4) transforming and visualizing the produced solutions to give back as much insight as possible to the domain experts are a few of them.
This tutorial will try to shed light on the questions mentioned above, following a roadmap that starts by exploring what large scale means, and why large is both a challenge and an opportunity for GBML methods. As we will show, the opportunity has multiple facets: efficiency-enhancement techniques, representations able to cope with large-dimensionality spaces, scalability of learning paradigms, and alternative programming models, each of them helping to make GBML very attractive for large-scale data mining. Given these building blocks, we will continue to unfold how we can model the scalability of the components of GBML systems, targeting a better engineering effort that will make embracing large datasets routine. Finally, we will illustrate how all these ideas fit together by reviewing real applications of GBML systems and what further directions will require serious consideration.

Jaume Bacardit
He received his Ph.D. in 2004 from the Ramon Llull University in Barcelona, Spain. His thesis studied the adaptation of the Pittsburgh approach of Learning Classifier Systems (LCS) to data mining tasks. In 2005 he joined the University of Nottingham, UK, as a postdoctoral researcher to work on the application of LCS to data mine large-scale bioinformatics datasets and extract interpretable explanations from the learning process. In 2008 he was appointed as a Lecturer in Bioinformatics at the University of Nottingham. This is a joint post between the schools of Biosciences and Computer Science with the aim of developing interdisciplinary research at the interface of both disciplines. His research interests include the application of Learning Classifier Systems and other kinds of evolutionary learning to data mine large-scale challenging datasets and, in a general sense, the use of data mining and knowledge discovery for biological domains. He has served on the program committees of, among other conferences and workshops, the Genetic and Evolutionary Computation Conference (GECCO), the International Workshop on Learning Classifier Systems (IWLCS), the IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB) and the IEEE International Joint Conference on Neural Networks (IJCNN).
Bacardit has reviewed articles, among others, for the following journals: IEEE Transactions on Systems, Man and Cybernetics Part B, IEEE Transactions on Evolutionary Computation, Evolutionary Computation, Evolutionary Intelligence, Natural Computation and Soft Computing. He has 28 refereed international publications between journal papers, conference papers and book chapters, and he has given 7 invited talks. Together with Dr. Martin Butz and Dr. Ester Bernadó-Mansilla he co-organized the 2007 and 2008 editions of the IWLCS and co-edited a book with extended versions of the papers presented at IWLCS 2006 and IWLCS 2007.


Xavier Llorà
His interests and work on genetics-based machine learning (GBML) have earned him a spot among the leaders of the renaissance of learning classifier systems (LCSs). His 2002 PhD dissertation challenged traditional data mining techniques by showing the effectiveness of GBML approaches. He has helped organize two editions of the International Workshop on Learning Classifier Systems and their proceedings. He served as LCS/GBML track chair at the ACM SIGEVO-organized GECCO 2005 conference and also organized the NCSA/IlliGAL Gathering on Evolutionary Learning in 2006. Llorà, awarded a young researcher fellowship by the Catalan Government, started his academic career at the Ramon Llull University (Barcelona, Spain), where he earned his PhD degree with honors. In 2003 he moved to the University of Illinois at Urbana-Champaign, where he joined the Illinois Genetic Algorithms Laboratory.
Since then, Llorà has headed the DISCUS project (http://www-discus.ge.uiuc.edu/), a project to support collaborative human innovation and creativity. In summer 2005, Llorà was named research assistant professor at the University of Illinois at Urbana-Champaign and in 2006 an NCSA affiliate, where he pursues the development of data-intensive computing infrastructure for large-scale data and text mining under the SEASR project (http://seasr.org).




Drift analysis
Drift analysis, introduced to the field of evolutionary computation by He and Yao ten years ago, quickly became one of the strongest tools for proving upper and lower bounds on the runtimes of evolutionary algorithms. It has, however, the reputation of being difficult to use, both because it relies on deeper mathematical tools and because it needs a clever guess of a potential function.
In this tutorial, after presenting the classical results, I will focus on the recently developed multiplicative drift analysis method. It is often easier to employ and yields stronger results, e.g., runtime bounds that hold with high probability. I will end with a number of open problems of varying difficulty.
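For reference, the core statement behind the multiplicative drift method (my paraphrase of the result of Doerr, Johannsen, and Winzen) reads:

```latex
% Let (X_t) be a stochastic process over \{0\} \cup [x_{\min}, \infty)
% and let T be the first time t with X_t = 0. If the process loses in
% expectation at least a \delta-fraction of its current value in every
% step, the expected hitting time is logarithmic in the initial value:
E\bigl[\,X_t - X_{t+1} \mid X_t = s\,\bigr] \;\ge\; \delta\, s
\quad\Longrightarrow\quad
E[T] \;\le\; \frac{1 + \ln\bigl(X_0 / x_{\min}\bigr)}{\delta}.
```

The art, as noted above, lies in guessing a potential X_t for which such a multiplicative decrease can actually be proved.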
The intended audience of the tutorial has some basic experience in theory, though no particular prerequisites are required.

Benjamin Doerr
He is a senior researcher at the Max Planck Institute for Computer Science and a professor at Saarland University. He received his diploma (1998), PhD (2000) and habilitation (2005) in mathematics from Kiel University. Together with Frank Neumann and Ingo Wegener, he founded the theory track at GECCO and served as its co-chair 2007-2009.
He is a member of the editorial boards of Evolutionary Computation and Information Processing Letters. 

Automated Heuristic Design
This tutorial will discuss state-of-the-art techniques for automating the design of heuristic search methods, in order to remove or reduce the need for a human expert in the process of designing an effective algorithm to solve a search problem. Using machine learning or meta-level search, several approaches have been proposed in computer science, artificial intelligence and operational research. The aim is to develop methodologies which can adapt to different environments without having to manually customise the search, or its parameters, for each particular problem domain. This addresses one of the drawbacks of many current metaheuristic and evolutionary implementations, which tend to have to be customised for a particular class of problems or even specific problem instances.
We have identified two main types of approaches to this challenge: heuristic selection and heuristic generation. In heuristic selection the idea is to automatically combine fixed pre-existing simple heuristics or neighbourhood structures to solve the problem at hand, whereas in heuristic generation the idea is to automatically create new heuristics (or heuristic components) suited to a given problem or class of problems. The latter approach is typically achieved by combining, through the use of genetic programming for example, sub-components or building blocks of human-designed heuristics.
This tutorial will go over the intellectual roots and origins of both automated heuristic selection and generation, before discussing work carried out to date in these two directions and then focusing on some observations and promising research directions.
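Reduced to its simplest form, the heuristic-selection idea can be sketched as follows (illustrative code of mine, not from the tutorial; the toy problem, the low-level heuristics, and the scoring rule are all invented):

```python
import random

def flip_one(x, rng):
    """Low-level heuristic 1: flip a single random bit."""
    i = rng.randrange(len(x))
    y = x[:]
    y[i] ^= 1
    return y

def flip_two(x, rng):
    """Low-level heuristic 2: flip two random bits."""
    return flip_one(flip_one(x, rng), rng)

def hyper_heuristic(n=30, steps=2000, seed=0):
    """Selection hyper-heuristic on a toy bit-string maximization problem:
    pick a low-level heuristic with probability proportional to a score
    that rewards heuristics producing improvements."""
    rng = random.Random(seed)
    heuristics = [flip_one, flip_two]
    scores = [1.0] * len(heuristics)
    x = [0] * n
    for _ in range(steps):
        h = rng.choices(range(len(heuristics)), weights=scores)[0]
        y = heuristics[h](x, rng)
        if sum(y) >= sum(x):
            if sum(y) > sum(x):
                scores[h] += 1.0  # credit the heuristic that improved
            x = y
    return sum(x)
```

Real selection hyper-heuristics replace this naive scoring with more principled credit-assignment and acceptance schemes, but the division of labor (fixed low-level heuristics, a learned selection layer on top) is the same.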

Gabriela Ochoa
She is a senior research fellow in the Automated Scheduling, Optimisation and Planning (ASAP) Research Group, School of Computer Science, University of Nottingham, UK. She holds a BEng and a Master's degree in Computer Engineering from the Simon Bolivar University, Venezuela, and a PhD in Artificial Intelligence from the University of Sussex, UK. Her research interests lie in the foundations and application of evolutionary algorithms and heuristic search methods, with emphasis on automated heuristic design, self* search heuristics, fitness landscape analysis, and applications in combinatorial optimization, medicine and biology. Her articles on these topics have been published in Evolutionary Computation, IEEE Transactions on Evolutionary Computation, Genetic Programming and Evolvable Machines, Journal of Theoretical Biology, Physical Review E, Artificial Intelligence in Medicine, Journal of the Operational Research Society, GECCO, PPSN, etc. She has organized several workshops, special sessions and tutorials on hyper-heuristics and self* search (GECCO, PPSN, LION, CEC), and edited two journal special issues on these topics (Journal of Heuristics, and Evolutionary Computation). She conceived and is currently co-organizing the first "Cross-domain Heuristic Search Challenge" (http://www.asap.cs.nott.ac.uk/chesc2011/), an international research competition that aims to measure the performance of single search strategies across multiple problem domains rather than specifically selected ones.


Matthew Hyde
He received his PhD in Computer Science from the University of Nottingham, U.K., in 2009. He is currently a Senior Research Fellow within the Automated Scheduling, Optimisation, and Planning (ASAP) Research Group at Nottingham. His research interests include evolutionary computation, hyper-heuristics, metaheuristics, and operational research. He now works within the LANCS initiative, a collaborative project between four U.K. universities to build operational research theory for practice. He has previously worked on two EPSRC (Engineering and Physical Sciences Research Council) funded projects: the first investigated genetic programming as a hyper-heuristic, and the second is a £2.6M project which aims to investigate methodologies suitable for automating the heuristic design process. He has published seven refereed papers and two book chapters on the subject of automatic heuristic generation.

Evolution of digital circuits
Since the early 1990s, researchers have been applying evolutionary algorithms to the design of electronic circuits. Nowadays it is evident that the evolutionary design approach can automatically create efficient electronic circuits in many domains. This tutorial surveys fundamental concepts of evolutionary circuit design. It introduces relevant search algorithms and the basics of digital circuit design principles. Several case studies will be presented to demonstrate the strengths and weaknesses of the method, including evolutionary synthesis of gate-level circuits, image filter evolution in FPGAs and evolution of benchmark circuits for the evaluation of testability analysis methods. FPGAs will be presented as accelerators for evolutionary circuit design and circuit adaptation. Finally, it will be shown how to cope with the so-called scalability problem of evolutionary design, which has been identified as the most important problem from the point of view of applications.

Lukáš Sekanina
(MSc 1999, PhD 2002) He is a professor of computer engineering and computer science at the Faculty of Information Technology, Brno University of Technology. He was awarded the Fulbright scholarship and worked on evolutionary circuit design with the NASA Jet Propulsion Laboratory in 2004. He was a visiting lecturer at Pennsylvania State University and a visiting researcher with the Department of Informatics, University of Oslo, in 2001. His research focuses on evolutionary circuit design and evolvable hardware. Web page: http://www.fit.vutbr.cz/~sekanina

Automatic and Interactive Tuning of Algorithms
Providing tools for algorithm tuning (and the related statistical analysis) is the main topic of this tutorial, which supplies the necessary background for performing algorithm tuning with state-of-the-art tools. We will discuss the pros and cons of manual, interactive, and automatic tuning of randomized algorithms such as Genetic Algorithms, Differential Evolution, Particle Swarm Optimization, and Evolution Strategies.
Moreover, we highlight the important components of experimental work, such as proper setup, visualization, and reporting, and refer to the most prominent mistakes that may occur, giving examples of failed and successful experiments.
The Sequential Parameter Optimization Toolbox (SPOT) is introduced as an example; it is freely available via CRAN (the R package server network), see
http://cran.r-project.org/web/packages/SPOT/index.html
Other tuning approaches such as F-Race, REVAC, and ParamILS will be discussed as well.
Proper experimentation is not trivial and can only be learned by practising it. We are therefore planning to present a hands-on workshop in addition to this tutorial, helping users apply its key ideas to algorithms of their choice. See
http://advm1.gm.fh-koeln.de/~bartz/geccoworkshop.html for information about last year's workshop.
The book "Experimental Methods for the Analysis of Optimization Algorithms" (http://www.springer.com/9783642025372) provides some background.
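To make the idea of automatic tuning concrete, here is a deliberately simple sketch: plain random search over the mutation rate of a (1+1) EA on OneMax, scoring each candidate rate by the mean best fitness over several runs. The problem, budgets, and rate range are made-up illustrations; SPOT itself uses a far more sophisticated sequential, model-based approach.

```python
import random

def one_plus_one_ea(mutation_rate, n=50, budget=2000, seed=0):
    """(1+1) EA on OneMax: flip each bit independently, keep if not worse."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = sum(x)
    for _ in range(budget):
        y = [1 - b if rng.random() < mutation_rate else b for b in x]
        fy = sum(y)
        if fy >= fx:
            x, fx = y, fy
    return fx

def tune(trials=20, runs=5, seed=1):
    """Sample candidate rates log-uniformly and keep the best performer."""
    rng = random.Random(seed)
    best_rate, best_score = None, -1.0
    for _ in range(trials):
        rate = 10 ** rng.uniform(-3, -0.5)   # log-uniform in [0.001, ~0.32]
        # average over independent runs to reduce the noise of one run
        score = sum(one_plus_one_ea(rate, seed=r) for r in range(runs)) / runs
        if score > best_score:
            best_rate, best_score = rate, score
    return best_rate, best_score
```

Averaging over repeated runs before comparing configurations is exactly the kind of statistical care (and its pitfalls) that the tutorial discusses.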

Thomas Bartz-Beielstein:
Dr. Thomas Bartz-Beielstein is a professor of Applied Mathematics at Cologne University of Applied Sciences. He has published several dozen research papers, presented tutorials about tuning, and has edited several books in the field of Computational Intelligence, e.g., "Experimental Research in EC". His research interests include optimization, simulation, and statistical analysis of complex real-world problems. Prof. Bartz-Beielstein and Mike Preuss invented sequential parameter optimization, which has been applied as a tuner for numerous optimization algorithms such as evolution strategies, differential evolution, and particle swarm optimization.

Mike Preuss:
He is a Research Associate at the Computer Science Department of the University of Dortmund, Germany (since 2000), where he also received his Diploma degree in 1998.
His research interests focus on the field of evolutionary algorithms for realvalued problems, namely on multimodal and multiobjective niching and the experimental methodology for (nondeterministic) optimization algorithms.
He is currently working on the adaptability and applicability of computational intelligence techniques for various engineering domains and computer games. 

Theory of Swarm Intelligence
The theory of swarm intelligence has made rapid progress in the last 5 years. Following a very successful line of research in evolutionary computation, various results on the computational complexity of swarm intelligence algorithms have appeared. These results shed light on the working principles of swarm intelligence algorithms, help to identify the impact of parameters and other design choices on performance, and contribute to a solid theoretical foundation of swarm intelligence.
This tutorial will give a comprehensive overview of theoretical results on swarm intelligence algorithms, with an emphasis on their computational complexity. In particular, it will be shown how techniques for the analysis of evolutionary algorithms can be used to analyze swarm intelligence algorithms and how the performance of swarm intelligence algorithms compares to that of evolutionary algorithms. The tutorial will be divided into a first, larger part on ant colony optimization (ACO) and a second, smaller part on particle swarm optimization (PSO).
For ACO we will consider simple variants of the MAX-MIN Ant System. Investigations of example functions in pseudo-Boolean optimization demonstrate that the choice of the pheromone update strategy and the evaporation rate has a drastic impact on the running time, even for very simple functions like ONEMAX. We will also elaborate on the effect of using local search within the ACO framework. In terms of combinatorial optimization problems, we will look at the performance of ACO for minimum spanning trees, shortest path problems, and the TSP.
For particle swarm optimization, the tutorial will cover results on PSO for pseudo-Boolean optimization as well as a discussion of theoretical results in continuous spaces.
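As a minimal illustration of the kind of algorithm analyzed in the ACO part, the following sketch (with hypothetical parameter choices) implements an MMAS-style update for ONEMAX: one pheromone value per bit, a solution sampled from the pheromones, then evaporation and reinforcement toward the best-so-far solution, with pheromones clamped to the borders [1/n, 1-1/n]:

```python
import random

def mmas_onemax(n=30, rho=0.5, iterations=300, seed=0):
    """Simplified MMAS sketch for the pseudo-Boolean function ONEMAX."""
    rng = random.Random(seed)
    lo, hi = 1.0 / n, 1.0 - 1.0 / n   # pheromone borders
    tau = [0.5] * n                   # initial pheromone per bit
    best, best_f = [0] * n, -1
    for _ in range(iterations):
        # construct a solution: bit i is 1 with probability tau[i]
        x = [1 if rng.random() < t else 0 for t in tau]
        if sum(x) >= best_f:
            best, best_f = x, sum(x)
        # evaporate with rate rho, deposit on the best-so-far solution,
        # and clamp each pheromone to the interval [lo, hi]
        tau = [min(hi, max(lo, (1 - rho) * t + rho * b))
               for t, b in zip(tau, best)]
    return best_f
```

The evaporation rate rho governs how quickly the pheromones "freeze" onto the best-so-far solution, which is exactly the parameter whose drastic impact on running time the tutorial's runtime analyses make precise.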

Dirk Sudholt
He obtained his Diplom (Master's) degree in 2004 and his PhD in computer science in 2008 from the Technische Universitaet Dortmund, Germany, under the supervision of Prof. Ingo Wegener. Afterwards, he spent 12 months as a visiting researcher at the International Computer Science Institute in Berkeley, California, working in the Algorithms group led by Prof. Richard M. Karp. Since September 2010 he has been a Research Fellow at the University of Birmingham, UK, working with Prof. Xin Yao.
His research focuses on the computational complexity of randomized search heuristics such as evolutionary algorithms, in particular memetic and parallel variants, and swarm intelligence algorithms like ant colony optimization and particle swarm optimization. He has won four best paper awards at the leading conferences GECCO and PPSN.

Algorithm and Experiment Design with HeuristicLab - An Open Source Optimization Environment for Research and Education
The tutorial demonstrates how to apply and analyze metaheuristics using HeuristicLab, an open source optimization environment. It will be shown how to parameterize and execute evolutionary algorithms to solve combinatorial optimization problems (traveling salesman, vehicle routing) as well as data analysis problems (regression, classification). The attendees will learn how to assemble different algorithms and parameter settings into a large-scale optimization experiment and how to execute such experiments on multi-core or cluster systems. Furthermore, the experiment results will be compared using HeuristicLab's interactive charts for visual and statistical analysis to gain knowledge from the executed test runs. To complete the tutorial, it will be sketched briefly how HeuristicLab can be extended with further optimization problems and how custom optimization algorithms can be modeled using the graphical algorithm designer. Additional details on HeuristicLab can be found at http://dev.heuristiclab.com.

Stefan Wagner
He received his MSc in computer science in 2004 and his PhD in technical sciences in 2009, both from the Johannes Kepler University Linz, Austria. From 2005 to 2009 he worked as an associate professor for software project engineering, and since 2009 as a full professor for complex software systems, at the Upper Austria University of Applied Sciences, Campus Hagenberg, Austria. Dr. Wagner is one of the founders of the research group Heuristic and Evolutionary Algorithms Laboratory (HEAL) and is the project manager and head developer of the HeuristicLab optimization environment.

Geometry of Evolutionary Algorithms
The various flavors of Evolutionary Algorithms look very similar once cleared of algorithmically irrelevant differences such as the domain of application and the phenotype interpretation. Representation-independent algorithmic characteristics, like the selection scheme, can be freely exchanged between algorithms. Ultimately, the differences between the various flavors of Evolutionary Algorithms are rooted in the solution representation and the associated genetic operators. Are these differences only superficial? Is there a deeper unity encompassing all Evolutionary Algorithms beyond the specific representation? Is a general mathematical framework unifying search operators for all solution representations at all possible? The aim of the tutorial is to introduce a formal, but intuitive, unified point of view on Evolutionary Algorithms across representations based on geometric ideas, which provides a possible answer to the above questions. It also presents the benefits for both theory and practice brought by this novel perspective.
The key idea behind the geometric framework is that search operators have a dual nature. The same search operator can be defined (i) on the underlying solution representations and, equivalently, (ii) on the structure of the search space by means of simple geometric shapes, like balls and segments. These shapes are used to delimit the region of space that includes all possible offspring with respect to the location of their parents. The geometric definition of a search operator is of interest because it can be applied, unchanged, to different search spaces associated with different representations. This, in effect, allows us to define exactly the same search operator across representations in a rigorous way.
The geometric view on search operators has a number of interesting consequences of which this tutorial will give a comprehensive overview. These include (i) a straightforward view on the fitness landscape seen by recombination operators, (ii) a formal unification of many pre-existing search operators across representations, (iii) a principled way of designing crossover operators for new representations, (iv) a principled way of generalizing search algorithms from continuous to combinatorial spaces, and (v) the potential for a unified theory of evolutionary algorithms across representations.
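A concrete instance of the geometric definition: under Hamming distance, uniform crossover on binary strings is a geometric crossover, because every offspring lies in the segment between its parents, i.e. d(p1, o) + d(o, p2) = d(p1, p2). The short sketch below checks this segment property empirically:

```python
import random

def hamming(a, b):
    """Hamming distance: number of positions where the strings differ."""
    return sum(x != y for x, y in zip(a, b))

def uniform_crossover(p1, p2, rng):
    """Each offspring gene is copied from one of the two parents."""
    return [x if rng.random() < 0.5 else y for x, y in zip(p1, p2)]

rng = random.Random(42)
p1 = [rng.randint(0, 1) for _ in range(20)]
p2 = [rng.randint(0, 1) for _ in range(20)]
for _ in range(100):
    o = uniform_crossover(p1, p2, rng)
    # segment property: the offspring is "between" its parents
    assert hamming(p1, o) + hamming(o, p2) == hamming(p1, p2)
```

The property holds because at every position the offspring agrees with at least one parent, so each differing position between the parents contributes exactly once to the left-hand side.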

Alberto Moraglio
He is currently a Postdoctoral Research Associate affiliated with the School of Computing and the Centre for Reasoning at the University of Kent, UK (since 2009). He holds a PhD in Computer Science from the University of Essex, UK (2007) and a Master in Computer Engineering from the Polytechnic University of Turin, Italy (2000). Before his current appointment, he worked as a researcher for Hewlett-Packard Research Laboratories in Bristol, UK (2001-2002), and as an Assistant Professor at the University of Coimbra, Portugal (2007-2009). He has 45 peer-reviewed publications in journals and conference proceedings, almost exclusively on the geometric view of Evolutionary Algorithms. His current research interest is in any subject in Evolutionary Computation, Machine Learning, Mathematical Optimization and Artificial Intelligence to which he can successfully apply his geometric theory.

Evolutionary Game Theory
Evolutionary game theory was introduced by biologists in the seventies and quickly diffused into economic and sociological circles. Today, it is a main pillar of the whole edifice of game theory, widely used both in theory and in applications. This tutorial aims to present evolutionary game theory in an easy yet rigorous way, and to relate it to other approaches in game theory. The material presented does not require previous acquaintance with standard game theory: the fundamentals will be developed in the first part of the tutorial, which is self-contained.
In the second part the main concepts of the evolutionary and dynamical approach will be introduced, namely the concept of an evolutionarily stable strategy and the replicator dynamics. The analogies between Nash equilibria, evolutionarily stable strategies, and rest points of the dynamics will be explained. The concept of strategy errors and stochastically stable states, as well as the relationships with mean dynamics and stability in the long run, will also be briefly introduced. The main ideas will be illustrated using simple, well-known paradigmatic games such as the Prisoner's Dilemma, Hawks and Doves, and coordination games, among others. Finally, some recent exciting trends in evolutionary games on networks will be introduced and discussed.
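As a small worked example of the replicator dynamics mentioned above (with illustrative payoff values, not ones taken from the tutorial), the sketch below integrates the Hawk-Dove game with a simple Euler scheme. With resource value V = 2 and fight cost C = 4, the evolutionarily stable strategy has a Hawk fraction of V/C = 0.5.

```python
def replicator_step(x, payoff, dt=0.01):
    """One Euler step of the replicator dynamics x_i' = x_i * (f_i - f_avg)."""
    fitness = [sum(payoff[i][j] * x[j] for j in range(len(x)))
               for i in range(len(x))]
    avg = sum(xi * fi for xi, fi in zip(x, fitness))
    return [xi + dt * xi * (fi - avg) for xi, fi in zip(x, fitness)]

# Hawk-Dove payoffs with V = 2, C = 4:
payoff = [[(2 - 4) / 2, 2.0],   # Hawk vs (Hawk, Dove): (V-C)/2, V
          [0.0, 2 / 2]]         # Dove vs (Hawk, Dove): 0, V/2
x = [0.9, 0.1]                  # start with 90% Hawks
for _ in range(5000):           # integrate up to t = 50
    x = replicator_step(x, payoff)
# x now approaches the ESS (0.5, 0.5)
```

The interior rest point is stable here, illustrating the correspondence between evolutionarily stable strategies and attracting rest points of the replicator dynamics that the tutorial develops in general.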

Marco Tomassini
He is a professor of Computer Science at the Information Systems Department of the Business and Economics Faculty of the University of Lausanne, Switzerland. He graduated in physical and chemical sciences and received a doctorate in theoretical chemistry from the University of Perugia, Italy, working on computer simulations of condensed matter systems. After some years of research on crystal and molecular physics, he switched to computer science, contributing especially to the fields of parallel computing and cellular automata modeling. His current research interests are centered around the application of biological ideas to artificial systems and the network dimension of complex systems.
He is active in evolutionary computation, especially spatially structured systems, genetic programming, and the structure of hard combinatorial search spaces. He is also active in evolutionary games on networks and the evolution of coordination and cooperation, including the dynamical properties of networked complex systems. He has been Program Chairman of several international events and has published more than 210 scientific papers and several authored and edited books in these fields. In 2010 he was awarded the EvoStar Award in recognition of outstanding contributions to Evolutionary Computation. 



Handling Bloat in GP
Bloat can be defined as an excess of code growth without a corresponding improvement in fitness. This problem has been one of the most intensively studied subjects since the beginnings of Genetic Programming. After briefly addressing past bloat research, this tutorial concentrates on the new Crossover Bias theory and the bloat control method it inspired, Operator Equalisation. Although still recent and requiring improvements, Operator Equalisation has already proven to be more than just a bloat control method. It reveals novel evolutionary dynamics that allow a successful search without code growth, and shows great potential to be extended and integrated into several different elements of the evolutionary process. We will explain in detail how to implement the two variants of Operator Equalisation developed so far, discussing the pros and cons of each. We will also look at some of the evolutionary dynamics of Operator Equalisation, and discuss how they can help us solve the many open questions that still surround such a new technique. Finally, we will address the possible integration of the Operator Equalisation ideas into different elements of the evolutionary process.
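The following toy sketch conveys only the core acceptance idea behind Operator Equalisation, heavily simplified: offspring are admitted into the next population only while the bin of program lengths they fall into still has free capacity. The bin width and capacities here are made up, and the actual method (in both its variants) also adapts the target length histogram during the run.

```python
import random

def make_length_filter(bin_width, capacities):
    """capacities maps a length-bin index to how many programs it may admit."""
    counts = {b: 0 for b in capacities}
    def admit(length):
        b = length // bin_width            # which length bin this program hits
        if counts.get(b, 0) < capacities.get(b, 0):
            counts[b] += 1                 # bin still has room: admit it
            return True
        return False                       # bin full (or not targeted): reject
    return admit

# Hypothetical target histogram: bins of width 5, favoring shorter programs.
admit = make_length_filter(bin_width=5, capacities={0: 4, 1: 8, 2: 4, 3: 2})
rng = random.Random(0)
offspring_lengths = [rng.randint(1, 20) for _ in range(60)]
admitted = [l for l in offspring_lengths if admit(l)]
```

Controlling the distribution of program sizes, rather than capping each individual program, is what distinguishes this family of methods from hard depth or size limits.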

Sara Silva
She is a senior researcher in the Knowledge Discovery and Bioinformatics (KDBIO) group at INESC-ID Lisboa, Portugal, and an invited researcher in the Evolutionary and Complex Systems (ECOS) group at CISUC, Portugal. She has a BSc (5 years, finished in 1996) and an MSc (finished in 1999) in Informatics from the Faculty of Sciences of the University of Lisbon, Portugal, and a PhD (finished in 2008) in Informatics Engineering from the Faculty of Sciences and Technology of the University of Coimbra, Portugal. After graduation she used neural networks and genetic algorithms in several interdisciplinary projects ranging from remote sensing and forest science to epidemiology and medical informatics. She started her research on Genetic Programming (GP) in 2002, studying the bloat problem. Her main contributions to this field were the Dynamic Limits and Resource-Limited GP bloat control methods, and the developments that put into practice the new Operator Equalisation method. Her current main research interests are bloat and overfitting in GP, how they relate to each other, and the effective and efficient usage of GP in real-life problems within the earth sciences and bioinformatics domains. She is a member of the editorial board of Genetic Programming and Evolvable Machines, and the creator and developer of GPLAB, a Genetic Programming Toolbox for MATLAB.
Webpage: http://kdbio.inesc-id.pt/~sara/




Synthetic Biology
The ultimate goal of systems biology is the development of executable in silico models of cells and organisms. Systems biology attempts to provide an integrative methodology that can cope, on the one hand, with the data deluge generated by high-throughput experimental technologies and, on the other hand, with emerging technologies that produce scarce and often noisy data, and that allows novel biological knowledge to be captured within human-understandable models and simulations.
In its more modest instantiations, Systems Biology seeks to *clarify* current biological understanding by formalizing what the constitutive elements of a biological system are and how they interact with each other, and it also seeks to aid in the *testing* of current understanding against experimental data. In its most ambitious incarnations, however, it aims at *predicting* the behavior of biological systems beyond current understanding and available data, thus shedding light onto possible new experimental routes that could lead to better theoretical insights.
Synthetic biology, on the other hand, aims to implement, in vitro/in vivo, organisms whose behavior is engineered. The field of synthetic biology holds great promise for the design, construction and development of artificial (i.e. man-made) biological (sub)systems, thus offering viable new routes to genetically modified organisms, smart drugs, as well as model systems to examine artificial genomes and proteomes. The informed manipulation of such biological (sub)systems could have an enormous positive impact on our societies, with its effects being felt across a range of activities such as the provision of healthcare, environmental protection and remediation, etc. The basic premise of synthetic biology is that methods commonly used to design and construct non-biological systems, such as those employed in the computational sciences and the engineering disciplines, could also be used to model and program novel synthetic biosystems. Synthetic biology thus lies at the interface of a variety of disciplines ranging from biology through chemistry, physics, computer science, mathematics and engineering.
In this tutorial I will provide an entry-level understanding of Systems and Synthetic Biology, their goals, methods and limitations. Furthermore, I will describe the many potential applications of evolutionary computation to these two fields. Indeed, I believe that the EC community has a beautiful new application domain in which its methods could be both valued and challenged.
More information is available at www.cs.nott.ac.uk/~nxk

Natalio Krasnogor
He is an Associate Professor in the School of Computer Science at the University of Nottingham. He is a member of the Automated Scheduling, Optimisation and Planning Research Group (ASAP) and also leads the Interdisciplinary Optimisation Laboratory (IOL). Krasnogor's research activities lie at the interface of Computer Science and the Natural Sciences, e.g. Biology, Physics and Chemistry. In particular, he develops innovative and competitive search methodologies and intelligent decision support systems for transdisciplinary optimisation, the modelling of complex systems, and the processing of very large datasets.
He has applied his expertise to Bioinformatics, Systems Biology, Synthetic Biology, Nanoscience and Chemistry. He is a member of the editorial boards of the journal Modelling and Simulation in Engineering and the journal Artificial Evolution and Applications. He is an associate editor of the Evolutionary Computation journal and founding technical editor-in-chief of the new journal Memetic Computing. Krasnogor has acted as grant reviewer for the EPSRC (UK), BBSRC (UK), the European Science Foundation, the European Union, the Israel Science Foundation, the DOE Computational Biology Programme (USA), CNRS (France), etc. He co-chairs the 2nd European Conference on Synthetic Biology (ECSB II).
More details are available at www.cs.nott.ac.uk/~nxk 




