CONFERENCE PROGRAM
Planned Free Tutorials
Two days of free tutorials and workshops (included with conference registration) presented by some of
the world’s foremost experts in topics of interest to genetic and evolutionary computation researchers and practitioners.
Introductory Tutorials
Advanced Tutorials
Specialized Techniques and Applications

Artificial Immune Systems for Optimization
Thomas Jansen
Christine Zarges

Black-Box Complexity: From Complexity Theory to Playing Mastermind
Benjamin Doerr

Cartesian Genetic Programming
Julian Miller
Simon Harding

Evolutionary Algorithms and Genetic Programming on Graphics Processing Units (GPUs)
Pierre Collet
Simon Harding

Computational Intelligence in Games
Daniele Loiacono
Mike Preuss

Introduction to Bioinformatics and Computational Biology
William S. Bush

Large-Scale Data Mining Using Genetics-Based Machine Learning
Jaume Bacardit
Xavier Llorà

Hyper-heuristics and Cross-domain Optimization
Gabriela Ochoa

Theory of Swarm Intelligence
Dirk Sudholt

Drift Analysis
Per Kristian Lehre

Statistical Analysis of Optimization Algorithms with R
Thomas Bartz-Beielstein
Mike Preuss

Algorithm and Experiment Design with HeuristicLab: An Open Source Optimization Environment for Research and Education
Stefan Wagner

The Geometry of Evolutionary Algorithms
Alberto Moraglio

Evolutionary Software Repair
Stephanie Forrest

Industrial Data Modeling
Mark Kotanchek



Introductory Tutorials

Introduction to Genetic Algorithms
The Introduction to Genetic Algorithms tutorial is aimed at GECCO attendees with limited knowledge of genetic algorithms. It will start at the beginning, first describing a "classical" genetic algorithm in terms of the biological principles on which it is loosely based, and then presenting some of the fundamental results that describe its performance, expressed using the schema concept. It will cover some variations on the classical model, some successful applications of genetic algorithms, and advances that are making genetic algorithms more useful.
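As a concrete companion to the "classical" model described in the tutorial abstract, the following minimal sketch evolves bitstrings against the illustrative OneMax fitness (the count of 1-bits); all names and parameter values are our own choices, not taken from the tutorial:

```python
import random

def onemax(bits):
    # toy fitness used for illustration: the number of 1-bits
    return sum(bits)

def tournament(pop, fits, k=2):
    # classical tournament selection: best of k random individuals
    contenders = random.sample(range(len(pop)), k)
    return pop[max(contenders, key=lambda i: fits[i])]

def classical_ga(n_bits=20, pop_size=30, generations=60, p_cross=0.9):
    p_mut = 1.0 / n_bits  # common rule of thumb: one expected flip per child
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        fits = [onemax(ind) for ind in pop]
        nxt = []
        while len(nxt) < pop_size:
            a, b = tournament(pop, fits), tournament(pop, fits)
            if random.random() < p_cross:  # one-point crossover
                cut = random.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):  # bit-flip mutation
                nxt.append([bit ^ (random.random() < p_mut) for bit in child])
        pop = nxt[:pop_size]
    return max(pop, key=onemax)

random.seed(1)
best = classical_ga()
```

Real applications replace OneMax with a domain-specific fitness function; selection, crossover, and mutation are the operators the tutorial describes.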

Erik Goodman
Erik Goodman is a Professor of Electrical & Computer Engineering, of Mechanical Engineering, and of Computer Science & Engineering at Michigan State University. He is Director of the BEACON Center for the Study of Evolution in Action, an NSF Science and Technology Center funded at $25 million in 2010. He studied genetic algorithms under John Holland at the University of Michigan, before they had been given that name. His use of a genetic algorithm in 1971 to solve for 40 coefficients of a highly nonlinear model of a bacterial cell was the first known GA application on a real-world problem, and required nearly a year for one run on a dedicated computer. He has developed and used evolutionary algorithms ever since, including for parameterization of complex ecosystem models, for evolution of cooperative behavior in artificial life, for factory layout and scheduling, for protein folding and docking, for design of composite structures, and for data mining and pattern classification. His recent research has centered on the development of new types of parallel genetic algorithms and on the design of mechatronic systems using genetic programming. He is co-founder and VP Technology of Red Cedar Technology, which provides tools for automated engineering design based on evolutionary computation. He chaired ICGA-97 and GECCO-2001, chaired GECCO's sponsoring organization, ISGEC, from 2001 to 2004, and was the founding chair of ACM SIGEVO, 2005 to 2007.




Genetic Programming
Genetic programming emerged in the early 1990s as one of the most exciting new evolutionary algorithm paradigms, and it has rapidly grown into a thriving area of research and application. While sharing the evolution-inspired algorithm principles of a genetic algorithm, it differs by exploiting an executable genome: genetic programming evolves a "program" to solve a problem rather than a single solution.
This tutorial introduces the basic genetic programming framework. It will explain how the powerful capability of genetic programming is derived from modular algorithmic components:
• executable representations, e.g. parse-tree, linear, and graph-based
• variation operators that preserve syntax and explore a variable-length, hierarchical solution space
• appropriately chosen programming functions
• fitness function specification
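To make the notion of an executable genome concrete, here is a minimal sketch of a parse-tree representation and its evaluation; the function set, terminal set, and fitness cases are illustrative assumptions, not part of the tutorial:

```python
import operator
import random

# a parse tree is a nested tuple (function, child, child) or a terminal
FUNCS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def evaluate(tree, x):
    # recursively execute the genome for a given input x
    if tree == 'x':
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    return FUNCS[op](evaluate(left, x), evaluate(right, x))

def random_tree(depth=3):
    # grow a random program, the usual way initial GP populations are built
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', random.randint(-2, 2)])
    op = random.choice(list(FUNCS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def fitness(tree, target=lambda x: x * x + 1):
    # lower is better: squared error against the target on a few test points
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in range(-3, 4))

genome = ('+', ('*', 'x', 'x'), 1)  # this tree encodes the program x*x + 1
```

Variation operators then swap or regrow subtrees, which keeps offspring syntactically valid while exploring the variable-length solution space.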

Una-May O'Reilly
Dr. O'Reilly is a 10+ year veteran of the Evolutionary Algorithm community. Her doctoral thesis was the first comparative analysis of Genetic Programming (with respect to simulated annealing and hill climbing). She developed the first schema-based theoretical analysis of genetic programming. She now specializes in applications of evolutionary algorithms in tandem with machine learning and optimization. She leads the Evolutionary Design and Optimization group at the Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA. Her research currently focuses on evolutionary computation projects related to wind energy, sensory evaluation (symbolic regression) and computer systems (multicore compilers and cloud virtualization technology).
Dr. O'Reilly developed an open-source genetic programming based design system named Genr8 that is used throughout the world by researchers in Architecture. As VP of Technology at a Cambridge consulting firm, she led the development of a web-based name advisor based upon patented genetic algorithm, statistical learning and knowledge-based technology. She has collaborated with State Street Global Advisors in their development of GP portfolio models that are used in investment strategies. Her current research sponsors include the USA Dept of Energy (EXAScale project), DARPA (Angstrom project), VMWare, Givaudan Flavors PLC and Raytheon.
Dr. O'Reilly was a founding member of the International Society for Genetic and Evolutionary Computation executive board and is a founding member and secretary of the ACM SIGEVO Executive Board. She edited (with Peter Angeline) the first special issue on Genetic Programming in the journal Evolutionary Computation and is a founding editorial board member of the journal Genetic Programming and Evolvable Machines. She co-edited the second volume of Advances in Genetic Programming and the second and fifth volumes of Genetic Programming: From Theory to Practice. She chaired GECCO (in 2005), the world's premier conference on Genetic and Evolutionary Computation. She serves on the editorial board of Evolutionary Computation and as an action editor for the Journal of Machine Learning Research.



Evolution Strategies: Basic Introduction
This tutorial introduces the basics of evolution strategies, a branch of evolutionary computation which focused very early on the self-adaptation of strategy parameters such as mutation variances. Based on the original (1+1)-evolution strategy (using the 1/5 success rule for step-size adaptation), the generalized (1,λ)- and (μ,λ)-strategies use the principle of self-adapting mutation variances extensively. With both mutation and recombination, the (μ,λ)-strategy is a well-known and well-understood algorithm for mixed-integer optimization (including the integer-only and the real-valued-only cases).
The tutorial will present all basic algorithmic variants as well as guidelines for their implementation. Moreover, extensions such as mixed-integer optimization and some derandomized variants of evolution strategies are also introduced briefly.
Concerning the comparison to genetic algorithms and evolutionary programming, we will use the concept of a generalized evolutionary algorithm as introduced in [1], and outline the similarities and differences of those classes of algorithms.
The tutorial will conclude with some examples of applications of evolution strategies.
References
[1] Th. Bäck (1996): Evolutionary Algorithms in Theory and Practice, Oxford University Press, NY.
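A minimal sketch of the original (1+1)-evolution strategy with the 1/5 success rule mentioned in the abstract, applied to the sphere function; the step-size multiplier and the adaptation interval are our own illustrative choices:

```python
import random

def sphere(x):
    # illustrative objective: minimize the sum of squares
    return sum(v * v for v in x)

def one_plus_one_es(n=5, sigma=1.0, iters=500):
    parent = [random.uniform(-5, 5) for _ in range(n)]
    f_parent = sphere(parent)
    successes = 0
    for t in range(1, iters + 1):
        child = [v + sigma * random.gauss(0, 1) for v in parent]
        f_child = sphere(child)
        if f_child <= f_parent:  # (1+1) selection: keep the better of the two
            parent, f_parent = child, f_child
            successes += 1
        if t % 20 == 0:
            # 1/5 success rule: enlarge the step size if more than one fifth
            # of recent mutations succeeded, shrink it otherwise
            sigma *= 1.5 if successes / 20 > 0.2 else 1 / 1.5
            successes = 0
    return parent, f_parent

random.seed(2)
best, f_best = one_plus_one_es()
```

The generalized (1,λ)- and (μ,λ)-strategies replace this external rule with mutation variances that are themselves encoded in the genome and self-adapted.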

Thomas Bäck
He is Professor of Natural Computing at the Leiden Institute of Advanced Computer Science (LIACS) at Leiden University, The Netherlands, and Chief Scientist of the Qe3 group at Netezza Corporation. Thomas received his PhD in Computer Science from Dortmund University, Germany, in 1994.
Thomas Bäck has more than 150 publications on natural computing technologies, as well as a book on evolutionary algorithms, entitled Evolutionary Algorithms in Theory and Practice. He is an editorial board member and associate editor of a number of journals on evolutionary and natural computation, and has served as program chair for all major conferences in evolutionary computation. His expertise lies in adaptive technologies for optimization and data-driven modeling, predictive analytics, and bioinformatics. He received the best dissertation award from the Gesellschaft für Informatik (GI) in 1995 and is an elected fellow of the International Society for Genetic and Evolutionary Computation for his contributions to the field.

Evolutionary Computation: A Unified View
The field of Evolutionary Computation has experienced tremendous growth over the past 20 years, resulting in a wide variety of evolutionary algorithms and applications. The result poses an interesting dilemma for many practitioners in the sense that, with such a wide variety of algorithms and approaches, it is often hard to see the relationships between them, assess strengths and weaknesses, and make good choices for new application areas.
This tutorial is intended to give an overview of a general EC framework that helps compare and contrast approaches, encourages cross-breeding, and facilitates intelligent design choices. The use of this framework is then illustrated by showing how traditional EAs can be compared and contrasted with it, and how new EAs can be effectively designed using it.
Finally, the framework is used to identify some important open issues that need further research.

Kenneth A. De Jong
He received his Ph.D. in computer science from the University of Michigan in 1975. He joined George Mason University in 1984 and is currently a Professor of Computer Science, head of the Evolutionary Computation Laboratory, and Associate Director of the Krasnow Institute. His research interests include evolutionary computation, machine learning, and adaptive systems. He is currently involved in research projects involving the development of new evolutionary algorithm (EA) theory, the use of EAs as heuristics for NP-hard problems, and the application of EAs to the problem of learning task programs in domains such as robotics, diagnostics, navigation and game playing. He is also interested in experience-based learning, in which systems must improve their performance while actually performing the desired tasks in environments not directly under their control or under the control of a benevolent teacher. He is an active member of the Evolutionary Computation research community and has been involved in organizing many of the workshops and conferences in this area. He is the founding editor-in-chief of the journal Evolutionary Computation (MIT Press), and a member of the board of ACM SIGEVO. He is the recipient of an IEEE Pioneer Award in the field of Evolutionary Computation and a lifetime achievement award from the Evolutionary Programming Society.



Evolutionary Multiobjective Optimization
Many optimization problems are multiobjective in nature in the sense that multiple, conflicting criteria need to be optimized simultaneously. Due to the conflict between objectives, usually no single optimal solution exists. Instead, the optimum corresponds to a set of so-called Pareto-optimal solutions for which no other solution has better function values in all objectives.
Evolutionary Multiobjective Optimization (EMO) algorithms are widely used in practice for solving multiobjective optimization problems for several reasons. As randomized black-box algorithms, EMO approaches can tackle problems with nonlinear, non-differentiable, or noisy objective functions. As set-based algorithms, they can compute or approximate the full set of Pareto-optimal solutions in a single algorithm run, as opposed to classical solution-based techniques from the multicriteria decision making (MCDM) field. Using EMO approaches in practice has two other advantages: they make it possible to learn about a problem formulation, for example by automatically revealing common design principles among (Pareto-optimal) solutions (innovization), and it has been shown that certain single-objective problems become easier to solve with randomized search heuristics if the problem is reformulated as a multiobjective one (multiobjectivization).
This tutorial aims at giving a broad introduction to the EMO field and at presenting some of its recent research results in more detail. More specifically, we are going to (i) introduce the basic principles of EMO algorithms in comparison to classical solution-based approaches, (ii) show a few practical examples which motivate the use of EMO in terms of the mentioned innovization and multiobjectivization principles, and (iii) present a general overview of state-of-the-art algorithms and techniques. Moreover, we will present some of the most important research results in areas such as indicator-based EMO, preference articulation, and performance assessment.
Though classified as introductory, this tutorial is intended for both novices and regular users of EMO. Those without any knowledge will learn about the foundations of multiobjective optimization and the basic working principles of state-of-the-art EMO algorithms. Open questions, presented throughout the tutorial, can serve for all participants as a starting point for future research and/or discussions during the conference.
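The notion of Pareto-optimality described above can be made concrete in a few lines; this sketch (minimization assumed, example data invented) filters a set of objective vectors down to its non-dominated subset:

```python
def dominates(a, b):
    # a Pareto-dominates b (minimization): no worse in every objective,
    # strictly better in at least one
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(points):
    # keep the points that no other point dominates
    return [p for p in points if not any(dominates(q, p) for q in points)]

# five candidate solutions described only by their two objective values
front = non_dominated([(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)])
```

EMO algorithms maintain and refine exactly such a non-dominated set over the course of a run, rather than a single best solution.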

Dimo Brockhoff
After obtaining his diploma in computer science (Dipl.-Inform.) from the University of Dortmund, Germany in 2005, Dimo Brockhoff received his PhD (Dr. sc. ETH) from ETH Zurich, Switzerland in 2009. Between June 2009 and October 2011 he held postdoctoral research positions, first at INRIA Saclay Ile-de-France in Orsay and then at Ecole Polytechnique in Palaiseau, both in France. Since November 2011 he has been a junior researcher (CR2) at INRIA Lille - Nord Europe in Villeneuve d'Ascq, France. His research interests are focused on evolutionary multiobjective optimization (EMO), in particular on many-objective optimization, benchmarking, and theoretical aspects of indicator-based search.

Kalyanmoy Deb
He holds the Deva Raj Chair Professorship at the Indian Institute of Technology Kanpur in India. He is the recipient of the prestigious MCDM Edgeworth-Pareto Award from the Multiple Criteria Decision Making (MCDM) Society, one of the highest awards given in the field of multi-criterion optimization and decision making. He has also received the prestigious Shanti Swarup Bhatnagar Prize in Engineering Sciences for the year 2005 from the Government of India, as well as the 'Thomson Citation Laureate Award' from Thomson Scientific for having the highest number of citations in Computer Science in India during the past ten years. He is a fellow of the Indian National Academy of Engineering (INAE), the Indian National Academy of Sciences, and the International Society for Genetic and Evolutionary Computation (ISGEC). He received the Friedrich Wilhelm Bessel Research Award from the Alexander von Humboldt Foundation in 2003. His main research interests are in the areas of computational optimization, modeling and design, and evolutionary algorithms. He has written two textbooks on optimization and more than 240 international journal and conference research papers. He is a pioneer and a leader in the field of evolutionary multiobjective optimization. He is an associate editor of two major international journals and an editorial board member of five major journals.
More information about his research can be found at
http://www.iitk.ac.in/kangal/deb.htm




Probabilistic Model-Building Genetic Algorithms (PMBGAs)
Probabilistic model-building genetic algorithms (PMBGAs) replace the traditional variation operators of genetic and evolutionary algorithms by (1) building a probabilistic model of promising solutions and (2) sampling the built model to generate new candidate solutions. PMBGAs are also known as estimation of distribution algorithms (EDAs) and iterated density-estimation evolutionary algorithms (IDEAs).
Replacing traditional crossover and mutation operators by building and sampling a probabilistic model of promising solutions enables the use of machine learning techniques for automatic discovery of problem regularities and exploitation of these regularities for effective exploration of the search space. Using machine learning in optimization enables the design of optimization techniques that can automatically adapt to the given problem. There are many successful applications of PMBGAs, for example, Ising spin glasses in 2D and 3D, graph partitioning, MAXSAT, feature subset selection, forest management, groundwater remediation design, telecommunication network design, antenna design, and scheduling.
The tutorial on Probabilistic Model-Building GAs will provide a gentle introduction to PMBGAs with an overview of major research directions in this area. Strengths and weaknesses of different PMBGAs will be discussed, and suggestions will be provided to help practitioners choose the best PMBGA for their problem.
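As an illustration of the two-step build/sample loop described above, here is a sketch of the simplest kind of PMBGA, a univariate marginal model in the spirit of UMDA, run on the toy OneMax problem; all parameter values and the clamping safeguard are our own assumptions:

```python
import random

def umda_onemax(n=20, pop_size=50, select=25, generations=40):
    p = [0.5] * n  # marginal model: p[i] = P(bit i is 1), initially uniform
    for _ in range(generations):
        # (2) sample the current model to create new candidate solutions
        pop = [[int(random.random() < p[i]) for i in range(n)]
               for _ in range(pop_size)]
        pop.sort(key=sum, reverse=True)  # OneMax fitness is the sum of bits
        elite = pop[:select]
        # (1) rebuild the model from the promising (elite) solutions
        p = [sum(ind[i] for ind in elite) / select for i in range(n)]
        # clamp the frequencies so that no bit value becomes unreachable
        p = [min(max(pi, 1 / n), 1 - 1 / n) for pi in p]
    return [int(random.random() < pi) for pi in p]

random.seed(3)
best = umda_onemax()
```

More powerful PMBGAs such as BOA replace the independent per-bit frequencies with models (e.g. Bayesian networks) that capture dependencies between variables.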

Martin Pelikan
He received his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 2002. He is now an associate professor at the Department of Mathematics and Computer Science at the University of Missouri in St. Louis. In 2006 Pelikan founded the Missouri Estimation of Distribution Algorithms Laboratory (MEDAL). Prior to joining the University of Missouri, he worked at the Illinois Genetic Algorithms Laboratory (IlliGAL), the German National Center for Information Technology in Sankt Augustin, the Slovak University of Technology in Bratislava, and the Swiss Federal Institute of Technology (ETH) in Zurich.
Pelikan has worked as a researcher in genetic and evolutionary computation since 1995. His most important contributions to genetic and evolutionary computation are the Bayesian optimization algorithm (BOA), the hierarchical BOA (hBOA), the scalability theory for BOA and hBOA, and the efficiency enhancement techniques for BOA and hBOA. His current research focuses on extending BOA and hBOA to other problem domains, applying genetic and evolutionary algorithms to real-world problems with the focus on physics, bioinformatics and machine learning, and designing new efficiency enhancement techniques for BOA and other evolutionary algorithms.




Evolving Neural Networks
Neuroevolution, i.e. evolution of artificial neural networks, has recently emerged as a powerful technique for solving challenging reinforcement learning problems. Compared to traditional (e.g. value-function-based) methods, neuroevolution is especially strong in domains where the state of the world is not fully known: the state can be disambiguated through recurrency, and novel situations handled through pattern matching. In this tutorial, we will review (1) neuroevolution methods that evolve fixed-topology networks, network topologies, and network construction processes, (2) ways of combining traditional neural network learning algorithms with evolutionary methods, and (3) applications of neuroevolution to control, robotics, artificial life, and games.
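A minimal sketch of the first family mentioned above, evolving the weights of a fixed-topology network: here a 2-2-1 feedforward net attacking XOR with a simple elitist perturbation scheme. The topology, mutation scheme, and parameters are illustrative assumptions, far simpler than the methods the tutorial covers:

```python
import math
import random

def forward(w, x):
    # fixed 2-2-1 topology, 9 weights: 4 input-to-hidden plus 2 hidden
    # biases, then 2 hidden-to-output plus 1 output bias
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def error(w):
    # squared error over the four XOR cases; lower is better
    return sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(lam=20, sigma=0.5, generations=400):
    best = [random.gauss(0, 1) for _ in range(9)]
    for _ in range(generations):
        kids = [[v + sigma * random.gauss(0, 1) for v in best]
                for _ in range(lam)]
        champ = min(kids, key=error)
        if error(champ) <= error(best):  # elitist (1+lambda) replacement
            best = champ
    return best

random.seed(4)
w = evolve()
```

Methods such as NEAT go further by evolving the topology itself alongside the weights.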

Risto Miikkulainen
He is a Professor of Computer Sciences at the University of Texas at Austin. He received an M.S. in Engineering from the Helsinki University of Technology, Finland, in 1986, and a Ph.D. in Computer Science from UCLA in 1990. His recent research focuses on methods for evolving neural networks and applying these methods to game playing, robotics, and intelligent control. He is an author of over 250 articles on neuroevolution, connectionist natural language processing, and the computational neuroscience of the visual cortex. He is an editor of Machine Learning, Neural Networks, IEEE Transactions on Computational Intelligence and AI in Games, Cognitive Systems Research, and IEEE Transactions on Autonomous Mental Development.


Kenneth O. Stanley
He is an associate professor in the Department of Electrical Engineering and Computer Science at the University of Central Florida. He received a B.S.E. from the University of Pennsylvania in 1997 and received a Ph.D. in 2004 from the University of Texas at Austin. He is an inventor of the Neuroevolution of Augmenting Topologies (NEAT), HyperNEAT, and novelty search algorithms for evolving complex artificial neural networks. His main research contributions are in neuroevolution (i.e. evolving neural networks), generative and developmental systems, coevolution, machine learning for video games, and interactive evolution. He has won best paper awards for his work on NEAT, NERO, NEAT Drummer, HyperNEAT, novelty search, FSMC, and Galactic Arms Race. He is an associate editor of IEEE Transactions on Computational Intelligence and AI in Games, on the editorial board of Evolutionary Computation journal, and on the ACM SIGEVO Executive Committee.



Advanced Tutorials

Evolution Strategies and Covariance Matrix Adaptation
Evolution Strategies (ESs) and many continuous-domain Estimation of Distribution Algorithms (EDAs) are stochastic optimization procedures that sample a multivariate normal (Gaussian) distribution in the continuous search space, R^n. Many of them can be formulated in a unified and comparatively simple framework.
This introductory tutorial focuses on the most relevant algorithmic question: how should the parameters of the sample distribution be chosen and, in particular, updated over the generation sequence? First, two common approaches for step-size control are reviewed: the one-fifth success rule and path-length control. Then, Covariance Matrix Adaptation (CMA) is discussed in depth: the rank-one update, the evolution path, and the rank-mu update. Invariance properties and the interpretation as natural gradient descent are touched upon.
At the beginning, general difficulties in solving nonlinear, non-convex optimization problems in continuous domains are revealed, for example non-separability, ill-conditioning and ruggedness, and algorithmic design aspects are related to these difficulties. At the end, the performance of CMA-ES is related to other well-known evolutionary and non-evolutionary optimization algorithms, namely BFGS, DE, PSO, and others.
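A heavily simplified sketch of the sampling loop and of path-length (cumulative step-size) control discussed above; the full covariance matrix update of CMA is deliberately omitted, and the constants are illustrative choices rather than the recommended CMA-ES defaults:

```python
import math
import random

def sphere(x):
    return sum(v * v for v in x)

def csa_es(n=5, lam=12, mu=3, sigma=1.0, generations=200):
    m = [random.uniform(-3, 3) for _ in range(n)]  # distribution mean
    p = [0.0] * n  # evolution path used for step-size control
    c, d = 1 / math.sqrt(n), 1.0  # smoothing and damping (simplified)
    chi_n = math.sqrt(n) * (1 - 1 / (4 * n))  # approx. E||N(0, I)||
    for _ in range(generations):
        zs = [[random.gauss(0, 1) for _ in range(n)] for _ in range(lam)]
        xs = [[m[i] + sigma * z[i] for i in range(n)] for z in zs]
        best = sorted(range(lam), key=lambda k: sphere(xs[k]))[:mu]
        z_avg = [sum(zs[k][i] for k in best) / mu for i in range(n)]
        m = [m[i] + sigma * z_avg[i] for i in range(n)]  # recombine the mean
        # cumulate the selected steps: a long path means consecutive steps
        # point the same way (increase sigma); a short path means they
        # cancel each other (decrease sigma)
        p = [(1 - c) * p[i] + math.sqrt(c * (2 - c) * mu) * z_avg[i]
             for i in range(n)]
        norm_p = math.sqrt(sum(v * v for v in p))
        sigma *= math.exp((c / d) * (norm_p / chi_n - 1))
    return m, sigma

random.seed(5)
m, sigma = csa_es()
```

Full CMA-ES additionally adapts a covariance matrix via the rank-one and rank-mu updates, so the sampling ellipsoid can stretch along the directions of past progress.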

Anne Auger
Dr. Auger received her diploma in mathematics from the University of Paris VI, France, in 2001. She also obtained the highest French diploma for teaching mathematics, the "Agregation de mathematiques". She received her doctoral degree from the University of Paris VI in 2004. Afterwards, she worked for two years (2004-2006) as a postdoctoral researcher at ETH Zurich in the Computational Laboratory (CoLab). Since October 2006, she has held a permanent research position at INRIA (the French National Research Institute in Computer Science and Applied Mathematics). Her research interests are in stochastic continuous optimization, including theoretical analyses of randomized search heuristics. She has published more than fifteen articles at top conferences and in journals in this area. She organized the biennial Dagstuhl seminar "Theory of Evolutionary Computation" in 2008 and will organize it again in 2010.


Nikolaus Hansen
He is a senior researcher at the French National Institute for Research in Computer Science and Control (INRIA). Educated in medicine and mathematics, he received a Ph.D. in civil engineering in 1998 from the Technical University Berlin under Ingo Rechenberg. Before he joined INRIA, he worked in applied artificial intelligence and genomics, and he has done research in evolutionary computation and computational science at the Technical University Berlin and ETH Zurich. His main research interests are learning and adaptation in evolutionary computation and the development of algorithms applicable in practice. He has been the main driving force behind the development of CMA-ES over many years.

Constraint-Handling Techniques Used with Evolutionary Algorithms
Evolutionary Algorithms (EAs), when used for global optimization, can be seen as unconstrained optimization techniques. Therefore, they require an additional mechanism to incorporate constraints of any kind (i.e., inequality, equality, linear, nonlinear) into their fitness function. Although the use of penalty functions (very popular with mathematical programming techniques) may seem an obvious choice, this sort of approach requires careful fine-tuning of the penalty factors to be used. Otherwise, an EA may be unable to reach the feasible region (if the penalty is too low), or may reach the feasible region quickly but be unable to locate solutions that lie on the boundary with the infeasible region (if the penalty is too severe).
This has motivated the development of a number of approaches to incorporate constraints into the fitness function of an EA. This tutorial will cover the main proposals in current use, including novel approaches such as the use of tournament rules based on feasibility, multiobjective optimization concepts, hybrids with mathematical programming techniques (e.g., Lagrange multipliers), cultural algorithms, and artificial immune systems, among others. Other topics such as the importance of maintaining diversity, current benchmarks and the use of alternative search engines (e.g., particle swarm optimization, differential evolution, evolution strategies, etc.) will also be discussed (as time allows).
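One of the approaches mentioned above, tournament rules based on feasibility, can be sketched in a few lines; the example problem, function names, and tie-breaking details below are our own illustrative choices:

```python
def violation(x, constraints):
    # total constraint violation; each g must satisfy g(x) <= 0
    return sum(max(0.0, g(x)) for g in constraints)

def better(a, b, f, constraints):
    # feasibility tournament for minimization:
    # 1. a feasible solution beats an infeasible one,
    # 2. two feasible solutions are compared by fitness,
    # 3. two infeasible solutions are compared by total violation.
    va, vb = violation(a, constraints), violation(b, constraints)
    if va == 0 and vb == 0:
        return a if f(a) <= f(b) else b
    if va == 0 or vb == 0:
        return a if va == 0 else b
    return a if va <= vb else b

# toy problem: minimize f(x) = x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0
f = lambda x: x * x
g = [lambda x: 1 - x]
```

Unlike a penalty function, these rules need no penalty factors to tune, which is precisely the fine-tuning problem the abstract describes.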

Carlos Artemio Coello Coello
He received a BSc in Civil Engineering from the Universidad Autonoma de Chiapas in Mexico in 1991 (graduating Summa Cum Laude). Then, he was awarded a scholarship from the Mexican government to pursue graduate studies in Computer Science at Tulane University (in the USA). He received an MSc and a PhD in Computer Science in 1993 and 1996, respectively.
Dr. Coello has been a Senior Research Fellow in the Plymouth Engineering Design Centre (in England) and a Visiting Professor at DePauw University (in the USA). He is currently a full professor at CINVESTAV-IPN in Mexico City, Mexico. He has published over 250 papers in international peer-reviewed journals, book chapters, and conferences. He has also co-authored the book "Evolutionary Algorithms for Solving Multi-Objective Problems", which is now in its Second Edition (Springer, 2007), and has co-edited the book "Applications of Multi-Objective Evolutionary Algorithms" (World Scientific, 2004). His publications currently report over 5,000 citations. He has delivered invited talks, keynote speeches and tutorials at international conferences held in Spain, USA, Canada, Switzerland, UK, Chile, Colombia, Brazil, Argentina, India, Uruguay and Mexico.
Dr. Coello has served as a technical reviewer for over 50 international journals and for more than 100 international conferences, and currently serves as associate editor of the journals "Evolutionary Computation", "IEEE Transactions on Evolutionary Computation", "Computational Optimization and Applications", "Pattern Analysis and Applications" and the "Journal of Heuristics". He is also a member of the editorial boards of the journals "Soft Computing", "Engineering Optimization", the "Journal of Memetic Computing" and the "International Journal of Computational Intelligence Research". He received the 2007 National Research Award (granted by the Mexican Academy of Science) in the area of "exact sciences" and, since January 2011, he has been an IEEE Fellow for "contributions to multiobjective optimization and constraint-handling techniques". He is a member of the Mexican Academy of Science and the ACM. His current research interests are: evolutionary multiobjective optimization and constraint-handling techniques for evolutionary algorithms.




Representations for Evolutionary Algorithms
Successful and efficient use of evolutionary algorithms (EAs) depends on the choice of the genotype, the problem representation (mapping from genotype to phenotype) and on the choice of search operators that are applied to the genotypes. These choices cannot be made independently of each other. The question whether a certain representation leads to better performing EAs than an alternative representation can only be answered when the operators applied are taken into consideration. The reverse is also true: deciding between alternative operators is only meaningful for a given representation.
In EA practice one can distinguish two complementary approaches. The first approach uses indirect representations where a solution is encoded in a standard data structure, such as strings, vectors, or discrete permutations, and standard off-the-shelf search operators are applied to these genotypes. To evaluate the solution, the genotype needs to be mapped to the phenotype space. The proper choice of this genotype-phenotype mapping is important for the performance of the EA search process. The second approach, the direct representation, encodes solutions to the problem in its most 'natural' space and designs search operators to operate on this representation.
Research in the last few years has identified a number of key concepts to analyse the influence of representation-operator combinations on EA performance.
These concepts are
• locality and
• redundancy.
Locality is a result of the interplay between the search operator and the genotype-phenotype mapping. Representations are redundant if the number of genotypes exceeds the number of phenotypes. Furthermore, redundant representations can lead to biased encodings if some phenotypes are on average represented by a larger number of genotypes than others. Finally, a bias need not be the result of the representation but can also be caused by the search operator.
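A tiny worked example of a redundant and biased genotype-phenotype mapping (our own illustrative construction): 4-bit genotypes are mapped to the number of 1-bits they contain, so 16 genotypes collapse onto only 5 phenotypes, and the middle phenotype is heavily over-represented:

```python
from collections import Counter
from itertools import product

def phenotype(genotype):
    # redundant mapping: a 4-bit genotype encodes the count of its 1-bits
    return sum(genotype)

# enumerate all 16 genotypes and count how many encode each phenotype
counts = Counter(phenotype(g) for g in product((0, 1), repeat=4))
```

Here phenotype 2 is encoded by six genotypes while phenotypes 0 and 4 are encoded by one each, so uniform sampling of genotypes is biased toward the middle phenotype; this mapping also has high locality, since a one-bit mutation changes the phenotype by at most one.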
The tutorial gives a brief overview of existing guidelines for representation design, illustrates the different aspects of representations, summarizes theoretical models describing these aspects, and demonstrates their relevance with practical examples.
It is expected that the participants have a basic understanding of EA principles.

Franz Rothlauf
He received a Diploma in Electrical Engineering from the University of Erlangen, Germany, a Ph.D. in Information Systems from the University of Bayreuth, Germany, and a Habilitation from the University of Mannheim, Germany, in 1997, 2001, and 2007, respectively.
He is currently chair of Information Systems, Mainz University, Germany. He has published more than 50 technical papers in the context of evolutionary computation, co-edited several conference proceedings, and is the author of the book "Representations for Genetic and Evolutionary Algorithms", which appeared in a second edition in 2006.
His main research interests are problem representations for heuristic search approaches, especially evolutionary algorithms. He is a member of the editorial boards of the Evolutionary Computation Journal and the Journal of Artificial Evolution and Applications. He has been an organizer of several workshops on representation issues, chair of EvoWorkshops in 2005 and 2006, co-organizer of the European workshop series on "Evolutionary Computation in Communications, Networks, and Connected Systems", co-organizer of the European workshop series on "Evolutionary Computation in Transportation and Logistics", and co-chair of the program committee of the GA track at GECCO 2006. He was conference chair of GECCO 2009.



Statistical Analysis for Evolutionary Computation: Introduction
Although the situation has been improving over the past few years, many researchers in the EC field still exhibit an unfortunate lack of statistical rigor when performing and analyzing experiments. Experimental results that lack a proper statistical analysis should be considered anecdotal at best, and may even be wholly inaccurate. This tutorial aims at providing the basic statistical framework that all experimenters in the EC field should follow.
The tutorial will cover introductory topics in statistics such as the Z test, the t test, confidence intervals, the Central Limit Theorem (its power and limitations), and non-parametric statistics, and will touch on multivariate and multi-factorial analysis as well as proper experimental design.
The tutorial will be presented
at an introductory level and is
meant for any EC researchers who
want to compare their newly designed
EC system with established EC systems
to see if there is an improvement,
or who want to determine which
EC system performs better on their
chosen problem; i.e. nearly everyone.
It is vital, if our community
is to be taken seriously, for us
to continue to educate ourselves,
and especially new Graduate students,
on the statistical techniques that
are considered de rigor for experiments
performed in any field that is
stochastic in nature, as is EC.

Mark Wineberg
He has been actively researching the field of Evolutionary Computation (EC) and Genetic Algorithms (GA) since 1993, while he was still a graduate student at Carleton University, Ottawa, Canada. Over the years he has published on various topics including: the intersection of Genetic Algorithms and Genetic Programming, enhancing the GA for improved behavior in dynamic environments through specialized multiple populations, and exploring the concept of distances and diversity in GA populations. He is currently continuing his research in dynamic environments and studying the properties that allow one population to evolve more readily than another.

Prof. Wineberg also teaches an undergraduate course at Guelph on computer simulation and modeling of discrete stochastic systems, which emphasizes proper statistical analysis, as well as a graduate course on experimental design and analysis for computer science, which is an outgrowth of the statistical analysis tutorial given at GECCO.


Genetic Algorithms Theory
There are many interesting things we can study about genetic algorithms from a theoretical point of view. There are structural issues, such as how to design operators for different search spaces, and there are dynamical issues, such as working out the probability of future generations and where a GA will spend most of its time. We will touch on a number of these points from a general perspective, asking what can be said about how to design GAs and how they behave, independent of any particular optimization problem.
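As a small, hypothetical illustration of studying GA dynamics empirically (not part of the tutorial itself), the sketch below simulates the (1+1) EA, the simplest theoretically analyzed evolutionary algorithm, on the OneMax problem, for which the expected optimisation time is known from theory to be of order n log n:

```python
import random

def one_plus_one_ea(n, seed=1, max_iters=100_000):
    """(1+1) EA on OneMax: flip each bit independently with probability
    1/n and keep the offspring if it is at least as good as the parent."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = sum(x)  # OneMax fitness: number of one-bits
    for t in range(1, max_iters + 1):
        y = [b ^ 1 if rng.random() < 1 / n else b for b in x]
        fy = sum(y)
        if fy >= fx:
            x, fx = y, fy
        if fx == n:
            return t  # iterations until the all-ones optimum was found
    return None

steps = one_plus_one_ea(20)
```

Repeating such runs over many seeds and plotting the distribution of `steps` against n is one simple empirical counterpart to the theoretical runtime analyses discussed in the tutorial.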

Jonathan Rowe
He is a Reader in Natural Computation in the School of Computer Science at the University of Birmingham. He received his PhD in Artificial Intelligence from the University of Exeter in 1991, and then worked in industry for British Maritime Technology. After returning to academia for postdoctoral studies (on the evolution of neural networks in parallel genetic algorithms), he became a Senior Researcher at De Montfort University, specialising in the theory of evolutionary algorithms. He joined the University of Birmingham in 2000 as a lecturer in Computer Science. As well as contributing to the theory of evolutionary algorithms, Rowe has many interdisciplinary research interests including complex systems, biological and social modelling, medical image understanding and quantum computation. He is on the editorial board of Evolutionary Computation (MIT Press) and Theoretical Computer Science (Elsevier).

Elementary Landscapes: Theory and Applications
The original theory of elementary landscapes focused on computing the mean of the local neighborhood search landscape for a few combinatorial optimization problems. The theory has now been generalized in several ways. First, it is now possible to compute statistical moments over neighborhoods. Second, the theory has been extended to provide statistical information over hyperspheres representing all neighbors at distance d on generalized hypercubes. Third, other combinatorial optimization problems, such as MAX-k-SAT, NK-landscapes, and QAP, have been shown to be superpositions of a small number of elementary landscapes.
Taken together, these results make it possible to compute statistical moments up to a fixed order over exponentially large neighborhoods in polynomial time for many well-known combinatorial optimization problems. These methods are being used to develop new search strategies for MAX-SAT, NK-landscapes, and other problems, including extremely fast steepest-ascent methods and search utilizing summary statistics that look multiple steps ahead.
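To make the neighborhood-averaging idea concrete, the sketch below (an illustration, not part of the tutorial material) verifies the elementary-landscape identity avg over N(x) of f(y) = f(x) + (lam/d) * (f_bar - f(x)) by brute force for OneMax under single-bit flips, where the neighborhood degree is d = n, the eigenvalue is lam = 2, and the mean fitness is f_bar = n/2:

```python
from itertools import product

def onemax(x):
    return sum(x)

n = 6
lam, d, f_bar = 2, n, n / 2  # OneMax: eigenvalue 2, vertex degree n, mean fitness n/2

max_err = 0.0
for x in product((0, 1), repeat=n):
    # all Hamming-distance-1 neighbors of x
    neighbors = [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(n)]
    avg = sum(onemax(y) for y in neighbors) / n
    predicted = onemax(x) + (lam / d) * (f_bar - onemax(x))
    max_err = max(max_err, abs(avg - predicted))
```

Because the identity holds exactly for an elementary landscape, `max_err` stays at numerical zero over the whole search space.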

Darrell Whitley
Dr. Whitley is Professor and Chair at Colorado State University. He is a former Chair of SIGEVO and former Editor-in-Chief of the journal Evolutionary Computation. He currently serves on the editorial board of Artificial Intelligence.

Andrew Sutton
Dr. Sutton completed his Ph.D. research at Colorado State University on the analysis of combinatorial fitness landscapes. He currently holds a postdoctoral research position at the University of Adelaide, Australia, studying the theory of evolutionary algorithms, particularly for combinatorial optimization.

Generative and Developmental Systems
In evolutionary computation it is common for the genotype to map directly to the phenotype such that each gene in the genotype represents a single discrete component of the phenotype. While this convention is widespread, in nature the mapping is not direct. Instead, the genotype maps to the phenotype through a process of development, which means that each gene may be activated multiple times and in multiple places in the process of constructing the phenotype.
This tutorial will examine why such indirect mapping may be critical to the continuing success of evolutionary computation. Rather than just an artifact of nature, indirect mapping means that vast phenotypic spaces (e.g. the 100-trillion-connection human brain) can be searched effectively in a space of far fewer genes (e.g. the 30,000-gene human genome). The tutorial will introduce this research area, called Generative and Developmental Systems (GDS), by surveying milestones in the field and introducing its most profound puzzles. Most importantly: what is the right abstraction of natural development to capture its essential advantages without introducing unnecessary overhead into the search?
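A minimal sketch of the idea, with entirely hypothetical genes and an entirely hypothetical developmental function: three genotype parameters are "expressed" at every location of a 100 x 100 grid, so a 3-gene genotype determines a 10,000-value phenotype:

```python
import math

genotype = [1.7, 0.4, 2.3]  # three hypothetical "genes"

def develop(genes, width=100, height=100):
    """Indirect genotype-phenotype mapping: every gene participates in
    the expression of every phenotype location, so len(genes) = 3 genes
    determine width * height = 10,000 phenotype values."""
    a, b, c = genes
    return [math.sin(a * x / width + b * y / height)
            * math.cos(c * x * y / (width * height))
            for y in range(height) for x in range(width)]

phenotype = develop(genotype)
```

Evolution then searches the tiny 3-dimensional gene space rather than the 10,000-dimensional phenotype space, which is the core advantage of indirect encodings that the tutorial discusses.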

Kenneth O. Stanley
He is an associate professor in the Department of Electrical Engineering and Computer Science at the University of Central Florida. He received a B.S.E. from the University of Pennsylvania in 1997 and received a Ph.D. in 2004 from the University of Texas at Austin. He is an inventor of the Neuroevolution of Augmenting Topologies (NEAT), HyperNEAT, and novelty search algorithms for evolving complex artificial neural networks. His main research contributions are in neuroevolution (i.e. evolving neural networks), generative and developmental systems, coevolution, machine learning for video games, and interactive evolution. He has won best paper awards for his work on NEAT, NERO, NEAT Drummer, HyperNEAT, novelty search, FSMC, and Galactic Arms Race. He is an associate editor of IEEE Transactions on Computational Intelligence and AI in Games, on the editorial board of Evolutionary Computation journal, and on the ACM SIGEVO Executive Committee.


Expressive Genetic Programming
The language in which evolving programs are expressed can have significant impacts on the problem-solving capabilities of a genetic programming system. These impacts stem both from the absolute computational power of the languages that are used, as elucidated by formal language theory, and from the ease with which various computational structures can be produced by random code generation and by the action of genetic operators. Highly expressive languages can facilitate the evolution of programs for any computable function using, when appropriate, multiple data types, evolved subroutines, evolved control structures, evolved data structures, and evolved modular program and data architectures. In some cases expressive languages can even support the evolution of programs that express methods for their own reproduction and variation (and hence for the evolution of their offspring).
This tutorial will begin with a comparative survey of approaches to the evolution of programs in expressive programming languages ranging from machine code to graphical and grammatical representations. Within this context it will then provide a detailed introduction to the Push programming language, which was designed specifically for expressiveness and specifically for use in genetic programming systems. Push programs are syntactically unconstrained but can nonetheless make use of multiple data types and express arbitrary control structures, supporting the evolution of complex, modular programs in a particularly simple and flexible way. The Push language will be described and ten years of Push-based research, including the production of human-competitive results, will be briefly surveyed. The tutorial will conclude with a discussion of recent enhancements to Push that are intended to support the evolution of complex and robust software systems.
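The following is a minimal Push-inspired interpreter written purely for illustration; it is not the real Push implementation, and the instruction names are chosen to echo Push's conventions. It shows why such programs are syntactically unconstrained: literals go onto typed stacks, and an instruction whose arguments are missing simply acts as a no-op:

```python
def run(program):
    """Minimal Push-inspired interpreter (illustration only): integer
    literals are pushed onto the INTEGER stack; instructions that lack
    sufficient arguments are no-ops, so any token sequence is valid."""
    ints, bools = [], []
    for token in program:
        if isinstance(token, int):
            ints.append(token)
        elif token == 'integer.+' and len(ints) >= 2:
            b, a = ints.pop(), ints.pop()
            ints.append(a + b)
        elif token == 'integer.*' and len(ints) >= 2:
            b, a = ints.pop(), ints.pop()
            ints.append(a * b)
        elif token == 'integer.<' and len(ints) >= 2:
            b, a = ints.pop(), ints.pop()
            bools.append(a < b)
        # unknown instructions or underfull stacks are silently ignored
    return ints, bools

ints, bools = run([2, 3, 'integer.+', 4, 'integer.*', 'integer.+'])
# the final 'integer.+' finds only one integer on the stack, so it is a no-op
```

Because no token sequence can crash the interpreter, genetic operators may splice programs arbitrarily, which is exactly the flexibility the abstract above attributes to Push.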

Lee Spector
He is a Professor of Computer Science in the School of Cognitive Science at Hampshire College in Amherst, Massachusetts, and an adjunct professor in the Department of Computer Science at the University of Massachusetts, Amherst. He received a B.A. in Philosophy from Oberlin College in 1984 and a Ph.D. from the Department of Computer Science at the University of Maryland in 1992. His areas of teaching and research include genetic and evolutionary computation, quantum computation, and a variety of intersections between computer science, cognitive science, evolutionary biology, and the arts. He is the Editor-in-Chief of the journal Genetic Programming and Evolvable Machines (published by Springer) and a member of the editorial board of Evolutionary Computation (published by MIT Press). He is also a member of the SIGEVO executive committee and he was named a Fellow of the International Society for Genetic and Evolutionary Computation.
More info: http://hampshire.edu/lspector


Fitness Landscapes and Graphs: multimodality, ruggedness and neutrality
One of the most commonly used metaphors to describe the process of heuristic search methods in solving combinatorial optimisation problems is that of a fitness landscape. The landscape metaphor appears most commonly in work related to evolutionary algorithms, but it can be used for search in general: the search space can be regarded as a spatial structure where each point (solution) has a height (objective function value), forming a landscape surface. In this scenario, the search process is an adaptive walk over a landscape that can range from having many peaks of high fitness flanked by cliffs falling to profound valleys of low fitness, to being smooth, with low hills and gentle valleys.
Combinatorial landscapes can be seen as graphs whose vertices are the possible configurations. If two configurations can be transformed into each other by a suitable operator move, then we can trace an edge between them. The resulting graph, with an indication of the fitness at each vertex, is a representation of the given problem's fitness landscape. The study of fitness landscapes consists in analysing this graph, or a relevant partition of it, with respect to the search dynamics or problem difficulty.
This advanced tutorial will give an overview of the origins of the fitness landscape metaphor, and will cover alternative ways to define fitness landscapes in evolutionary computation. The two main geometries, multimodal and neutral landscapes, which correspond to two different graph partitions found in combinatorial optimization, will be considered, as well as the dynamics of metaheuristics searching on them. Furthermore, the relationship between problem hardness and fitness landscape metrics (e.g. autocorrelation, fitness distance correlation, neutral degree), and the local optima network properties studied in recent work, will be analysed in depth.
Finally, the tutorial will conclude with a brief survey of open questions and recent research directions on fitness landscapes.
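One of the landscape metrics mentioned above, the random-walk autocorrelation, can be estimated with a short sketch (illustrative code, not tutorial material). For OneMax under single-bit flips, elementary landscape theory predicts a step-1 autocorrelation of 1 - 2/n, so the empirical estimate should land near 0.9 for n = 20:

```python
import random

def random_bitflip(x, rng):
    """Return a copy of x with one uniformly chosen bit flipped."""
    y = list(x)
    y[rng.randrange(len(y))] ^= 1
    return y

def walk_autocorrelation(f, neighbor, x0, steps, rng):
    """Estimate the step-1 autocorrelation of fitness values sampled
    along a random walk through the landscape."""
    fs = [f(x0)]
    x = x0
    for _ in range(steps):
        x = neighbor(x, rng)
        fs.append(f(x))
    m = sum(fs) / len(fs)
    var = sum((v - m) ** 2 for v in fs)
    cov = sum((fs[i] - m) * (fs[i + 1] - m) for i in range(len(fs) - 1))
    return cov / var

n = 20
rng = random.Random(0)
x0 = [rng.randint(0, 1) for _ in range(n)]
rho = walk_autocorrelation(sum, random_bitflip, x0, 20000, rng)
# theory for OneMax predicts rho(1) = 1 - 2/n = 0.9
```

High autocorrelation indicates a smooth landscape; rugged landscapes such as NK-landscapes with large K give much lower values under the same estimator.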

Sébastien Verel
He is an associate professor in Computer Science at the University of Nice Sophia-Antipolis, France, a position he has held since 2006. He received a PhD in computer science from the University of Nice Sophia-Antipolis in 2005; his PhD work was related to fitness landscape analysis in combinatorial optimization. He is a member of the Institut des Systèmes Complexes Paris Ile-de-France, and is currently an invited researcher at INRIA Lille - Nord Europe, France. His research interests are in the theory of evolutionary computation, cognitive science modelling, and complex systems.


Bioinspired Computation in Combinatorial Optimization - Algorithms and Their Computational Complexity
Bioinspired computation methods, such as evolutionary algorithms and ant colony optimization, are being applied successfully to complex engineering and combinatorial optimization problems, and it is very important that we understand the computational complexity of these algorithms. This tutorial explains the most important results achieved in this area.
The presenters show how runtime behavior can be analyzed in a rigorous way, in particular for combinatorial optimization. They present well-known problems such as minimum spanning trees, shortest paths, maximum matching, and covering and scheduling problems. Classical single-objective optimization is examined first. They then investigate the computational complexity of bioinspired computation applied to multiobjective variants of the considered combinatorial optimization problems, and in particular they show how multiobjective optimization can help to speed up bioinspired computation for single-objective optimization problems.
The tutorial is based on a book written by the authors with the same title. Further information about the book can be found at www.bioinspiredcomputation.com.

Frank Neumann
He studied Technical Computer Science in Dortmund and Kiel, Germany, and received his diploma and Ph.D. from the Christian-Albrechts-University of Kiel in 2002 and 2006, respectively. Currently, he is a Senior Lecturer at the School of Computer Science, University of Adelaide, Australia. In his work, he considers evolutionary computation methods, in particular for combinatorial and multiobjective optimization problems. Together with Carsten Witt he has written the textbook "Bioinspired Computation in Combinatorial Optimization - Algorithms and Their Computational Complexity", published by Springer. Frank has given several tutorials at GECCO and PPSN during the last years.

Carsten Witt
He studied Computer Science at the Technical University of Dortmund, Germany, where he received his diploma and Ph.D. in 2000 and 2004, respectively. In spring 2009, he moved to the Technical University of Denmark, where he now works as an associate professor in algorithms. Carsten's main research interests are the theoretical aspects of randomized search heuristics, in particular evolutionary algorithms, ant colony optimization and particle swarm optimization. He co-organized the biennial Dagstuhl seminar "Theory of Evolutionary Algorithms" in 2008 and 2010 and has given tutorials on the computational complexity of bioinspired search heuristics in combinatorial optimization at GECCO 2008, PPSN 2008 (together with Frank Neumann), ThRaSH 2009 (together with Thomas Jansen) and at GECCO 2010. Together with Frank Neumann, he has authored the textbook "Bioinspired Computation in Combinatorial Optimization - Algorithms and Their Computational Complexity".


Specialized Techniques and Applications

Artificial Immune Systems for Optimisation
Artificial immune systems (AIS) are a class of biologically inspired algorithms built upon various theories from immunology. While the field of AIS is a relatively new area of research, it has achieved numerous promising results in different areas of application, e.g., learning, classification, anomaly detection, and optimisation. In this tutorial we focus in particular on AIS built for the purpose of optimisation. From an algorithmic point of view, AIS show high-level similarities to other biologically inspired algorithms, e.g. evolutionary algorithms. Due to their different origin, however, concrete AIS for optimisation are quite different from evolutionary algorithms, and they constitute an interesting alternative to current methods.
The tutorial gives an overview of different methods in the field of AIS. It addresses everyone who wants to broaden his or her area of research within this emerging field, both practitioners and theoreticians. It enables attendees without prior knowledge of AIS to learn about a novel kind of optimisation method that can be used as an alternative to other biologically inspired algorithms. Moreover, it gives researchers with prior knowledge of AIS the opportunity to deepen their understanding of the considered algorithms.
We start with an overview of the different areas of AIS, including different general approaches and some immunological background. Afterwards, we discuss several examples of AIS for optimisation. We introduce concrete algorithms and their implementations and point out similarities and differences to other biologically inspired algorithms. In the last part of the tutorial, we present an overview of recent theoretical results for these algorithms.
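To give a flavour of such algorithms, the sketch below is loosely inspired by clonal selection methods such as CLONALG; it is an illustrative simplification, not a faithful reproduction of any published AIS. It maximises a bit-string fitness by cloning the best antibody and hypermutating the clones at a rate that shrinks as fitness grows:

```python
import random

def clonal_selection(fitness, n_bits, generations=3000, clones=5, seed=3):
    """Clonal-selection-style sketch for maximising a bit-string fitness:
    keep one antibody, produce mutated clones each generation, and let
    the hypermutation rate decrease as fitness improves."""
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(n_bits)]
    best_f = fitness(best)
    for _ in range(generations):
        # inverse-fitness hypermutation: worse antibodies mutate more
        rate = max(1.0 - best_f / n_bits, 1.0 / n_bits)
        for _ in range(clones):
            clone = [b ^ 1 if rng.random() < rate else b for b in best]
            cf = fitness(clone)
            if cf >= best_f:
                best, best_f = clone, cf
    return best, best_f

best, best_f = clonal_selection(sum, 16)  # OneMax on 16 bits
```

The fitness-dependent mutation rate is the characteristic ingredient that distinguishes this scheme from a plain evolutionary algorithm with a fixed mutation rate.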

Thomas Jansen
He is Stokes Lecturer at the Department of Computer Science at University College Cork, Ireland. He studied Computer Science at the University of Dortmund, Germany, and received his diploma (1996, summa cum laude) and Ph.D. (2000, summa cum laude) there. From September 2001 to August 2002 he stayed as a postdoc at Kenneth De Jong's EClab at George Mason University in Fairfax, VA. He was Junior Professor for Computational Intelligence from September 2002 to February 2009 at the Technical University Dortmund. His research is centred around the design and theoretical analysis of artificial immune systems, evolutionary algorithms and other randomised search heuristics. He has published 17 journal papers, 36 conference papers and contributed seven book chapters. He is associate editor of Evolutionary Computation (MIT Press) and a member of the steering committee of the Theory of Randomised Search Heuristics workshop series. He was program chair at PPSN 2008, co-organised FOGA 2009, and organised a workshop on Bridging Theory and Practice (PPSN 2008), two GECCO workshops on Evolutionary Computation Techniques for Constraint Handling (2010 and 2011), and Dagstuhl workshops on Theory of Evolutionary Computation (2004 and 2006) and on Artificial Immune Systems (2011).

Christine Zarges
She is a postdoc at the Department of Computer Science of the University of Warwick, UK, financed by a scholarship of the German Academic Exchange Service (DAAD). She studied Computer Science at the TU Dortmund, Germany, and obtained her Diploma (2007, summa cum laude) and PhD (2011, summa cum laude and dissertation award of the TU Dortmund) there. In 2010 she received a Google Anita Borg Memorial Scholarship. Her research focuses on the theoretical analysis of randomised algorithms, in particular randomised search heuristics such as artificial immune systems and evolutionary algorithms, as well as balls-into-bins processes. She is also interested in computational and theoretical aspects of immunology. So far she has published 4 journal papers and 13 reviewed conference papers.

Black-Box Complexity: From Complexity Theory to Playing Mastermind
What is the minimum number of fitness evaluations needed to solve a particular problem? This so-called black-box complexity of the problem is an attempt to capture the intrinsic difficulty of solving a problem via general-purpose heuristics. If the black-box complexity is large, you cannot solve the problem efficiently via evolutionary approaches. If the black-box complexity is low, most likely there are good randomized search heuristics for the problem. If the black-box complexity is lower than the complexity of the best known randomized search heuristic, we should analyze where this gap comes from.

Black-box complexity theory is one of the youngest subareas in the theory of randomized search heuristics. While the notion first appeared in the FOGA 2002 paper by Droste, Jansen, Tinnefeld, and Wegener, it really got going only after the FOGA 2009 paper by Anil and Wiegand and the GECCO 2010 theory track best paper award winner by Lehre and Witt.

Because the field is so young, this tutorial can give a gentle introduction to the problem, the complete state of the art, and a discussion of several directions for future research, all at the same time. Since one of the most basic black-box complexity problems turns out to be essentially the problem of playing the classic board game of Mastermind, I will talk about this and related guessing games as well.
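To connect black-box optimization with Mastermind concretely, the sketch below (an illustration only, not tutorial material) implements the simple "consistent guessing" strategy: query any code still consistent with all previous answers, and count how many queries the black box (the feedback function) is asked:

```python
import random
from itertools import product

def feedback(secret, guess):
    """Mastermind answer: (pegs in the right place, right colors misplaced)."""
    black = sum(s == g for s, g in zip(secret, guess))
    common = sum(min(secret.count(c), guess.count(c)) for c in set(guess))
    return black, common - black

def solve(secret, colors=6, positions=4, seed=0):
    """Consistent guessing: query any code that agrees with all answers
    received so far; return the number of queries used."""
    rng = random.Random(seed)
    candidates = list(product(range(colors), repeat=positions))
    queries = 0
    while True:
        guess = rng.choice(candidates)
        queries += 1
        answer = feedback(secret, guess)
        if answer == (positions, 0):
            return queries  # secret found
        # keep only codes that would have produced the same answer
        candidates = [c for c in candidates if feedback(c, guess) == answer]

queries_used = solve((2, 4, 4, 1))
```

The number of queries such strategies need, in the worst case over all secrets, is exactly the kind of quantity black-box complexity theory studies.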

Benjamin Doerr
He is a senior researcher at the Max Planck Institute for Computer Science and a professor at Saarland University. He received his diploma (1998), PhD (2000) and habilitation (2005) in mathematics from Kiel University. His research area is the theory both of problem-specific algorithms and of randomized search heuristics like evolutionary algorithms. Major contributions to the latter include runtime analyses for evolutionary algorithms and ant colony optimizers, as well as the further development of the drift analysis method, in particular multiplicative and adaptive drift. In the young area of black-box complexity, he proved several of the current best bounds.

Together with Frank Neumann and Ingo Wegener, Benjamin Doerr founded the theory track at GECCO and served as its co-chair from 2007 to 2009. He is a member of the editorial boards of Evolutionary Computation and Information Processing Letters. Together with Anne Auger, he is an editor of the book "Theory of Randomized Search Heuristics".




Cartesian Genetic Programming
Cartesian Genetic Programming (CGP) is an increasingly popular and efficient form of Genetic Programming. It is a highly cited technique that was developed by Julian Miller in 1999 and 2000 from earlier joint work with Peter Thomson in 1997. In its classic form, it uses a very simple integer-based genetic representation of a program in the form of a directed graph. Graphs are very useful program representations and can be applied to many domains (e.g. electronic circuits, neural networks). In a number of studies, CGP has been shown to be comparable in efficiency to other GP techniques. It is also very simple to program.

Since then, the classical form of CGP has been developed and made more efficient in various ways, notably by including automatically defined functions (modular CGP) and self-modification operators (self-modifying CGP). SMCGP, developed by Julian Miller, Simon Harding and Wolfgang Banzhaf, uses functions that cause the evolved programs to change themselves as a function of time. Using this technique it is possible to find general solutions to classes of problems and mathematical algorithms (e.g. arbitrary parity, n-bit binary addition, sequences that provably compute pi and e to arbitrary precision, and so on).

This tutorial will cover the basic technique, advanced developments and applications to a variety of problem domains. It is given by two of the leading researchers in CGP. The first edited book on CGP is due to be published in the first half of 2011.
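A minimal sketch of classic CGP decoding follows; for readability the genotype is written as tuples rather than the flat integer string used in classic CGP, and the function set is a small hypothetical one:

```python
def evaluate_cgp(genotype, outputs, inputs, functions):
    """Decode a CGP-style genotype: each node is (function_id, in1, in2),
    where connection genes address program inputs (0 .. n_inputs-1) or
    earlier nodes (n_inputs, n_inputs+1, ...); graphs are feed-forward."""
    values = list(inputs)
    for f_id, a, b in genotype:
        values.append(functions[f_id](values[a], values[b]))
    return [values[o] for o in outputs]

functions = [lambda a, b: a + b,   # 0: add
             lambda a, b: a - b,   # 1: subtract
             lambda a, b: a * b]   # 2: multiply

# with inputs x (index 0) and y (index 1):
# node 2 computes x + y, node 3 computes (x + y) * x;
# the output gene points at node 3
genotype = [(0, 0, 1), (2, 2, 0)]
result = evaluate_cgp(genotype, outputs=[3], inputs=[3, 4], functions=functions)
```

Mutation in CGP simply changes these integers; nodes that no output path reaches stay in the genotype as inactive (neutral) material, a property often credited for CGP's search efficiency.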

Julian Miller
He is an academic in the Department of Electronics at the University of York. His main research interests are genetic programming (GP) and computational development. He has published over 180 refereed papers on evolutionary computation, genetic programming, evolvable hardware, computational development and other topics. He has been chair or co-chair of sixteen conferences or workshops in genetic programming, computational development, evolvable hardware and evolutionary techniques. Dr. Miller chaired the Evolvable Hardware tracks at the Genetic and Evolutionary Computation Conference in 2002 and 2003 and was Genetic Programming track chair in 2008. He was co-chair of the Generative and Developmental Systems (GDS) track in 2007, 2009 and 2010. He is an associate editor of Genetic Programming and Evolvable Machines and a former associate editor of IEEE Transactions on Evolutionary Computation. He is an editorial board member of the journals Evolutionary Computation and Unconventional Computing.

Simon Harding
He is a postdoctoral research fellow in the Department of Computer Science at Memorial University, Canada. His research interests include genetic programming, unconventional computation and parallel computing on consumer-grade hardware. He has been co-chair of the Computational Intelligence on Consumer Games and Graphics Hardware track at WCCI 2008 and GECCO 2009/2010. He is also co-organizer of the GECCO GPU programming competition.

Evolutionary Algorithms and Genetic Programming on Graphic Processing Units (GPU)
GPGPU (General Purpose Graphic Processing Unit) cards are about to revolutionize evolutionary computation because of their monstrous computing power that evolutionary algorithms can directly use.
In order to obtain fluid games with realistic rendering, the games industry has a considerable need for powerful graphic processors, and enough money to create chips of their own.
Rendering algorithms have a specificity: they need to execute the same function on thousands of different pixels. This means that it is possible to save on silicon space by grouping ALUs together so that many cores execute the same instruction at the same time on different data (Single Instruction, Multiple Data). Recent GPU chips (with a number of transistors equivalent to standard CPUs) host up to 512 cores, with the restriction that groups of 32 cores must execute the same instruction at the same time, as they have been designed to execute functions in parallel on different data.
Now, by pure chance, the flowchart of a rendering algorithm is virtually the same as that of an evolutionary algorithm, meaning that it is possible for EAs to run more or less directly on GPUs: an identical rendering algorithm (resp. evaluation function) must be executed on thousands of different pixels/vertices (resp. genomes).
The story is a bit different for Genetic Programming where, in contrast to standard Evolutionary Algorithms, thousands of different functions must be evaluated on a small constant data set. Fortunately, not all cores on a GPU are linked together as in a Single Instruction Multiple Data parallel machine. In NVidia cards, cores are grouped in bunches of 32 virtual SIMD processors, but different bunches can run on different parts of the code, meaning that if at least 32 fitness cases are available as a training set for a GP problem, it is possible to get the 32 virtual cores of one bunch to evaluate one individual in parallel on the 32 different training cases.
This tutorial is divided into two parts:
- The first part will explain how to program an NVIDIA GPU card at a low level.
- The second part will show the kind of speedup that can regularly be obtained with the EASEA platform on both EAs and GP on general problems. This massive parallelism can be extended to several machines thanks to an island model that allows clusters of GPGPU machines to be exploited efficiently.
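The data-parallel pattern described above, one fitness kernel mapped over many genomes, can be sketched in plain Python; here a thread pool stands in for the GPU's core groups, whereas real GPU code would use CUDA or the EASEA platform:

```python
from multiprocessing.dummy import Pool  # thread pool stands in for GPU cores

def fitness_kernel(genome):
    """The same instruction stream runs for every genome on different
    data; here the kernel is a toy sphere function (sum of squares)."""
    return sum(g * g for g in genome)

# a population of 1000 small genomes (hypothetical data for illustration)
population = [[i, i + 1, i + 2] for i in range(1000)]

with Pool(8) as pool:  # 8 workers play the role of SIMD core groups
    fitnesses = pool.map(fitness_kernel, population)
```

Because every genome is evaluated by the same kernel, the work maps naturally onto SIMD hardware, which is exactly why the rendering-style architecture of a GPU suits evolutionary algorithms.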

Pierre Collet
Pierre leads the Datamining and Theoretical Bioinformatics team of the Image Sciences, Computer Sciences, and Remote Sensing Laboratory of the University of Strasbourg, where his current research is on adapting evolutionary computation to run on GPU chips. He has been developing the EASEA platform for the past 10 years; the platform was granted 389k€ by the French government to port it to grids and to the cloud. The EASEA-CLOUD platform will be one of the solvers in several multi-million-euro projects on complex systems.
More info: http://newlsiit.ustrasbg.fr/bfo_en/index.php/Main_Page 

Simon Harding
He is a postdoctoral research fellow at IDSIA (Istituto Dalle Molle di Studi sull'Intelligenza Artificiale) in Switzerland. His research interests include genetic programming, unconventional computation and parallel computing on consumer-grade hardware. He has been co-chair of the Computational Intelligence on Consumer Games and Graphics Hardware track at WCCI 2008 and GECCO 2009, 2010 and 2011. He is also co-organiser of the GECCO GPU programming competition.

Computational Intelligence in Games
Computational Intelligence and Games is a rather young but fast-growing application domain where evolutionary computation is being widely applied. This tutorial aims at giving a brief overview of this area, highlighting the several opportunities for genetic and evolutionary computation methods, and providing some in-depth discussion of specific classes of problems where evolutionary computation has proved very successful.
The tutorial is structured in two parts. In the (shorter) first part, we overview the current state of the game industry, discussing the different game genres, some of the known computational intelligence applications to commercial games, and the interesting problems/applications that are receiving increasing attention from the game industry.
The (longer) second part of the tutorial delves into three main problem/application areas where EC has been successfully applied:
- Search-Based Procedural Content Generation (SBPCG) applies search algorithms (typically genetic algorithms) to the generation of game content, and thus aims at providing automatic tools to help designers produce huge amounts of content.
- Optimization for game AI: in this case EC is applied either to optimize existing AI systems or to evolve new AI from scratch in games where the development constraints are not fully foreseeable in the early stages of development due to player actions and/or dynamically generated content.
- Multiobjective evolutionary algorithms are widely used in this area as a way to automatically deal with unforeseeable design trade-offs.
The proposed tutorial is related to the Digital Entertainment Technologies and Arts (DETA) track, which attempts to bring together people and concepts from related areas, e.g. music and games, but also visual arts. Connecting these different domains is of growing importance for game production, and CI techniques can play an important role here.
Finally, we give an overview of the open-source high-end games available to researchers.

Daniele Loiacono
He graduated cum laude in 2004 in Computer Engineering at Politecnico di Milano. In 2008 he received his Ph.D. in Computer Engineering from the Department of Electronics and Information of Politecnico di Milano, where he is currently a postdoctoral researcher. His research interests include machine learning, evolutionary computation, and computational intelligence in games.

He has been on the program committee of the ACM Genetic and Evolutionary Computation Conference (GECCO), the IEEE Congress on Evolutionary Computation (CEC), the IEEE Symposium on Computational Intelligence and Games (CIG), the International Workshop on Learning Classifier Systems (IWLCS), and the IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). He has also reviewed articles for the following journals: IEEE Transactions on Evolutionary Computation, Evolutionary Computation Journal, IEEE Transactions on Computational Intelligence and AI in Games, and Genetic Programming and Evolvable Machines.

Since 2008, Daniele Loiacono has organized several scientific competitions at major conferences including GECCO, CEC and CIG. In 2009 he was local co-chair of the IEEE Symposium on Computational Intelligence and Games and co-organized the special session on Computational Intelligence in Games at the IEEE Congress on Evolutionary Computation. In 2010 he was on the organizing committee of the special session on Racing Games at the IEEE Symposium on Computational Intelligence. He will serve as competitions chair for GECCO 2012. He has 43 refereed international publications, including journal papers, conference papers, book chapters, and workshop papers.


Mike Preuss
He is a Research Associate at the Computer Science Department, TU Dortmund, Germany, where he also received his Diploma degree in 1998.
His research interests focus on evolutionary algorithms for real-valued problems, namely on multimodal and multi-objective niching and on the experimental methodology for (non-deterministic) optimization algorithms. He is currently working on the adaptability and applicability of computational intelligence techniques in various engineering domains and computer games, pushing forward modern approaches to experimental analysis such as Exploratory Landscape Analysis (ELA) and innovative uses of surrogate models. He was involved in founding the EvoGames track at Evo* and the Digital Entertainment Technologies and Arts (DETA) track at GECCO.

Introduction to Bioinformatics and Computational Biology
The field of biological sciences has been transformed in recent years into a domain of incredibly rich data ripe for computational exploration. High-throughput technologies allow investigators to construct vast feature sets, including genetic variables, gene expression values, protein levels, biomarkers, and a multitude of other traits. These rich feature sets can be used to predict disease risk and prioritize treatment strategies, but there are incredible challenges in the analysis of such data. Features often exhibit correlation patterns and are subject to normalization issues, measurement error, and other forms of noise. Furthermore, there are far more features in a typical dataset than there are samples. Despite these issues, the analysis of complex biological data can lead to a new understanding of biological systems and human health.
This tutorial provides an introduction to the fundamental concepts of biological science. It examines the established methods for generating biological datasets, outlines online databases that contain much of this data, and introduces the newest methods for capturing high-resolution genomic sequence data. Attendees will leave this tutorial with a better understanding of the problem domains that exist in biological science.

William S. Bush
PhD, MS. Dr. Bush is a human geneticist and bioinformatician, and Assistant Professor in the Center for Human Genetics Research and the Department of Biomedical Informatics at Vanderbilt University. He has expertise in the analysis of large-scale genetic data, and in machine learning and data mining techniques for identifying complex genetic associations. He has conducted many studies on the utility of rule mining, neural networks, and evolutionary algorithms for genetic studies. Dr. Bush was recently named a “P.I. of Tomorrow” by Genome Technology. His lab continues to develop methodology for relating genetic variants to human diseases and to intermediate gene expression phenotypes.

Large scale data mining using Genetics-Based Machine Learning
We are living in the petabyte era. We have larger and larger data to analyze, process and transform into useful answers for the domain experts. Robust data mining tools, able to cope with petascale volumes and/or high dimensionality while producing human-understandable solutions, are key in several domain areas. Genetics-based machine learning (GBML) techniques are perfect candidates for this task. Recent advances in representations, learning paradigms, and theoretical modelling have shown the competitiveness of non-EC techniques in handling large-scale data analysis. If evolutionary learning techniques aspire to be a relevant player in this context, they need the capacity to process these vast amounts of data, and to process them within a reasonable time. Moreover, massive computation cycles are getting cheaper every day, giving researchers access to unprecedented computational resources on the edge of petascale computing. Several topics are interlaced in these two requirements: (1) having the proper learning paradigms and knowledge representations, (2) understanding them and knowing when they are suitable for the problem at hand, (3) using efficiency enhancement techniques, and (4) transforming and visualizing the produced solutions to give back as much insight as possible to the domain experts are a few of them.
This tutorial will try to shed light on the questions mentioned above, following a roadmap that starts by exploring what large scale means, and why large is both a challenge and an opportunity for GBML methods. As we will show later, the opportunity has multiple facets: efficiency enhancement techniques, representations able to cope with high-dimensional spaces, scalability of learning paradigms, and alternative programming models each help to make GBML very attractive for large-scale data mining. Given these building blocks, we will continue to unfold how we can model the scalability of the components of GBML systems, targeting a better engineering effort that will make embracing large datasets routine. Finally, we will illustrate how all these ideas fit together by reviewing real applications of GBML systems and the further directions that will require serious consideration.

Jaume Bacardit
He received a BEng and MEng in Computer Engineering and a PhD in Computer Science from the Ramon Llull University in Barcelona, Spain in 1998, 2000 and 2004, respectively. His PhD thesis involved the adaptation and application of Learning Classifier Systems to Data Mining tasks in terms of scalability, knowledge representations and generalization capacity. In 2008 he was appointed as a Lecturer in Bioinformatics at the University of Nottingham. This is a joint post between the schools of Biosciences (MyCiB research group) and Computer Science (Interdisciplinary Computing and Complex Systems research group) with the aim of developing interdisciplinary research at the interface of both disciplines. His research interests include the application of Learning Classifier Systems and other kinds of Evolutionary Learning to mine large-scale, challenging datasets and, in a general sense, the use of data mining and knowledge discovery in biological domains. He has more than 30 refereed international publications among journal papers, conference papers and book chapters, has given 8 invited talks and co-edited two books. From 2007 to 2010 he was the co-organizer of the International Workshop on Learning Classifier Systems and he was the chair of the Genetics-Based Machine Learning track of GECCO 2009.

Xavier Llorà
His interests and work on genetics-based machine learning (GBML) have earned him a spot among the leaders of the renaissance of learning classifier systems (LCSs). His 2002 PhD dissertation challenged traditional data-mining techniques by showing the effectiveness of GBML approaches. He helped organize two editions of the International Workshop on Learning Classifier Systems and their proceedings books. He served as LCS/GBML track chair at the ACM SIGEVO-organized GECCO 2005 conference and also organized the NCSA/IlliGAL Gathering on Evolutionary Learning in 2006. Llorà, awarded a young research fellowship by the Catalan Government, started his academic career at the Ramon Llull University (Barcelona, Spain), where he earned his PhD degree with honors. In 2003 he moved to the University of Illinois at Urbana-Champaign, where he joined the Illinois Genetic Algorithms Laboratory.
Since then, Llorà has headed the DISCUS project, a project to support collaborative human innovation and creativity. In summer 2005, Llorà was named research assistant professor at the University of Illinois at Urbana-Champaign and in 2006 an NCSA affiliate, where he pursues the development of data-intensive computing infrastructure for large-scale data and text mining under the SEASR project (http://seasr.org). In July 2010, Xavier joined Google, where he works on spam and abuse fighting.




Hyper-heuristics and Cross-domain Optimization
Hyper-heuristics comprise a set of approaches which are motivated (at least in part) by the goal of automating the design of heuristic methods to solve hard computational search problems. An underlying strategic research challenge is to develop more generally applicable search methodologies. The term hyper-heuristic is relatively new; it was first used in 2000 to describe ‘heuristics to choose heuristics’ in the context of combinatorial optimization. However, the idea of automating the design of heuristics is not new; it can be traced back to the 1960s and 1970s, and a number of related research areas can be identified.
The definition of hyper-heuristics has recently been extended to refer to ‘a search method or learning mechanism for selecting or generating heuristics to solve computational search problems’. Two main hyper-heuristic categories can be considered: heuristic selection and heuristic generation. The distinguishing feature of hyper-heuristics is that they operate on a search space of heuristics (or heuristic components) rather than directly on the search space of solutions to the underlying problem being addressed.
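The selection category can be made concrete with a minimal sketch (the toy bit-string problem, the heuristic names, and the scoring scheme below are illustrative assumptions, not taken from the tutorial): the hyper-level search chooses which low-level heuristic to apply next, rather than operating on solutions directly.

```python
import random

def selection_hyperheuristic(solution, heuristics, evaluate, iterations=1000):
    """A minimal 'heuristics to choose heuristics' loop: low-level heuristics
    are picked in proportion to their past success; at the solution level,
    only improving moves are accepted."""
    scores = [1.0] * len(heuristics)
    best, best_fit = solution, evaluate(solution)
    for _ in range(iterations):
        # The hyper-level search: roulette-wheel choice over heuristic scores.
        idx = random.choices(range(len(heuristics)), weights=scores)[0]
        candidate = heuristics[idx](best)
        fit = evaluate(candidate)
        if fit > best_fit:              # accept improvements only
            best, best_fit = candidate, fit
            scores[idx] += 1.0          # reward the heuristic that succeeded
    return best, best_fit

# Toy domain: maximise the number of ones in a bit string.
def flip_one(bits):
    i = random.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

def flip_two(bits):
    return flip_one(flip_one(bits))

random.seed(0)
best, fitness = selection_hyperheuristic([0] * 20, [flip_one, flip_two], sum)
print(fitness)
```

Note that the same loop would work unchanged on a different domain: only the low-level heuristics and the evaluation function are problem-specific, which is exactly the separation hyper-heuristics aim for.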
This tutorial will discuss the motivation, definition and classification of hyper-heuristics. The tutorial will also cover a recent international research competition: the first ‘Cross-domain Heuristic Search Challenge’ (CHeSC 2011) (http://www.asap.cs.nott.ac.uk/chesc2011), organised by the ASAP research group in Nottingham. The challenge was to design a search algorithm that works well not only across different instances of the same problem, but also across different problem domains. Using a sporting metaphor, it is the 'Decathlon' challenge of search heuristics. To support the competition, and to promote research in this area, a software framework (HyFlex) was developed which features a common interface for dealing with different optimization problems. The tutorial will summarize the main features of HyFlex and the competition, and will present an analysis of the results, highlighting the design principles of the most successful cross-domain hyper-heuristics.

Gabriela Ochoa
Dr Gabriela Ochoa is a Senior Research Fellow in the Automated Scheduling, Optimisation and Planning (ASAP) research group, School of Computer Science, University of Nottingham, UK. She holds BEng and MRes degrees in Computer Science from the University Simon Bolivar, Venezuela, and a PhD in Artificial Intelligence from the University of Sussex, UK. Her research interests lie in the foundations and application of evolutionary algorithms, heuristic search methods, and machine learning, with emphasis on automated heuristic design, hyper-heuristics, and fitness landscape analysis. Among her research contributions are the study of error thresholds and of the roles of crossover and mate selection in evolutionary algorithms; the study of heuristic search spaces; and the incorporation of complex networks in the study of combinatorial landscapes. She was involved in founding the Self* Search (SS) track at GECCO, and proposed and co-organised the first Cross-domain Heuristic Search Challenge (CHeSC 2011), a research competition that received significant national and international attention.



Theory of Swarm Intelligence
Social animals as found in fish schools, bird flocks, bee hives, and ant colonies are able to solve highly complex problems in nature. This includes foraging for food, constructing astonishingly complex nests, and evading or defending against predators. Remarkably, these animals in many cases use very simple, decentralized communication mechanisms that do not require a single leader. This makes the animals perform surprisingly well, even in dynamically changing environments. The collective intelligence of such animals is known as swarm intelligence and it has inspired popular and very powerful optimization paradigms, including ant colony optimization (ACO) and particle swarm optimization (PSO).
The theory of swarm intelligence has made rapid progress in the last 5 years. Following a very successful line of research in evolutionary computation, results on the computational complexity of swarm intelligence algorithms have appeared. These results shed light on the working principles of swarm intelligence algorithms, help to identify the impact of parameters and other design choices on performance, and contribute to a solid theoretical foundation of swarm intelligence.
This tutorial will give a comprehensive overview of recent theoretical results on swarm intelligence algorithms, with an emphasis on their computational complexity. In particular, it will be shown how techniques for the analysis of evolutionary algorithms can be used to analyze swarm intelligence algorithms and how the performance of swarm intelligence algorithms compares to that of evolutionary algorithms. The tutorial will be divided into a first, larger part on ACO and a second, smaller part on PSO.
For ACO we will consider simple variants of the MAX-MIN ant system. Investigations of example functions in pseudo-Boolean optimization demonstrate that the choice of the pheromone update strategy and the evaporation rate has a drastic impact on the running time, even for very simple functions like ONEMAX. We will also elaborate on the effect of using local search within the ACO framework. In terms of combinatorial optimization problems, we will look at the performance of ACO for minimum spanning trees, shortest path problems, and the TSP.
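To give a feel for the kind of algorithm these ACO results concern, here is a minimal MMAS-style sketch for ONEMAX (a simplified illustration; the parameter values are arbitrary assumptions, and the evaporation rate rho is precisely the design choice whose impact the theory analyses):

```python
import random

def mmas_onemax(n=20, rho=0.5, iterations=2000, seed=1):
    """Simplified MAX-MIN ant system on ONEMAX: one pheromone value per bit,
    kept within the MAX-MIN bounds [1/n, 1 - 1/n]; the best-so-far solution
    reinforces the pheromone trail each iteration."""
    random.seed(seed)
    tau_min, tau_max = 1.0 / n, 1.0 - 1.0 / n
    tau = [0.5] * n                      # pheromone = probability of setting bit i to 1
    best, best_fit = None, -1
    for _ in range(iterations):
        # An "ant" constructs a solution bit by bit from the pheromones.
        x = [1 if random.random() < tau[i] else 0 for i in range(n)]
        fit = sum(x)                     # ONEMAX fitness: number of ones
        if fit > best_fit:
            best, best_fit = x, fit
        # Evaporate, reinforce the best-so-far solution, clamp to the bounds.
        for i in range(n):
            tau[i] = (1 - rho) * tau[i] + rho * best[i]
            tau[i] = min(tau_max, max(tau_min, tau[i]))
    return best_fit

print(mmas_onemax())  # typically reaches the optimum n on this small instance
```

Rerunning the sketch with a much smaller rho slows the pheromone adaptation visibly, which is the kind of parameter effect the runtime analyses make rigorous.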
For particle swarm optimization, the tutorial will cover results on PSO for pseudo-Boolean optimization as well as a discussion of theoretical results in continuous spaces.

Dirk Sudholt
He obtained his Diplom (Master's) degree in 2004 and his PhD in computer science in 2008 from the Technische Universitaet Dortmund, Germany, under the supervision of Prof. Ingo Wegener. He has held postdoc positions at the International Computer Science Institute in Berkeley, California, working in the Algorithms group led by Prof. Richard M. Karp, and at the University of Birmingham, UK, working with Prof. Xin Yao. Since January 2012 he has been a Lecturer at the University of Sheffield, UK.
His research focuses on the computational complexity of randomized search heuristics such as evolutionary algorithms, in particular memetic and parallel variants, and swarm intelligence algorithms like ant colony optimization and particle swarm optimization. He has won 5 best paper awards at the leading conferences GECCO and PPSN.

Drift Analysis
Drift analysis is among the most powerful theoretical tools available for estimating the optimisation time of evolutionary algorithms (EAs). Informally, it shows how the challenging problem of predicting the longterm behaviour of an EA can be reduced to the often trivial problem of describing how the state of the EA changes during one generation.
Drift analysis has dramatically simplified the analysis of EAs, and has led to significant advances in the theory of evolutionary computation. Many of the most important results about the optimisation time of EAs were obtained with the help of drift analysis.
This tutorial gives a gentle, yet comprehensive, introduction to drift analysis, assuming only basic knowledge of probability theory. We approach the area by examining a few simple drift theorems that are both straightforward to apply, and that yield useful bounds on the expected optimisation time of basic EAs. We then turn to more sophisticated drift theorems that, while needing stronger conditions, allow us to make very precise statements about the success probability of EAs. Finally, we show how to investigate complex EAs with the aid of a recently discovered population drift theorem.
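To give a flavour of the simplest such result, the additive drift theorem can be paraphrased as follows (an informal statement for orientation, not the tutorial's exact formulation):

```latex
% Let X_t >= 0 measure the distance of the EA from the optimum at
% generation t, and let T be the first generation with X_T = 0.
% If there is a constant delta > 0 such that the expected progress
% per generation is at least delta, i.e.
\[
  \mathbb{E}[\,X_t - X_{t+1} \mid X_t = s\,] \;\ge\; \delta
  \quad \text{for all } s > 0,
\]
% then the expected optimisation time is bounded by
\[
  \mathbb{E}[\,T \mid X_0\,] \;\le\; \frac{X_0}{\delta}.
\]
```

This is exactly the reduction described above: bounding the long-term optimisation time reduces to bounding the expected one-generation change of a distance measure.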

Per Kristian Lehre
Dr Lehre received MSc and PhD degrees in Computer Science from the Norwegian University of Science and Technology (NTNU) in Trondheim, Norway. He finished the PhD in 2006, and joined the School of Computer Science at The University of Birmingham, UK, as a Research Fellow in January 2007.
From April 2010 he was a Postdoctoral Fellow at DTU Informatics, Technical University of Denmark in Lyngby, Denmark. Since September 2011 he has been a Lecturer in the School of Computer Science at the University of Nottingham, UK. Dr Lehre's research interests are in theoretical aspects of nature-inspired search heuristics, in particular runtime analysis of population-based evolutionary algorithms.
University of Nottingham Nottingham NG8 1BB, UK http://www.cs.nott.ac.uk/~pkl Phone: +44 0115 82 32825 

Statistical Analysis of Optimization Algorithms with R
Based on our experiences from several (rather theoretical) tutorials and workshops devoted to the experimental analysis of algorithms at the world's leading conferences in the field of Computational Intelligence (GECCO, CEC, PPSN, MIC, etc.), we provide a practical, handson tutorial for the statistical analysis of optimization algorithms. We feel the strong need for teaching, demonstrating, and discussing good practice in experimental research.
Our tutorial is partly based on and will be held in close cooperation with Mark Wineberg's introductory tutorial "Statistical Analysis for Evolutionary Computation", to be abbreviated as "Stats I" in the following. More specifically, it:
(1) exemplifies and extends ideas presented during Stats I
(2) demonstrates how to analyze results from real experimental studies, e.g., experimental studies in EC
(3) gives a comprehensive introduction to the R language
(4) introduces the powerful GUI RStudio (http://rstudio.org)
(5) discusses R alternatives such as Excel, MATLAB, and JMP/SAS
(6) will be held in an interactive manner, i.e., the analyses will be performed in real time.
While Stats I introduces the basic definitions from statistics, here we actually apply these tools to real problems.
Contents
(1) Interfacing with R
(2) R basics and technical details
(3) Case study 1: Analyzing algorithms
(4) Design of Experiments: How to plan and set up experiments in an efficient manner
(5) Case study 2: Tuning algorithms interactively
(6) Some pitfalls in experimentation
(7) Comparison of R and related approaches, e.g., Excel or JMP based analyses
(8) R based packages for an automated analysis and tuning, e.g., sequential parameter optimization
(9) Reporting results. Automated report generation using Sweave
(10) Discussion
The tutorial will be held at an introductory level; however, Stats I attendance (or comparable experience) is mandatory. R source code examples and further materials will be provided in advance. Participants can perform experiments on their own computers during the workshop.

Thomas BartzBeielstein
Dr. Thomas Bartz-Beielstein is a professor of Applied Mathematics at the Faculty of Computer Science and Engineering Science at CUAS (Cologne University of Applied Sciences / Fachhochschule Koeln). He is head of the research center CIOP (Computational Intelligence, Optimization & Data Mining) at CUAS. His research interests include optimization, simulation, and statistical analysis of complex real-world problems. Thomas is an R user with more than ten years of experience. He is the driving force behind "sequential parameter optimization" (SPO), which presents a framework for the statistical analysis of algorithms [1]. The related toolbox is freely available via CRAN, see http://cran.r-project.org/web/packages/SPOT. SPOT is applied as a tuner for numerous optimization algorithms such as evolution strategies, genetic programming, differential evolution, and particle swarm optimization. Thomas worked as a research assistant in the Collaborative Research Center "Design and Management of Complex Technical Processes and Systems by Means of Computational Intelligence Methods" at the Technical University of Dortmund, Chair of Algorithm Engineering. He wrote his doctoral thesis "New Experimentalism Applied to Evolutionary Computation" under the supervision of Prof. Hans-Paul Schwefel. The recent book "Experimental Methods for the Analysis of Optimization Algorithms" (edited together with Marco Chiarandini, Luis Paquete, and Mike Preuss) is published by Springer (http://www.springer.com/9783642025372).

Mike Preuss
Mike is a Research Associate at the Computer Science Department, TU Dortmund, Germany, where he also received his Diploma degree in 1998. His research interests focus on evolutionary algorithms for real-valued problems, namely on multimodal and multi-objective niching and on the experimental methodology for (non-deterministic) optimization algorithms. He is currently working on the adaptability and applicability of computational intelligence techniques in various engineering domains and computer games, pushing forward modern approaches to experimental analysis such as Exploratory Landscape Analysis (ELA) and innovative uses of surrogate models. He was involved in founding the EvoGames track at Evo* and the Digital Entertainment Technologies and Arts (DETA) track at GECCO. Mike has been using R for research and teaching for several years.

Algorithm and Experiment Design with HeuristicLab - An Open Source Optimization Environment for Research and Education
The proposed tutorial demonstrates how to apply and analyze metaheuristic optimization algorithms using the HeuristicLab [1] open source optimization environment. It will be shown how to parameterize and execute evolutionary algorithms to solve combinatorial optimization problems (traveling salesman, vehicle routing) as well as data analysis problems (regression, classification). The attendees will learn how to assemble different algorithms and parameter settings into large-scale optimization experiments and how to execute such experiments on multi-core or cluster systems. Furthermore, the experiment results will be compared using HeuristicLab’s interactive charts for visual and statistical analysis to gain knowledge from the executed test runs. To complete the tutorial, it will be briefly sketched how HeuristicLab can be extended with further optimization problems and how custom optimization algorithms can be modeled using the graphical algorithm designer.
[1] http://dev.heuristiclab.com

Stefan Wagner
He received his MSc in computer science in 2004 and his PhD in technical sciences in 2009, both from the Johannes Kepler University Linz, Austria. From 2005 to 2009 he worked as an associate professor for software project engineering, and since 2009 as a full professor for complex software systems, at the University of Applied Sciences Upper Austria, Campus Hagenberg, Austria. Dr. Wagner is one of the founders of the research group Heuristic and Evolutionary Algorithms Laboratory (HEAL) and is the project manager and head developer of the HeuristicLab optimization environment.

The Geometry of Evolutionary Algorithms
The aim of the tutorial is to introduce a formal, unified point of view on evolutionary algorithms across representations based on geometric ideas, and to present the benefits for both theory and practice brought by this novel perspective.
The key idea behind the geometric framework is that search operators are not defined directly on solution representations, but on the structure of the search space by means of simple geometric shapes, such as balls and segments, that delimit the region of space containing all possible offspring with respect to the location of their parents. These geometric definitions can then be rewritten as equivalent but operational definitions involving the underlying representation. For example, the operator termed "uniform geometric crossover" is defined so as to produce offspring that are uniformly distributed in the segment between the parents. When the uniform geometric crossover is instantiated to the space of real vectors endowed with the Euclidean distance, and to the space of binary strings with the Hamming distance, it coincides with familiar operators: the blend crossover for real vectors and the uniform crossover for binary strings, respectively. This natural duality of geometric search operators allows us to define exactly the same search operator across representations in a strong mathematical sense. This possibility forms the basis of a geometric framework for the unification of evolutionary algorithms across representations.
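The duality can be sketched in a few lines of illustrative code (not from the tutorial): the same geometric definition, "offspring lie on the segment between the parents", instantiates to blend crossover under the Euclidean distance and to uniform crossover under the Hamming distance.

```python
import random

def blend_crossover(p1, p2):
    """Uniform geometric crossover in Euclidean space: the offspring is a
    uniformly random point on the line segment between the parent vectors."""
    t = random.random()
    return [a + t * (b - a) for a, b in zip(p1, p2)]

def uniform_crossover(p1, p2):
    """Uniform geometric crossover in Hamming space: each gene is taken from
    either parent with equal probability, so the offspring lies on a shortest
    Hamming path (a 'segment') between the parents."""
    return [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]

random.seed(0)
child_r = blend_crossover([0.0, 0.0, 0.0], [1.0, 2.0, 3.0])
child_b = uniform_crossover([0, 0, 1, 1], [1, 0, 0, 1])
# In both cases the child is geometrically 'between' its parents:
# component-wise between the real-valued parents, and agreeing with both
# binary parents wherever they agree.
print(child_r, child_b)
```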

Alberto Moraglio
He is currently a Postdoctoral Research Fellow in the School of Computer Science at the University of Birmingham. He holds a PhD in Computer Science from the University of Essex and a Master's (Laurea) in Computer Engineering from the Polytechnic University of Turin, Italy. Before his current appointment, he worked as a researcher for Hewlett-Packard Research Laboratories, as an Assistant Professor at the University of Coimbra, Portugal, and as a postdoctoral research associate affiliated with the School of Computing and the Centre for Reasoning, University of Kent, UK. He has ~50 peer-reviewed publications in journals and conference proceedings, almost exclusively on the geometric view of evolutionary algorithms, which he developed in his PhD study and postdoctoral research. His current research interest is any subject in Evolutionary Computation, Machine Learning, Mathematical Optimization and Artificial Intelligence to which he can successfully apply his geometric theory.



Real-World Data Modeling
There are many industrial and corporate data sources: financial, manufacturing, experimental, processing, marketing, quality control, etc. Capturing the value in that data requires more than fitting trivial models or visually exploring the data. Rather, we must efficiently isolate driving variables, confirm or reject potential outliers, and build models which are both accurate and trustable. Fortunately, multi-objective genetic programming (a.k.a. ParetoGP) allows us to achieve this objective. ParetoGP will be the foundation technology in this tutorial; however, we will address the entire modeling process, including data balancing, outlier detection, and model usage/exploitation, as well as model development.
Conventional data modeling assumes an a priori model form and then tweaks coefficients to get the best fit. Conversely, symbolic regression hypothesizes and assesses many different model forms. Many are rejected. Some are deemed interesting and explored and refined. The net result is that the data defines appropriate model structures (and coefficients).
"Appropriate" in this case means a balance between accuracy and simplicity. Considering both of these objectives avoids models which overfit or underfit the data. We can also extract meaningful results from fat data arrays (more variables than records) or data sets having correlated inputs. We can also gain insight by examining the selected variables, variable combinations and structures of the developed model expressions. Furthermore, because diverse model structures are explored during the model search, we can assemble ensembles of highquality but diverse models; these enable building trustable models capable of detecting extrapolation into new operating regions or fundamental changes in the underlying system.
In addition to covering the basic theory of ParetoGP, we illustrate key points using real-world industrial data modeling case studies, and review best practices of industrial data modeling. Current economic conditions demand maximum efficiency in developing and exploiting maximal-quality models; ParetoGP has been used for applications ranging from energy trading to active design-of-experiments to plant troubleshooting to patent litigation modeling to ...

Mark Kotanchek
Mark's diverse academic background (B.S. in Engineering Science, M.Eng. in Acoustics, Ph.D. in Aerospace Engineering, Senior Member of IEEE) is consistent with the diversity of his professional experience (ladder design, defense system design, signal processing research, technology trend analysis, IT project management, chemical plant troubleshooting, high-throughput biology, energy trading system development, etc.). He is recognized as a global leader in genetic programming theory and application. He founded Evolved Analytics in 2005 with the goal of developing tools and systems to address the data deluge of the modern world and the need to convert that data into actionable insight and understanding.


More info:
For enquiries regarding tutorials contact gecco2012tutorials@sigevolution.org
For general help and administrative matters contact GECCO support at gecco2012@sigevolution.org
