Free Tutorials and Workshops: Two days of free tutorials and workshops (included with conference registration) presented by some of the world's foremost experts in topics of interest to genetic and evolutionary computation researchers and practitioners.

Proceedings will be published and distributed to all registered attendees.

Tutorials and workshops will be presented on Saturday, July 12 and Sunday, July 13, 2008.
View the tutorial schedule.


Planned Free Tutorials


Introductory

 
Genetic Algorithms (Erik Goodman)
Genetic Programming (John Koza)
Evolution Strategies (Thomas Bäck)
Evolutionary Computation: A Unified Approach (Kenneth De Jong)
Particle Swarm Optimization (Riccardo Poli)
Probabilistic Model-Building GAs (Martin Pelikan)
Grammatical Evolution (Conor Ryan, Atif Azad)
Financial Evolutionary Computation (Christopher D. Clack)
Learning Classifier Systems (Martin V. Butz)
Coevolution (Anthony Bucci, Paul Wiegand, Sevan Ficici)


Advanced

 
GP Theory (Riccardo Poli)
No Free Lunch (Darrell Whitley)
Bioinformatics (Jason Moore)
Evolutionary Multiobjective Optimization (Eckart Zitzler, Kalyanmoy Deb)
Representations for Evolutionary Algorithms (Franz Rothlauf)
Evolutionary Practical Optimization (Kalyanmoy Deb)
Computational Complexity and EC (Thomas Jansen, Frank Neumann)
GA Theory (Jonathan Rowe)
Experimental Research in EC (Mike Preuss, Thomas Bartz-Beielstein)
Advanced Coevolution (Anthony Bucci, Paul Wiegand, Sevan Ficici)
Constraint-Handling Techniques used with Evolutionary Algorithms (Carlos Coello Coello)
An Introduction to Statistical Analysis for Evolutionary Computation (Mark Wineberg)


Specialized Techniques and Applications

 
Symbolic Regression (Maarten Keijzer)
Quantum Computing (Lee Spector)
Evolutionary Multiobjective Combinatorial Optimization (EMCO) (Rajeev Kumar)
Theory of Randomized Search Heuristics in Combinatorial Optimization (Carsten Witt)
Evolutionary Design (Ian Parmee)
Evolutionary Computation and Games (Moshe Sipper)
An Information Perspective on Evolutionary Computation (Yossi Borenstein)
Evolution Strategies and related Estimation of Distribution Algorithms (Nikolaus Hansen, Anne Auger)
Cartesian Genetic Programming (Julian F. Miller, Simon Harding)
EA-based Test and Verification of Microprocessors (Giovanni Squillero)
Evolutionary Computer Vision (Stefano Cagnoni)
Generative and Developmental Systems (Kenneth O. Stanley)
Evolving Neural Networks (Risto Miikkulainen, Kenneth O. Stanley)

 


Introductory

 


Genetic Algorithms:

Description of Tutorial

The Introduction to Genetic Algorithms tutorial is aimed at GECCO attendees with limited knowledge of genetic algorithms. It will start "at the beginning," first describing a "classical" genetic algorithm in terms of the biological principles on which it is loosely based, and then presenting some of the fundamental results that describe its performance. It will cover many variations on the classical model, some successful applications of genetic algorithms, and advances that are making genetic algorithms more useful.
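As an aside (not part of the tutorial materials), the classical cycle of selection, crossover, and mutation over a population of bit strings can be sketched in a few lines. The OneMax problem, parameter values, and operator choices below are illustrative assumptions, not from the tutorial.

```python
import random

# A minimal "classical" genetic algorithm on OneMax (maximize the
# number of 1-bits). All parameters are illustrative choices.

def fitness(ind):
    return sum(ind)

def evolve(n_bits=20, pop_size=30, generations=40,
           p_cross=0.9, p_mut=0.02, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select()[:], select()[:]
            if rng.random() < p_cross:
                # One-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                # Bit-flip mutation
                for i in range(n_bits):
                    if rng.random() < p_mut:
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

best = evolve()
```

With these settings the population reliably approaches the all-ones string; the same loop structure underlies the many variations the tutorial covers.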


Erik Goodman

Biosketch:

Erik Goodman is a Professor of Electrical and Computer Engineering and of Mechanical Engineering at Michigan State University. He studied genetic algorithms under John Holland at the University of Michigan, before they had been named. His use of a genetic algorithm in 1971 to solve for 40 coefficients of a highly nonlinear model of a bacterial cell was the first known GA application on a real-world problem, and ran for nearly a year on a dedicated computer. He has used genetic algorithms in his research ever since, including for parameterization of complex ecosystem models, for evolution of cooperative behavior in artificial life, for factory layout and scheduling, for protein folding and ligand docking, for design of shape and layup of composite structures, and for data mining and pattern classification.

Much of his recent research has been in development of new types of parallel genetic algorithms and in design methods for mechatronic systems using genetic programming. He is co-founder and VP Technology of Red Cedar Technology, which provides tools for automated engineering design based on evolutionary computation. He chaired ICGA-97 and GECCO-2001, chaired GECCO’s sponsoring organization, ISGEC, from 2001-2004, and he served as Chair of ACM SIGEVO, its successor. He continues serving on the SIGEVO executive committee.


Genetic Programming:


John R. Koza

Biosketch:

John R. Koza received his Ph.D. in computer science from the University of Michigan in 1972, working under John Holland. From 1973 through 1987, he was co-founder, chairman, and CEO of Scientific Games Inc. where he co-invented the rub-off instant lottery ticket used by state lotteries. He has taught a course on genetic algorithms and genetic programming at Stanford University since 1988, where he is currently a Consulting Professor in the Biomedical Informatics Program in the Department of Medicine and in the Department of Electrical Engineering at Stanford University. He is Vice-Chair of SIGEVO, and has been on the Business Committee of GECCO since it started in 1999. He has written four books on genetic programming and one on presidential elections.

 


Evolutionary Computation: A Unified Approach:


Description of Tutorial

The field of Evolutionary Computation (EC) has experienced tremendous growth over the past 15 years, resulting in a wide variety of evolutionary algorithms and applications. This variety poses an interesting dilemma for many practitioners: with so many algorithms and approaches, it is often hard to see the relationships between them, assess their strengths and weaknesses, and make good choices for new application areas.

This tutorial is intended to give an overview of EC via a general framework that can help compare and contrast approaches, encourage crossbreeding, and facilitate intelligent design choices. The use of this framework is then illustrated by showing how traditional EAs can be compared and contrasted with it, and how new EAs can be effectively designed using it.

Finally, the framework is used to identify some important open issues that need further research.


Kenneth A. De Jong

Biosketch:

Kenneth A. De Jong received his Ph.D. in computer science from the University of Michigan in 1975. He joined George Mason University in 1984 and is currently a Professor of Computer Science, head of the Evolutionary Computation laboratory, and the associate director of the Krasnow Institute. His research interests include genetic algorithms, evolutionary computation, machine learning, and adaptive systems. He is currently involved in research projects involving the development of new evolutionary algorithm (EA) theory, the use of EAs as heuristics for NP-hard problems, and the application of EAs to the problem of learning task programs in domains such as robotics, diagnostics, navigation and game playing.

He is also interested in experience-based learning, in which systems must improve their performance while actually performing the desired tasks in environments not directly under their control or the control of a benevolent teacher. Support for these projects is provided by DARPA, ONR, and NRL. He is an active member of the evolutionary computation research community and has been involved in organizing many of the workshops and conferences in this area. He is the founding editor-in-chief of the journal Evolutionary Computation (MIT Press) and a member of the board of ACM SIGEVO. He is the recipient of an IEEE Pioneer Award in the field of Evolutionary Computation and a lifetime achievement award from the Evolutionary Programming Society.


Probabilistic Model-Building Genetic Algorithms (PMBGAs)

Description of Tutorial

Probabilistic model-building genetic algorithms (PMBGAs) replace the traditional variation operators of genetic and evolutionary algorithms by (1) building a probabilistic model of promising solutions and (2) sampling the built model to generate new candidate solutions. PMBGAs are also known as estimation of distribution algorithms (EDAs) and iterated density-estimation algorithms (IDEAs).

Replacing traditional crossover and mutation operators by building and sampling a probabilistic model of promising solutions enables the use of machine learning techniques for automatic discovery of problem regularities and exploitation of these regularities for effective exploration of the search space. Using machine learning in optimization enables the design of optimization techniques that can automatically adapt to the given problem. There are many successful applications of PMBGAs, for example, Ising spin glasses in 2D and 3D, graph partitioning, MAXSAT, feature subset selection, forest management, groundwater remediation design, telecommunication network design, antenna design, and scheduling.
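As an aside (not part of the tutorial), the build-and-sample loop just described can be sketched with the simplest probabilistic model, a vector of independent bit probabilities in the style of UMDA. The OneMax problem and all parameters are illustrative assumptions.

```python
import random

# A minimal univariate EDA (UMDA-style) on OneMax: (1) build a model
# of promising solutions, (2) sample it to generate new candidates.
# Problem and parameters are illustrative choices.

def umda(n_bits=20, pop_size=60, n_select=20, generations=30, seed=1):
    rng = random.Random(seed)
    p = [0.5] * n_bits  # marginal probability of a 1 for each bit
    for _ in range(generations):
        # (2) sample the model to create candidate solutions
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
        # truncation selection of promising solutions
        pop.sort(key=sum, reverse=True)
        elite = pop[:n_select]
        # (1) rebuild the model from the promising solutions,
        # clamped away from 0/1 so sampling can still explore
        p = [min(0.95, max(0.05, sum(ind[i] for ind in elite) / n_select))
             for i in range(n_bits)]
    return max(pop, key=sum)

best = umda()
```

Replacing the independent-bit model with a Bayesian network over the variables yields BOA-style algorithms, which is where the machine learning view described above becomes essential.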

The tutorial Probabilistic Model-Building GAs will provide a gentle introduction to PMBGAs with an overview of major research directions in this area. Strengths and weaknesses of different PMBGAs will be discussed and suggestions will be provided to help practitioners to choose the best PMBGA for their problem.


Martin Pelikan

Biosketch:

Martin Pelikan received his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 2002. Since 2003 he has been an assistant professor in the Department of Mathematics and Computer Science at the University of Missouri in St. Louis. In 2006 Pelikan founded the Missouri Estimation of Distribution Algorithms Laboratory (MEDAL). Prior to joining the University of Missouri, he worked at the Illinois Genetic Algorithms Laboratory (IlliGAL), the German National Center for Information Technology in Sankt Augustin, the Slovak University of Technology in Bratislava, and the Swiss Federal Institute of Technology (ETH) in Zurich.

Pelikan has worked as a researcher in genetic and evolutionary computation since 1995. His most important contributions to genetic and evolutionary computation are the Bayesian optimization algorithm (BOA), the hierarchical BOA (hBOA), the scalability theory for BOA and hBOA, and the efficiency enhancement techniques for BOA and hBOA. His current research focuses on extending BOA and hBOA to other problem domains, applying genetic and evolutionary algorithms to real-world problems with the focus on physics and machine learning, and designing new efficiency enhancement techniques for BOA and other evolutionary algorithms.


Financial Evolutionary Computing:

Description of Tutorial

Financial investment and trading have been transformed through the application of mathematical analysis and computer technology. The research problems posed by financial computing are extremely challenging, taxing both mathematicians and computer scientists. While traditional computational techniques have yet to provide an efficient means for numerical evaluation of the complex equations produced by financial mathematicians, evolutionary computing has been remarkably effective and Financial Evolutionary Computing is currently a fertile area of research.

The Introduction to Financial Evolutionary Computing (FEC) Tutorial is aimed at GECCO attendees with limited knowledge of finance. The tutorial will introduce the area of FEC, provide a basic understanding of trading and investment, identify some of the main research challenges, describe who is working in the area, and explain how to get started on FEC research. Topics will include, for example, stock selection, calculation of value at risk, and modelling financial markets.


Christopher D. Clack

Biosketch:

Christopher D. Clack is Director of Financial Computing at UCL. He founded UCL's Virtual Trading Floor and has attracted funds exceeding £ 2.1 million in the last three years. He is Coordinator of the PROFIT European Network in Financial Computing, which includes UCL, Deutsche Bank, Reuters, and the universities of Athens, Moscow, Prague, and Sofia. He was conference chair at Trade Tech Architecture 2008, is a regular panel member at both industry and academic conferences and workshops, and is also presenting a tutorial on Financial Evolutionary Computing at GECCO 2008. His research team focuses on applying Genetic Computation and multi-agent systems to real-world problems in finance, and his work on GP robustness in highly volatile markets won a Best Paper award at GECCO 2007.

He has twenty years' experience of consulting in Financial Computing, from settlement systems to portfolio optimization, and has established very close partnerships with Credit Suisse, Goldman Sachs, Merrill Lynch, Morgan Stanley, Reuters, and the London Stock Exchange. This provides unrivalled exposure to the most pressing technology problems in finance, coupled with invaluable access to real data.


Learning Classifier Systems

Description of Tutorial

When Learning Classifier Systems (LCSs) were introduced by John H. Holland in the 1970s, the intention was the design of a highly adaptive cognitive system. Since the introduction of the accuracy-based XCS classifier system by Stewart W. Wilson in 1995, and the modular analysis of several LCSs thereafter, it has been shown that LCSs can effectively solve data-mining problems, reinforcement learning problems, other predictive problems, and cognitive control problems. Their performance is competitive with other machine learning methods, but learning takes place online and is often more flexible and highly adaptive. Moreover, system knowledge can be extracted easily.

The Learning Classifier System tutorial provides a gentle introduction to LCSs and their general functioning. It surveys the current theoretical understanding as well as the most successful LCS applications to various problem types. The tutorial ends with a discussion of the most promising areas for future applications.


Martin V. Butz

Biosketch:

Martin V. Butz received his Diploma in computer science from the University of Wuerzburg, Germany in 2001 with the thesis topic: “Anticipatory Learning Classifier Systems”. He then moved on to the University of Illinois at Urbana-Champaign for his PhD studies under the supervision of David E. Goldberg. Butz finished his PhD in computer science on “Rule-based evolutionary online learning systems: Learning Bounds, Classification, and Prediction” in October, 2004. The thesis puts forward a modular, facet-wise system analysis for Learning Classifier Systems (LCSs) and analyzes and enhances the XCS classifier system.

Since October 2004, Butz has been working back at the University of Würzburg, in the Department of Cognitive Psychology III, on the interdisciplinary cognitive systems project "MindRACES: From reactive to anticipatory cognitive embodied systems". Butz is the co-founder of the "Anticipatory Behavior in Adaptive Learning Systems (ABiALS)" workshop series and is currently also co-organizing the "International Workshop on Learning Classifier Systems (IWLCS 2007)". Moreover, Butz has published and co-edited four books on learning classifier systems and anticipatory systems. Currently, Butz is focusing on the design of highly flexible and adaptive cognitive system architectures, which are inspired by studies in cognitive psychology, evolutionary biology, and behavioral neuroscience.

http://www.psychologie.uni-wuerzburg.de/i3pages/butz


Coevolution

Description of Tutorial

Recent advances in coevolutionary algorithm design and theory stimulate a need for tutorials that provide insight into these advances. We present material on coevolutionary computation in two tutorials; we emphasize core concepts, but also flesh them out with applications.

The Introductory Tutorial on Coevolution is aimed at those with basic working knowledge of evolutionary computation, but limited knowledge of coevolution. This tutorial will outline the potential advantages of coevolution, explain terminology, underscore the key ideas that help frame our understanding of these methods, develop a simple algorithm with variants, then explore some of the issues and phenomena that are particular to coevolution.

The Advanced Tutorial on Coevolution is aimed at attendees who have a working knowledge of coevolution, but who wish to learn more about the modern algorithmic and analytical approaches to coevolutionary computation. We will present a more detailed discussion of how coevolutionary algorithms work (or fail to). We will also examine several successful modern coevolutionary algorithms and deconstruct them to reveal the principles behind their successful operation.

Specific topics that will be covered over the two tutorials include, among others, solution concepts, monotonic progress, underlying objectives, informativeness, gradient (and its loss), monitoring methods, methods of interaction and evaluation, and key representation issues unique to coevolution.

 

Anthony Bucci received a Ph.D. in computer science from Brandeis University in 2007. He is presently a scientist at Icosystem Corporation in Cambridge, Massachusetts. His interests include coevolutionary algorithms, evolutionary computation, machine learning, artificial intelligence and agent-based modeling, with a particular emphasis on modeling and understanding strategic behavior.

 

Paul Wiegand received his Ph.D. from George Mason University in 2004 and currently holds a research faculty position at the Institute for Simulation and Training, with a joint faculty appointment to the Department of Computer Science at the University of Central Florida. His postdoctoral research was conducted at the Navy Center for Applied Research in Artificial Intelligence at the U.S. Naval Research Laboratory. Dr. Wiegand's research interests include methods for designing and applying effective learning algorithms and representations for generating and modeling robust, heterogeneous multiagent team behaviors. He currently directs the Natural Computation and Coadaptive Systems Laboratory at UCF.



 



Sevan G. Ficici is currently a Post-Doctoral Fellow in computer science at Harvard University; he obtained his Ph.D. from Brandeis University working under Jordan Pollack. He has worked broadly in the field of multi-agent systems and learning for over a decade. His Ph.D. work focused on coevolutionary learning in multi-agent systems. At Harvard, he is working with Avi Pfeffer to develop computational models of human behavior in multi-agent domains and to construct computer agents that use these models to interact successfully with human participants. He was chair of the coevolution track at GECCO 2006.





Advanced

 


Genetic Programming Theory

Description of Tutorial

GP is a complex adaptive system with zillions of degrees of freedom. Experimental studies require the experimenter to choose which problems, parameter settings and descriptors to use. Plotting the wrong data increases the confusion about GP's behaviour rather than clarifying it.

We treat GP as a search process and explain its behaviour by considering the search space, in terms of its size, its limiting fitness distributions and also the halting probability. Then we shall use modern schema theory to characterise GP search. We finish with lessons and implications.

Knowledge of genetic programming will be assumed.


Riccardo Poli

Biosketches:

Riccardo Poli is a professor in the Department of Computer Science at Essex. His main research interests include genetic programming (GP) and the theory of evolutionary algorithms. In July 2003 Prof. Poli was elected a Fellow of the International Society for Genetic and Evolutionary Computation (ISGEC) "in recognition of sustained and significant contributions to the field and the community". He has published over 180 refereed papers on evolutionary algorithms (particularly genetic programming), neural networks, and image/signal processing. With W. B. Langdon, he has co-authored the book Foundations of Genetic Programming. He has been co-founder and co-chair of EuroGP, the European Conference on Genetic Programming, in 1998, 1999, 2000 and 2003.

Prof. Poli was the chair of the GP theme at the Genetic and Evolutionary Computation Conference (GECCO) 2002 (the largest conference in the field) and was co-chair of the prestigious Foundations of Genetic Algorithms (FOGA) Workshop in 2002. He was the first non-US general chair of GECCO, in 2004, and served as a member of the business committee for GECCO 2005. He was technical chair of the International Workshop on Ant Colony Optimisation and Swarm Intelligence (ANTS 2006) and competition chair for GECCO 2006. He is an associate editor of Evolutionary Computation (MIT Press), Genetic Programming and Evolvable Machines (Springer), and the International Journal of Computational Intelligence Research (IJCIR). Prof. Poli has been a programme committee member of over 50 international events. He has presented invited tutorials on GP at 10 international conferences. He is a member of the EPSRC Peer Review College and has attracted, as principal investigator or co-investigator, funding of over GBP 1.8M from EPSRC, DERA, the Leverhulme Trust, the Royal Society, and others.


No Free Lunch:

Description of Tutorial

The "No Free Lunch" theorem is now more than 10 years old; it asserts that all possible search algorithms have the same expected behavior over all possible discrete search problems. But there is more than this to No Free Lunch (NFL). NFL also holds over finite sets of functions, some of which are compressible and some of which are not. A focused form of NFL also holds for very small finite groups of parameter optimization problems encoded using Binary and Gray Code representations. Some observations that hold when NFL is applied over all possible discrete functions are not true when NFL holds over a finite set.

Variants of local search are capable of beating random enumeration (and sidestepping NFL) over large classes of problems of bounded complexity. The talk will also explore what NFL tells us about how to construct, compare, and evaluate search algorithms.
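As a small aside (not from the tutorial), the basic NFL claim can be checked numerically on a tiny domain: over all functions from a four-point domain into {0, 1}, any two fixed non-revisiting search orders need the same average number of evaluations to first hit a maximum. The two search orders below are arbitrary.

```python
from itertools import product

# Enumerate ALL 16 functions f: {0,1,2,3} -> {0,1} and compare two
# arbitrary non-revisiting search orders. NFL predicts identical
# average effort to locate a maximum.

orders = [(0, 1, 2, 3), (3, 1, 0, 2)]

def evals_to_max(f, order):
    # Number of evaluations until a maximal value of f is first seen.
    # Always returns, since the maximum is attained at some point.
    best = max(f)
    for k, x in enumerate(order, 1):
        if f[x] == best:
            return k

averages = {}
for order in orders:
    total = sum(evals_to_max(f, order)
                for f in product((0, 1), repeat=4))
    averages[order] = total / 16
```

Both orders average 27/16 = 1.6875 evaluations; the averages differ only when the set of functions is restricted, which is exactly the territory the tutorial explores.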



Darrell Whitley

Biosketch:

Darrell Whitley is Professor and Chair of Computer Science at Colorado State University. He was the Chair of the Governing Board of the International Society for Genetic Algorithms from 1993 to 1997. He was editor-in-chief of the journal Evolutionary Computation from 1997 to 2003.


Bioinformatics:

Description of Tutorial

Bioinformatics is an interdisciplinary field of study that blends computer science and statistics with the biological sciences. Important objectives of bioinformatics include the storage, management and retrieval of high-dimensional biological data and the detection, characterization, and interpretation of patterns in such data. The goal of this tutorial is to provide an entry-level introduction to bioinformatics for those hoping to apply genetic and evolutionary computation to solving complex biological problems. Specific examples of how methods such as genetic programming have been applied in bioinformatics will be presented.



Jason H. Moore

Biosketch:

Jason H. Moore, Ph.D.: Dr. Moore received his M.S. in Statistics and his Ph.D. in Human Genetics from the University of Michigan. He then served as Assistant Professor of Molecular Physiology and Biophysics (1999-2003) and Associate Professor of Molecular Physiology and Biophysics with tenure (2003-2004) at Vanderbilt University. While at Vanderbilt, Dr. Moore held an endowed position as an Ingram Associate Professor of Cancer Research. He also served as Director of the Bioinformatics Core and Co-Founder and Co-Director of the Vanderbilt Advanced Computing Center for Research and Education (ACCRE).

In 2004, Dr. Moore accepted a position as the Frank Lane Research Scholar in Computational Genetics, Associate Professor of Genetics, and Adjunct Associate Professor of Community and Family Medicine, and Director of Bioinformatics at Dartmouth Medical School. He also holds adjunct positions in the Department of Biological Sciences at Dartmouth College, the Department of Computer Science at the University of New Hampshire, and the Department of Computer Science at the University of Vermont. Dr. Moore serves as Director of the Bioinformatics Shared Resource for the Norris-Cotton Cancer Center and Founder and Director of The Discovery Resource, a 300-processor parallel computer cooperatively operated for the Dartmouth community. His research has been communicated in more than 130 scientific publications.

 


Evolutionary Multiobjective Optimization:

Description of Tutorial

Many real-world search and optimization problems are naturally posed as non-linear programming problems having multiple conflicting objectives.

Due to lack of suitable solution techniques, such problems are usually artificially converted into a single-objective problem and solved. The difficulty arises because multi-objective optimization problems give rise to a set of Pareto-optimal solutions, each corresponding to a certain trade-off among the objectives. It then becomes important to find not just one Pareto-optimal solution but as many of them as possible.

Classical methods are found to be inefficient because they require repeated application to find multiple Pareto-optimal solutions, and on some occasions repeated application does not guarantee finding distinct Pareto-optimal solutions. The population approach of evolutionary algorithms (EAs) offers an efficient way to find multiple Pareto-optimal solutions simultaneously in a single simulation run.
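As an aside (not tutorial material), the notion of a set of Pareto-optimal trade-offs can be made concrete with a minimal dominance filter over objective vectors; the sample points below are arbitrary.

```python
# A minimal Pareto-dominance filter: given candidate objective vectors
# (all objectives to be minimized), keep the non-dominated set, i.e.
# the trade-off front an EMO algorithm approximates.

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = non_dominated(pts)  # (3,4) and (5,5) are dominated
```

Each surviving point represents a different trade-off between the two objectives, which is why a single "best" solution does not exist.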


In this tutorial, we shall contrast the differences in philosophies between classical and evolutionary multi-objective methodologies and provide adequate fundamentals needed to understand and use both methodologies in practice.

Particularly, major state-of-the-art evolutionary multi-objective optimization (EMO) methodologies will be presented and various related issues such as performance assessment and preference articulation will be discussed. Thereafter, three main application areas of EMO will be discussed with adequate case studies from practice -- (i) applications showing better decision-making abilities through EMO, (ii) applications exploiting the multitude of trade-off solutions of EMO in extracting useful information in a problem, and (iii) applications showing better problem-solving abilities in various other tasks (such as, reducing bloating, solving single-objective constraint handling, and others).

Clearly, EAs have a niche in solving multi-objective optimization problems compared to classical methods. This is why EMO methodologies have been receiving growing attention in recent years. Since this is a comparatively new field of research, this tutorial will also discuss a number of future challenges in the research and application of multi-objective optimization.

This tutorial is aimed at both novices and users of EMO. Those without any knowledge of EMO will gain an adequate idea of the procedures and their importance in computing and problem-solving tasks. Those who have been practicing EMO will find enough ideas and materials for future research, learn state-of-the-art results and techniques, and be able to make a comparative evaluation of their own research.



Eckart Zitzler

Biosketches:

Eckart Zitzler received degrees from the University of Dortmund in Germany (diploma in computer science) and ETH Zurich in Switzerland (doctor of technical sciences). Since 2003, he has been Assistant Professor for Systems Optimization at the Computer Engineering and Networks Laboratory at the Department of Information Technology and Electrical Engineering of ETH Zurich, Switzerland. His research focuses on bio-inspired computation, multiobjective optimization, computational biology, and computer engineering applications. Dr. Zitzler was General Co-Chairman of the first three international conferences on evolutionary multi-criterion optimization (EMO 2001, EMO 2003, and EMO 2005).


Kalyanmoy Deb is currently a Professor of Mechanical Engineering at the Indian Institute of Technology Kanpur, India, and is the director of the Kanpur Genetic Algorithms Laboratory (KanGAL). He is the recipient of the prestigious Shanti Swarup Bhatnagar Prize in Engineering Sciences for the year 2005. He has also received the 'Thomson Citation Laureate Award' from Thomson Scientific for having the highest number of citations in Computer Science in India during the past ten years. He is a fellow of the Indian National Academy of Engineering (INAE), the Indian National Academy of Sciences, and the International Society for Genetic and Evolutionary Computation (ISGEC). He received the Friedrich Wilhelm Bessel Research Award from the Alexander von Humboldt Foundation in 2003.


Kalyanmoy Deb

His main research interests are in the areas of computational optimization, modeling and design, and evolutionary algorithms. He has written two textbooks on optimization and more than 180 international journal and conference research papers. He is a pioneer and a leader in the field of evolutionary multi-objective optimization. He is an associate editor of two major international journals and an editorial board member of five major journals. More information about his research can be found at http://www.iitk.ac.in/kangal/deb.htm.

Addresses:
Eckart Zitzler

Computer Engineering (TIK), ETH Zurich
Gloriastr. 35, 8092 Zurich, Switzerland
Phone:+41-1-6327066 Fax:+41-1-6321035
http://www.tik.ee.ethz.ch/~zitzler/

Kalyanmoy Deb

Professor, Department of Mechanical Engineering
Indian Institute of Technology Kanpur
Kanpur, PIN 208 016, INDIA
Phone: +91 512 2597205 (O), +91 512 2598310 (H)
Fax: +91 512 2597205, 2590007
http://www.iitk.ac.in/kangal/deb.htm

 


Representations for Evolutionary Algorithms:

Description of Tutorial

Successful and efficient use of evolutionary algorithms (EAs) depends on the choice of the problem representation - that is, the genotype and the mapping from genotype to phenotype - and on the choice of search operators that are applied to this representation. These choices cannot be made independently of each other. The question whether a certain representation leads to better performing EAs than an alternative representation can only be answered when the operators applied are taken into consideration. The reverse is also true: deciding between alternative operators is only meaningful for a given representation.

In EA practice one can distinguish two complementary approaches. The first approach uses indirect representations where a solution is encoded in a standard data structure, such as strings, vectors, or discrete permutations, and standard off-the-shelf search operators are applied to these genotypes.

To evaluate the solution, the genotype needs to be mapped to the phenotype space. The proper choice of this genotype-phenotype mapping is important for the performance of the EA search process. The second approach, the direct representation, encodes solutions to the problem in its most 'natural' space and designs search operators to operate on this representation.

Research in the last few years has identified a number of key concepts to analyse the influence of representation-operator combinations on EA performance.

These concepts are

• locality,
• redundancy, and
• bias.

Locality is a result of the interplay between the search operator and the genotype-phenotype mapping. Representations are redundant if the number of genotypes exceeds the number of phenotypes. Furthermore, redundant representations can lead to biased encodings if some phenotypes are on average represented by a larger number of genotypes than others. Finally, a bias need not be the result of the representation but can also be caused by the search operator.
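As an aside (not from the tutorial), redundancy and bias are easy to see in a toy encoding chosen here for illustration: map a 4-bit genotype to the integer phenotype "number of 1-bits". Sixteen genotypes cover only five phenotypes, and the middle phenotypes are over-represented.

```python
from itertools import product
from collections import Counter

# A redundant, biased genotype-phenotype mapping: 4-bit genotypes
# decode to the integer phenotype "number of 1-bits" (a unary-style
# encoding). Counting genotypes per phenotype exposes the bias.

def decode(genotype):
    return sum(genotype)  # genotype-phenotype mapping

counts = Counter(decode(g) for g in product((0, 1), repeat=4))
# 16 genotypes, 5 phenotypes; phenotype 2 has six genotypes while
# phenotypes 0 and 4 have one each, so uniform random genotypes
# strongly favor the middle phenotypes.
```

Under uniform initialization and standard bit-flip operators, such an encoding biases the search toward the over-represented phenotypes, which is exactly the effect the concepts above are meant to analyse.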

The tutorial gives a brief overview of existing guidelines for representation design, illustrates the different aspects of representations, briefly surveys theoretical models describing these aspects, and demonstrates their relevance with practical examples.

It is expected that the participants have a basic understanding of EA principles.



Franz Rothlauf

Biosketch:

Franz Rothlauf received a Diploma in Electrical Engineering from the University of Erlangen, Germany, a Ph.D. in Information Systems from the University of Bayreuth, Germany, and a Habilitation from the University of Mannheim, Germany, in 1997, 2001, and 2007, respectively.

He is currently a full professor at the Department of Information Systems, Mainz University, Germany. He has published more than 45 technical papers in the context of evolutionary computation, co-edited several conference proceedings, and is the author of the book "Representations for Genetic and Evolutionary Algorithms", which was published in a second edition in 2006.

His main research interests are problem representations for heuristic search approaches, especially evolutionary algorithms. For several years he was a visiting researcher at the Illinois Genetic Algorithms Laboratory (IlliGAL). He is a member of the editorial boards of the Evolutionary Computation journal, Information Sciences, and the Journal of Artificial Evolution and Applications. He has organized several workshops on representation issues, was chair of EvoWorkshops in 2005 and 2006, co-organized the European workshop series on "Evolutionary Computation in Communications, Networks, and Connected Systems" and on "Evolutionary Computation in Transportation and Logistics", and was co-chair of the program committee of the GA track at GECCO 2006. He is the conference chair of GECCO 2009.

 


Evolutionary Practical Optimization:

Description of Tutorial

Classical optimization methods are often too restrictive to be applied to practical problem solving because of limitations in their working principles: convexity requirements, continuity of the search space, unimodality, a single objective, etc. Unfortunately, most practical optimization problems violate most of these assumptions. Evolutionary algorithms belong to a class of stochastic optimization algorithms which, although they do not always guarantee finding the optimal solution, are often the only viable choice for solving such problems to near-optimality. In this tutorial, we shall discuss a number of properties and operators which give EAs the necessary platform to solve such complex optimization problems. Every aspect will be contrasted with competing classical methods and illustrated with interesting case studies.
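As a rough sketch of the point above (entirely illustrative, not from the tutorial), the objective below is multimodal and discontinuous, so convexity, unimodality and differentiability all fail, yet a minimal population-based loop with mutation still makes progress:

```python
import math
import random

# Hypothetical objective violating classical assumptions: multimodal
# (Rastrigin-like) plus a step term that makes it discontinuous.
def objective(x):
    return x * x - 10 * math.cos(2 * math.pi * x) + (5 if x > 2 else 0) + 10

# Minimal (mu + lambda)-style evolutionary loop with Gaussian mutation;
# a toy sketch, not a production optimizer. Parameters are illustrative.
def evolve(pop_size=20, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(-5, 5) for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [x + rng.gauss(0, 0.5) for x in pop]
        pop = sorted(pop + offspring, key=objective)[:pop_size]
    return min(pop, key=objective)

best = evolve()
print(round(best, 3), round(objective(best), 3))
```

The global minimum sits at x = 0 with value 0; a gradient method started in one of the many other basins would stall in a local optimum, while the elitist population routinely escapes them.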

Evolutionary optimization also has a bright future beyond optimization itself: its ability to decipher innovative principles for being optimal will be illustrated with a number of interesting practical case studies.

Contents:

1. Practical optimization problems and their characteristics
1.1 Large dimension (variables and constraints)
1.2 Non-linear constraints
1.3 Discontinuities, non-differentiability, discreteness of search space
1.4 Multi-modalities (multiple optima, local and global)
1.5 Multi-objectivity (multiple conflicting objectives, trade-off solutions)
1.6 Uncertainties in variables and objectives (robust design, reliability-based design, handling noise)
1.7 Large computational time for evaluation (need for approximate eval. using RSM, Kriging, neural nets etc.)
1.8 Imprecise description of variables and objectives (need for fuzzy logic and rough set descriptions)
1.9 Dynamic optimization problems in which optimization problem changes with time (iteration counter)

2. Efficient evolutionary algorithms for practical optimization (Modifications to a basic EA will be discussed for handling each issue of item 1, along with the reasons for their success. Moreover, at least one case study will be discussed for each issue to drive the point home.)

3. Beyond optimization: unveiling innovative principles for being optimal (EAs allow finding multiple optimal solutions in one simulation. These solutions can be analyzed for common principles, which often reveal innovative insights into solving the problem that were not known before and could not be obtained by other means. A number of interesting case studies will be discussed to illustrate the innovative principles that can be deciphered by this process. This task has tremendous applications in engineering design.)

Who should attend?

Anyone interested in practical optimization and practical problem solving should attend. Besides getting a feel for the differences between evolutionary optimization and its classical counterparts, participants will see that evolutionary optimization provides a more (w)holistic approach, which can be used not only to find an optimal or near-optimal solution, but also to look beyond it and gain a plethora of important insights about the problem at hand.


Kalyanmoy Deb

Biosketch:

Kalyanmoy Deb
is a Professor of Mechanical Engineering at the Indian Institute of Technology Kanpur, India. He is the recipient of the prestigious Shanti Swarup Bhatnagar Prize in Engineering Sciences for the year 2005. He has also received the `Thomson Citation Laureate Award' from Thomson Scientific for having the highest number of citations in Computer Science in India during the past ten years. Recently he received the MCDM Edgeworth-Pareto Award from the International Society on Multiple Criteria Decision Making (MCDM). He is a fellow of the Indian National Academy of Engineering (INAE), the Indian National Academy of Sciences, and the International Society for Genetic and Evolutionary Computation (ISGEC). He received the Friedrich Wilhelm Bessel Research Award from the Alexander von Humboldt Foundation in 2003.

His main research interests are in the area of computational optimization, modeling and design, and evolutionary algorithms. He has written two text books on optimization and more than 200 international journal and conference research papers.

Kalyanmoy Deb
Professor of Mechanical Engineering
Department of Mechanical Engineering
Indian Institute of Technology Kanpur
Kanpur, PIN 208016, India
Email:
Web: http://www.iitk.ac.in/kangal/deb.htm

 


Computational Complexity and Evolutionary Computation:

Description of Tutorial

Evolutionary algorithms and other nature-inspired search heuristics, such as ant colony optimization, have been shown to be very successful when dealing with real-world applications or problems from combinatorial optimization. In recent years, analyses have shown that these general randomized search heuristics can be treated like "ordinary" randomized algorithms and that such analyses of the expected optimization time yield deeper insights into the functioning of evolutionary algorithms in the context of approximation and optimization. This is an important research area in which many interesting questions are still open.

The tutorial enables attendees to analyze the computational complexity of evolutionary algorithms and other search heuristics in a rigorous way. An overview of the tools and methods developed within the last 15 years is given and practical examples of the application of these analytical methods are presented.



Thomas Jansen

Biosketches:

Thomas Jansen,
born 1969, studied Computer Science at the University of Dortmund, Germany, where he received his diploma (1996) and Ph.D. (2000). From September 2001 to August 2002 he was a Post-Doc at Kenneth De Jong's EClab at George Mason University in Fairfax, VA. Since September 2002 he has held a position as Juniorprofessor for Computational Intelligence at the University of Dortmund. He is an associate editor of Evolutionary Computation (MIT Press). His research centers on the theoretical analysis of evolutionary algorithms.


Frank Neumann studied Technical Computer Science in Dortmund and Kiel, Germany. He received his diploma (2002) and Ph.D. (2006) from the Christian-Albrechts-University of Kiel. Since November 2006 he has been a Post-Doc in the research group "Algorithms and Complexity" (headed by Kurt Mehlhorn) at the Max-Planck-Institute for Computer Science in Saarbrücken, Germany. A central topic of his work is the theoretical analysis of randomized search heuristics, in particular for problems from combinatorial optimization. He was a co-chair of the track "Formal Theory" at GECCO 2007 and is chairing the track "Evolutionary Combinatorial Optimization" at GECCO 2008.


Frank Neumann

 


GA Theory

Description of Tutorial

There are many interesting things we can study about genetic algorithms from a theoretical point of view. There are structural issues, such as how to design operators for different search spaces, and there are dynamical issues, such as working out where a GA will spend most of its time and how long it takes to get there. We will touch on a number of these points from a general perspective, asking what can be said about how to design GAs and how they behave, independent of any particular optimisation problem.

This tutorial therefore complements the tutorials on "Computational Complexity and EC" and "Theory of Randomized Search Heuristics" which look at EA performance on specific problems.


Jonathan Rowe

Biosketch:

Jonathan Rowe is Reader in Natural Computation at the University of Birmingham. He got his PhD in 1991 from the University of Exeter and worked in industry for a couple of years. He then returned to academia where, amongst other things, he has studied the theory of genetic and other evolutionary algorithms. In addition to publishing many papers on the subject, he co-authored the graduate-level text book "Genetic Algorithms: Principles and Perspectives" with Colin Reeves. He has also helped organise several workshops and conferences in the field, including PPSN and FOGA.

 


Experimental Research in EC

Description of Tutorial

The Future of Experimental Research

We present a comprehensive, effective and very efficient methodology for the design and experimental analysis of search heuristics such as evolutionary algorithms, differential evolution, pattern search or even classical methods such as the Nelder-Mead simplex algorithm.

Our approach extends the sequential parameter optimization (SPO) method that has been successfully applied as a tuning procedure to numerous heuristics for practical and theoretical optimization problems.
The benefit of combining modern and classical statistical methods is demonstrated. Optimization practitioners receive valuable hints for choosing an adequate heuristic for their optimization problems -- theoreticians receive guidelines for testing results systematically on real problem instances.

We demonstrate how SPO significantly improves the performance of many search heuristics. However, this performance gain is not available for free; the costs of the tuning process are therefore also discussed.

Several examples from theory and practice are used to illustrate typical pitfalls in experimentation. Software tools implementing procedures described in this tutorial are freely available.


Mike Preuss

Biosketches:

Mike Preuss is a Research Associate at the Computer Science Department, University of Dortmund, Germany (since 2000), where he also received his diploma degree in 1998.
His current research interests focus on the field of evolutionary algorithms and include structured populations, niching, multiobjective optimization, and real-world applications. Additionally, he has published on information distribution in peer-to-peer systems, experimental methodology and parameter tuning, simple EA hybrids, EAs for classification problems, and basic models for EA behavior on multimodal test problems.



Thomas Bartz-Beielstein is Professor for Applied Mathematics at Cologne University of Applied Sciences. He studied Mathematics, Computer Science and Pedagogics and worked as a researcher at the collaborative research center “Computational Intelligence” at the University of Dortmund, where he also received his PhD. Thomas is experienced in consulting for industry and in teaching university courses. His interests include optimization, modeling, simulation, statistical analysis, and the experimental design of complex real-world problems.


Thomas Bartz-Beielstein

 


Coevolution

Description of Tutorial

Recent advances in coevolutionary algorithm design and theory stimulate a need for tutorials that provide insight into these advances. We present material on coevolutionary computation in two tutorials; we emphasize core concepts, but also flesh them out with applications.

The Introductory Tutorial on Coevolution is aimed at those with basic working knowledge of evolutionary computation, but limited knowledge of coevolution. This tutorial will outline the potential advantages of coevolution, explain terminology, underscore the key ideas that help frame our understanding of these methods, develop a simple algorithm with variants, then explore some of the issues and phenomena that are particular to coevolution.

The Advanced Tutorial on Coevolution is aimed at attendees who have a working knowledge of coevolution, but who wish to learn more about the modern algorithmic and analytical approaches to coevolutionary computation. We will present a more detailed discussion of how coevolutionary algorithms work (or fail to). We will also examine several successful modern coevolutionary algorithms and deconstruct them to reveal the principles behind their successful operation.

Specific topics that will be covered over the two tutorials include, among others, solution concepts, monotonic progress, underlying objectives, informativeness, gradient (and its loss), monitoring methods, methods of interaction and evaluation, and key representation issues unique to coevolution.

 

Anthony Bucci received a Ph.D. in computer science from Brandeis University in 2007. He is presently a scientist at Icosystem Corporation in Cambridge, Massachusetts. His interests include coevolutionary algorithms, evolutionary computation, machine learning, artificial intelligence and agent-based modeling, with a particular emphasis on modeling and understanding strategic behavior.

 

Paul Wiegand received his Ph.D. from George Mason University in 2004 and currently holds a research faculty position at the Institute for Simulation and Training, with a joint faculty appointment in the Department of Computer Science at the University of Central Florida. His postdoctoral research was conducted at the Navy Center for Applied Research in Artificial Intelligence at the U.S. Naval Research Laboratory. Dr. Wiegand's research interests include methods for designing and applying effective learning algorithms and representations for generating and modeling robust, heterogeneous, multiagent team behaviors. He currently directs the Natural Computation and Coadaptive Systems Laboratory at UCF.


Paul Wiegand

 


Sevan G. Ficici

Sevan G. Ficici is currently a Post-Doctoral Fellow in computer science at Harvard University; he obtained his Ph.D. from Brandeis University, working under Jordan Pollack. Sevan has worked broadly in the field of multi-agent systems and learning for over a decade. His Ph.D. work focused on coevolutionary learning in multi-agent systems. At Harvard, Sevan is working with Avi Pfeffer to develop computational models of human behavior in multi-agent domains and to construct computer agents that utilize these models to interact successfully with human participants. Sevan was chair of the coevolution track at GECCO 2006.



Constraint-Handling Techniques used with Evolutionary Algorithms


Description of Tutorial

When used for global optimization, Evolutionary Algorithms (EAs) can be seen as unconstrained optimization techniques. Therefore, they require an additional mechanism to incorporate constraints of any kind (i.e., inequality, equality, linear, nonlinear) into their fitness function. Although the use of penalty functions (very popular with mathematical programming techniques) may seem an obvious choice, this sort of approach requires careful fine-tuning of the penalty factors to be used. Otherwise, an EA may be unable to reach the feasible region (if the penalty is too low) or may quickly reach the feasible region but be unable to locate solutions that lie on the boundary with the infeasible region (if the penalty is too severe).
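As an illustrative sketch of the dilemma described above (not taken from the tutorial; the toy problem, the penalty form and the factor R are all assumptions), a static exterior penalty can be added to the objective:

```python
# Toy problem: minimize f(x, y) = x^2 + y^2 subject to x + y >= 1,
# handled by a static quadratic penalty on the constraint violation.
# The penalty factor R is a hypothetical choice: too small and
# infeasible solutions win; too large and the search is pushed away
# from the constraint boundary, where the optimum actually lies.
def penalized_fitness(x, y, R=100.0):
    objective = x * x + y * y
    violation = max(0.0, 1.0 - (x + y))  # amount by which x + y >= 1 fails
    return objective + R * violation ** 2

# The constrained optimum x = y = 0.5 lies on the boundary; the
# unconstrained optimum x = y = 0 is infeasible and must score worse.
print(penalized_fitness(0.5, 0.5))
print(penalized_fitness(0.0, 0.0))
```

With this R, the feasible boundary point correctly outranks the infeasible unconstrained optimum; halving or growing R by orders of magnitude shifts that ranking, which is exactly the tuning burden the paragraph above describes.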

This has motivated the development of a number of approaches to incorporate constraints into the fitness function of an EA. This tutorial will cover the main proposals in current use, including novel approaches such as the use of tournament rules based on feasibility, multiobjective optimization concepts, hybrids with mathematical programming techniques (e.g., Lagrange multipliers), cultural algorithms, and artificial immune systems, among others. Other topics, such as the importance of maintaining diversity, current benchmarks and the use of alternative search engines (e.g., particle swarm optimization, differential evolution, evolution strategies, etc.), will also be discussed (as time allows).


Carlos Artemio Coello Coello

Biosketch:

Carlos Artemio Coello Coello received a BSc in Civil Engineering from the Universidad Autonoma de Chiapas in Mexico in 1991 (graduating Summa Cum Laude). He was then awarded a scholarship from the Mexican government to pursue graduate studies in Computer Science at Tulane University (in the USA). He received an MSc and a PhD in Computer Science in 1993 and 1996, respectively.
Dr. Coello has been a Senior Research Fellow in the Plymouth Engineering Design Centre (in England) and a Visiting Professor at DePauw University (in the USA). He is currently a full professor at CINVESTAV-IPN in Mexico City, Mexico.

He has published over 170 papers in international peer-reviewed journals and conferences. He has also co-authored the book "Evolutionary Algorithms for Solving Multi-Objective Problems" (Kluwer Academic Publishers, 2002) and co-edited the book "Applications of Multi-Objective Evolutionary Algorithms" (World Scientific, 2004). He has delivered invited talks, keynote speeches and tutorials at international conferences held in Spain, the USA, Canada, Switzerland, the UK, Chile, Colombia, Brazil, Argentina, Uruguay and Mexico.
Dr. Coello has served as a technical reviewer for over 40 international journals and more than 40 international conferences, and currently serves as an associate editor of the journals "IEEE Transactions on Evolutionary Computation" and "Evolutionary Computation" and as a member of the editorial boards of the journals "Soft Computing", "Engineering Optimization", "Journal of Heuristics", "Computational Optimization and Applications" and the "International Journal of Computational Intelligence Research". He is a member of the Mexican Academy of Sciences, a Senior Member of the IEEE, and a member of Sigma Xi, The Scientific Research Society. He is also a member of the Council of Authors of the International Society for Genetic and Evolutionary Computation. His current research interests are evolutionary multiobjective optimization, constraint-handling techniques for evolutionary algorithms, and evolvable hardware.

 


An Introduction to Statistical Analysis for Evolutionary Computation

Description of Tutorial

Although the situation has been improving over the past few years, many researchers in the EC field still show an unfortunate lack of statistical rigor when performing and analyzing experiments. Experimental results that lack a proper statistical analysis should be considered anecdotal at best, and may even be wholly inaccurate. This tutorial aims to provide the basic statistical framework that all experimenters in the EC field should follow.

The tutorial will cover introductory topics in Statistics such as the Z test, T test, confidence intervals, the Central Limit theorem (its power and limitations), non-parametric statistics, and will touch on multi-variate and multi-factorial analysis as well as proper experimental design. The tutorial will be presented at an introductory level and is meant for any EC researchers who want to compare their newly designed EC system with established EC systems to see if there is an improvement, or who want to determine which EC system performs better on their chosen problem; i.e. nearly everyone.
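To give a flavor of the kind of comparison the tutorial covers, a two-sample t statistic (Welch's form, which does not assume equal variances) can be computed with the standard library alone; the run data below are made up for illustration:

```python
import math
import statistics

# Welch's t statistic for comparing the final fitness values of two
# EC systems over independent runs (hypothetical data, for illustration).
def welch_t(a, b):
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

runs_a = [0.91, 0.88, 0.93, 0.90, 0.89]  # best fitness per run, system A
runs_b = [0.84, 0.86, 0.83, 0.85, 0.87]  # best fitness per run, system B
print(round(welch_t(runs_a, runs_b), 2))
```

A large |t| suggests the difference in means is unlikely to be noise, but the p-value, degrees of freedom, and the checks on the test's assumptions are exactly the material the tutorial teaches; five runs per system, as here, is far too few in practice.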

It is vital, if our community is to be taken seriously, that we continue to educate ourselves, and especially new graduate students, in the statistical techniques that are considered de rigueur for experiments performed in any field that is stochastic in nature, as EC is.



Mark Wineberg

Biosketch:

Mark Wineberg has been actively researching the field of Evolutionary Computation (EC) and Genetic Algorithms (GA) since 1993, when he was still a graduate student at Carleton University, Ottawa, Canada. Over the years he has published on various topics including: the intersection of Genetic Algorithms and Genetic Programming, enhancing the GA for improved behavior in dynamic environments through specialized multiple populations, and exploring the concept of distances and diversity in GA populations. He is currently continuing his research in dynamic environments and studying the properties that allow one population to evolve more readily than another.

Prof. Wineberg also teaches an undergraduate course at Guelph on computer simulation and modeling of discrete stochastic systems, which emphasizes proper statistical analysis, as well as a graduate course on experimental design and analysis for computer science, which is an outgrowth of the statistical analysis tutorial given at GECCO.

 

 




Specialized Techniques and Applications

 

Quantum Computing

Description of Tutorial

Computer science will be radically transformed if ongoing efforts to build large-scale quantum computers eventually succeed and if the properties of these computers meet optimistic expectations. Nevertheless, computer scientists still lack a thorough understanding of the power of quantum computing, and it is not always clear how best to utilize the power that is understood. This dilemma exists because quantum algorithms are difficult to grasp and even more difficult to write. Despite large-scale international efforts, only a few important quantum algorithms are documented, leaving many essential questions about the potential of quantum algorithms unanswered.

These unsolved problems are ideal challenges for the application of automatic programming technologies. Genetic programming techniques, in particular, have already produced several new quantum algorithms and it is reasonable to expect further discoveries in the future. These methods will help researchers to discover how additional practical problems can be solved using quantum computers, and they will also help to guide theoretical work on both the power and limits of quantum computing.

This tutorial will provide an introduction to quantum computing and an introduction to the use of evolutionary computation for automatic quantum computer programming. No background in physics or in evolutionary computation will be assumed. While the primary focus of the tutorial will be on general concepts, specific results will also be presented, including human-competitive results produced by genetic programming. Follow-up material is available from the presenter's book, Automatic Quantum Computer Programming: A Genetic Programming Approach, published by Springer and Kluwer Academic Publishers.

 


Lee Spector

Biosketch:

Lee Spector is a Professor of Computer Science in the School of Cognitive Science at Hampshire College in Amherst, Massachusetts. He received a B.A. in Philosophy from Oberlin College in 1984 and a Ph.D. from the Department of Computer Science at the University of Maryland in 1992. His areas of teaching and research include genetic and evolutionary computation, quantum computation, and a variety of intersections between computer science, cognitive science, evolutionary biology, and the arts. He is a member of the SIGEVO executive committee and he was named a Fellow of the International Society for Genetic and Evolutionary Computation.

He is an active author and editor in the field of genetic programming and he serves on the editorial boards of the journals Genetic Programming and Evolvable Machines and Evolutionary Computation.

More info: http://hampshire.edu/lspector


Evolutionary Multiobjective Combinatorial Optimization (EMCO):

Description of Tutorial

Discrete/combinatorial optimization has been an interesting and challenging area for researchers working on the development of algorithms and OR techniques for real-world applications. Most such problems, abstracted from graph theory, are NP-complete in their single-objective form and NP-hard for multiple objectives. There are many applications of such problems in communication topology and VLSI design.

For most such problems, good heuristics/algorithms do not exist; therefore, black-box optimization techniques, e.g., EAs, are usually preferred for obtaining effective solutions. In this tutorial, we will quickly review some of the standard combinatorial optimization problems, and then cover the techniques and the design of operators for effectively solving EMCO problems taken from real-world applications. This will be followed by case studies taken from standard problems. This interdisciplinary tutorial provides a practical foundation for:

• an introduction to combinatorial problems and their multiobjective versions,
• problem challenges and their solution through EMO techniques,
• the need to design specialized genetic operators, representations and techniques for embedding (limited) problem-specific knowledge,
• a few case studies taken from well-known EMCO problems, with a comparison of EMO results against existing heuristics/approximation algorithms, and
• improvement of the EMO solutions through hybridization with local search and other methods.

Key considerations in designing operators and representations so as to effectively explore the search space will be examined. In general, the tutorial aims to quickly lay a foundation that can be used as the basis for further study, research and development in this emerging area.


Rajeev Kumar

Biosketch:

Rajeev Kumar is a Professor of Computer Science & Engineering (CSE) at the Indian Institute of Technology (IIT), Kharagpur. He received his Ph.D. from the University of Sheffield and his M.Tech. from the University of Roorkee (now IIT Roorkee), both in Computer Science & Engineering. He has been a faculty member in CSE at IIT Kanpur and BITS Pilani, and has worked for National Semiconductor, Germany, and DRDO, India. His main research interests include evolutionary algorithms & multiobjective combinatorial optimization, embedded & multimedia systems, and programming languages & software engineering.

He has published over 100 research articles in lead journals and conferences. He is a senior member of ACM, senior member of IEEE and a fellow of IETE. More info about him can be found at http://www.facweb.iitkgp.ernet.in/~rkumar/.


Theory of Randomized Search Heuristics in Combinatorial Optimization:

Description of Tutorial

The rigorous mathematical analysis of randomized search heuristics (RSHs) with respect to their expected runtime is a growing research area where many results have been obtained in recent years.
This class of heuristics contains well-known approaches such as Randomized Local Search (RLS), the Metropolis Algorithm (MA), Simulated Annealing (SA), and Evolutionary Algorithms (EAs), as well as more recent approaches such as Ant Colony Optimization (ACO). Such heuristics are often applied to problems whose structure is not known, or when there are not enough resources such as time, money, or knowledge to obtain good problem-specific algorithms. It is widely acknowledged that a solid mathematical foundation for such heuristics is needed.

Most designers of RSHs, however, focused on mimicking processes in nature (such as evolution) rather than on making the heuristics amenable to a mathematical analysis. This is different from the classical design of (randomized) algorithms, which are developed with their theoretical analysis of runtime (and proof of correctness) in mind. Despite these obstacles, research over roughly the last 15 years has shown how to apply the methods for the probabilistic analysis of randomized algorithms to RSHs. Mostly, the expected runtime of RSHs on selected problems is analyzed. Thereby, we understand why and when RSHs are efficient optimizers and, conversely, when they cannot be efficient.

The tutorial will give an overview of the analysis of RSHs for solving combinatorial optimization problems. Starting from the first toy examples, such as the OneMax function, we approach more realistic problems and arrive at analyses of the runtime and approximation quality of RSHs even for NP-hard problems. Our studies treat not only simple RLS algorithms and SA but also more complex population-based EAs and even ACO algorithms. The combinatorial optimization problems that we discuss include the maximum matching problem, the partition problem and, in particular, the minimum spanning tree problem as an example where Simulated Annealing beats the Metropolis algorithm in combinatorial optimization. Important concepts of the analyses will be described as well.
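For readers unfamiliar with the setting, the (1+1) EA on OneMax (the toy example mentioned above) can be sketched in a few lines; the bit-string length, iteration cap and seed below are illustrative choices, and the expected optimization time of this algorithm is known to be O(n log n):

```python
import random

# Minimal (1+1) EA on OneMax (maximize the number of ones), the
# standard starting point for runtime analyses of randomized search
# heuristics.
def one_plus_one_ea(n, max_iters=100_000, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for t in range(max_iters):
        if sum(x) == n:
            return t  # iterations needed to reach the optimum
        # Mutate: flip each bit independently with probability 1/n.
        y = [bit ^ (rng.random() < 1.0 / n) for bit in x]
        # Select: keep the offspring if it is at least as good.
        if sum(y) >= sum(x):
            x = y
    return max_iters

print(one_plus_one_ea(20))
```

Runtime analysis asks how the value returned here grows with n in expectation; for this algorithm and problem the answer, Theta(n log n), follows from the coupon-collector-style arguments the tutorial develops.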


Carsten Witt

Biosketch:

Carsten Witt studied Computer Science at the University of Dortmund, Germany, where he received his diploma and Ph.D. in 2000 and 2004, respectively. Currently he is a postdoctoral researcher at the Technical University of Dortmund. Carsten's main research interests are theoretical aspects of randomized search heuristics, in particular evolutionary algorithms and ant colony optimization.
More information:
http://ls2-www.cs.uni-dortmund.de/~witt


Evolutionary Design:

Description of Tutorial

The tutorial explores and describes the integration of evolutionary search, exploration and optimization across a wide spectrum of design activities, and thus investigates the manner in which evolutionary computation can be utilized to generate design concepts and achieve meaningful designs in addition to more standard evolutionary optimization processes. The support of innovation and creativity during preliminary design, and novel EC strategies and problem representations that best handle design complexity and support people-centred aspects, are also presented. Although the focus is upon the development and integration of appropriate evolutionary computing strategies in engineering design, the transfer of these technologies into other domains such as drug design and discovery and software design will also be included. Generic design aspects across several such domains that can particularly benefit from EC application and integration, and that illustrate EC potential across these areas, will be presented.

The tutorial will therefore include the following aspects:
• The integration of both deterministic design approaches and computational intelligence techniques with evolutionary design environments
• The extraction and processing of evolutionary design data and its visualization within appropriate designer interfaces
• Human-centred aspects and interactive evolutionary design systems
• Evolutionary search and exploration across uncertain / poorly-defined design environments
• Supporting innovative and creative design
• Development and integration of subjective fitness measures
• Qualitative and quantitative multi-objective design satisfaction
• Search and optimization within heavily constrained design domains
• Reducing computational expense during detailed design, analysis and optimisation
• Evolutionary design systems involving high-performance and distributed computing, problem-solving environments and grid-based application.


Ian Parmee

Biosketch:

Formerly Director of the EPSRC Engineering Design Centre at the University of Plymouth, Ian Parmee established the Advanced Computation in Design and Decision-making Lab (www.ad-comtech.co.uk/ACDDM_Group.htm) at the University of the West of England (UWE), Bristol, in 2001. The research within the Lab focuses on the integration of people-centred, evolutionary and other computational intelligence technologies with complex design and decision-making processes. From an engineering perspective, his work has extended into mechanical, aerospace, electronic, power-system and marine engineering design.

From this basis his work now encompasses financial systems, software design and drug design. Enabling computational technologies such as high performance, distributed computing and Grid technology; data-mining, analysis, processing and visualisation and aspects relating to human-computer interaction also play a major role. Experience across these various domains has resulted in an understanding of generic issues, degrees of complexity and the difficulties facing both researchers and practitioners when operating within them. Ian has published extensively across engineering and evolutionary computing domains and has presented plenary talks at leading conferences in both areas. His own book ‘Evolutionary and Adaptive Computing in Engineering Design’ was published by Springer-Verlag in 2001. He has also organised and chaired the biennial international conference ‘Adaptive Computing in Design and Manufacture’ since 1994. The 8th event takes place in Bristol, UK in April 2008 (www.ad-comtech.co.uk/ACDM08). Ian is also Vice-chair of the IEEE Evolutionary Computation Technical Committee.

He is the Scientific Director of ACT (www.ad-comtech.co.uk), which specialises in a range of computational intelligence strategies that provide search, optimisation and modelling capabilities across complex industrial and commercial problem domains. These areas of activity are underpinned by twenty years' experience in applying these technologies (especially evolutionary computation) in close collaboration with industry.


Evolutionary Computation in Games

Description of Tutorial

Ever since the dawn of artificial intelligence in the 1950s, games have been part and parcel of this lively field. In 1957, a year after the Dartmouth Conference that marked the official birth of AI, Alex Bernstein designed a program for the IBM 704 that played two amateur games of chess. In 1958, Allen Newell, J. C. Shaw, and Herbert Simon introduced a more sophisticated chess program (beaten in thirty-five moves by a ten-year-old beginner in its last official game played in 1960). Arthur L. Samuel of IBM spent much of the fifties working on game-playing AI programs, concentrating especially on checkers. In 1961 and 1963 Donald Michie described a simple trial-and-error learning system for learning how to play Tic-Tac-Toe (or Noughts and Crosses) called MENACE (for Matchbox Educable Noughts and Crosses Engine). These are but examples of highly popular games that have been treated by AI researchers since the field's inception.

"There are two principal reasons to continue to do research on games," wrote Epstein (1999). "First, human fascination with game playing is long-standing and pervasive. Anthropologists have catalogued popular games in almost every culture... Games intrigue us because they address important cognitive functions... The second reason to continue game-playing research is that some difficult games remain to be won, games that people play very well but computers do not. These games clarify what our current approach lacks. They set challenges for us to meet, and they promise ample rewards."

Studying games may thus advance our knowledge in both cognition and artificial intelligence, and, last but not least, games possess a competitive angle which coincides with our human nature, thus motivating both researcher and student alike. During the past few years there has been an ever-increasing interest in the application of evolutionary algorithms within the vast domain of games. This tutorial aims to present a selection of works in this area, focusing on our own recent Humie-winning efforts.


Moshe Sipper

Biosketch:

Moshe Sipper is an Associate Professor in the Department of Computer Science, Ben-Gurion University, Israel. His current research focus is on evolutionary computation, mainly as applied to games and bioinformatics. Dr. Sipper has published over 120 scientific papers, and is the author of two books: "Machine Nature: The Coming Age of Bio-Inspired Computing" and "Evolution of Parallel Cellular Machines: The Cellular Programming Approach." Dr. Sipper and his students won two Human-Competitive Awards ("Humies"): a Silver Award in 2007 and a Bronze Award in 2005, for their work on attaining human-competitive game playing with genetic programming. Dr. Sipper is the recipient of the 1999 University Latsis Prize.

 




An Information Perspective on Evolutionary Computation

Description of Tutorial

“Information is the tennis ball of communication…” With this sentence, Keith Devlin concluded his talk attempting to answer the question “Does Information Really Exist?”. Whether or not information exists, discussion about information surely does. Even though the term information has formal definitions (e.g., Kolmogorov complexity, Shannon's information theory), in the EC community (and, as it seems, in many other fields) the term is used, more often than not, in an informal way (e.g., "the problem contains no information and hence cannot be solved efficiently").

This tutorial focuses on the connection between the two formal definitions of information (Shannon's entropy and Kolmogorov complexity) and optimisation problems and black-box algorithms. For example, we will discuss how informal observations of the kind "ONEMAX contains good information" or "NIAH does not contain any" connect with the formal definitions. Moreover, we will show that the No Free Lunch theorems are a special case of a more general phenomenon. The tutorial covers the following major issues: Kolmogorov complexity (KC) and its relation to Shannon information theory, KC and problem hardness, the relation between KC and other (applicable) predictive measures of problem difficulty (e.g., auto-correlation, ruggedness), an information perspective on the No Free Lunch theorems and (if time allows) philosophical aspects of information.
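The informal contrast drawn above (ONEMAX versus needle-in-a-haystack) can be made concrete with a few lines of code. The sketch below is our illustration, not part of the tutorial material: it compares the Shannon entropy of the fitness-value distributions of the two problems over all bit strings of length 8. A high-entropy fitness distribution means that observing a fitness value tells the search a lot; a near-zero entropy means fitness values reveal almost nothing.

```python
import math
from collections import Counter
from itertools import product

def entropy(counts):
    """Shannon entropy (in bits) of a distribution of fitness values."""
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

bits = list(product([0, 1], repeat=8))
onemax = Counter(sum(s) for s in bits)              # fitness = number of ones
niah = Counter(int(s == (1,) * 8) for s in bits)    # needle in a haystack

print(entropy(onemax))  # roughly 2.54 bits: a fitness value is informative
print(entropy(niah))    # roughly 0.04 bits: a fitness value reveals almost nothing
```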


Yossi Borenstein

Biosketch:

Yossi Borenstein is a senior research officer at the University of Essex. His PhD thesis, Information Landscape, explores the connection between information theory and optimisation problems in the black-box scenario. It consists of two parts: (1) a formal part which quantifies the amount of information that a landscape (a problem in the black-box scenario) contains and, using the notion of Kolmogorov complexity, accordingly gives upper and lower bounds on performance, and (2) a part based on a linear approximation of the performance of a search algorithm, which investigates the alignment of particular search algorithms with problems. Yossi is a guest co-editor of a special issue of ECJ and a reviewer for several international journals and conferences.


Evolution Strategies and related Estimation of Distribution Algorithms

Description of Tutorial

Evolution Strategies and some continuous-domain Estimation of Distribution Algorithms are stochastic optimization procedures based on sampling multivariate Gaussian (normal) distributions. They can be formulated in a common, unified, yet very simple framework. Such a framework is very useful for understanding the subtle differences between algorithms.

This tutorial focuses on the most important question: how to choose and update the sample distribution parameters. The most common and successful approaches are reviewed. Covered methods include self-adaptation, success rules, path length control, Covariance Matrix Adaptation (CMA), and the Estimation of Multivariate Normal Algorithm (EMNA).

Methods are motivated with respect to the difficulties one faces when solving continuous-domain non-linear, non-convex optimization problems. Specific problem difficulties will be discussed, for example ill-conditioning and non-separability. Finally, the performance of state-of-the-art Evolution Strategies is related to that of other well-known evolutionary and non-evolutionary optimization algorithms, such as BFGS, DE, and PSO.
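As a concrete illustration of the simplest of the update methods listed above, the success rule, here is a minimal (1+1)-ES with 1/5th-success-rule step-size adaptation. The function names and constants are our illustrative choices, not code from the tutorial; the factor `a ** 0.25` on failure balances the `1 / a` on success exactly when one in five offspring succeeds.

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=2000, a=0.85):
    """Minimal (1+1)-ES with 1/5th-success-rule step-size control (minimization)."""
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        # Sample one offspring from an isotropic Gaussian around the parent.
        y = [xi + sigma * random.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:              # success: accept and enlarge the step size
            x, fx = y, fy
            sigma /= a
        else:                     # failure: shrink the step size
            sigma *= a ** 0.25    # keeps the success rate near 1/5
    return x, fx

# Usage: minimize the sphere function in 5 dimensions.
sphere = lambda v: sum(vi * vi for vi in v)
best, fbest = one_plus_one_es(sphere, [5.0] * 5)
```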


Nikolaus Hansen

Biosketch:

Nikolaus Hansen is a research scientist at INRIA Saclay in France. Educated in medicine and mathematics, he received his Ph.D. in civil engineering in 1998 from the Technical University Berlin, working under Ingo Rechenberg. Since then he has been working in evolutionary computation and in computational science at the Technical University Berlin and at ETH Zurich. His main research interests are learning and adaptation in evolutionary computation and the development of useful and efficient algorithms. His most well-known contribution to the field of evolutionary computation is the so-called Covariance Matrix Adaptation (CMA).

 


Cartesian Genetic Programming

Description of Tutorial

Cartesian Genetic Programming (CGP) is a form of genetic programming developed by Julian Miller with Peter Thomson in 1997. In its classic form it uses a very simple, integer-based genetic representation of a program in the form of a directed graph. In a number of studies, it has been shown to be competitive in efficiency with other GP techniques.

Since then, the classical form of CGP has been enhanced in various ways by Julian Miller and James Walker to include automatically defined functions. Most recently it has been developed by Julian Miller, Wolfgang Banzhaf and Simon Harding to include self-modification operators.

This tutorial is the first given on this increasingly popular form of GP. The tutorial will cover the basic technique, advanced developments and applications to a variety of problem domains.
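To make the integer-based graph representation concrete, the following sketch (our illustration, not Miller's code) decodes and evaluates a tiny single-row CGP genotype. The function lookup table and genotype layout here are one common convention: each node is a triple (function index, source 1, source 2), and sources index the program inputs followed by earlier nodes.

```python
# Hypothetical minimal CGP evaluator with an illustrative function table.
FUNCS = [lambda a, b: a + b, lambda a, b: a - b,
         lambda a, b: a * b, lambda a, b: a]

def eval_cgp(genotype, inputs, n_nodes):
    """Decode a classic single-row CGP genotype and evaluate it on `inputs`.
    Layout: n_nodes triples (func, src1, src2) followed by one output gene."""
    values = list(inputs)
    for i in range(n_nodes):
        f, a, b = genotype[3 * i: 3 * i + 3]
        values.append(FUNCS[f](values[a], values[b]))
    return values[genotype[-1]]   # the output gene selects an input or node

# Example: 2 inputs, 2 nodes; node 2 = x0*x1, node 3 = node2 + x0, output = node 3.
geno = [2, 0, 1,   0, 2, 0,   3]
print(eval_cgp(geno, [3.0, 4.0], 2))   # 3*4 + 3 = 15.0
```

Note how inactive nodes (those never reached from the output gene) impose no evaluation semantics beyond wasted genes, which is the source of CGP's well-known genotype redundancy.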


Julian Miller

Biosketch:

Julian Miller is a lecturer in the Department of Electronics at the University of York. His main research interests are genetic programming (GP), and computational development. He has published over 120 refereed papers on evolutionary computation (particularly genetic programming), evolvable hardware, and computational development. He has been chair or co-chair of eleven conferences or workshops in genetic programming, computational development, evolvable hardware and evolutionary techniques.

Dr. Miller was the chair of the Evolvable Hardware tracks at the Genetic and Evolutionary Computation Conference (GECCO) 2002-2003 (the largest conference in the field) and was co-chair the Generative and Developmental Systems track at GECCO in 2007. He is chair of the genetic programming track at GECCO in 2008. He is an associate editor of the journals IEEE Transactions on Evolutionary Computation, and Genetic Programming and Evolvable Machines. He is an editorial board member of the journals Evolutionary Computation and Unconventional Computing. He has given 31 invited talks at conferences, universities, research institutions and commercial companies.

 

Simon Harding is a post doctoral research fellow in the Department of Computer Science at Memorial University, Canada. His research interests include genetic programming, unconventional computation and parallel computing on consumer grade hardware.


Simon Harding

For further info on Cartesian GP see:

http://www.cartesiangp.co.uk/



EA-based Test and Verification of Microprocessors

Description of Tutorial

The tutorial tackles a highly significant problem for industry: how to check that a microprocessor or microcontroller core is correctly designed and, after its production, how to test whether it is working. Today's technological advances, in both production processes and design paradigms, are coupled with ever-increasing time-to-market pressure, exacerbating these problems.

Today, simulation-based approaches have been introduced in almost all industrial design flows and are playing an essential role [1]. However, adding content to a test or verification plan is widely recognized as a major bottleneck, and EA-based heuristics may help test and verification engineers.

The tutorial very briefly introduces all the required concepts, such as "test" and "verification". It then sketches the similarities and differences between an RT-level description and a high-level program (assumed familiar to the audience), and presents code-coverage metrics.

Then, the problem of adding content to a test or verification plan is analyzed from an EA-centric point of view, showing which methodologies can be used in which situations, and indeed which have been used in the past. The peculiar characteristics of microprocessors are analyzed [2].

The usefulness of the approach is demonstrated through three case studies: the verification of a DLX/pII processor; post-silicon verification of the Pentium 4 (backed by Intel); and test set generation for the PLASMA processor.

PS: Part of the tutorial could probably also be applied to software testing problems.

Recent References:

[1] M. Sonza Reorda, E. Sanchez, G. Squillero, "Test Generation and Coverage Metrics," in Practical Design Verification, Cambridge University Press, in press.

[2] E. Sanchez, G. Squillero, "Evolutionary Techniques Applied to Hardware Optimization Problems: Test and Verification of Advanced Processors," in Advances in Evolutionary Computing for System Design, Springer, 2007.

http://www.cad.polito.it/staff/squillero/


Giovanni Squillero

Biosketch:

Giovanni Squillero received his M.S. and Ph.D. degrees in computer science engineering in 1996 and 2000, respectively, from Politecnico di Torino, where he is now an Assistant Professor. His research interests combine evolutionary techniques with the verification, testing, and fault-tolerant design of electronic systems. Since 2002 this research activity has included microprocessor and microcontroller cores, often in collaboration with industrial partners. Since 2004, together with Rolf Drechsler, Squillero has organized EvoHOT, the European Workshop on Evolutionary Optimization, under the Evo* umbrella. He is an associate editor of the Journal of Artificial Evolution and Applications.



Evolutionary Computer Vision


Description of Tutorial

Computer vision and related disciplines are among the major fields of application for Evolutionary Algorithms (EAs).

Over the years, EAs have been applied to an ever-increasing number of problems and have themselves 'evolved' from a basic use as tools for optimizing the parameters of some pre-defined algorithm, to a more and more prominent role as elements of the so-called 'vision chain'. The tutorial aims at providing attendees with a review of this evolution, illustrating several practical examples of the different facets of evolutionary computer vision. One section of the tutorial, in particular, will be dedicated to some very recent applications of swarm intelligence to computer vision, showing how such techniques can often provide real-time performance, thanks to the efficiency of some of the algorithms in this category.


Stefano Cagnoni

Biosketch:

Stefano Cagnoni graduated in Electronic Engineering at the University of Florence in 1988, and was a PhD student and then a post-doc there until 1997, working in the Bioengineering Lab of the Department of Electronic Engineering and Telecommunications. He received his PhD degree in Bioengineering in 1993. In 1994 he was a visiting scientist at the Whitaker College Biomedical Imaging and Computation Laboratory at the Massachusetts Institute of Technology. Since 1997, he has been with the Department of Computer Engineering of the University of Parma, where he is currently an Associate Professor.

Stefano Cagnoni's main research interests are in the fields of computer vision, robotics, evolutionary computation and neural networks. He is editor-in-chief of the "Journal of Artificial Evolution and Applications", chair of EvoIASP, the European Workshop on applications of Evolutionary Computation to Image Analysis and Signal Processing, and a member of the Programme Committee of several international conferences and workshops. He has been a reviewer for over 15 journals.


Generative and Developmental Systems

Description of Tutorial

In evolutionary computation it is common for the genotype to map directly to the phenotype such that each gene in the genotype represents a single discrete component of the phenotype. While this convention is widespread, in nature the mapping is not direct. Instead, the genotype maps to the phenotype through a process of development, which means that each gene may be activated multiple times and in multiple places in the process of constructing the phenotype.

This tutorial will examine why such indirect mappings may be critical to the continuing success of evolutionary computation. Far from being just an artifact of nature, indirect mapping means that vast phenotypic spaces (e.g. the 100-trillion-connection human brain) can be searched effectively through a space of far fewer genes (e.g. the 30,000-gene human genome). The tutorial will introduce this new research area, called Generative and Developmental Systems (GDS), by surveying milestones in the field and introducing its most profound puzzles. Most importantly: what is the right abstraction of natural development to capture its essential advantages without introducing unnecessary overhead into the search?
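The leverage of indirect mapping described above can be sketched in miniature. The toy below is our illustration (not material from the tutorial): an L-system-style developmental mapping in which a genotype of just two rewrite rules, applied repeatedly, grows a phenotype far larger than itself, so search operates in the small rule space rather than the large phenotype space.

```python
def develop(genotype, steps):
    """Grow a phenotype string by repeatedly applying the same small
    set of rewrite rules (an L-system-style indirect mapping)."""
    phenotype = "A"
    for _ in range(steps):
        # Every gene (rule) may fire many times, in many places.
        phenotype = "".join(genotype.get(c, c) for c in phenotype)
    return phenotype

# A 2-rule genotype generates a phenotype that grows each step:
geno = {"A": "AB", "B": "A"}    # the classic Fibonacci L-system rules
pheno = develop(geno, 10)
print(len(geno), "genes ->", len(pheno), "phenotype symbols")
```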


Kenneth O. Stanley

Biosketch:

Kenneth O. Stanley is an Assistant Professor in the School of Computer Science at the University of Central Florida. His research focuses on artificially evolving complex solutions to difficult real-world tasks. He graduated with a B.S.E. in Computer Science Engineering and a minor in Cognitive Science from the University of Pennsylvania in 1997. He received an M.S. in Computer Science in 1999, and a Ph.D. in 2004 at the University of Texas at Austin. He is the co-chair of GECCO's 2008 Generative and Developmental Systems track.

He has won best paper awards for his work on NEAT (at the 2002 Genetic and Evolutionary Computation Conference) and for his work on NERO (at the IEEE 2005 Symposium on Computational Intelligence and Games), and also won the Independent Games Festival Student Showcase Award (at the 2006 Game Developers conference) for NERO. He has published papers in JAIR, Evolutionary Computation, IEEE Transactions on Evolutionary Computation, and Artificial Life journals.


Evolving Neural Networks

Description of Tutorial

Neuroevolution, i.e. the evolution of artificial neural networks, has recently emerged as a powerful technique for solving challenging reinforcement learning problems. Compared to traditional (e.g. value-function-based) methods, neuroevolution is especially strong in domains where the state of the world is not fully known: the state can be disambiguated through recurrent connections, and novel situations handled through pattern matching. In this tutorial, we will review (1) neuroevolution methods that evolve fixed-topology networks, network topologies, and network construction processes, (2) ways of combining traditional neural network learning algorithms with evolutionary methods, and (3) applications of neuroevolution to game playing, robot control, resource optimization, and cognitive science.
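A toy version of the first family of methods, evolving the weights of a fixed-topology network, can be sketched as follows. The topology, task, and constants are our illustrative choices, not code from the tutorial: a 2-2-1 feedforward network whose 9 weights are evolved by truncation selection and Gaussian mutation on the XOR task.

```python
import math, random

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def net(w, x):
    """Fixed-topology 2-2-1 feedforward network; w holds its 9 weights."""
    s = lambda z: 1.0 / (1.0 + math.exp(-z))
    h1 = s(w[0] * x[0] + w[1] * x[1] + w[2])   # hidden unit 1 (w[2] is its bias)
    h2 = s(w[3] * x[0] + w[4] * x[1] + w[5])   # hidden unit 2
    return s(w[6] * h1 + w[7] * h2 + w[8])     # output unit

def fitness(w):
    """Negative summed squared error on XOR; higher is better."""
    return -sum((net(w, x) - y) ** 2 for x, y in XOR)

random.seed(42)
pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(30)]
for gen in range(300):
    pop.sort(key=fitness, reverse=True)
    # Truncation selection: keep the top third, refill with mutated copies.
    pop = pop[:10] + [[w + random.gauss(0, 0.4) for w in p]
                      for p in pop[:10] for _ in range(2)]
best = max(pop, key=fitness)
```

Because the topology is fixed, the genotype is simply a flat weight vector; the topology- and construction-evolving methods mentioned above replace that flat vector with richer representations.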


Risto Miikkulainen

Biosketches:

Risto Miikkulainen is a Professor of Computer Sciences at the University of Texas at Austin. He received an M.S. in Engineering from the Helsinki University of Technology, Finland, in 1986, and a Ph.D. in Computer Science from UCLA in 1990. His recent research focuses on methods for evolving neural networks and applying these methods to game playing, robotics, and intelligent control. He is an author of over 250 articles on neuroevolution, connectionist natural language processing, and the computational neuroscience of the visual cortex. He is an editor of the Machine Learning Journal and Journal of Cognitive Systems Research.

 

Kenneth O. Stanley

Kenneth O. Stanley is an Assistant Professor in the School of Computer Science at the University of Central Florida. His research focuses on artificially evolving complex solutions to difficult real-world tasks. He graduated with a B.S.E. in Computer Science Engineering and a minor in Cognitive Science from the University of Pennsylvania in 1997. He received an M.S. in Computer Science in 1999, and a Ph.D. in 2004 at the University of Texas at Austin. He is the co-chair of GECCO's 2008 Generative and Developmental Systems track.

He has won best paper awards for his work on NEAT (at the 2002 Genetic and Evolutionary Computation Conference) and for his work on NERO (at the IEEE 2005 Symposium on Computational Intelligence and Games), and also won the Independent Games Festival Student Showcase Award (at the 2006 Game Developers Conference) for NERO. He has published papers in JAIR, Evolutionary Computation, IEEE Transactions on Evolutionary Computation, and Artificial Life journals.

 


To Propose a Tutorial or Workshop
To propose a tutorial, contact Jano Van Hemert -
To propose a workshop, contact Marc Ebner -

Please include "GECCO" in your subject line.

   
     
 Genetic and Evolutionary Computation Conference (GECCO-2008)