In 2009, GECCO
will be held WEDNESDAY to SUNDAY, not the traditional Saturday
through Wednesday. However, WORKSHOPS and TUTORIALS will
still take place on the first two days of the conference,
Wednesday and Thursday. See schedule
Free Tutorials
and Workshops: Two days of free tutorials
and workshops (included with conference registration) presented
by some of
the world's foremost experts in topics of interest to genetic
and evolutionary computation researchers and practitioners.
Proceedings will be published and distributed to all registered
attendees.
To propose a tutorial, contact Martin V. Butz at 
Planned Free Tutorials
Introductory
Genetic Algorithms - Erik Goodman
Genetic Programming - Riccardo Poli, Nic McPhee
Evolution Strategies: Basic Introduction - Thomas Bäck
Evolutionary Computation: A Unified Approach - Kenneth De Jong
Financial Evolutionary Computing - Christopher D. Clack
Ant Colony Optimization - Christian Blum
Introduction to Learning Classifier Systems - Pier Luca Lanzi
Probabilistic Model-Building Algorithms (PMBGAs) - Martin Pelikan
Grammatical Evolution - Conor Ryan
Statistical Analysis for Evolutionary Computation: Introduction - Mark Wineberg, Steffen Christensen
Evolutionary Neural Networks - Risto Miikkulainen, Kenneth O. Stanley
Advanced
GA Theory - Jonathan Rowe
Genetic Programming Theory I & II - William B. Langdon, Riccardo Poli
No Free Lunch - Darrell Whitley
Bioinformatics - Jason Moore
Evolutionary Multiobjective Optimization - Eckart Zitzler
Constraint Handling Techniques Used with EAs - Carlos Coello Coello
Statistical Analysis for Evolutionary Computation: Advanced Techniques - Mark Wineberg, Steffen Christensen
Representations for Evolutionary Algorithms - Franz Rothlauf
Computational Complexity and Evolutionary Computation - Thomas Jansen, Frank Neumann
Experimental Research in EC - Mike Preuss, Thomas Bartz-Beielstein
Elementary Landscape Analysis for TSP, Graph Coloring, Graph Partitioning, and MAXSAT - Andrew Sutton, Darrell Whitley
Specialized Techniques and Applications
A Billion Bits or Bust: Little Models and Efficiency Enhancement for Fast, Effective Genetic Algorithms on Extremely Large, Hard Problems - Kumara Sastry
Accelerating Evolutionary Computation with Graphics Processing Units - Wolfgang Banzhaf, Simon Harding
Evolving Quantum Computer Algorithms - Lee Spector
Symbolic Regression - Maarten Keijzer
An Information Perspective on Evolutionary Computation - Yossi Borenstein
Generative and Developmental Systems - Kenneth O. Stanley
(Genetic and) Evolutionary Computer Vision - Mengjie Zhang, Stefano Cagnoni, Gustavo Olague
Evolutionary Design - Ian Parmee
Large scale data mining using Genetics-Based Machine Learning - Jaume Bacardit, Xavier Llorà
Evolutionary Multiobjective Combinatorial Optimization - Rajeev Kumar
Applications of Evolutionary Neural Networks - Risto Miikkulainen, Kenneth O. Stanley
Synthetic Biology II: Modeling & Optimization - Natalio Krasnogor
Experimental Optimization by Evolutionary Algorithms - Thomas Bäck, Ofer M. Shir
Synthetic Biology I: Fundamentals & Devices - Nawwaf Kharma, Luc Varin
Julian F. Miller, Simon Harding
Muddassar Farooq
Theory of Randomized Search Heuristics in Combinatorial Optimization - Carsten Witt
Fitness Landscapes and Graphs: Multimodularity, Ruggedness and Neutrality - Sébastien Verel
Evolution Strategies and Covariance Matrix Adaptation - Anne Auger, Nikolaus Hansen
Fitness Landscapes and Problem Hardness in Genetic Programming - Leonardo Vanneschi
Introductory
Genetic Algorithms:
Description of Tutorial
The Introduction to Genetic Algorithms Tutorial is aimed at GECCO attendees with limited knowledge of genetic algorithms. It will start “at the beginning,” first describing a “classical” genetic algorithm in terms of the biological principles on which it is loosely based, then presenting some of the fundamental results that describe its performance in terms of the schema concept. It will cover some variations on the classical model, some successful applications of genetic algorithms, and advances that are making genetic algorithms more useful.
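For readers who want to see the "classical" algorithm in code form, here is a minimal, hedged sketch (illustrative only, not material from the tutorial): bitstring individuals, tournament selection, one-point crossover and bit-flip mutation, with a OneMax-style placeholder fitness function.

```python
# Minimal generational GA sketch (illustrative only, not from the tutorial):
# bitstring individuals, tournament selection, one-point crossover, bit-flip
# mutation, and a OneMax-style fitness used purely as a placeholder.
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 50, 100, 60
CROSSOVER_RATE, MUTATION_RATE = 0.9, 1.0 / GENOME_LEN

def fitness(bits):                      # placeholder objective: count of 1s
    return sum(bits)

def tournament(pop, k=2):               # pick the best of k random individuals
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):                    # one-point crossover
    if random.random() < CROSSOVER_RATE:
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]
    return a[:]

def mutate(bits):                       # independent bit-flip mutation
    return [1 - g if random.random() < MUTATION_RATE else g for g in bits]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP_SIZE)]
print("best fitness:", max(map(fitness, population)))
```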

Erik Goodman
Biosketch:
Erik Goodman is a Professor of Electrical
and Computer Engineering and of Mechanical Engineering
at Michigan State
University. He studied genetic algorithms under John Holland
at the University of Michigan, before they had been named.
His use of a genetic algorithm in 1971 to solve for 40
coefficients of a highly nonlinear model of a bacterial
cell was the first known GA application on a real-world
problem, and ran for nearly a year on a dedicated computer.
He has used genetic algorithms in his research ever since,
including for parameterization of complex ecosystem models,
for evolution of cooperative behavior in artificial life,
for factory layout and scheduling, for protein folding
and ligand docking, for design of shape and layup of composite
structures, and for data mining and pattern classification.
Much of his recent research has been in development of new types
of parallel genetic algorithms and in design methods for mechatronic
systems using genetic programming. He is co-founder and VP Technology
of Red Cedar Technology, which provides tools for automated engineering
design based on evolutionary computation. He chaired ICGA-97 and
GECCO-2001, chaired GECCO’s sponsoring organization, ISGEC,
from 2001-2004, and is now chair of ACM SIGEVO, its successor.
Genetic Programming:
Description of Tutorial
Genetic programming (GP) is a collection of evolutionary computation
techniques that allow computers to solve problems automatically. Using
ideas from natural evolution, GP starts from an ooze of random
computer programs, and progressively refines them through processes of
mutation and sexual recombination, until solutions emerge. All this
without the user having to know or specify the form or structure of
solutions in advance.
Since its inception twenty years ago, GP has been used to solve
a wide range of practical problems, producing a number of human-competitive
results and even patentable new inventions. Like many other areas
of computer science, GP is evolving rapidly, with new ideas, techniques
and applications being constantly proposed, making it difficult
for people to identify the key ideas in the field and keep up with
the pace of new developments. The aim of this tutorial is to provide
a roadmap to GP for both newcomers and old-timers.
The tutorial starts with a gentle introduction which describes
how a population of programs is stored in the computer so that
they can evolve with time. We explain how programs are represented,
how random programs are initially created, and how GP creates a
new generation by mutating the better existing programs or combining
pairs of good parent programs to produce offspring programs. Then,
we describe a variety of alternative representations for programs
and some advanced GP techniques. These include: the evolution of
machine-code and parallel programs, the use of grammars and probability
distributions for the generation of programs, variants of GP which
allow the solution of problems with multiple objectives, many speed-up
techniques and some useful theoretical tools. To illustrate genetic
programming's scope, we then review some real-world applications
of GP. The tutorial is concluded by a series of recommendations
and suggestions to obtain the most from a GP system and open questions.
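To make the representation and variation operators described above concrete, the following is a small illustrative sketch (not the presenters' code): programs are nested tuples over a tiny primitive set, evolved by subtree mutation and subtree crossover against a toy symbolic-regression target.

```python
# Illustrative tree-based GP sketch: programs are nested tuples over {+, -, *}
# and the terminals {x, 1.0}, evolved against the toy target x*x + x + 1.
import random

FUNCS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}
TERMS = ['x', 1.0]

def random_tree(depth=3):                       # grow a random program tree
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return (random.choice(list(FUNCS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return FUNCS[op](evaluate(left, x), evaluate(right, x))

def nodes(tree, path=()):                       # enumerate paths to all nodes
    yield path
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], 1):
            yield from nodes(child, path + (i,))

def get(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def replace(tree, path, sub):
    if not path:
        return sub
    t = list(tree)
    t[path[0]] = replace(tree[path[0]], path[1:], sub)
    return tuple(t)

def mutate(tree):                               # subtree mutation
    return replace(tree, random.choice(list(nodes(tree))), random_tree(2))

def crossover(a, b):                            # subtree crossover
    return replace(a, random.choice(list(nodes(a))), get(b, random.choice(list(nodes(b)))))

def fitness(tree):                              # negative error against the target
    return -sum(abs(evaluate(tree, x) - (x * x + x + 1)) for x in range(-5, 6))

pop = [random_tree() for _ in range(200)]
for gen in range(30):
    pop = sorted(pop, key=fitness, reverse=True)
    parents = pop[:50]
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(150)]
print(max(pop, key=fitness))
```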
Riccardo Poli
Biosketches:
Riccardo Poli is a Professor
in the School of Computer Science and Electronic Engineering
at Essex. He started his academic career as an electronic engineer
doing a PhD in biomedical image analysis to later become an
expert in the field of EC. He has published around 250 refereed
papers and two books on the theory and applications of genetic
programming, evolutionary algorithms, particle swarm optimisation,
biomedical engineering, brain-computer interfaces, neural networks,
image/signal processing, biology and psychology. He is a Fellow
of the International Society for Genetic and Evolutionary Computation
(since 2003), a recipient of the EvoStar award for outstanding
contributions to this field (2007), and an ACM SIGEVO executive
board member (2007-2013).
He was co-founder and co-chair of the European
Conference on GP (1998-2000, 2003). He was general chair (2004),
track chair (2002,
2007), business committee member (2005), and competition chair
(2006) of ACM's Genetic and Evolutionary Computation Conference,
co-chair of the Foundations of Genetic Algorithms Workshop (2002)
and technical chair of the International Workshop on Ant Colony
Optimisation and Swarm Intelligence (2006). He is an associate
editor of Genetic Programming and Evolvable Machines, Evolutionary
Computation and the International Journal of Computational Intelligence
Research. He is an advisory board member of the Journal on Artificial
Evolution and Applications and an editorial board member of Swarm
Intelligence. He is a member of the EPSRC Peer Review College,
an EU expert evaluator and a grant-proposal referee for Irish,
Swiss and Italian funding bodies.
Nicholas Freitag
McPhee is a Full Professor in Computer Science
in the Division of Science and Mathematics, University
of Minnesota,
Morris. He is an associate editor of the Journal on Artificial
Evolution and Applications, an editorial board member of Genetic
Programming and Evolvable Machines, and has served on the program
committees for dozens of international events. He has extensive
expertise in the design of GP systems, and in the theoretical analysis
of their behaviours. His joint work with Poli on the theoretical
analysis of GP received the best paper award at the 2001 European
Conference on Genetic Programming, and several of his other
foundational studies continue to be widely cited.
Nicholas McPhee
He has also worked closely with biologists on a number of projects,
building individual-based models to illuminate genetic interactions
and changes
in the genotypic and phenotypic diversity of populations.
Evolution Strategies: Basic Introduction:
Description of Tutorial
This tutorial introduces the basics of evolution strategies, a branch of evolutionary
computation which focused already very early on the self-adaptation of strategy
parameters such as mutation variances. Based on the original (1+1)-evolution
strategy (using the 1/5 success rule for step size adaptation), the generalized
(1,λ)- and (μ,λ)-strategies use the principle of self-adapting mutation variances
extensively. With both mutation and recombination, the (μ,λ)-strategy is a
well-known and well-understood algorithm for mixed-integer optimization (including
the integer-only and the real-valued-only cases).
The tutorial will present all basic algorithmic variants as well
as guidelines for their implementation. Moreover, extensions such
as mixed-integer optimization and some derandomized variants of
evolution strategies are also introduced briefly.
Concerning the comparison to genetic algorithms and evolutionary
programming, we will use the concept of a generalized evolutionary
algorithm as introduced in [1], and outline the similarities and
differences of those classes of algorithms.
The tutorial will conclude with some examples of applications
of evolution strategies.
References
[1] Th. Bäck (1996): Evolutionary
Algorithms in Theory and Practice, Oxford University Press, NY.
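As a hedged illustration of the basic algorithm discussed above, here is a minimal (1+1)-ES sketch using the 1/5 success rule for step-size adaptation; the sphere objective and all constants are placeholders chosen for the example.

```python
# A minimal (1+1)-ES sketch with the 1/5 success rule (illustrative only).
import random

def sphere(x):                                   # placeholder objective (minimize)
    return sum(xi * xi for xi in x)

n, sigma = 10, 1.0
parent = [random.uniform(-5, 5) for _ in range(n)]
successes = 0
for t in range(1, 2001):
    child = [xi + sigma * random.gauss(0, 1) for xi in parent]
    if sphere(child) <= sphere(parent):          # accept improvements
        parent, successes = child, successes + 1
    if t % 20 == 0:                              # adapt step size every 20 steps
        rate = successes / 20.0
        sigma *= 1.22 if rate > 1 / 5 else 0.82  # grow sigma if >1/5 succeed, else shrink
        successes = 0
print("best value:", sphere(parent), "final sigma:", sigma)
```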
Thomas Bäck,
PhD, is Professor for Natural Computing at the Leiden Institute
for Advanced Computer Science (LIACS) at Leiden University,
The Netherlands, and Chief Scientist of the Qe3 group at
Netezza Corporation. Thomas received his PhD in Computer
Science from Dortmund University, Germany, in 1994.
Thomas Bäck has more than
150 publications on natural computing technologies, as well
as a book on evolutionary algorithms, entitled Evolutionary
Algorithms in Theory and Practice. He is an editorial board member
and associate editor of a number of journals on evolutionary
and natural computation, and has served as program chair
for all major conferences in evolutionary computation.
Thomas Bäck
His expertise lies in adaptive technologies
for optimization and data-driven modeling, predictive analytics,
and bioinformatics. He received the best dissertation award from
the Gesellschaft für Informatik (GI) in 1995 and is an elected
fellow of the International Society for Genetic and Evolutionary
Computation for his contributions to the field.
Evolutionary Computation: A Unified Approach:
Description
of Tutorial
The field of Evolutionary Computation (EC) has experienced tremendous growth
over the past 15 years, resulting in a wide variety of evolutionary algorithms
and applications. The result poses an interesting dilemma for many practitioners
in the sense that, with such a wide variety of algorithms and approaches, it
is often hard to see the relationships between them, assess strengths and weaknesses,
and make good choices for new application areas.
This tutorial is intended to give an overview of EC via a general framework that can help compare and contrast approaches, encourage crossbreeding, and facilitate intelligent design choices. The use of this framework is then illustrated by showing how traditional EAs can be compared and contrasted with it, and how new EAs can be effectively designed using it. Finally, the framework is used to identify some important open issues that need further research.
Kenneth A. De Jong
Biosketch:
Kenneth A. De Jong received his Ph.D.
in computer science from the University of Michigan in
1975. He joined George Mason University in 1984 and is
currently a Professor of Computer Science, head of the
Evolutionary Computation laboratory, and the associate
director of the Krasnow Institute. His research interests
include genetic algorithms, evolutionary computation, machine
learning, and adaptive systems. He is currently involved
in research projects involving the development of new evolutionary
algorithm (EA) theory, the use of EAs as heuristics for
NP-hard problems, and the application of EAs to the problem
of learning task programs in domains such as robotics,
diagnostics, navigation and game playing.
He is also interested in experience-based
learning in which systems must improve their performance while
actually performing the desired tasks in environments not directly under
their control or the control of a benevolent teacher. Support for
these projects is provided by DARPA, ONR, and NRL. He is an active
member of the Evolutionary Computation research community and has
been involved in organizing many of the workshops and conferences
in this area. He is the founding editor-in-chief of the journal
Evolutionary Computation (MIT Press), and a member of the board
of the ACM SIGEVO. He is the recipient of an IEEE Pioneer award
in the field of Evolutionary Computation and a lifetime achievement
award from the Evolutionary Programming Society.
Financial Evolutionary Computing:
Description
of Tutorial
Financial investment and trading have been transformed through the application
of mathematical analysis and computer technology. The research problems posed
by financial computing are extremely challenging, taxing both mathematicians
and computer scientists. While traditional computational techniques have yet
to provide an efficient means for numerical evaluation of the complex equations
produced by financial mathematicians, evolutionary computing has been remarkably
effective and Financial Evolutionary Computing is currently a fertile area
of research.
The Introduction to Financial Evolutionary Computing (FEC) Tutorial is aimed at GECCO attendees with limited knowledge of finance. The tutorial will introduce the area of FEC, provide a basic understanding of trading and investment, identify some of the main research challenges, describe who is working in the area, and explain how to get started on FEC research. Topics will include, for example, stock selection, calculation of value at risk, and modelling financial markets.
Christopher D. Clack
Biosketch:
Christopher D. Clack is
Director of Financial Computing at UCL. He founded UCL's
Virtual Trading Floor and has attracted funds exceeding £ 2.1
million in the last three years. He is Coordinator of the
PROFIT European Network in Financial Computing, which includes
UCL, Deutsche Bank, Reuters, and the universities of Athens,
Moscow, Prague, and Sofia. He was conference chair at Trade
Tech Architecture 2008, is a regular panel member at both
industry and academic conferences and workshops, and is also
presenting a tutorial on Financial Evolutionary Computing
at GECCO 2008.
His research team focuses on applying Genetic Computation and
multi-agent systems to real-world problems in finance, and his
work on GP robustness in highly volatile markets won a Best Paper
award at GECCO 2007. He has twenty years' experience of consulting in Financial Computing,
from settlement systems to portfolio optimization, and has established
very close partnerships with Credit Suisse, Goldman Sachs, Merrill
Lynch, Morgan Stanley, Reuters, and the London Stock Exchange. This
provides unrivalled exposure to the most pressing technology problems
in finance, coupled with invaluable access to real data.
Ant Colony Optimization:
Description of Tutorial
Ant colony optimization (ACO) is an approximate method for tackling combinatorial
as well as continuous optimization problems. From an artificial intelligence
point of view ACO is a swarm intelligence method, while from the operations research
point of view ACO belongs to the class of algorithms known as metaheuristics.
The inspiring source of ACO is the foraging behaviour of ant colonies. The aims
of this tutorial are twofold. First, we explain the basics of ACO algorithms
in order to satisfy participants that have no previous knowledge of this technique.
Second, we also deal with more advanced features of ACO with the aim of presenting
useful material for participants with some prior knowledge of ACO algorithms.
The tutorial starts by introducing the swarm intelligence origins
of ACO algorithms. Then, it is shown how to translate the biological
inspiration into a technical algorithm. Some of the most successful
ACO variants are presented in this context. More advanced features
of ACO algorithms are presented in the context of appealing applications.
Finally, the tutorial concludes with an interesting recent research
topic: the hybridization of ACO algorithms with other optimization
techniques.
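The following is a small Ant System-style sketch of the construction/evaporation/deposit cycle described above, applied to a random symmetric TSP instance; the parameter values and the instance are illustrative placeholders, not recommendations from the tutorial.

```python
# Ant System-style sketch for a random symmetric TSP (illustrative only).
import random

N, ANTS, ITERS = 12, 20, 100
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.5, 1.0
pts = [(random.random(), random.random()) for _ in range(N)]
dist = [[((pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2) ** 0.5 or 1e-9
         for j in range(N)] for i in range(N)]
tau = [[1.0] * N for _ in range(N)]              # pheromone matrix

def build_tour():
    tour, unvisited = [0], set(range(1, N))
    while unvisited:
        i = tour[-1]
        weights = [(j, (tau[i][j] ** ALPHA) * ((1.0 / dist[i][j]) ** BETA))
                   for j in unvisited]
        total = sum(w for _, w in weights)
        r, acc = random.uniform(0, total), 0.0
        for j, w in weights:                     # roulette-wheel choice of next city
            acc += w
            if acc >= r:
                tour.append(j)
                unvisited.remove(j)
                break
    return tour

def length(tour):
    return sum(dist[tour[k]][tour[(k + 1) % N]] for k in range(N))

best = None
for _ in range(ITERS):
    tours = [build_tour() for _ in range(ANTS)]
    for i in range(N):                           # pheromone evaporation
        for j in range(N):
            tau[i][j] *= (1 - RHO)
    for t in tours:                              # pheromone deposit on tour edges
        for k in range(N):
            a, b = t[k], t[(k + 1) % N]
            tau[a][b] += Q / length(t)
            tau[b][a] = tau[a][b]
    best = min(tours + ([best] if best else []), key=length)
print("best tour length:", length(best))
```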
Christian Blum
Biosketch:
Christian Blum received
the doctoral degree in 2004 from the Free University
of Brussels (Belgium). After spending one year at the
Advanced Computation Laboratory of the Imperial Cancer
Research Fund (ICRF) in London, he worked at IRIDIA at
the Free University of Brussels under the supervision
of Marco Dorigo. He currently is a "Ramon y Cajal" research
fellow at the ALBCOM research group of the Universitat
Politècnica de Catalunya in Barcelona. The current
subject of his research is the use of swarm intelligence
techniques for the management of large-scale mobile ad-hoc
and sensor networks, as well as the hybridization of
metaheuristics with more classical artificial intelligence
and operations research methods.
He is a member of the editorial boards of international
journals such as Computers & Operations
Research and Swarm Intelligence. As a co-founder, he initiated the series of
International Workshops on Hybrid Metaheuristics (HM). Finally, he gave invited
tutorials at conferences such as PPSN, GECCO, and HIS.
Introduction to Learning Classifier Systems:
Description of Tutorial
Broadly conceived as computational models of cognition and tools
for modeling complex adaptive systems, later extended for use
in adaptive
robotics, and today also applied to effective classification and
data mining: what is a learning classifier system? How does it work?
What’s the theory behind its functioning? What are the most interesting
research directions? What are the applications? And what are the relevant open
issues? This introductory tutorial tries to answer these questions. It
provides a gentle introduction to learning classifier systems and
overviews the theoretical understanding we have today, the current
research directions, the most interesting applications, and the open
issues.
Pier Luca Lanzi
Biosketch:
Pier Luca Lanzi was
born in Turin, Italy, in 1967. He received the Laurea degree
in
computer science from the Università degli Studi di Udine, in 1994 and
the Ph.D. degree in computer and automation engineering from the Politecnico
di Milano, Milan, Italy, in 1999. He is an Associate Professor with the Department
of Electronics and Information, Politecnico di Milano. He is member of the Editorial
Board of the Evolutionary Computation Journal and Editor of SIGEVOlution, the
newsletter of the Special Interest Group on Genetic and Evolutionary Computation
(SIGEVO). His research areas include evolutionary computation and machine learning.
He is interested in applications to data
mining and autonomous agents.
Probabilistic Model-Building Algorithms (PMBGAs):
Description of Tutorial
Probabilistic model-building genetic algorithms (PMBGAs) replace the traditional variation
operators of genetic and evolutionary algorithms by (1) building a probabilistic model
of promising solutions and (2) sampling the built model to generate new candidate
solutions. PMBGAs are also known as estimation of distribution algorithms (EDAs)
and iterated density-estimation algorithms (IDEAs).
Replacing traditional crossover and mutation operators
by building and sampling a probabilistic model of promising solutions
enables the use of machine learning techniques for automatic discovery
of problem regularities and exploitation of these regularities
for effective exploration of the search space. Using machine learning
in optimization enables the design of optimization techniques that
can automatically adapt to the given problem. There are many successful
applications of PMBGAs, for example, Ising spin glasses in 2D and
3D, graph partitioning, MAXSAT, feature subset selection, forest
management, groundwater remediation design, telecommunication network
design, antenna design, and scheduling.
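The build-model/sample-model loop can be illustrated with a compact univariate EDA (UMDA-style) sketch on a toy OneMax problem; all settings here are illustrative assumptions, not material from the tutorial.

```python
# Univariate EDA (UMDA-style) sketch on OneMax, illustrating the
# build-model / sample-model loop described above (illustrative settings).
import random

N, POP, SELECT, GENERATIONS = 40, 200, 100, 50

def fitness(bits):
    return sum(bits)                                    # OneMax placeholder

probs = [0.5] * N                                       # univariate model: P(bit_i = 1)
for gen in range(GENERATIONS):
    pop = [[1 if random.random() < p else 0 for p in probs] for _ in range(POP)]
    pop.sort(key=fitness, reverse=True)
    selected = pop[:SELECT]                             # promising solutions
    # build a new model from the marginal frequencies of the selected set
    probs = [sum(ind[i] for ind in selected) / SELECT for i in range(N)]
    probs = [min(max(p, 1.0 / N), 1.0 - 1.0 / N) for p in probs]  # keep away from 0/1
print("best:", fitness(max(pop, key=fitness)))
```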
The tutorial Probabilistic Model-Building GAs will
provide a gentle introduction to PMBGAs with an overview of major
research directions in this area. Strengths and weaknesses of different
PMBGAs will be discussed and suggestions will be provided to help
practitioners to choose the best PMBGA for their problem.
Martin Pelikan
Biosketch:
Martin Pelikan received his Ph.D. in Computer
Science from the University of Illinois at Urbana-Champaign
in
2002. He is now an associate professor at
the Department of Mathematics and Computer Science at the University of
Missouri in St. Louis. In 2006 Pelikan founded the Missouri Estimation of
Distribution Algorithms Laboratory (MEDAL). Prior to joining the
University of Missouri, he worked at the Illinois Genetic Algorithms
Laboratory (IlliGAL), the German National Center for Information
Technology in Sankt Augustin, the Slovak University of Technology in
Bratislava, and the Swiss Federal Institute of Technology (ETH) in Zurich.
Pelikan has worked as a
researcher in genetic and evolutionary computation since 1995.
His most important contributions to genetic and evolutionary
computation are the Bayesian optimization algorithm (BOA), the
hierarchical BOA (hBOA), the scalability theory for BOA and hBOA, and the
efficiency enhancement techniques for BOA and hBOA. His current research
focuses on extending BOA and hBOA to other problem domains, applying
genetic and evolutionary algorithms to real-world problems with the focus
on physics, bioinformatics and machine learning, and designing new
efficiency enhancement techniques for BOA and other evolutionary
algorithms.
Evolving Neural Networks
Description of Tutorial
Neuroevolution, i.e. evolution of artificial neural networks, has
recently emerged as a powerful technique for solving challenging
reinforcement learning problems. Compared to traditional
(e.g. value-function based) methods, neuroevolution is especially
strong in domains where the state of the world is not fully known: the
state can be disambiguated through recurrency, and novel situations
handled through pattern matching. In this tutorial, we will review (1)
neuroevolution methods that evolve fixed-topology networks, network
topologies, and network construction processes, (2) ways of combining
traditional neural network learning algorithms with evolutionary
methods, and (3) applications of neuroevolution to game playing, robot
control, resource optimization, and cognitive science.
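As a hedged illustration of fixed-topology neuroevolution, the sketch below evolves the weights of a tiny 2-2-1 feedforward network on XOR, with the supervised error standing in for a reinforcement-learning fitness signal; it is not the presenters' code.

```python
# Fixed-topology neuroevolution sketch: a (mu + lambda)-style loop evolves the
# weights of a 2-2-1 feedforward network on XOR (illustrative only).
import math, random

INPUTS, HIDDEN = 2, 2
N_WEIGHTS = (INPUTS + 1) * HIDDEN + (HIDDEN + 1)    # weights plus biases
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    h = []
    for j in range(HIDDEN):                          # tanh hidden layer
        base = j * (INPUTS + 1)
        s = w[base + INPUTS] + sum(w[base + i] * x[i] for i in range(INPUTS))
        h.append(math.tanh(s))
    base = HIDDEN * (INPUTS + 1)
    out = w[base + HIDDEN] + sum(w[base + j] * h[j] for j in range(HIDDEN))
    return 1.0 / (1.0 + math.exp(-out))              # sigmoid output

def fitness(w):                                      # higher is better
    return -sum((forward(w, x) - y) ** 2 for x, y in CASES)

def mutate(w, sigma=0.4):                            # Gaussian weight perturbation
    return [wi + random.gauss(0, sigma) for wi in w]

pop = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)] for _ in range(50)]
for gen in range(300):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [mutate(random.choice(parents)) for _ in range(40)]
best = max(pop, key=fitness)
print([round(forward(best, x), 2) for x, _ in CASES])
```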
Risto Miikkulainen
Biosketch:
Risto Miikkulainen is a Professor of Computer Sciences
at the University of Texas at Austin. He received an M.S.
in Engineering from the Helsinki University of Technology,
Finland, in 1986, and a Ph.D. in Computer Science from
UCLA in 1990. His recent research focuses on methods for
evolving neural networks and applying these methods to
game playing, robotics, and intelligent control. He is
an author of over 250 articles on neuroevolution, connectionist
natural language processing, and the computational neuroscience
of the visual cortex. He is an editor of Machine Learning,
Neural Networks, IEEE Transactions on Computational Intelligence
and AI in Games, Cognitive Systems Research, and IEEE Transactions
on Autonomous Mental
Development.
Advanced
Genetic Programming Theory I & II:
Description of Tutorial
We start by describing and characterising the search space explored by genetic
programming (GP). We show how to compute the size of the search space. Then,
we introduce some work on the distribution of functionality of the programs
in the search space and indicate its scientific and practical consequences.
In particular, we explain why GP can work despite the immensity of the search
space it explores. Then, we show recent theory on halting probability that
extends these results to forms of Turing complete GP. This indicates that Turing
complete GP has a hard time solving problems unless certain measures are put
in place.
Having characterised the search space, in the second part of the tutorial,
we characterise GP as a search algorithm by using the schema theory. In the
tutorial we introduce the basics of schema theory, explaining how one can derive
an exact probabilistic description of GAs and GP based on schemata. We illustrate
the lessons that can be learnt from the theory and indicate some recipes to
do GP well for practitioners. These include important new results on bloat
in GP and ways to cure it.
Despite its technical contents, a big effort has been made to limit the use
of mathematical formulae to a minimum.
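As a hedged worked example of the kind of search-space counting the first part refers to, the sketch below counts distinct program trees of depth at most d for a simplified primitive set with only binary functions; the primitive-set sizes are invented for illustration.

```python
# Counting program trees of depth at most d, given t terminals and f binary
# functions (a simplified setting chosen for illustration).
def trees_up_to_depth(d, terminals, binary_functions):
    if d == 0:                                   # depth 0: a single terminal node
        return terminals
    smaller = trees_up_to_depth(d - 1, terminals, binary_functions)
    # a tree is either a terminal, or a binary function applied to two subtrees
    return terminals + binary_functions * smaller * smaller

for d in range(7):
    print(d, trees_up_to_depth(d, terminals=3, binary_functions=4))
# The count grows doubly exponentially with depth, which is why the GP search
# space is so vast even for small primitive sets.
```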
William
B. Langdon
Biosketches:
William B. Langdon, was research officer for the Central Electricity
Research Laboratories and project manager and technical coordinator
for Logica before becoming a prolific, internationally recognised
researcher (working at UCL, Birmingham, CWI and Essex). He has
written two books, edited six more, and published over 80 papers in
international conferences and journals. He is the resource review
editor for Genetic Programming and Evolvable Machines and a member of
the editorial board of Evolutionary Computation. He has been a
co-organiser of eight international conferences and workshops, and has
given nine tutorials at international conferences. He was elected
ISGEC Fellow for his contributions to EC.
Dr Langdon has extensive experience designing and implementing GP systems, and is a leader in both the empirical and theoretical analysis of evolutionary systems. He also has broad experience both in industry and academic settings in biomedical engineering, drug design, and bioinformatics.
Riccardo Poli is a Professor in the School of Computer Science and Electronic Engineering at Essex. He started his academic career as an electronic engineer doing a PhD in biomedical image analysis to later become an expert in the field of EC. He has published around 250 refereed papers and two books on the theory and applications of genetic programming, evolutionary algorithms, particle swarm optimisation, biomedical engineering, brain-computer interfaces, neural networks, image/signal processing, biology and psychology. He is a Fellow of the International Society for Genetic and Evolutionary Computation (since 2003), a recipient of the EvoStar award for outstanding contributions to this field (2007), and an ACM SIGEVO executive board member (2007-2013).
Riccardo Poli
He was co-founder and co-chair of the European Conference on GP (1998-2000, 2003). He was general chair (2004), track chair (2002, 2007), business committee member (2005), and competition chair (2006) of ACM's Genetic and Evolutionary Computation Conference, co-chair of the Foundations of Genetic Algorithms Workshop (2002) and technical chair of the International Workshop on Ant Colony Optimisation and Swarm Intelligence (2006). He is an associate editor of Genetic Programming and Evolvable Machines, Evolutionary Computation and the International Journal of Computational Intelligence Research. He is an advisory board member of the Journal on Artificial Evolution and Applications and an editorial board member of Swarm Intelligence. He is a member of the EPSRC Peer Review College, an EU expert evaluator and a grant-proposal referee for Irish, Swiss and Italian funding bodies.
Bioinformatics: Description
of Tutorial
Bioinformatics is an interdisciplinary field of study that blends computer
science and statistics with the biological sciences. Important objectives of
bioinformatics include the storage, management and retrieval of high-dimensional
biological data and the detection, characterization, and interpretation of
patterns in such data. The goal of this tutorial is to provide an entry-level
introduction to bioinformatics for those hoping to apply genetic and evolutionary
computation to solving complex biological problems. Specific examples of how
methods such as genetic programming have been applied in bioinformatics will
be presented.
Jason H. Moore
Biosketch:
Jason H. Moore, Ph.D.: Dr. Moore received his B.S. in Biological
Sciences from Florida State University. He then received an M.S. in
Human Genetics, an M.A. in Statistics and a Ph.D. in Human Genetics from
the University of Michigan. He then served as Assistant Professor of
Molecular Physiology and Biophysics (1999-2003) and Associate Professor
of Molecular Physiology and Biophysics with tenure (2003-2004) at
Vanderbilt University. While at Vanderbilt, Dr. Moore held an endowed
position as an Ingram Associate Professor of Cancer Research. He also
served as Director of the Bioinformatics Core and Co-Founder and
Co-Director of the Vanderbilt Advanced Computing Center for Research and
Education (ACCRE).
In 2004, Dr. Moore accepted a position as the Frank Lane Research Scholar in Computational Genetics, Associate Professor of Genetics, Associate Professor of Community and Family Medicine, and Director of Bioinformatics at Dartmouth Medical School. He was promoted to Full Professor with tenure in 2008. He also holds adjunct positions in the Department of Computer Science at the University of New Hampshire, the Department of Computer Science at the University of Vermont and as an Adjunct Investigator at the Translational Genomics Research Institute (TGen) in Phoenix. Dr. Moore serves as Director of the Computational Genetics Laboratory and the Bioinformatics Shared Resource for the Norris-Cotton Cancer Center, Director of the Integrative Biology Core for the Center for Environmental Health Sciences and Founder and Director of The DISCOVERY Resource, a 500-processor parallel computer cooperatively operated for the Dartmouth community. His research has been communicated in more than 175 scientific publications and is supported by three NIH R01 grants in his name. He has previously served as Program Chair for the Bioinformatics and Computational Biology track at GECCO and as Chair of the European Conference on Evolutionary Computation, Machine Learning and Data Mining in Bioinformatics (EvoBIO). He is also Founder and Chair of the GECCO Workshop on Open-Source Software for Genetic and Evolutionary Computation (SoftGEC).
Evolutionary
Multiobjective Optimization:
Description
of Tutorial
Multiple, often conflicting objectives arise naturally in most real-world optimization scenarios. As evolutionary algorithms possess several characteristics that are desirable for this type of problem, this class of search strategies has been used for multiobjective optimization for more than a decade. Meanwhile, evolutionary multiobjective optimization has become established as a separate subdiscipline combining the fields of evolutionary computation and classical multiple criteria decision making.
This tutorial gives an overview of evolutionary multiobjective optimization (EMO) with the focus on methods and theory. On the one hand, basic principles of multiobjective optimization are presented, and various algorithmic aspects such as fitness assignment and environmental selection are discussed in the light of state-of-the-art techniques. On the other hand, the tutorial covers several theoretical issues such as performance assessment and running-time analysis.
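As a small illustration of the Pareto-dominance relation underlying the fitness assignment and environmental selection mentioned above, here is a hedged sketch (assuming minimization of all objectives; the sample points are invented).

```python
# Pareto dominance and non-dominated filtering, assuming minimization.
def dominates(a, b):
    """a dominates b: no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

objectives = [(1.0, 5.0), (2.0, 2.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
print(non_dominated(objectives))   # [(1.0, 5.0), (2.0, 2.0), (4.0, 1.0)]
```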
The presentation is aimed at both novices and users of EMO. Those without any knowledge of EMO will gain an adequate idea of the procedures and their importance in computing and problem-solving tasks. Those who have been practicing EMO will also find enough ideas and material for future research, learn about state-of-the-art results and techniques, and be able to make a comparative evaluation of their own research.
Eckart Zitzler
Biosketch:
Eckart Zitzler received degrees from the University of Dortmund in Germany (diploma in computer science) and ETH Zurich in Switzerland (doctor of
technical sciences). Since 2003, he has been Assistant Professor for Systems
Optimization at the Computer Engineering and Networks Laboratory at the
Department of Information Technology and Electrical Engineering of ETH
Zurich, Switzerland. His research focuses on the development and
investigation of nature-inspired search methods for multiobjective
optimization and their application to systems analysis and systems design,
mainly in Computer Engineering and Systems Biology. Dr. Zitzler was General
Co-Chairman of the first three international conferences on evolutionary
multi-criterion optimization (EMO 2001, EMO 2003, and EMO 2005).
Constraint-Handling Techniques used with Evolutionary Algorithms
Description
of Tutorial
Evolutionary
Algorithms (EAs), when used for global
optimization, can be seen as unconstrained
optimization techniques. Therefore,
they require an additional mechanism to incorporate constraints of
any kind (i.e., inequality, equality, linear, nonlinear) into their fitness
function. Although the use of penalty functions (very popular with
mathematical programming techniques) may seem an obvious choice,
this sort of approach requires a careful fine tuning of the penalty
factors to be used. Otherwise, an EA may be unable to reach the
feasible region (if the penalty is too low) or may quickly reach
the feasible region but be unable to locate solutions that lie on
the boundary with the infeasible region (if the penalty is too severe).
This has motivated the development of a number of approaches to
incorporate constraints into the fitness function of an EA. This tutorial
will cover the main proposals in current use, including novel approaches
such as the use of tournament rules based on feasibility, multiobjective
optimization concepts, hybrids with mathematical programming
techniques (e.g., Lagrange multipliers), cultural algorithms, and artificial
immune systems, among others. Other topics such as the importance of
maintaining diversity, current benchmarks and the use of alternative
search engines (e.g., particle swarm optimization, differential evolution,
evolution strategies, etc.) will be also discussed (as time allows).
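As a concrete illustration of the feasibility-based tournament rules mentioned above (often attributed to Deb), here is a hedged sketch; the constraint and objective functions are invented placeholders.

```python
# Feasibility-based tournament comparison: constraint violation is compared
# before fitness. violation() and fitness() are placeholders for this example.
def violation(x):
    """Sum of constraint violations; 0 means x is feasible (placeholder)."""
    g1 = x[0] + x[1] - 1.0             # example constraint: x0 + x1 <= 1
    return max(0.0, g1)

def fitness(x):                         # placeholder objective (minimization)
    return (x[0] - 0.2) ** 2 + (x[1] - 0.3) ** 2

def tournament_winner(a, b):
    va, vb = violation(a), violation(b)
    if va == 0 and vb == 0:             # both feasible: better fitness wins
        return a if fitness(a) <= fitness(b) else b
    if va == 0 or vb == 0:              # one feasible: it wins
        return a if va == 0 else b
    return a if va <= vb else b         # both infeasible: smaller violation wins

print(tournament_winner((0.4, 0.4), (0.9, 0.9)))   # feasible beats infeasible
```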
Carlos Artemio Coello Coello
Biosketch:
Carlos Artemio Coello Coello received a BSc in Civil Engineering from the Universidad Autonoma de Chiapas in Mexico in 1991 (graduating Summa Cum Laude). Then, he was awarded a scholarship from the Mexican government to pursue graduate studies in Computer Science at Tulane University (in the USA). He received a MSc and a PhD in Computer Science in 1993 and 1996, respectively. Dr. Coello has been a Senior Research Fellow in the Plymouth Engineering Design Centre (in England) and a Visiting Professor at DePauw University (in the USA). He is currently full professor at CINVESTAV-IPN in Mexico City, Mexico.
He has published over 200 papers in international peer-reviewed journals, book chapters, and conferences. He has also co-authored the book "Evolutionary Algorithms for Solving Multi-Objective Problems", which is now in its Second Edition (Springer, 2007), and has co-edited the book "Applications of Multi-Objective Evolutionary Algorithms" (World Scientific, 2004).
He has delivered invited talks, keynote speeches and tutorials at international conferences held in Spain, USA, Canada, Switzerland, UK, Chile, Colombia, Brazil, Argentina, India, Uruguay and Mexico.
Dr. Coello has served as a technical reviewer for over 50 international journals and for more than 60 international conferences, and currently serves as associate editor of the journals "IEEE Transactions on Evolutionary Computation", "Evolutionary Computation", "Computational Optimization and Applications", "Pattern Analysis and Applications" and the "Journal of Heuristics". He is also a member of the editorial boards of the journals "Soft Computing", "Engineering Optimization", the "Journal of Memetic Computing" and the "International Journal of Computational Intelligence Research". He received the 2007 National Research Award (granted by the Mexican Academy of Science) in the area of "exact sciences". He is member of the Mexican Academy of Science, Senior Member of the IEEE, and member of Sigma Xi, The Scientific Research Society. He is also a member of the Council of Authors of the "International Society for Genetic and Evolutionary Computation". His current research interests are: evolutionary multiobjective optimization, constraint-handling techniques for evolutionary algorithms and evolvable hardware.
Representations
for Evolutionary Algorithms:
Description
of Tutorial
Successful and efficient use of evolutionary algorithms (EAs) depends on the choice of the genotype, the problem representation (mapping from genotype to phenotype) and on the choice of search operators that are applied to the genotypes.
These choices cannot be made independently of each other. The question whether
a certain representation leads to better performing EAs than an alternative representation
can only be answered when the operators applied are taken into consideration.
The reverse is also true: deciding between alternative operators is only meaningful
for a given representation.
In EA practice one can distinguish two complementary approaches. The first approach uses indirect representations where a solution is encoded in a standard data structure, such as strings, vectors, or discrete permutations, and standard off-the-shelf search operators are applied to these genotypes. To evaluate the solution, the genotype needs to be mapped to the phenotype space. The proper choice of this genotype-phenotype mapping is important for the performance of the EA search process. The second approach, the direct representation, encodes solutions to the problem in its most 'natural' space and designs search operators to operate on this representation.
Research in the last few years has identified a number of key concepts to analyse the influence of representation-operator combinations on EA performance. These concepts are
• locality and
• redundancy.
Locality is a result of the interplay between the search operator and the genotype-phenotype mapping. Representations are redundant if the number of possible genotypes exceeds the number of phenotypes. Furthermore, redundant representations can lead to biased encodings if some phenotypes are on average represented by a larger number of genotypes. Finally, a bias need not be the result of the representation but can also be caused by the search operator.
The tutorial gives a brief overview of existing guidelines for representation design, illustrates the different aspects of representations, gives a brief overview of theoretical models describing the different aspects, and illustrates the relevance of these aspects with practical examples.
It is expected that the participants have a basic understanding of EA principles.
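To make redundancy concrete, here is a toy illustration (an invented example, not from the tutorial) of a redundant genotype-phenotype mapping in which three genotype bits decode to one phenotype bit by majority vote.

```python
# A redundant genotype-phenotype mapping: 3 genotype bits per phenotype bit,
# decoded by majority vote, so several genotypes map to each phenotype.
from itertools import product
from collections import Counter

def decode(genotype):
    """Majority-vote mapping from groups of 3 bits to one phenotype bit."""
    return tuple(int(sum(genotype[i:i + 3]) >= 2) for i in range(0, len(genotype), 3))

counts = Counter(decode(g) for g in product((0, 1), repeat=6))   # 2 phenotype bits
print(counts)
# Every 2-bit phenotype is reachable, but each one is covered by 16 of the 64
# genotypes, i.e. the number of genotypes exceeds the number of phenotypes.
```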
Franz Rothlauf
Biosketch:
Franz Rothlauf received a Diploma in Electrical Engineering from the University of Erlangen, Germany, a Ph.D. in Information Systems from the University of Bayreuth, Germany, and a Habilitation from the University of Mannheim, Germany, in 1997, 2001, and 2007, respectively. He is currently full professor at the Department of Information Systems, Mainz University, Germany. He has published more than 50 technical papers in the context of evolutionary computation, co-edited several conference proceedings, and is author of the book "Representations for Genetic and Evolutionary Algorithms", which was published in a second edition in 2006.
His main research interests are problem representations for heuristic search approaches, especially evolutionary algorithms. For several years he was a visiting researcher at the Illinois Genetic Algorithm Laboratory (IlliGAL). He is a member of the Editorial Board of the Evolutionary Computation Journal, Information Sciences, and the Journal of Artificial Evolution and Applications.
He has been organizer of several workshops on representation issues, chair of EvoWorkshops in 2005 and 2006, co-organizer of the European workshop series on "Evolutionary Computation in Communications, Networks, and Connected Systems", co-organizer of the European workshop series on "Evolutionary Computation in Transportation and Logistics", and co-chair of the program committee of the GA track at GECCO 2006. He is conference chair of GECCO 2009.
Computational
Complexity and Evolutionary
Computation:
Description
of Tutorial
Evolutionary algorithms and other nature-inspired search heuristics like
ant colony optimization have been shown to be very successful when dealing
with real-world applications or problems from combinatorial optimization.
In recent years, analyses have shown that these general randomized search
heuristics can be analyzed like "ordinary" randomized algorithms
and that such analyses of the expected optimization time yield deeper insights
in the functioning of evolutionary algorithms in the context of approximation
and optimization. This is an important research area where a lot of interesting
questions are still open.
The tutorial enables attendees to analyze the computational complexity of evolutionary algorithms and other search heuristics in a rigorous way. An overview of the tools and methods developed within the last 15 years is given and practical examples of the application of these analytical methods are presented.
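In the spirit of such analyses, the following hedged sketch measures the number of fitness evaluations a (1+1)-EA needs on OneMax, whose expected optimization time is known to be Theta(n log n); problem sizes and repetition counts are arbitrary choices for illustration.

```python
# Counting the fitness evaluations of a (1+1)-EA on OneMax (illustrative).
import random

def one_max(bits):
    return sum(bits)

def one_plus_one_ea(n):
    x = [random.randint(0, 1) for _ in range(n)]
    evaluations = 1
    while one_max(x) < n:
        y = [1 - b if random.random() < 1.0 / n else b for b in x]   # standard bit mutation
        evaluations += 1
        if one_max(y) >= one_max(x):                                 # accept if not worse
            x = y
    return evaluations

for n in (20, 40, 80):
    runs = [one_plus_one_ea(n) for _ in range(20)]
    print(n, sum(runs) / len(runs))   # average evaluations grow roughly like n log n
```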
Thomas Jansen
Biosketches:
Thomas Jansen, born 1969, studied Computer
Science at the University of Dortmund, Germany. He received his diploma
(1996) and Ph.D. (2000) there. From September 2001 to August 2002
he stayed as a Post-Doc at Kenneth De Jong's EClab at the George
Mason University in Fairfax, VA. Since September 2002 he holds a
position as Juniorprofessor for Computational Intelligence at the
Technical University of Dortmund. He is an associate editor of Evolutionary
Computation (MIT Press). His research is centered around design and
theoretical analysis of evolutionary algorithms and other randomized
search heuristics. He is a program chair at PPSN 2008 and co-organizes
FOGA 2009.
Frank Neumann studied Technical Computer Science in Dortmund and Kiel, Germany. He received his diploma (2002) and Ph.D. (2006) from the Christian-Albrechts-University of Kiel. Since November 2006 he has been a Post-Doc in the research group "Algorithms and Complexity" (headed by Kurt Mehlhorn) of the Max-Planck-Institute for Computer Science in Saarbrücken, Germany. A central topic in his work is the theoretical analysis of randomized search heuristics, in particular for problems from combinatorial optimization. He has been a co-chair of the track "Formal Theory" at GECCO 2007 and chaired the track "Evolutionary Combinatorial Optimization" at GECCO 2008.
Frank Neumann
Experimental
Research in EC:
Description
of Tutorial
It is an open secret that the performance of algorithms depends on their parameterizations
--- and on the parameterizations of the problem instances. However, these dependencies
can be seen as means for understanding an algorithm's behavior. Based on modern
statistical techniques we demonstrate how to tune and understand algorithms. We present a comprehensive,
effective and very efficient methodology for the design and experimental analysis of direct search techniques such as evolutionary algorithms, differential evolution, pattern search or even classical deterministic methods such as the Nelder-Mead simplex algorithm. Our approach extends the sequential parameter optimization (SPO) method that has been successfully applied as a tuning procedure to numerous heuristics for practical and theoretical optimization problems.
Optimization practitioners receive valuable hints for choosing an adequate heuristic for their optimization problems---theoreticians receive guidelines for testing results systematically on real problem instances. Based on several examples from theory and practice we demonstrate how SPO improves the performance of many search heuristics significantly. However, this performance gain is not available for free. Therefore, the costs of this tuning process are discussed, as well as its limitations and a number of currently unresolved open issues in experimental research on algorithms.
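The sketch below shows the general shape of such a tuning loop: candidate parameter settings are compared by mean performance over repeated runs of the target heuristic. It is a deliberately simple comparison tuner used for illustration only, not the actual SPO procedure, and the heuristic and candidate values are invented placeholders.

```python
# A simple parameter-tuning loop: compare candidate settings by their mean
# performance over several independent runs (illustrative stand-in for SPO).
import random

def target_heuristic(mutation_rate, n=30, budget=2000):
    """Placeholder (1+1)-style heuristic on OneMax; returns best fitness found."""
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(budget):
        y = [1 - b if random.random() < mutation_rate else b for b in x]
        if sum(y) >= sum(x):
            x = y
    return sum(x)

def tune(candidates, repeats=10):
    scored = []
    for rate in candidates:
        mean = sum(target_heuristic(rate) for _ in range(repeats)) / repeats
        scored.append((mean, rate))
    return max(scored)        # best mean performance and the setting that achieved it

print(tune([0.001, 0.01, 0.03, 0.1, 0.3]))
```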
Mike Preuss
Biosketches:
Mike Preuss, is Research Associate at the Computer Science
Department, University of Dortmund, Germany (since 2000), where he also
received his Diploma degree in 1998.
His research interests focus on the field of evolutionary algorithms
for real-valued problems, namely on multimodal and multiobjective niching
and the experimental methodology for (non-deterministic) optimization
algorithms.
He is currently working on the adaptability and applicability of computational
intelligence techniques for various engineering domains and computer
games.
Thomas Bartz-Beielstein
Dr. Thomas Bartz-Beielstein is a professor for Applied Mathematics at Cologne University of Applied Sciences. He has published more than 50 research papers, conference publications, and several books in the field of Computational Intelligence. His research interests include optimization, simulation, and statistical analysis of complex real-world problems.
Specialized Techniques and Applications
Evolving Quantum Computer Algorithms:
Description
of Tutorial
Computer science will be radically transformed if ongoing efforts to
build large-scale quantum computers eventually succeed and if the
properties of these computers meet optimistic expectations.
Nevertheless, computer scientists still lack a thorough understanding
of the power of quantum computing, and it is not always clear how best
to utilize the power that is understood. This dilemma exists because
quantum algorithms are difficult to grasp and even more difficult to
write. Despite large-scale international efforts, only a few important
quantum algorithms are documented, leaving many essential questions
about the potential of quantum algorithms unanswered.
These unsolved problems are ideal challenges for the application of automatic programming technologies. Genetic programming techniques, in particular, have already produced several new quantum algorithms and it is reasonable to expect further discoveries in the future. These methods will help researchers to discover how additional practical problems can be solved using quantum computers, and they will also help to guide theoretical work on both the power and limits of quantum computing.
This tutorial will provide an introduction to quantum computing and an introduction to the use of evolutionary computation for automatic quantum computer programming. No background in physics or in evolutionary computation will be assumed. While the primary focus of the tutorial will be on general concepts, specific results will also be presented, including human-competitive results produced by genetic programming. Follow-up material is available from the presenter's book, Automatic Quantum Computer Programming: A Genetic Programming Approach, published by Springer and Kluwer Academic Publishers.
Lee Spector
Biosketch:
Lee Spector is a Professor of Computer Science in the School of Cognitive Science at Hampshire College in Amherst, Massachusetts. He received a B.A. in Philosophy from Oberlin College in 1984 and a Ph.D. from the Department of Computer Science at the University of Maryland in 1992. His areas of teaching and research include genetic and evolutionary computation, quantum computation, and a variety of intersections between computer science, cognitive science, evolutionary biology, and the arts. He is a member of the SIGEVO executive committee and he was named a Fellow of the International Society for Genetic and Evolutionary Computation.
He is an active
author and editor in the field of genetic programming and he serves on
the editorial boards of the journals Genetic Programming and Evolvable
Machines and Evolutionary Computation.
More info: http://hampshire.edu/lspector
Accelerating
Evolutionary Computation with Graphics
Processing Units
Description
of Tutorial
Graphics Processing Units (GPUs) have become a major source of computational power for numerical applications. Originally designed to accelerate time-consuming graphics operations, GPUs are stream processors that implement the SIMD paradigm. Modern programming tools allow developers to access the parallelism of the GPU in a flexible and convenient way, hiding many low-level details from the user.
In this tutorial we shall provide a gentle introduction to how to put GPUs to good use with Evolutionary Computation.
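The data-parallel pattern at the heart of this approach can be sketched as follows, with numpy used as a CPU stand-in for illustration; on an actual GPU the same population-wide evaluation would be expressed with CUDA kernels or a GPU array library, and the fitness function here is an invented placeholder.

```python
# Evaluating a whole population in one vectorized call instead of looping over
# individuals; numpy is a CPU stand-in for the GPU-side data parallelism.
import numpy as np

pop_size, genome_len = 10000, 64
population = np.random.rand(pop_size, genome_len)        # one row per individual

def fitness_batch(pop):
    """Sphere-style placeholder fitness evaluated for all individuals at once."""
    return np.sum((pop - 0.5) ** 2, axis=1)

scores = fitness_batch(population)                       # shape (pop_size,)
best = population[np.argmin(scores)]
print(scores.shape, scores.min())
```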
Wolfgang Banzhaf
Biosketches:
Wolfgang Banzhaf is a Professor of Computer Science and head
of the Department of Computer Science at Memorial University of Newfoundland,
St. John's, Canada. He received a diploma in Physics from Ludwig-Maximilians
University in Munich, Germany in 1982 and a Ph.D. in Physics from the
Department of Physics at the University of Karlsruhe, Germany in 1985.
His areas of teaching and research include genetic and evolutionary computation,
artificial life, self-organization, and applications of computer science
to economics, and the natural and social sciences. He was named a Senior
Fellow of the International Society for Genetic and Evolutionary Computation
(2003), is a recipient of the EvoStar award for outstanding contributions
to this field (2007), a member of the SIGEVO executive committee (since
its foundation) and is SIGEVO's current
treasurer.
He is an active author in the field of genetic programming, served as the first editor-in-chief of the journal Genetic Programming and Evolvable Machines, and is a member of a number of journal editorial and advisory boards.
More info: http://www.cs.mun.ca/~banzhaf/
Simon Harding is a post doctoral research fellow in the Department of Computer Science at Memorial University, Canada. His research interests include genetic programming, unconventional computation and parallel computing on consumer grade hardware. He co-chaired a WCCI2008 track on Computational Intelligence on Consumer Games and Graphics Hardware and is co-chair of a GECCO 2009 special session on this topic.
More info: http://www.cs.mun.ca/~simonh/
Simon Harding
For further info on GPGPGPU
Computing, see: http://www.gpgpgpu.com/
Generative
and Developmental Systems
Description
of Tutorial
In evolutionary computation it is common for the genotype to map directly
to
the phenotype such that each gene in the genotype represents a single
discrete component of the phenotype. While this convention is widespread, in
nature the mapping is not direct. Instead, the genotype maps to the
phenotype through a process of development, which means that each gene may
be activated multiple times and in multiple places in the process of
constructing the phenotype.
This tutorial will examine why such indirect mapping may be critical to the continuing success of evolutionary computation. Rather than just an artifact of nature, indirect mapping means that vast phenotypic spaces (e.g. the 100 trillion connection human brain) can be searched effectively in a space of far fewer genes (e.g. the 30,000 gene human genome). The tutorial will introduce this new research area, called Generative and Developmental Systems (GDS), by surveying milestones in the field and introducing its most profound puzzles. Most importantly, what is the right abstraction of natural development to capture its essential advantages without introducing unnecessary overhead into the search?
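As a toy illustration of an indirect, developmental encoding (an invented example, not a method from the tutorial), the sketch below grows a long phenotype string from a two-rule genome by repeated rewriting.

```python
# A toy developmental encoding: a small genome of rewrite rules is applied
# repeatedly, so a few genes grow a much larger phenotype.
def develop(genome_rules, axiom="A", steps=6):
    """Repeatedly apply the genome's rewrite rules to grow the phenotype string."""
    phenotype = axiom
    for _ in range(steps):
        phenotype = "".join(genome_rules.get(symbol, symbol) for symbol in phenotype)
    return phenotype

genome = {"A": "AB", "B": "A"}        # two genes ...
phenotype = develop(genome)
print(len(genome), len(phenotype))    # ... yield a much longer phenotype
```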
Kenneth O. Stanley
Biosketch:
Kenneth O. Stanley is an assistant professor in the School
of
Electrical Engineering and Computer Science at the University of
Central Florida. He received a B.S.E. from the University of
Pennsylvania in 1997 and received a Ph.D. in 2004 from the University
of Texas at Austin. He is an inventor of the Neuroevolution of
Augmenting Topologies (NEAT) and HyperNEAT algorithms for evolving
complex artificial neural networks. His main research contributions
are in neuroevolution (i.e. evolving neural networks), generative and
developmental systems, coevolution, machine learning for video games,
and interactive evolution. He has won best paper awards for his work
on NEAT, NERO, NEAT Drummer, and HyperNEAT. He is the chair of the
IEEE Task Force on Computational Intelligence and Video Games, and has
chaired the Generative and Developmental Systems track at GECCO for
the last three years.
(Genetic and) Evolutionary
Computer Vision
Description
of Tutorial
Computer vision and image processing related disciplines are one of
the major fields of applications for Evolutionary Algorithms (EAs).
Over the past few years, EAs have been applied to an ever-increasing number of problems in the area of image and vision computing. The problems range from low-level preprocessing tasks, through segmentation, transformation and feature extraction tasks, to high-level object detection, object tracking and object recognition/classification tasks. EAs have themselves 'evolved' from a basic use as tools for optimizing parameters of some pre-defined algorithms, to a more prominent role as elements of the so-called 'vision chain'.
The tutorial aims at providing attendees with a review of this area, illustrating several practical examples of the different facets of evolutionary computer vision. A part of the tutorial, in particular, will be dedicated to some very recent applications of genetic programming and particle swarm optimization techniques to computer vision. We will show how such techniques can be effectively applied to image and vision problems and often provide promising results.
Mengjie Zhang
Biosketches:
Mengjie Zhang received a BE and an ME in 1989 and 1992 from China and a PhD in Computer Science from RMIT University, Melbourne, Australia, in 2000. Since 2000, he has been working as a Lecturer, then a Senior Lecturer, and now a Reader/Associate Professor at Victoria University of Wellington, New Zealand. His research is focused on Genetic Programming and Evolutionary Computing, Evolutionary Computer Vision, and Data Mining and Machine Learning. He has published over 100 academic papers in refereed international journals and conferences in these areas.
He has been supervising over 20 research students. He has been serving as an associate editor or editorial board member for four international journals and as a reviewer of over ten international journals. He has also been serving as a steering committee member and a program committee member for over 40 international conferences in the areas of Evolutionary Computation and Artificial Intelligence. He has been an organizer of the special session on Evolutionary Computer Vision at the IEEE Congress on Evolutionary Computation.
He is a member of ACM, IEEE, IEEE Computer Society, IEEE Computational Intelligence Society, IEEE Systems, Man and Cybernetics Society, and the ACM SIGEVO group. He is also serving as a committee member of the IEEE New Zealand Central Section.
Further Information: http://homepages.mcs.vuw.ac.nz/~mengjie/
|
Stefano
Cagnoni graduated in Electronic
Engineering at the University
of Florence in 1988, where he was a PhD student and then a post-doc until 1997,
working in the Bioengineering
Lab of the Department of
Electronic Engineering
and Telecommunications.
He received the PhD degree
in Bioengineering in 1993.
In 1994 he was a visiting
scientist at the Whitaker
College Biomedical Imaging
and Computation Laboratory
at the Massachusetts Institute
of Technology. Since 1997,
he has been with the Department
of Computer Engineering
of the University of Parma,
where he is currently Associate
Professor.
|

Stefano Cagnoni
|
Stefano Cagnoni's main research
interests are in the fields of
Computer vision, Robotics, Evolutionary
Computation and Neural Networks.
He is editor-in-chief of the "Journal
of Artificial Evolution and Applications",
chair of EvoIASP, the European
Workshop on applications of Evolutionary
Computation to Image Analysis
and Signal Processing, and member
of the Programme Committee of
several International Conferences
and Workshops. He has been a reviewer for over 15 journals.
Further Information: http://www.ce.unipr.it/people/cagnoni/

Gustavo Olague
|
Gustavo
Olague is
a research scientist at the
CICESE Research Center working
within the Computer Science
Department of the Applied
Physics Division. He holds
a Bachelor's degree (Honors)
in Electronics Engineering
and a Master's degree in
Computer Science from the
Instituto Tecnológico
de Chihuahua, Mexico. He
received the
Ph.D. (Diplôme de Doctorat en Imagerie, Vision et Robotique) in Computer
Graphics, Vision and Robotics from Institut National Polytechnique de Grenoble,
France, working at the INRIA Rhône Alpes within the MOVI Research team.
He is presently a Professor of
Computer Science at Centro de Investigación Científica y de Educación
Superior de Ensenada, B.C.
|
Olague's
research focuses on the principles
of computational intelligence
applied to close-range photogrammetry
and computer vision. He is a
member of the EvoNET, RSPSoc,
ASPRS, SIGEVO, IEEE, IEEE Computer
Society, and is listed in Who's
Who in Science and Engineering.
Dr. Olague is the recipient of the "2003
First Honorable Mention for the
Talbert Abrams Award", offered
by the American Society for Photogrammetry
and Remote Sensing. He has been
awarded the Bronze Medal at the
Human Competitive Awards for
his work on synthesis of interest
points in 2006.
Further information: http://cienciascomp.cicese.mx/evovision/

|
Large
scale data mining using Genetics-Based
Machine Learning:
Description
of Tutorial
We are living in the petabyte era. We have ever larger data to analyze, process and transform into useful answers for the domain experts. Robust data mining tools, able to cope with petascale volumes and/or high dimensionality while producing human-understandable solutions, are key in several domain areas. Genetics-based machine learning (GBML) techniques are perfect candidates for this task, among others, due to recent advances in representations, learning paradigms, and theoretical modeling. If evolutionary learning techniques aspire to be a relevant player in this context, they need the capacity to process these vast amounts of data, and to do so within reasonable time. Moreover, massive computation cycles are getting cheaper every day, giving researchers access to unprecedented degrees of parallelization. Several topics are interlaced in these two requirements, among them: (1) having the proper learning paradigms and knowledge representations, (2) understanding them and knowing when they are suitable for the problem at hand, (3) using efficiency enhancement techniques, and (4) transforming and visualizing the produced solutions to give back as much insight as possible to the domain experts.
This tutorial will address these challenges, following a roadmap that starts with the questions of what 'large' means and why large data is a challenge for GBML methods. Afterwards,
we will discuss different
facets in which we can overcome
this challenge: Efficiency
enhancement techniques, representations
able to cope with large dimensionality
spaces, scalability of learning
paradigms. We will also review a topic interlaced with all of them: how we can model the scalability of the components of our GBML systems so as to better engineer them for the best performance on large datasets. The roadmap
continues with examples of
real applications of GBML
systems and finishes with
an analysis of further directions.
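The efficiency-enhancement theme above can be made concrete with a small sketch. The Python fragment below is an illustration under assumed names, not code from the tutorial: it shows one widely used idea, evaluating each candidate on a random "window" of the data that changes every generation, instead of on the full dataset.

import random

# synthetic stand-in for a large dataset: 1-D points labelled x > 0.6 with 10% label noise
data = [(x, (x > 0.6) != (random.random() < 0.1))
        for x in (random.random() for _ in range(100000))]

def accuracy(threshold, sample):
    # hypothetical candidate: a single decision threshold on the 1-D feature
    return sum((x > threshold) == label for x, label in sample) / len(sample)

population = [random.random() for _ in range(30)]
for generation in range(50):
    window = random.sample(data, 500)            # evaluate on 0.5% of the data only
    ranked = sorted(population, key=lambda c: accuracy(c, window), reverse=True)
    parents = ranked[:10]                        # truncation selection
    population = [min(1.0, max(0.0, random.choice(parents) + random.gauss(0, 0.05)))
                  for _ in range(30)]            # mutated offspring

best = max(population, key=lambda c: accuracy(c, data))
print("best threshold:", round(best, 3), "full-data accuracy:", accuracy(best, data))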

Jaume Bacardit
|
Biosketches:
Dr. Jaume Bacardit received his Ph.D. in 2004
from the Ramon Llull University in Barcelona, Spain. His thesis studied
the adaptation
of the
Pittsburgh approach of Learning Classifier Systems (LCS) to Data Mining
tasks. In 2005 he joined the University of Nottingham, UK, as a postdoctoral researcher to work on applying LCS to mine large-scale bioinformatics datasets and extract interpretable
explanations from the learning process. In 2008 he was appointed as a
Lecturer in Bioinformatics at the University of Nottingham. This is a
joint post between the schools of Biosciences and Computer Science with
the aim of developing interdisciplinary research at the interface of
both disciplines.
|
In the School of Computer Science
he is part of the ASAP research
group. In the School of Biosciences
he is part of the Multidisciplinary
Centre for Integrative Biology
(MyCIB). His research interests
include the application of Learning
Classifier Systems and other kinds
of Evolutionary Learning to data
mine large-scale challenging datasets
and, in a general sense, the use
of data mining and knowledge discovery
for biological domains.
|
Dr. Xavier Llorá's interests and work on genetics-based
machine learning (GBML)
have earned him a spot
among the leaders of
the renaissance of
learning classifier systems (LCSs). His 2002 PhD dissertation challenged
traditional data-mining techniques by showing the effectiveness of GBML
approaches. He has helped organize two editions of the International Workshop on Learning Classifier Systems and their proceedings books. He served as LCS/GBML track chair at the ACM SIGEVO-organized GECCO 2005 conference and also organized the NCSA/IlliGAL Gathering on Evolutionary Learning in 2006. Llorá, awarded a young research
fellowship by the Catalan Government, started his academic career at the
Ramon Llull University (Barcelona, Spain) where he earned his PhD degree
with honors. In 2003 he moved to the University of Illinois at
Urbana-Champaign where he joined the Illinois Genetic Algorithms
Laboratory.
|

Xavier Llorá
|
Since then,
Llorá has headed the
DISCUS project
(http://www-discus.ge.uiuc.edu/),
a project to support collaborative human innovation and creativity. In summer 2005, Llorá was named research assistant professor at the University of Illinois at Urbana-Champaign, and in 2006 an NCSA affiliate, where he pursues the development of data-intensive computing infrastructure for large-scale data and text mining under the SEASR project (http://seasr.org).

|
Evolutionary
Multiobjective Combinatorial
Optimization (EMCO):
Description
of Tutorial
Discrete/combinatorial optimization has been an interesting and challenging area for researchers working on the development of algorithms and OR techniques for real-world applications. Most such problems, abstracted from graph theory, are NP-complete in the single-objective case and NP-hard for multiple objectives. There are many applications of such problems in communication topology and VLSI design.
For most such problems, good heuristics/algorithms do not exist; therefore, black-box optimization techniques such as EAs are usually preferred for obtaining effective solutions. In this tutorial, we will quickly review some of the standard combinatorial optimization problems, and then cover the techniques and the design of operators for effectively solving EMCO problems taken from real-world applications. This will be followed by some case studies taken from standard problems. This interdisciplinary tutorial provides a practical foundation for:
• an introduction to combinatorial problems and their multiobjective versions,
• problem challenges and their solution through EMO techniques,
• the need for designing specialized genetic operators, representations and techniques for embedding (limited) problem-specific knowledge,
• a few case studies taken from well-known EMCO problems, and a comparison of EMO results with existing heuristics/approximation algorithms, and
• improvement of the EMO solutions through hybridization using local search and other techniques.
Key considerations in designing operators and representations so as to effectively explore the search space will be examined. In general, the tutorial aims to quickly lay a foundation that can be used as the basis for further study, research and development in this emerging area.
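Since the tutorial builds on EMO concepts, a minimal illustrative sketch may help readers new to the area. The Python fragment below is an assumption-laden toy, not material from the tutorial: it shows Pareto dominance and non-dominated filtering for bi-objective minimisation, the basic building block of most EMO techniques.

def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# hypothetical (cost, delay) pairs, both to be minimised
candidates = [(4, 9), (5, 5), (7, 3), (6, 6), (3, 12), (8, 2)]
print(non_dominated(candidates))   # (6, 6) is dominated by (5, 5) and is filtered out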
Rajeev Kumar
|
Biosketch:
Rajeev Kumar is
a Professor of Computer
Science & Engineering
(CSE) at Indian Institute
of Technology (IIT),
Kharagpur. He received
his Ph.D. from University
of Sheffield, and M.Tech.
from University of Roorkee
(now, IIT - Roorkee)
both in Computer Science & Engineering.
He was previously a faculty member in CSE at IIT Kanpur and BITS Pilani, and has worked for National Semiconductor, Germany, and DRDO, India.
His main research interests
include evolutionary
algorithms & multiobjective
combinatorial optimization,
embedded & multimedia
systems, and programming
languages & software
engineering.
|
He
has published over 150 research articles in leading journals and
conferences. He is a senior
member of ACM, senior member
of IEEE and a fellow of IETE.
More info about him can be
found at http://www.facweb.iitkgp.ernet.in/~rkumar/.

|
Experimental
Optimization by Evolutionary
Algorithms:
Description
of Tutorial
This
tutorial addresses applications
of evolutionary algorithms to
optimization tasks where the
function evaluation cannot be
done through a computer simulation,
but requires the execution of
an experiment in the real world
(e.g., cosmetics, detergents,
wind tunnel experiments, taste
experiments, to mention a few).
Those applications can, as we
show in the tutorial, be handled
very successfully by means of
evolutionary algorithms.
To start
with, the tutorial provides an
overview of landmark experimental optimization work carried out by means of evolutionary algorithms since their foundation in the late 1960s. The main characteristics
of such experimental work, in
comparison to optimization of
simulated systems, are discussed
and practical guidelines for
real-world experiments with EAs
are outlined.
Special emphasis will
be put on the first experimental
work
carried out at the Technical
University Berlin, as well as
on experimental quantum control,
an emerging field in physics,
with a presentation of state-of-the-art
experiments.
In addition, some arguments
are given for the usefulness
of evolutionary algorithms in
comparison to classical approaches
(Design of Experiments), and
we will discuss the role of noise
in experimental optimization.
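To make the setting concrete, here is a minimal Python sketch with assumed names such as run_experiment (it is not code from the tutorial): a (1+1)-style evolution strategy in which every fitness evaluation stands for one physical experiment, here simulated by a noisy quadratic to mimic measurement noise.

import random

def run_experiment(x):
    # hypothetical placeholder for a real-world measurement procedure
    true_value = -sum((xi - 1.0) ** 2 for xi in x)
    return true_value + random.gauss(0.0, 0.05)   # measurement noise

x = [0.0, 0.0, 0.0]          # initial experimental setting (three parameters)
fx = run_experiment(x)
sigma = 0.5                  # mutation step size
for trial in range(200):     # every iteration costs one physical experiment
    y = [xi + sigma * random.gauss(0.0, 1.0) for xi in x]
    fy = run_experiment(y)
    if fy >= fx:             # accept if not worse; tolerant of measurement noise
        x, fx = y, fy
        sigma *= 1.3         # crude step-size adaptation: grow on success...
    else:
        sigma *= 0.9         # ...shrink on failure
print("best setting found:", [round(v, 2) for v in x])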

Ofer Shir
|
Biosketch:
Ofer Shir, PhD, is currently
a Postdoctoral Research
Associate at Princeton
University, conducting
research at the Rabitz
Group, within the Department
of Chemistry.
Ofer received his B.Sc.
in Physics and Computer
Science from the Hebrew
University of Jerusalem
in June 2003.
His graduate studies
were in Leiden University,
The Netherlands, where
he studied Niching in
Evolution Strategies
under Prof. Thomas Bäck.
He also served as a Teaching Assistant in the Computer Science Department of Leiden University from 2005 to 2007.
|
His
research activities included
a joint project with Amolf-Amsterdam,
where he collaborated with the
XUV group of Prof. Marc Vrakking.
His PhD in Computer Science from
Leiden University was conferred
in June 2008. His
current topics of interest include
Natural Computing, Evolutionary
Algorithms, Experimental Optimization,
Quantum Control and Quantum Information.
|
Thomas
Bäck, PhD, is
Professor for Natural
Computing at the Leiden
Institute for Advanced
Computer Science (LIACS)
at Leiden University,
The Netherlands, and
Chief Scientist of
the Qe3 group at Netezza
Corporation. Thomas
received his PhD in
Computer Science from
Dortmund University,
Germany, in 1994. Thomas Bäck
has more than 150 publications
on natural computing
technologies, as well
as a book on evolutionary
algorithms, entitled
Evolutionary Algorithms:
Theory and Practice.
He is editorial board
member and associate
editor of a number
of journals on evolutionary
and natural computation,
and has served as program
chair for all major
conferences in evolutionary
computation. |

Thomas Bäck
|
His expertise
lies in adaptive technologies
for optimization and data-driven
modeling, predictive analytics,
and bioinformatics. He received
the best
dissertation award from the Gesellschaft
für Informatik (GI) in 1995
and is an elected fellow of the
International Society for Genetic
and Evolutionary
Computation for his contributions to the field.
|
Synthetic
Biology II: Modeling & Optimization:
Description
of Tutorial
The ultimate goal of systems biology is the development of executable in silico models of cells and organisms. Systems biology attempts to provide an integrative methodology that can cope with, on the one hand, the data deluge generated by high-throughput experimental technologies and, on the other hand, emerging technologies that produce scarce and often noisy data, while capturing novel biological knowledge within human-understandable models and simulations.
In its more modest instantiations, Systems Biology seeks to *clarify* current biological understanding by formalizing what the constitutive elements of a biological system are and how they interact with each other, and to aid in the *testing* of current understanding against experimental data. In its most ambitious incarnations,
however, it aims at *predicting*
the behavior of biological systems
beyond current understanding
and available data thus shedding
light onto possible new experimental
routes that could lead to better
theoretical insights.
Synthetic biology, on the other
hand, aims to implement, in vitro/vivo,
organisms whose behavior is engineered.
The field of synthetic biology holds great promise for the
design, construction and development
of artificial (i.e. man-made)
biological (sub)systems thus
offering viable new routes to
genetically modified organisms,
smart drugs as well as model
systems to examine artificial
genomes and proteomes. The informed
manipulation of such biological
(sub)systems could have an enormous
positive impact on our societies,
with its effects being felt across
a range of activities such as
the provision of healthcare,
environmental protection and
remediation, etc. The basic premise
of synthetic biology is that
methods commonly used to design
and construct non-biological
systems, such as those employed
in the computational sciences
and the engineering disciplines,
could also be used to model and
program novel synthetic biosystems.
Synthetic biology thus lies at
the interface of a variety of
disciplines ranging from biology
through chemistry, physics, computer
science, mathematics and engineering.
In this tutorial I will provide an entry-level understanding of Systems and Synthetic Biology, its goals, methods and limitations.
Furthermore I will describe the
many potential applications of
evolutionary computation to these
two fields. Indeed, I believe
that the EC community has a beautiful
new application domain in which
its methods could be both valued
and challenged.
More information is available
at www.cs.nott.ac.uk/~nxk

Natalio Krasnogor
|
Biosketch:
Natalio Krasnogor is
an Associate Professor
in the School of Computer
Science at the University
of Nottingham. He is
a member of the Automated
Scheduling, Optimisation
and Planning Research
Group (ASAP) and also
leads the Interdisciplinary
Optimisation Laboratory
(IOL). Krasnogor's research
activities lie at the
interface of Computer
Science and the Natural
Sciences, e.g. Biology,
Physics, Chemistry. In
particular, he develops
innovative and competitive
search methodologies
and intelligent decision
support systems for transdisciplinary
optimisation, modelling
of complex systems and the processing of very large datasets.
|
He has applied
his expertise to Bioinformatics,
Systems Biology, Synthetic Biology,
Nanoscience and Chemistry. He
is member of the editorial boards
for the journal Modelling and
Simulation in Engineering and
the journal Artificial Evolution
and Applications. He is associate
editor of the Evolutionary Computation
journal and founding technical
editor-in-chief of the new journal
Memetic Computing. Krasnogor
has acted as grant reviewer for
the EPSRC (UK), BBSRC(UK), European
Science Foundation, European
Union, The Israel Science Foundation,
DOE Computational Biology Programme
(USA), CNRS (France), etc. He
co-chairs the 2nd European Conference
on Synthetic Biology (ECSB II).
More details are available at
www.cs.nott.ac.uk/~nxk

|
Synthetic
Biology I: Fundamentals & Devices:
Description
of Tutorial
The new field of
Synthetic Biology has opened the
doors to engineers to design, model,
build and test cells with particular
behaviours, including computational
behaviours. The attraction of using cells as devices stems from the size, robustness, and reproducibility
of cells, as well as the many built-in
input and output interfaces that
cells have or can be made to have.
This talk is divided into two parts. The first (given by Varin) is an introduction to the biological fundamentals necessary for the full appreciation of the second part (given by Kharma).
Topics touched upon in the first
part include:
* Gene structure and function;
* Gene regulation;
* Recombinant
DNA technologies;
* Construction
of synthetic gene networks.
The second part provides examples
of how biologists and engineers
have successfully modified the genomes
of various cell types to synthesize
cellular computing devices and
systems. These examples include
(but are not limited to):
* Basic and more complex logic
circuits;
* Clocks and sequential
circuits;
* Extension of functionality
via inter-cellular signaling;
*
Means of configuring cells to give
them different functionalities.
The importance of synthetic biology
to evolutionary computation is
that it provides engineers with
an opportunity to study and be
inspired by the intricacies of
real life, while forging ahead
with their endeavour to emulate
it, simulate it, as well as ultimately
create it.
Biosketches:
Dr. Luc
Varin is an
associate professor of Biology
at Concordia University. In June
2000, he received the C.D. Nelson
award from the Canadian Society
of Plant Physiology. This award
is given in recognition of outstanding
contribution by a young scientist
to Plant Physiology in Canada.
Dr. Varin was a member of the administrative
board of FQRNT from 2001 to 2005,
and of the FQRNT Finance Committee
from 2004 to 2005. Dr. Varin has
strong ties with the biotechnology
industry and acted as consultant
for the Fonds de Solidarité du
Québec to provide scientific
evaluations prior to investment.
As previous CEO of Florisys Inc.,
he acquired expertise in the management
of R&D programs and in IP protection.
The main objective of his research
program is to characterize the
biological function of sulfotransferases
in plant growth, development and
adaptation to stress.
|
|
Dr. Nawwaf
Kharma got his
B.Eng. degree in Computer
Engineering from The City University,
London, receiving the Jarogate
Prize for best overall academic
performance. He got his PhD in
Machine Learning from Imperial
College, London. Since joining
Concordia University in 2000, he
has co-authored and authored a
textbook on Character Recognition
and a chapter on Evolvable Developmental
Systems. He has published more
than 25 peer-reviewed articles.
He has two Canadian software copyrights.
Kharma has received a Human-Competitive
award from the Genetic and Evolutionary
Computation Conference (GECCO)
2006 for his work on multiple ellipse
detection (in microscopic images)
using multi-population genetic
algorithms. He is now leading a
small team developing that work
into a software prototype, in a
1-year project funded by NSERC's
Idea to Innovation (I2I) programme.
He heads the Computational Intelligence Lab
at Concordia University.
|

Nawwaf Kharma
|

|
Cartesian
Genetic Programming:
Description
of Tutorial
Cartesian Genetic Programming (CGP) is a form of genetic programming that is increasing in popularity. It was developed by Julian Miller with Peter Thomson in 1997. In its classic form it uses a very simple integer-based genetic representation of a program in the form of a directed graph. In a number of studies, it has been shown to be comparable in efficiency to other GP techniques. Since then, the classical form of CGP has been enhanced in various ways, for instance to include automatically defined functions. Most recently it has been developed by Julian Miller, Wolfgang Banzhaf and Simon Harding to include self-modification operators. This again has increased its efficiency.
The tutorial will cover the basic
technique, advanced developments
and applications to a variety
of problem domains.
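For readers unfamiliar with the representation, the sketch below is a minimal Python illustration of the single-row integer encoding described above; the function set and node counts are assumptions for the example, not the tutorial's. It decodes and executes a CGP genotype as a feed-forward directed graph.

import operator

FUNCS = [operator.add, operator.sub, operator.mul]   # illustrative function look-up table

def evaluate(genotype, inputs, n_nodes, output_gene):
    # genotype: three integers per node (function index, source 1, source 2);
    # a source index refers either to a program input or to an earlier node's output
    values = list(inputs)
    for i in range(n_nodes):
        f, a, b = genotype[3 * i: 3 * i + 3]
        values.append(FUNCS[f](values[a], values[b]))
    return values[output_gene]

# two inputs, three internal nodes, output read from the last node (index 4)
geno = [2, 0, 1,   # node 2: x0 * x1
        0, 2, 0,   # node 3: node2 + x0
        1, 3, 1]   # node 4: node3 - x1
print(evaluate(geno, [3.0, 4.0], n_nodes=3, output_gene=4))   # ((3*4)+3)-4 = 11.0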

Julian Miller
|
Biosketches:
Julian Miller is
a lecturer in the Department
of Electronics
at the University of York.
His main research interests
are genetic programming (GP),
and computational development.
He has published over 130
refereed papers on evolutionary
computation, genetic programming,
evolvable hardware, and computational
development. He has been
chair or co-chair of twelve
conferences or workshops
in genetic programming, computational
development, evolvable hardware
and evolutionary techniques.
|
Dr. Miller chaired the Evolvable Hardware tracks at the Genetic and Evolutionary Computation Conference in 2002-2003 and was the Genetic Programming track chair in 2008. He was co-chair of the Generative and Developmental Systems (GDS) track in 2007 and is track co-chair of GDS in 2009. He is an associate editor of
the journals IEEE Transactions on Evolutionary Computation, and Genetic Programming
and Evolvable Machines. He is an editorial board member of the journals Evolutionary
Computation and Unconventional Computing. He has given 35 invited talks at
conferences, universities, research institutions and commercial companies.
Simon
Harding is a post doctoral
research fellow in the
Department of Computer
Science at Memorial University,
Canada. His research
interests include genetic
programming, unconventional
computation and parallel
computing on consumer
grade hardware. He co-chaired
a WCCI2008 track on Computational
Intelligence on Consumer
Games and Graphics Hardware
and is co-chair of tracks
at CEC 2009 and GECCO
2009 on this topic.
|

Simon Harding
|
For
further info on Cartesian GP
see:
http://www.cartesiangp.co.uk/

|
Bio-inspired
Telecommunications:
Description
of Tutorial
Biological
systems show a number of properties,
such as self-organization, adaptivity,
scalability, robustness, autonomy,
locality of interactions, and distribution, which are highly desirable for dealing with the growing complexity
of current and next generation
networks. Therefore, in recent
years a growing number of effective
solutions for problems related
to networked systems have been
proposed by taking inspiration
from the observation of natural
systems and processes such as
natural selection, insect societies,
immune systems, cultural systems,
collective behaviors of groups
of
animals/cells, etc.
The aim of the tutorial is to present cutting-edge research on Bio/Nature-inspired approaches to network-related problems.
The tutorial will also focus
on protocol engineering in order
to introduce different frameworks
that have been developed to realize
such Bio/Nature inspired protocols
inside the network stack of the
Linux kernel. The designers of routing protocols can use these frameworks to verify their Linux model by comparing important performance values obtained from the Linux model with those of the simulation model. Moreover,
we also focus on "Formal
modeling of Bio-inspired algorithms".
The tutorial will also highlight Bio/Nature-inspired solutions for MANETs and Sensor Networks. Finally, we will conclude the tutorial with an emerging domain, "Bio-inspired Security Solutions for Enterprise Security".
We believe that the tutorial
will be instrumental in highlighting
the potential of Bio/Nature inspired
protocols in real world networks.
The tutorial is intended for
telecommunication managers, protocol
developers, network engineers,
network software developers and
optimization researchers and
graduate students who want to work on non-linear real-time dynamic problems. For further
details of the tutorial refer
to the web site
http://www.nexginrc.org/muddassar/tutorial-gecco09.html
The following references would
significantly help in understanding
the tutorial:
1. http://www.springer.com/computer/artificial/book/978-3-540-85953-6
2. http://www.springer.com/computer/artificial/book/978-3-540-74088-9 [Chapter 4]

Muddassar Farooq
|
Biosketch:
Muddassar Farooq received
his B.E. degree in Avionics
Engineering from National
University of Sciences
and Technology (NUST),
Pakistan, in 1996. He
completed his M.S. in
Computer Science and
Engineering from University
of New South Wales (UNSW),
Australia, in 1999. He
completed his D.Sc in
Informatics from Technical
University of Dortmund,
Germany, in 2006. In
2007, he joined the National
University of Computer & Emerging
Sciences (NUCES), Islamabad,
Pakistan, as an associate
professor. He is also
the Director of Next
Generation Intelligent
Networks Research Center
(nexGIN RC) at NUCES.
He is the author of the
book “Bee-inspired
Protocol Engineering:
from Nature to Networks”, published by Springer in 2008.
|
He has also coauthored two book chapters in different books on
swarm intelligence. He is on
the editorial board of
Springer’s Journal of Swarm Intelligence. He
is also the workshop chair of the European Workshop on Nature-inspired
Techniques for Telecommunication
and Networked Systems (EvoCOMNET)
held with EuroGP. He also serves
on the PC of well-known EC conferences
like GECCO, CEC, ANTS. He is
the guest editor of a special
issue of Journal of System Architecture
(JSA) on Nature-inspired algorithms
and applications. His research
interests include agent based
routing protocols for fixed and
mobile ad hoc networks (MANETs),
nature inspired applied systems,
natural computing and engineering
and nature inspired computer
and network security systems, e.g., artificial immune systems.

|
Theory
of Randomized Search Heuristics
in Combinatorial Optimization :
Description
of Tutorial
The rigorous mathematical
analysis of randomized search
heuristics (RSHs) with respect to their expected runtime is a growing
research area where many results have been obtained in recent years.
This class of heuristics contains well-known approaches such as
Randomized Local Search (RLS), the Metropolis Algorithm (MA),
Simulated Annealing (SA), and Evolutionary Algorithms (EAs) as well as
more recent approaches such as Ant Colony Optimization (ACO). Such
heuristics are often applied to problems whose structure is not known or when there are not enough resources such as time, money, or knowledge to obtain good problem-specific algorithms. It is widely acknowledged that a
solid mathematical foundation for such heuristics is needed.
Most designers of RSHs, however, have focused on mimicking processes in nature (such as evolution) rather than on making the heuristics amenable to a mathematical analysis. This is different from the classical design of (randomized) algorithms, which are developed with their theoretical analysis of runtime (and proof of correctness) in mind. Despite these obstacles, research from roughly the last 15 years has shown how to apply the methods for the probabilistic analysis of randomized algorithms to RSHs. Mostly, the expected runtime of RSHs on selected problems is analyzed. Thereby, we understand
why and when RSHs are efficient
optimizers and, conversely, when
they cannot be efficient.
The tutorial will give an overview of the analysis of RSHs for solving
combinatorial optimization problems.
Starting from the first toy examples
such as the OneMax function,
we approach more realistic problems
and arrive at analyses of the
runtime and approximation quality
of RSHs even for NP-hard problems.
Our studies treat not only simple
RLS algorithms and SA but also
more complex population-based
EAs and even ACO algorithms.
The combinatorial optimization
problems that we discuss include
the maximum matching problem,
the partition problem and, in
particular, the minimum spanning
tree problem as an example where
Simulated Annealing beats the
Metropolis algorithm in combinatorial
optimization. Important concepts
of the analyses will be described
as well.
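As a concrete anchor for the "first toy example" mentioned above, the following minimal Python sketch (illustrative only) runs the (1+1) EA on OneMax and compares the empirical average runtime with e·n·ln n, a rough guide corresponding to the known Θ(n log n) expected runtime.

import random, math

def one_plus_one_ea_onemax(n):
    # (1+1) EA: flip each bit independently with probability 1/n, keep the offspring if not worse
    x = [random.randint(0, 1) for _ in range(n)]
    fx, steps = sum(x), 0
    while fx < n:
        y = [bit ^ (random.random() < 1.0 / n) for bit in x]
        fy = sum(y)
        if fy >= fx:
            x, fx = y, fy
        steps += 1
    return steps

n = 100
runs = [one_plus_one_ea_onemax(n) for _ in range(20)]
print("empirical mean runtime:", sum(runs) / len(runs))
print("e * n * ln n          :", round(math.e * n * math.log(n)))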

Carsten Witt
|
Biosketch:
Carsten Witt
studied Computer
Science at
the University of Dortmund,
Germany, where he received
his diploma and Ph.D.
in 2000 and
2004, respectively. Since spring 2009, he has been an assistant professor at
the Technical University of Denmark in Copenhagen. Carsten's main
research interests are the theoretical aspects of randomized search
heuristics, in particular evolutionary algorithms and ant colony
optimization. More information: http://ls2-www.cs.uni-dortmund.de/~witt
|

|
Fitness
Landscapes and Graphs: Multimodularity,
Ruggedness and Neutrality:
Description
of Tutorial
The performance of evolutionary algorithms (genetic algorithms, genetic programming, etc.) and of local search algorithms (simulated annealing, tabu search, etc.) depends on the properties of the search space structure. One concept for analysing the search space is the fitness landscape, in which both the problem to optimize and the search algorithm are taken into account. The fitness landscape is a graph whose nodes are the potential solutions. The study of the fitness landscape consists in analysing this graph, or a relevant partition of it, according to the search dynamics or the search difficulty. After a historical review of the concept of fitness landscape, this tutorial will give an overview of the different ways to define fitness landscapes in the field of evolutionary computation. Then, the two main geometries met in optimization problems (multimodal and neutral landscapes), corresponding to two different partitions of the graph, will be presented, together with the dynamics of metaheuristics on them. The relationship between problem difficulty and fitness landscape measures (autocorrelation, FDC, neutral degree, etc.), as well as the properties of the local optima networks studied in recent work, will be analysed in depth. Finally, the tutorial will conclude with a brief survey of open questions and recent research on fitness landscapes.
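To give a flavour of the measures listed above, the Python sketch below (illustrative assumptions only; the toy landscape is OneMax under the one-bit-flip neighbourhood) estimates the lag-1 random-walk autocorrelation and the fitness-distance correlation (FDC). On this deliberately easy landscape the FDC equals -1, since fitness is exactly n minus the distance to the optimum.

import random

n = 20
optimum = [1] * n                      # the all-ones string is the global optimum

def fitness(x):                        # OneMax: a smooth, single-funnel toy landscape
    return sum(x)

def correlation(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# random-walk autocorrelation at lag 1 under the one-bit-flip neighbourhood
x = [random.randint(0, 1) for _ in range(n)]
walk = []
for _ in range(5000):
    walk.append(fitness(x))
    x[random.randrange(n)] ^= 1        # step to a random neighbour
print("autocorrelation(1):", round(correlation(walk[:-1], walk[1:]), 3))

# fitness-distance correlation (FDC) with respect to the known optimum
samples = [[random.randint(0, 1) for _ in range(n)] for _ in range(2000)]
fits = [fitness(s) for s in samples]
dists = [sum(si != oi for si, oi in zip(s, optimum)) for s in samples]
print("FDC:", round(correlation(fits, dists), 3))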
Sébastien
Verel
|
Biosketch:
Sébastien Verel
- Born in Lisieux (France)
on October 29th, 1976.
- PhD in Computer Science at the University of Nice Sophia-Antipolis (France), ``Study and exploitation of neutral networks in fitness landscapes for hard optimization'', defended on December 12th, 2005.
- Current position: permanent researcher since 2006 in the Computer Science Department, University of Nice Sophia-Antipolis (France)
|
Research Interests
web page : http://www.i3s.unice.fr/~verel
- Theory of evolutionary computation: study and measurement of the hardness of fitness landscapes, especially neutral fitness landscapes.
- Proposition of fitness/fitness correlation to study dynamics and hardness (fitness cloud),
- Proposition and analysis of the local optima networks,
- Analysis of complex neutral fitness landscapes in GA and GP (majority problem, firing squad problem, etc.),
- Proposition of a metaheuristic for neutral fitness landscapes (scuba search).
- Complex Systems: where some "global" properties of the system come from a large number of "local" interactions.
- Study of the majority problem in cellular automata,
- Design of a new GA for the majority problem.
- Cellular Genetic Algorithms: evolutionary algorithms where the population is structured by a grid or a graph.
- Proposition of a new selection method in Cellular GA,
- Analysis of diversity and convergence time in Cellular GA.
- Cognitive sciences: design of models in the cognitive area with the help of evolutionary computation
- Proposition of an eye-tracking model
- Proposition of a model of a combination of two cognitive tasks: memorization and information retrieval

|
Fitness Landscapes
and Problem Hardness in Genetic Programming
Biosketch:
Leonardo
Vanneschi
• Born
in Florence (Italy) on
October 3rd, 1970.
• Laurea (maitrise) “summa cum laude” in Computer Science at
the University of Pisa, achieved on February 23rd, 1996.
• PhD in Computer Science at the University of Lausanne (Switzerland),
achieved on September 06th, 2004.
• Current Position: Assistant Professor (Ricercatore) by the Dipartimento
di Informatica, Sistemistica e Comunicazione (D.I.S.Co.) of the Science Faculty
of the University of Milano-Bicocca (Italy).
|

Leonardo
Vanneschi
|
Main scientific contributions:
• Problem Hardness in Genetic Programming (GP). I have proposed some new hardness indicators for GP and other measures to characterize fitness landscapes.
• Parallel and Distributed GP. I have studied how the island GP model makes it possible to maintain higher diversity in the whole GP system, thus producing better-quality solutions for many applications.
• Other Techniques to Improve GP Performance. The techniques that I have studied are: (1) reducing the number of test cases used to evaluate fitness by means of statistics, (2) using populations which dynamically adapt their size according to events happening during the evolution, (3) coevolving GP terminal symbols by means of Genetic Algorithms (GAs).
• Drug Discovery by Genetic Programming. I have realized a GP framework to mine large datasets of molecules, aimed at discovering models explaining the relationship between molecular descriptors and important drug features such as toxicity, bioavailability or docking energy.
• Traffic Control Applications using Machine Learning. Using image sequences taken by a fixed camera, I have realized a system to recognize, classify and track the vehicles traveling along a street. The techniques used are Neural Networks, GAs and the IPAN Tracker. The system has been embedded in a television camera system built by Comerson Srl and is presently in use at some city crossroads and motorways.
• Genetic Algorithms for the Chemical Formulation Problem. This problem is very important for building new, high-performing chemical compounds and drugs. I have formalized it as an optimization problem and solved it efficiently by means of Evolutionary Algorithms.
Web page:
http://personal.disco.unimib.it/Vanneschi

|