General Information
Tutorial slides will be available on CD-ROM and distributed
at the conference.
Tutorials will be presented on Saturday, July 7 and Sunday,
July 8, 2007. View
the tutorial schedule.
For more information about tutorials,
contact Anikó Ekárt
at
Call for tutorial proposals closed on 1 December 2006.
Notification of acceptance: 16 December 2006
Camera
ready file preparation and submission instructions
Planned Free Tutorials
Introductory
Introductory Genetic Algorithms | Erik Goodman
Genetic Programming | John Koza
Evolution Strategies | Martin Schütz
Evolutionary Computation: A Unified Approach | Kenneth De Jong
Ant Colony Optimization | Christian Blum
Learning Classifier Systems | Martin V. Butz
Probabilistic Model-Building Algorithms (PMBGAs) | Martin Pelikan
Grammatical Evolution | Conor Ryan
Coevolution | Edwin de Jong, Kenneth Stanley, Paul Wiegand
Particle Swarm Optimization | Andries Engelbrecht, Xiaodong Li
Beowulf Clusters for Evolutionary Computation | Arun Khosla, Pramod Kumar Singh, Diptanu Gon Chowdhary
Advanced
GA Theory | Jonathan Rowe
Genetic Programming Theory | Riccardo Poli, Bill Langdon
Representations for Evolutionary Algorithms | Franz Rothlauf
No Free Lunch | Darrell Whitley
Bioinformatics | Jason Moore
Evolutionary Multiobjective Optimization | Eckart Zitzler, Kalyanmoy Deb
Industrial Evolutionary Computation | Arthur Kordon, Guido Smits, Mark Kotanchek
Constraint Handling Techniques Used with EAs | Carlos Coello Coello
Statistics for EC | Mark Wineberg
Coevolution | Sevan Ficici, Anthony Bucci
Evolutionary Practical Optimisation | Kalyanmoy Deb
Computational Complexity and Evolutionary Computation | Thomas Jansen, Frank Neumann
Complex Networks and Evolutionary Computation | J.J. Merelo, Carlos Cotta
Particle Swarm Optimization for Fuzzy Models | Arun Khosla
Fitness Landscapes and Problem Hardness in Evolutionary Computation | Leonardo Vanneschi, Sebastien Verel
An Information Perspective on Evolutionary Computation | Yossi Borenstein
Specialized Techniques and Applications
Experimental Research in EC | Mike Preuss, Thomas Bartz-Beielstein
Symbolic Regression in GP | Maarten Keijzer
| Risto Miikkulainen
Quantum Computing | Lee Spector
| Lukas Sekanina
Artificial Development | Pauline Haddow, Gunnar Tufte
| Marco Tomassini
| Ian Parmee
Evolutionary Multiobjective Combinatorial Optimization | Rajeev Kumar
Evolving Neural Network Ensembles | Xin Yao
| Gustavo Olague
| Muddassar Farooq
Distributed Evolutionary Computation for Fun and Profit | J.J. Merelo, J.L.J. Laredo
Introductory Genetic Algorithms:
Description of Tutorial
The Introduction to Genetic Algorithms Tutorial is aimed at GECCO attendees
with limited knowledge of genetic algorithms, and will start “at
the beginning,” describing first a “classical” genetic
algorithm in terms of the biological principles on which it is loosely
based, and then presenting some of the fundamental results that describe its
performance. It will cover many variations on the classical model, some
successful applications of genetic algorithms, and advances that are
making genetic algorithms more useful.
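For orientation, the sketch below shows the kind of "classical" generational loop such a tutorial typically starts from: binary strings, fitness-proportionate selection, one-point crossover, and bit-flip mutation. It is an illustrative sketch only, not material taken from the tutorial; all function names and parameter values are placeholders.

    import random

    def classical_ga(fitness, n_bits=20, pop_size=50, generations=100,
                     p_crossover=0.7, p_mutation=0.01):
        # Random initial population of binary strings.
        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        for _ in range(generations):
            fits = [fitness(ind) for ind in pop]
            total = sum(fits)
            # Fitness-proportionate (roulette-wheel) selection.
            def select():
                r, acc = random.uniform(0, total), 0.0
                for ind, f in zip(pop, fits):
                    acc += f
                    if acc >= r:
                        return ind
                return pop[-1]
            new_pop = []
            while len(new_pop) < pop_size:
                a, b = select()[:], select()[:]
                if random.random() < p_crossover:          # one-point crossover
                    point = random.randrange(1, n_bits)
                    a, b = a[:point] + b[point:], b[:point] + a[point:]
                for child in (a, b):
                    for i in range(n_bits):                 # bit-flip mutation
                        if random.random() < p_mutation:
                            child[i] = 1 - child[i]
                    new_pop.append(child)
            pop = new_pop[:pop_size]
        return max(pop, key=fitness)

    # Example: maximize the number of ones (OneMax).
    best = classical_ga(fitness=sum)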
Erik Goodman
|
Biosketch:
Erik Goodman is a Professor of Electrical and
Computer Engineering and of Mechanical Engineering at Michigan
State University. He studied genetic algorithms under John Holland
at the University of Michigan, before they had been named. His
use of a genetic algorithm in 1971 to solve for 40 coefficients
of a highly nonlinear model of a bacterial cell was the first known
GA application on a real-world problem, and ran for nearly a year
on a dedicated computer. He has used genetic algorithms in his
research ever since, including for parameterization of complex
ecosystem models, for evolution of cooperative behavior in artificial
life, for factory layout and scheduling, for protein folding and
ligand docking, for design of shape and layup of composite structures,
and for data mining and pattern classification. |
Much of
his recent research has been in development of new types of parallel
genetic algorithms and in design methods for mechatronic systems using
genetic programming. He is co-founder and VP Technology of Red Cedar
Technology, which provides tools for automated engineering design based
on evolutionary computation. He chaired ICGA-97 and GECCO-2001, chaired
GECCO’s sponsoring organization, ISGEC, from 2001-2004, and is
now chair of ACM SIGEVO, its successor.
Genetic Programming:
John R. Koza
|
Biosketch:
John R. Koza received his Ph.D. in computer
science from the University of Michigan in 1972, working under
John Holland. From 1973 through 1987, he was co-founder, chairman,
and CEO of Scientific Games Inc. where he co-invented the rub-off
instant lottery ticket used by state lotteries. He has taught
a course on genetic algorithms and genetic programming at Stanford
University since 1988, where he is currently a Consulting Professor
in the Biomedical Informatics Program in the Department of Medicine
and in the Department of Electrical Engineering at Stanford University.
He is Vice-Chair of SIGEVO, and has been on the Business Committee
of GECCO since it started in 1999. He has written four books
on genetic programming and one on presidential elections.
|
Evolutionary Computation: A Unified Approach:
Description of
Tutorial
The field of Evolutionary Computation has experienced tremendous growth over
the past 20 years, resulting in a wide variety of evolutionary algorithms and
applications. The result poses an interesting dilemma for many practitioners
in the sense that, with such a wide variety of algorithms and approaches, it
is often hard to see the relationships between them, assess strengths and weaknesses,
and make good choices for new application areas.
This tutorial is intended to give an overview of a general EC framework that
can help compare and contrast approaches, encourages crossbreeding, and facilitates
intelligent design choices. The use of this framework is then illustrated by
showing how traditional EAs can be compared and contrasted with it, and how
new EAs can be effectively designed using it.
Finally, the framework is used to identify some important open issues that
need further research.
Kenneth A. De Jong
|
Biosketch:
Kenneth A. De Jong received his Ph.D. in computer
science from the University of Michigan in 1975. He joined George Mason
University in 1984 and is currently a Professor of Computer Science,
head of the Evolutionary Computation laboratory, and the Associate
Director of the Krasnow Institute. In addition, he serves as the Chief
Scientist for the Navy Center for Applied Research in AI. His research
interests include genetic algorithms, evolutionary computation, machine
learning, and adaptive systems. He is currently involved in research
projects involving the development of new evolutionary algorithm (EA)
theory, the use of EAs as heuristics for NP-hard problems, and the
application of EAs to the problem of learning task programs in domains
such as robotics, diagnostics, navigation and game playing.
|
He is also interested in experience-based learning in which systems
must improve their performance while actually performing the desired tasks
in environments not directly under their control or under the control of a benevolent teacher.
Support for these projects is provided by DARPA, ONR, and NRL. He is an active
member of the Evolutionary Computation research community and has been involved
in organizing many of the workshops and conferences in this area. He is the
founding editor-in-chief of the journal Evolutionary Computation (MIT Press),
and a member of the board of the ACM SIGEVO. He is the recipient of an IEEE
Pioneer award in the field of Evolutionary Computation and a lifetime achievement
award from the Evolutionary Programming Society.
Ant Colony Optimization:
Description of Tutorial
Ant colony optimization (ACO) is a metaheuristic method that was first
proposed in the early 90's by Marco Dorigo and colleagues. The inspiring
source of ACO is the foraging behavior of real ants. When searching for
food, ants initially explore the area surrounding their nest in a random
manner. As soon as an ant finds a food source, it evaluates quantity and
quality of the food and carries some of the found food to the nest. During
the return trip, the ant deposits a chemical pheromone trail on the ground.
The quantity of pheromone deposited, which may depend on the quantity and
quality of the food, will guide other ants to the food source. The indirect
communication between the ants via the pheromone trails allows them to
find shortest paths between their nest and food sources. This functionality
of real ant colonies is exploited in artificial ant colonies in order to
solve discrete
optimization problems.
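To make the pheromone mechanism concrete, here is a minimal sketch (not part of the tutorial; all names and parameter values are illustrative) of how an artificial ant colony might build tours for a small TSP-like instance and reinforce the edges of good solutions:

    import random

    def aco_tsp(dist, n_ants=10, n_iters=100, alpha=1.0, beta=2.0, rho=0.1, q=1.0):
        """Minimal Ant System sketch: dist is a symmetric matrix of edge lengths."""
        n = len(dist)
        tau = [[1.0] * n for _ in range(n)]          # pheromone on each edge
        best_tour, best_len = None, float("inf")
        for _ in range(n_iters):
            tours = []
            for _ in range(n_ants):
                tour = [random.randrange(n)]
                while len(tour) < n:
                    i = tour[-1]
                    choices = [j for j in range(n) if j not in tour]
                    # Choice probability ~ pheromone^alpha * (1/distance)^beta.
                    weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                               for j in choices]
                    tour.append(random.choices(choices, weights=weights)[0])
                length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
                tours.append((tour, length))
                if length < best_len:
                    best_tour, best_len = tour, length
            # Evaporation, then pheromone deposit proportional to tour quality.
            tau = [[(1 - rho) * t for t in row] for row in tau]
            for tour, length in tours:
                for k in range(n):
                    i, j = tour[k], tour[(k + 1) % n]
                    tau[i][j] += q / length
                    tau[j][i] += q / length
        return best_tour, best_len

    # Example: a tiny 4-city instance.
    d = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
    print(aco_tsp(d, n_iters=50))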
The tutorial consists of two parts. The first part is concerned with
the basics of ACO algorithms. We deal with questions such as the translation
of the biological inspiration into a technical algorithm. The most successful
ACO variants are presented. In the second part, some of the more advanced
features of ACO algorithms are presented in the context of interesting
applications. Finally, the tutorial concludes with examples of how to
hybridize ACO algorithms with other optimization techniques.
Christian Blum
|
Biosketch:
Christian Blum received the Diploma (Master
of Science) degree in mathematics in 1998 from the Universität Kaiserslautern
(Germany), and the doctoral degree in applied sciences in 2004
from the Université Libre de Bruxelles (ULB, Belgium).
From 1999 to 2000 he was with the Advanced Computation Laboratory
(ACL) of the Imperial Cancer Research Fund (ICRF) in London
(UK), and from 2000 to 2004 he was with IRIDIA at the Université Libre
de Bruxelles (ULB, Belgium). He is currently a "Ramon y Cajal"
research fellow at the ALBCOM research group
of the Universitat Politècnica de Catalunya (UPC, Spain).
The current subject of his research is the hybridization of metaheuristics
with more classical artificial intelligence and operations
research methods. He is co-founder of the International Workshops
on Hybrid Metaheuristics (HM).
|
Learning Classifier Systems
Description of Tutorial
When Learning Classifier Systems (LCSs) were introduced by John H.
Holland in the 1970s, the intention was the design of a highly
adaptive cognitive system. Since then, LCSs have come a long way. Interest
decreased strongly in the late 80s and early 90s due to the complex interactions
of several learning mechanisms. However, since the introduction
of the accuracy-based XCS classifier system by Stewart W. Wilson in
1995 and the modular analysis of several LCSs thereafter, interest
has regained momentum. Current research has shown that LCSs can effectively
solve data-mining problems, reinforcement learning problems, other
predictive problems, and cognitive control problems. It has been
shown that their performance is competitive with other machine learning
methods, while learning takes place online and is often more flexible and
highly adaptive. Moreover, system knowledge can be easily extracted. The Learning
Classifier System tutorial provides a gentle introduction to LCSs and
their
general functioning. It then surveys the current theoretical understanding
of the systems and their proper application to various problem
domains. Finally, we provide a suite of current successful LCS applications
and discuss the most promising areas for future applications.
Martin V. Butz
|
Biosketch:
Martin V. Butz received his Diploma in
computer science from the University of Wuerzburg, Germany
in 2001 with the thesis topic: “Anticipatory Learning
Classifier Systems”. He then moved on to the University
of Illinois at Urbana-Champaign for his PhD studies under the
supervision of David E. Goldberg. Butz finished his PhD in
computer science on “Rule-based evolutionary online learning
systems: Learning Bounds, Classification, and Prediction” in
October, 2004. The thesis puts forward a modular, facet-wise
system analysis for Learning Classifier Systems (LCSs) and
analyzes and enhances the XCS classifier system.
|
Since October 2004, Butz has been back at the University
of Würzburg, working at the Department of Cognitive Psychology III on the
interdisciplinary cognitive systems project “MindRACES: From reactive
to anticipatory cognitive embodied systems”.
Butz is the co-founder of the “Anticipatory Behavior
in Adaptive Learning Systems (ABiALS)” workshop series and is currently
also co-organizing the “International Workshop on Learning Classifier
Systems (IWLCS 2007)”. Moreover, Butz has published and co-edited
four books on learning classifier systems and anticipatory systems.
Currently, Butz is focussing on the design of highly flexible and adaptive
cognitive
system architectures, which are inspired by studies in cognitive psychology,
evolutionary biology, and behavioral neuroscience. http://www.psychologie.uni-wuerzburg.de/i3pages/butz
Probabilistic Model-Building Algorithms (PMBGAs)
Description of Tutorial
Probabilistic model-building algorithms (PMBGAs) replace traditional
variation of genetic and evolutionary algorithms by (1) building a
probabilistic model of promising solutions and (2) sampling the built
model to generate new candidate solutions. PMBGAs are also known as
estimation of distribution algorithms (EDAs) and iterated density-estimation
algorithms (IDEAs).
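As an illustration of steps (1) and (2), the sketch below implements the simplest member of this family, a univariate model over binary strings (in the spirit of UMDA/PBIL). It is our own illustrative example, not code from the tutorial, and all names and parameter values are placeholders.

    import random

    def univariate_eda(fitness, n_bits=30, pop_size=100, n_selected=50, generations=50):
        # Model: one independent probability per bit position.
        p = [0.5] * n_bits
        for _ in range(generations):
            # (2) Sample new candidate solutions from the current model.
            pop = [[1 if random.random() < p[i] else 0 for i in range(n_bits)]
                   for _ in range(pop_size)]
            # Select the most promising solutions ...
            pop.sort(key=fitness, reverse=True)
            selected = pop[:n_selected]
            # (1) ... and rebuild the model from their bit-wise frequencies.
            p = [sum(ind[i] for ind in selected) / n_selected for i in range(n_bits)]
        return max(pop, key=fitness)

    best = univariate_eda(fitness=sum)   # e.g. OneMax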
Replacing traditional crossover and mutation operators by building and
sampling a probabilistic model of promising solutions enables the use
of machine learning techniques for automatic discovery of problem regularities
and exploitation of these regularities for effective exploration of the
search space. Using machine learning in optimization enables the design
of optimization techniques that can automatically adapt to the given
problem. There are many successful applications of PMBGAs, for example,
Ising spin glasses in 2D and 3D, graph partitioning, MAXSAT, feature
subset selection, forest management, groundwater remediation design,
telecommunication network design, antenna design, and scheduling.
The tutorial Probabilistic Model-Building GAs will provide a gentle
introduction to PMBGAs with an overview of major research directions
in this area. Strengths and weaknesses of different PMBGAs will be discussed
and suggestions will be provided to help practitioners to choose the
best PMBGA for their problem.
Martin Pelikan
|
Biosketch:
Martin Pelikan received his Ph.D. in Computer Science
from the University of Illinois at Urbana-Champaign in 2002.
Since 2003 he has been an assistant
professor at the Dept. of Mathematics and Computer Science at the
University of Missouri in St. Louis. In 2006 Pelikan founded the Missouri
Estimation of Distribution Algorithms Laboratory (MEDAL). Prior to joining
the University of Missouri, he worked at the Illinois Genetic Algorithms
Laboratory (IlliGAL), the German National Center for Information
Technology in Sankt Augustin, the Slovak University of Technology in
Bratislava, and the Swiss Federal Institute of Technology (ETH) in Zurich.
|
Pelikan has worked as a researcher in genetic and evolutionary
computation since 1995. His most important contributions to genetic and
evolutionary computation are the Bayesian optimization algorithm (BOA),
the hierarchical BOA (hBOA), the scalability theory for BOA and hBOA,
and the efficiency enhancement techniques for BOA and hBOA. His current
research focuses on extending BOA and hBOA to other problem domains,
applying genetic and evolutionary algorithms to real-world problems with
the focus on physics and machine learning, and designing new efficiency
enhancement techniques for BOA and other evolutionary algorithms.
Coevolution
Kenneth O. Stanley
|
Biosketch:
Kenneth O. Stanley is an Assistant Professor in
the School of Computer Science at the University of Central Florida.
His research focuses
on artificially evolving complex solutions to difficult real-world
tasks. He graduated with a B.S.E. in Computer Science Engineering
and a minor in Cognitive Science from the University of Pennsylvania
in 1997. He received an M.S. in Computer Science in 1999, and a Ph.D.
in 2004 at the University of Texas at Austin. He has won best paper
awards for his work on NEAT (at the 2002 Genetic and Evolutionary
Computation Conference) and for his work on NERO (at the IEEE 2005
Symposium on Computational Intelligence and Games), and also won
the Independent Games Festival Student Showcase Award (at the 2006
Game Developers conference) for NERO. |
He has published papers in JAIR, Evolutionary Computation, IEEE Transactions
on Evolutionary Computation,
and
Artificial Life journals.
Particle Swarm Optimization:
Description of Tutorial
Since the first publication of Particle Swarm Optimization (PSO) in 1995,
the number of research papers on PSO, and the number of researchers in PSO,
have exploded. Many variations of the PSO have been developed to improve
its performance, studies have been done to understand the dynamics of particles,
and adaptations have been developed to apply the PSO to different optimization
problem types.
This tutorial is planned in two parts, one introductory, and one addressing
more advanced topics. The introductory part will have as its objective
to provide the attendee with an overview of PSO and its basic variations.
A significant problem with the standard PSO will be illustrated, and
a few results from studies of particle trajectories will be presented.
The advanced part of the tutorial will consider PSO models that are specifically
designed for multimodal optimization, multiobjective optimization, optimization
in dynamic environments, and constraint handling.
In more detail, the tutorial will cover the following topics, in the
order listed below:
Introduction:
1. Basic PSO: The philosophy of PSO will be discussed, and the basic
(original) PSO algorithms will be explained and illustrated (a minimal
update sketch is given after this topic list). The need for social network
structures will be discussed, as well as the importance of PSO control
parameters and basic variations (velocity clamping, inertia, constriction).
An overview of performance criteria will be given.
2. Particle Trajectories: Illustrations of particle trajectories and
the influence of parameter choices on trajectories will be given. Heuristics
for the selection of control parameter values will be given.
3. PSO problem: A problem with the PSO that causes premature convergence
will be discussed, and solutions proposed.
Advanced:
4. Multimodal optimization: speciation and niching techniques used in
PSO will be described. Illustration of a speciation-based PSO will be
provided, as well as analysis of its performance and some research issues.
5. Multiobjective optimization: an overview of existing multiobjective
PSO models will be provided. In addition, a multiobjective PSO will be
shown to have a fast convergence and nice spread of solutions across
the Pareto optimal front.
6. Optimization in dynamic environments: The rationale of PSO being suitable
for optimization in a dynamic environment will be given. Recent developments
on this topic will be presented, with illustrations of simulation runs
of a PSO model on the Moving Peaks benchmark test functions.
7. Constraint handling: a brief overview of existing constraint handling
techniques adopted in PSO will be provided.
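For reference, the basic update rule that the introductory part builds on can be sketched as follows. This is a global-best PSO with an inertia weight; it is our own illustrative sketch, not code from the tutorial, and the parameter values shown are common defaults rather than prescriptions.

    import random

    def gbest_pso(f, dim, n_particles=30, iters=200, w=0.729, c1=1.49445, c2=1.49445):
        """Minimal global-best PSO minimizing f over [-5, 5]^dim."""
        x = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
        v = [[0.0] * dim for _ in range(n_particles)]
        pbest = [xi[:] for xi in x]                       # personal best positions
        pbest_f = [f(xi) for xi in x]
        g = min(range(n_particles), key=lambda i: pbest_f[i])
        gbest, gbest_f = pbest[g][:], pbest_f[g]          # global best
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    # Inertia + cognitive + social components.
                    v[i][d] = (w * v[i][d]
                               + c1 * r1 * (pbest[i][d] - x[i][d])
                               + c2 * r2 * (gbest[d] - x[i][d]))
                    x[i][d] += v[i][d]
                fx = f(x[i])
                if fx < pbest_f[i]:
                    pbest[i], pbest_f[i] = x[i][:], fx
                    if fx < gbest_f:
                        gbest, gbest_f = x[i][:], fx
        return gbest, gbest_f

    # Example: minimize the sphere function.
    best, value = gbest_pso(lambda p: sum(c * c for c in p), dim=10)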
Andries Engelbrecht
|
Biosketches:
Andries Engelbrecht is a Full Professor in Computer Science
at the Department of Computer Science, University of Pretoria.
He manages a research group of 40 Masters and PhD students,
most of whom do research in swarm intelligence. He has recently
authored a book, “Fundamentals of Computational Swarm
Intelligence”, published by Wiley. He is also the author
of a book, “Computational Intelligence: An Introduction”,
also published by Wiley. He has presented tutorials on PSO
and Coevolutionary methods for evolving game agents at IEEE
CEC 2005 and IEEE CIG 2005 respectively. He is co-presenter
of a tutorial on PSO and DE at IEEE CEC 2007. He has published
approximately 100 papers in the last decade, serves as a reviewer
for a number of conferences and journals, and is an associate-editor
of IEEE TEC, and serves on the editorial board of two other
journals. He has served on a large number of conference
program committees, and is on the organizing committee of several
conferences.
|
Dr Xiaodong
Li is a Senior Lecturer in the School of Computer Science and IT,
RMIT University, Melbourne, Australia. His research interests
include Artificial Intelligence, Evolutionary Computation, Artificial
Neural Networks, Swarm Intelligence, Multiobjective Optimization,
Optimization in Dynamic Environments, and their applications
to real-world problems. Dr. Li is an IEEE member, and a member
of SIGEVO. He serves as a technical committee member of Working
Group on Swarm Intelligence, IEEE Computational Intelligence
Society. He was the organizing chair for special session on Swarm
Intelligence in CEC'03, CEC'04, and CEC'06. He has recently presented
a tutorial on Particle Swarm Optimization at SEAL’06. He
is the publicity chair and a member of the steering committee
for the IEEE Swarm Intelligence Symposium 2007 (SIS'07). He is
the general and programme chair for SEAL’08, which will
be held in Melbourne, Australia in 2008.
|
Xiaodong Li
|
Advanced Genetic Programming Theory:
Description of Tutorial
GP is a complex adaptive system with zillions of degrees of freedom.
Experimental studies require the experimenter to choose which problems,
parameter settings and descriptors to use. Plotting the wrong data increases
the confusion about GP's behaviour, rather than clarifying it.
We treat GP as a search process and explain its behaviour by considering
the search space, in terms of its size, its limiting fitness distributions
and also the halting probability. Then we shall use modern schema theory
to characterise GP search. We finish with lessons and implications.
Knowledge of genetic programming will be assumed.
Riccardo Poli
|
Biosketches:
Riccardo Poli is a professor in the Department
of Computer Science at Essex. His main research interests include
genetic
programming (GP) and the theory of evolutionary algorithms. In July
2003 Prof. Poli was elected a Fellow of The International Society for
Genetic and Evolutionary Computation (ISGEC) ``... in recognition of
sustained and significant contributions to the field and the
community''. He has published over 180 refereed papers on
evolutionary algorithms (particularly genetic programming), neural
networks and image/signal processing. With W.B.Langdon, he has
co-authored the book Foundations of Genetic Programming
. He has been co-founder and co-chair of EuroGP,
the European Conference on Genetic Programming for 1998, 1999, 2000
and 2003.
|
Prof. Poli was the chair of the GP theme
at the Genetic and Evolutionary Computation Conference (GECCO) 2002 (the
largest conference in the field) and was co-chair of the prestigious
Foundations of Genetic Algorithms (FOGA) Workshop in 2002.
He has been
(the first non-US) general chair of GECCO in 2004, and served as a member
of the business committee for GECCO 2005.
He is technical chair of the
International workshop on Ant Colony Optimisation and Swarm Intelligence
(ANTS 2006) and competition chair for GECCO 2006. He is an associate
editor of ``Evolutionary Computation'' (MIT Press), ``Genetic Programming
and Evolvable Machines'' (Springer) and the ``International Journal of
Computational Intelligence Research'' (IJCIR). Prof. Poli has been programme
committee member of over 50 international events. He has presented invited
tutorials on GP at 10 international conferences. He is a member of the
EPSRC Peer Review College and has attracted, as principal Investigator
or co-investigator, funding for over GBP 1.8M from EPSRC, DERA, Leverhulme
Trust, Royal Society, and others.
W. B. Langdon is a senior research
fellow in computer systems engineering at Essex University.
He worked on distributed real time databases for control and
monitoring of power stations at
the Central Electricity Research Laboratories.
He then joined Logica to work on
distributed control of gas pipelines and
later on computer and telecommunications networks.
After returning to academe to gain a PhD in genetic programming
(sponsored by National Grid plc.),
he has worked at the University of Birmingham, the CWI, UCL and,
most recently, at Essex University.
|
W. B. Langdon
|
to top
|
Representations for Evolutionary Algorithms:
Description of Tutorial
Successful and efficient use of evolutionary algorithms
(EAs) depends on the choice of the problem representation - that is,
the genotype and the mapping from genotype to phenotype - and on the
choice of search operators that are applied to this representation. These
choices cannot be made independently of each other. The question whether
a certain representation leads to better performing EAs than an alternative
representation can only be answered when the operators applied are taken
into consideration. The reverse is also true: deciding between alternative
operators is only meaningful for a given representation. In EA practice one can distinguish two complementary approaches. The
first approach uses indirect representations where a solution is encoded
in a standard data structure, such as strings, vectors, or discrete permutations,
and standard off-the-shelf search operators are applied to these genotypes.
To evaluate the solution, the genotype needs to be mapped to the phenotype
space. The proper choice of this genotype-phenotype mapping is important for
the performance of the EA search process. The second approach, the direct representation,
encodes solutions to the problem in its most 'natural' space and designs search
operators to operate on this representation.
Research in the last few years has identified a number of key concepts
to analyse the influence of representation-operator combinations on EA
performance.
These concepts are
• locality,
• redundancy, and
• bias.
Locality is a result of the interplay between the search operator and
the genotype-phenotype mapping. Representations are redundant if the
number of possible genotypes exceeds the number of phenotypes. Furthermore,
redundant representations can lead to biased encodings if some phenotypes
are on average represented by a larger number of genotypes. Finally,
a bias need not be the result of the representation but can also be caused
by the search operator.
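As a small illustration (ours, not taken from the tutorial), consider a hypothetical redundant genotype-phenotype mapping in which many binary genotypes encode the same integer phenotype; whether the encoding is biased can be checked simply by counting how many genotypes map to each phenotype:

    from collections import Counter
    from itertools import product

    # Hypothetical redundant mapping: a 4-bit genotype encodes the phenotype
    # "number of ones", so several genotypes share the same phenotype.
    def genotype_to_phenotype(genotype):
        return sum(genotype)

    counts = Counter(genotype_to_phenotype(g) for g in product((0, 1), repeat=4))
    print(counts)   # e.g. Counter({2: 6, 1: 4, 3: 4, 0: 1, 4: 1})
    # Phenotype 2 is represented by 6 genotypes, phenotypes 0 and 4 by only 1 each:
    # the representation is redundant and biased towards intermediate phenotypes.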
The tutorial gives a brief overview about existing guidelines for representation
design, illustrates the different aspects of representations, gives a
brief overview of theoretical models describing the different aspects,
and illustrates the relevance of the aspects with practical examples.
It is expected that the participants have a basic understanding of EA principles.
Franz Rothlauf |
Biosketch:
Franz Rothlauf (M'98) received a Diploma in Electrical
Engineering from the University of Erlangen, Germany, and a Ph.D.
in Information Systems from the University of Bayreuth, Germany,
in 1997, and 2001, respectively.
He is currently Professor at the Department of Information Systems,
Mainz University, Germany. He has published more than 45 technical
papers in the context of evolutionary computation, co-edited
several conference proceedings, and is the author of the book "Representations
for Genetic and Evolutionary Algorithms", which was published
in a second edition in 2006. |
His main research interests are problem representations for heuristic
search approaches, especially evolutionary algorithms. For several years
he was a visiting researcher at the Illinois Genetic Algorithm Laboratory
(IlliGAL). He is a member of the Editorial Board of Evolutionary Computation
Journal and Information Sciences.
He has been organizer of several workshops on representations issues,
chair of EvoWorkshops in 2005 and 2006, co-organizer of the European
workshop series on "Evolutionary Computation in Communications,
Networks, and Connected Systems", topic chair for IEEE CEC 2005,
and co-chair of the program committee of the GA track at GECCO 2006.
No Free Lunch:
Description of Tutorial
The "No Free Lunch" theorem is now more than
10 years old; it asserts that all possible search algorithms have the
same expected behavior over all possible discrete search problems. But
there is more than this to No Free Lunch (NFL). NFL also holds over finite
sets of functions, some of which are compressible and some of which are
not. A focused form of NFL also holds for very small finite groups of
parameter optimization problems encoded using Binary and Gray Code representations.
Some observations that hold when NFL is applied over all possible discrete
functions are not true when NFL holds over a finite set.
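For orientation only, the statement usually cited from Wolpert and Macready's formulation can be written as follows: summed over all functions f from a finite domain to a finite codomain, the probability of observing any particular sequence of sample values d_m^y after m distinct evaluations is the same for any two algorithms a_1 and a_2,

    \sum_{f} P(d_m^y \mid f, m, a_1) \;=\; \sum_{f} P(d_m^y \mid f, m, a_2).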
Variants of local search are capable of beating random enumeration (and
sidestepping NFL) over large classes of problems of bounded complexity.
The talk will also explore what NFL tells us about how to construct,
compare and evaluate search algorithms.
Darrell Whitley
|
Biosketch:
Darrell Whitley is Professor and Chair of Computer
Science at Colorado State University. He was the Chair of the Governing
Board of the International Society for Genetic Algorithms from
1993 to 1997. He was editor-in-chief of the journal Evolutionary
Computation from 1997 to 2003.
|
Bioinformatics:
Description of Tutorial
Bioinformatics is an interdisciplinary field of study that blends computer
science and statistics with the biological sciences. Important objectives
of bioinformatics include the storage, management and retrieval of high-dimensional
biological data and the detection, characterization, and interpretation
of patterns in such data. The goal of this tutorial is to provide an
entry-level introduction to bioinformatics for those hoping to apply
genetic and evolutionary computation to solving complex biological problems.
Specific examples of how methods such as genetic programming have been
applied in
bioinformatics will be presented.
Jason H. Moore
|
Biosketch:
Jason H. Moore, Ph.D.: Dr. Moore received
his M.S. in Statistics and his Ph.D. in Human Genetics from
the University
of Michigan. He then served as Assistant
Professor of Molecular Physiology and Biophysics (1999-2003) and
Associate Professor of Molecular Physiology and Biophysics with tenure
(2003-2004) at Vanderbilt University. While at Vanderbilt, Dr. Moore
held an endowed position as an Ingram Associate Professor of Cancer
Research. He also served as Director of the Bioinformatics Core and
Co-Founder and Co-Director of the Vanderbilt Advanced Computing Center
for Research and Education (ACCRE).
|
In 2004, Dr. Moore accepted a position as the Frank Lane Research Scholar
in Computational Genetics, Associate Professor of Genetics, and Adjunct
Associate Professor of Community and Family Medicine, and Director of Bioinformatics
at Dartmouth Medical School. He also holds adjunct positions in the Department
of Biological Sciences at Dartmouth College, the Department of Computer
Science at the University of New Hampshire, and the Department of Computer
Science at the University of Vermont. Dr. Moore serves as Director of the
Bioinformatics Shared Resource for the Norris-Cotton Cancer Center and
Founder and Director of The Discovery Resource, a 300-processor parallel
computer cooperatively operated for the Dartmouth community. His research
has been communicated
in more than 130 scientific publications.
Evolutionary Multiobjective Optimization:
Description of Tutorial
Many real-world search and optimization problems are
naturally posed as non-linear programming problems having multiple conflicting
objectives.
Due to lack of suitable solution techniques, such problems are usually
artificially converted into a single-objective problem and solved. The
difficulty arises because multi-objective optimization problems give
rise to a set of Pareto-optimal solutions, each corresponding to a certain
trade-off among the objectives. It then becomes important to find not
just one Pareto-optimal solution but as many of them as possible.
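For readers new to the area, the relation underlying a Pareto-optimal set can be stated as a small test. The sketch below (ours, for minimization problems; the function name is illustrative) checks dominance and filters a set of objective vectors down to its non-dominated front:

    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (minimization):
        no worse in every objective and strictly better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    # Example: four candidate solutions with two conflicting objectives.
    solutions = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0)]
    front = [s for s in solutions if not any(dominates(t, s) for t in solutions)]
    print(front)   # [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]; (3.0, 4.0) is dominated by (2.0, 3.0)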
Classical methods are not efficient because
they require repeated applications to find multiple Pareto-optimal
solutions, and on some occasions repeated applications do not guarantee
finding distinct Pareto-optimal solutions. The population approach of
evolutionary algorithms (EAs) allows an efficient way to find multiple
Pareto-optimal solutions simultaneously in a single simulation run.
In this tutorial, we shall contrast the differences
in philosophies between classical and evolutionary multi-objective
methodologies and provide adequate fundamentals needed to understand
and use both methodologies in practice.
Particularly, major state-of-the-art evolutionary multi-objective
optimization (EMO) methodologies will be presented and various related
issues such as performance assessment and preference articulation
will be discussed. Thereafter, three main application areas of EMO
will be discussed with adequate case studies from practice -- (i)
applications showing better decision-making abilities through EMO,
(ii) applications exploiting the multitude of trade-off solutions
of EMO in extracting useful information in a problem, and (iii) applications
showing better problem-solving abilities in various other tasks (such
as, reducing bloating, solving single-objective constraint handling,
and others).
Clearly, EAs have a niche in solving multi-objective optimization problems
compared to classical methods. This is why EMO methodologies have attracted
growing attention in the recent past. Since this is a comparatively
new field of research, in this tutorial, a number of future challenges
in the research and application of multi-objective optimization will
also be discussed.
This tutorial is aimed at both novices and users of EMO. Those without
any knowledge in EMO will have adequate ideas of the procedures and their
importance in computing and problem-solving tasks. Those who have been
practicing EMO will also have enough ideas and materials for future research,
know state-of-the-art results and techniques, and make a comparative
evaluation of their research.
Eckart Zitzler |
Biosketches:
Eckart Zitzler received degrees from University
of Dortmund in Germany (diploma in computer science) and ETH Zurich
in Switzerland (doctor of technical sciences). Since 2003, he has
been Assistant Professor for Systems Optimization at the Computer
Engineering and Networks Laboratory at the Department of Information
Technology and Electrical Engineering of ETH Zurich, Switzerland.
His research focuses on bio-inspired computation, multiobjective
optimization, computational biology, and computer engineering applications.
Dr. Zitzler was General Co-Chairman of the first three international
conferences on evolutionary multi-criterion optimization (EMO 2001,
EMO 2003, and EMO 2005)
|
Kalyanmoy
Deb is currently a Professor of Mechanical Engineering at Indian
Institute of Technology
Kanpur, India and is the director of
Kanpur Genetic Algorithms Laboratory (KanGAL). He is the recipient of
the prestigious Shanti Swarup Bhatnagar Prize in Engineering Sciences
for the year 2005. He has also received the `Thomson Citation Laureate
Award' from Thomson Scientific for having the highest number of citations
in Computer Science in India during the past ten years.
He is a fellow of Indian National Academy of
Engineering (INAE), Indian National Academy of Sciences,
and International Society of Genetic and Evolutionary
Computation (ISGEC). He received the Friedrich Wilhelm Bessel Research
Award from the Alexander von Humboldt Foundation in 2003.
|
Kalyanmoy
Deb
|
His main research interests are in the area of computational
optimization, modeling and design, and evolutionary algorithms. He has
written two text books on optimization and more than 180 international
journal and conference research papers. He is a pioneer and a leader in
the field of evolutionary multi-objective optimization. He is associate
editor of two major international journals and an editorial board member
of five major journals. More information about his research can be found
at http://www.iitk.ac.in/kangal/deb.htm.
Addresses:
Eckart Zitzler
Computer Engineering (TIK), ETH Zurich
Gloriastr. 35, 8092 Zurich, Switzerland
Phone: +41-1-6327066, Fax: +41-1-6321035
http://www.tik.ee.ethz.ch/~zitzler/
Kalyanmoy Deb
Professor, Department of Mechanical Engineering
Indian Institute of Technology Kanpur
Kanpur, Pin 208 016, INDIA
Phone: +91 512 2597205 (O), +91 512 2598310 (H), Fax: +91 512 2597205, 2590007
http://www.iitk.ac.in/kangal/deb.htm
|
Industrial Evolutionary Computation
Description of Tutorial
The tutorial gives a systematic view, based on the experience
from The Dow Chemical Company, of the key issues for applying Evolutionary
Computation (EC) in industrial problems. The competitive advantages and
the benefits of using EC in industry are defined as well as the criteria
for successful practical applications. The implementation issues of several
computational intelligence techniques, such as analytic neural networks,
support vector machines, Pareto-front genetic programming, and particle
swarm optimizers are discussed.
An integrated methodology, which uses the discussed methods for variable
selection, data compression, and robust empirical model building is proposed
and illustrated with successful industrial applications in the area of
process monitoring, optimization, and new product design. The open technical
and non-technical issues of applied EC are addressed and selected areas
of future research are recommended.
Arthur Kordon
|
Biosketches:
Arthur Kordon is a Research & Development
Leader in the Modeling Group in the Engineering & Process Sciences
Department of Core R&D, The Dow Chemical Company in Freeport,
Texas, USA. He is an internationally recognized expert in applying
computational intelligence technologies in industry. Dr. Kordon
has successfully introduced several novel technologies for improved
manufacturing and new product design, such as robust inferential
sensors, automated operating discipline, accelerated fundamental
model building, etc. His research interests include application
issues of computational intelligence, robust empirical modeling,
intelligent process monitoring and control, and data mining. He
has published more than 60 papers and 7 book chapters in the area
of applied computational intelligence and advanced control.
|
Dr. Kordon holds a Master of Science degree in Electrical Engineering from
the Technical University of Varna, Bulgaria and a Ph.D. degree in Electrical
Engineering from the Technical University of Sofia, Bulgaria.
Guido
Smits is a Research Leader in the Engineering Sciences within the Core
R&D Department in The Dow Chemical Company. His main area of
expertise is in industrial applications of computational intelligence
techniques like neural nets, genetic algorithms, genetic programming,
support vector machines, and modeling in general. He has more than
50 technical papers and currently holds 15 patents. He has a Ph.D.
in Mathematics and Physical Sciences from the University of Leiden,
The Netherlands and holds degrees in Organic Chemistry (University
of Antwerp, Belgium), Informatics (University of Leuven, Belgium),
Polymer Physics and Chemistry (University of Eindhoven, The Netherlands),
and Knowledge Technology (University of Ghent, Belgium). He lives
in Wijnegem, Belgium. |
Guido Smits
|
Constraint Handling Techniques Used with EAs:
Description of Tutorial
When used for global optimization, Evolutionary Algorithms
(EAs) can be seen as unconstrained optimization techniques. Therefore,
they require an additional mechanism to incorporate constraints of
any kind (i.e., inequality, equality, linear, nonlinear) into their
fitness function. Although the use of penalty functions (very popular
with mathematical programming techniques) may seem an obvious choice,
this sort of approach requires a careful fine tuning of the penalty
factors to be used. Otherwise, an EA may be unable to reach the feasible
region (if the penalty is too low) or may reach quickly the feasible
region but being unable to locate solutions that lie in the boundary
with the infeasible region (if the penalty is too severe).
This has motivated the development of a number of approaches to incorporate
constraints into the fitness function of an EA. This tutorial will
cover the main proposals in current use, including novel approaches
such as the use of tournament rules based on feasibility, multiobjective
optimization concepts, hybrids with mathematical programming techniques
(e.g., Lagrange multipliers), cultural algorithms, and artificial immune
systems, among others. Other topics such as the importance of maintaining
diversity, current benchmarks and the use of alternative search engines
(e.g., particle swarm optimization, differential evolution, evolution
strategies, etc.) will also be discussed (as time allows).
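As one concrete example of the feasibility-based tournament rules mentioned above, a binary tournament can compare candidates without any penalty factors. This is a sketch of the commonly cited rules popularized by Deb; the function names and the example problem are ours, for illustration only.

    def constraint_violation(x, constraints):
        """Sum of violations of inequality constraints g(x) <= 0 (0 if feasible)."""
        return sum(max(0.0, g(x)) for g in constraints)

    def tournament_winner(a, b, fitness, constraints):
        """Feasibility rules for minimization:
        1. a feasible solution beats an infeasible one,
        2. between two feasible solutions, the better objective value wins,
        3. between two infeasible solutions, the smaller total violation wins."""
        va = constraint_violation(a, constraints)
        vb = constraint_violation(b, constraints)
        if va == 0.0 and vb == 0.0:
            return a if fitness(a) <= fitness(b) else b
        if va == 0.0:
            return a
        if vb == 0.0:
            return b
        return a if va <= vb else b

    # Example: minimize x^2 subject to x >= 1 (i.e. 1 - x <= 0).
    winner = tournament_winner(0.5, 1.2, fitness=lambda x: x * x,
                               constraints=[lambda x: 1.0 - x])
    # 0.5 is infeasible, 1.2 is feasible, so 1.2 wins despite its larger objective.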
Carlos Coello Coello |
Biosketch:
Carlos Artemio Coello Coello received a BSc in Civil Engineering
from the Universidad Autonoma de Chiapas in Mexico in 1991
(graduating Summa Cum Laude). Then, he was awarded a scholarship
from the Mexican government to pursue graduate studies in Computer
Science at Tulane University (in the USA). He received a MSc
and a PhD in Computer Science in 1993 and 1996, respectively.
Dr. Coello has been a Senior Research Fellow in the Plymouth Engineering Design
Centre (in England) and a Visiting Professor at DePauw University (in the USA).
He is currently full professor at CINVESTAV-IPN in Mexico City, Mexico.
|
He has published over 170 papers in international
peer-reviewed journals and conferences. He has also
co-authored the book "Evolutionary Algorithms for Solving Multi-Objective
Problems" (Kluwer Academic
Publishers, 2002) and has co-edited the book "Applications of Multi-Objective
Evolutionary Algorithms" (World Scientific, 2004).
He has delivered invited talks,
keynote speeches and tutorials at international conferences held in Spain, USA,
Canada, Switzerland, UK, Chile, Colombia, Brazil, Argentina, Uruguay and Mexico.
Dr. Coello has served as a technical reviewer for over 40 international journals
and for more than 40 international conferences, and currently serves as associate
editor of the journals "IEEE Transactions
on Evolutionary Computation" and "Evolutionary
Computation" and as a member of the editorial boards
of the journals "Soft Computing", "Engineering
Optimization", "Journal of Heuristics", "Computational Optimization
and Applications" and the "International Journal of Computational Intelligence
Research". He is member of the Mexican Academy of Science, Senior Member
of the IEEE, and member of Sigma Xi, The Scientific Research Society. He is also
a member
of the Council of Authors of the "International Society for Genetic and
Evolutionary Computation". His current research interests are: evolutionary
multiobjective optimization, constraint-handling techniques for evolutionary
algorithms and evolvable
hardware.
Evolutionary Practical Optimization:
Description of Tutorial
Classical optimization methods are often too restrictive
to be applied to practical problem solving due to a number of limitations
in their working principles: convexity requirement, continuity of search
space, unimodality, single-objectiveness, etc. Unfortunately, most practical
optimization problems have anything but the above properties. Evolutionary
algorithms belong to a class of stochastic optimization algorithms which,
although they do not always guarantee finding the optimal solution, are often
the only viable choice for solving such problems to near-optimality.
In this tutorial, we shall discuss a number of properties and operators
which provide EAs the necessary platform to solve such complex optimization
problems. Every aspect will be contrasted with competitive classical
methods and will be illustrated with interesting case studies.
Evolutionary optimization also has a bright future beyond optimization,
in terms of its ability to decipher innovative principles
for being optimal - a matter which will be well illustrated with a number
of interesting practical case studies.
Contents:
1. Practical optimization problems and their characteristics
1.1 Large dimension (variables and constraints)
1.2 Non-linear constraints
1.3 Discontinuities, non-differentiability, discreteness of search space
1.4 Multi-modalities (multiple optima, local and global)
1.5 Multi-objectivity (multiple conflicting objectives, trade-off solutions)
1.6 Uncertainties in variables and objectives (robust design, reliability-based
design, handling noise)
1.7 Large computational time for evaluation (need for approximate eval. using
RSM, Kriging, neural nets etc.)
1.8 Imprecise description of variables and objectives (need for fuzzy logic
and rough set descriptions)
1.9 Dynamic optimization problems in which optimization problem changes with
time (iteration counter)
2. Classical optimization principles and their shortcomings for handling practical
problems
3. Efficient evolutionary algorithms for practical optimization (Modifications
to a basic EA will be discussed to handle each issue of item 1. Reasons
for their success will be given. Moreover, to illustrate at least one
case study for each case will be discussed to drive the point home)
4. Need for hybrid EA-Classical methods
5. Beyond optimization: Unveiling innovation and innovative principles
for being optimum (EAs allow finding multiple optimal solutions in one
simulation. These solutions can be analyzed for commonality principles
which often give rise to innovative principles about solving the problem
which were not known before and not possible to achieve by other means.
A number of interesting case studies will be discussed to illustrate
the innovative principles which can be deciphered by this process. This
task has a tremendous application in engineering design activities.)
Who should attend? All those who are interested in practical optimization
and practical problem solving will find this tutorial useful. Besides
getting a feel for the differences between evolutionary optimization
and its classical counterparts, participants will see that evolutionary
optimization provides a more (w)holistic approach which can not only
be used to find the optimum or a near-optimum solution, but also to look
beyond it and gain a plethora of important insights about solving the problem
at hand.
Kalyanmoy Deb |
Biosketch:
Kalyanmoy Deb is currently a Professor of Mechanical
Engineering at Indian Institute of Technology Kanpur, India and is
the director of Kanpur Genetic Algorithms Laboratory (KanGAL). He is
the recipient of the prestigious Shanti Swarup Bhatnagar Prize in Engineering
Sciences for the year 2005. He has also received the `Thomson Citation
Laureate Award' from Thomson Scientific for having the highest number
of citations in Computer Science in India during the past ten years.
He is a fellow of Indian National Academy of Engineering (INAE), Indian
National Academy of Sciences, and International Society of Genetic
and Evolutionary Computation (ISGEC). He received the Friedrich Wilhelm
Bessel Research Award from the Alexander von Humboldt Foundation in 2003.
|
His main research interests are in the area of computational optimization,
modeling and design, and evolutionary algorithms. He has written two
text books on optimization and more than 180 international journal and
conference research papers. He is a pioneer and a leader in the field
of evolutionary multi-objective optimization. He is associate editor
of two major international journals and an editorial board member of
five major journals. More information about his research can be found
at http://www.iitk.ac.in/kangal/deb.htm.
Kalyanmoy Deb
Professor of Mechanical Engineering
Department of Mechanical Engineering
Indian Institute of Technology Kanpur
Kanpur, PIN 208016, India
Email:
Web: http://www.iitk.ac.in/kangal/deb.htm
Computational Complexity and Evolutionary Computation:
Description of Tutorial
Evolutionary algorithms and other nature-inspired search heuristics like
ant colony optimization have been shown to be very successful when dealing
with real-world applications or problems from combinatorial optimization.
In recent years, analyses have shown that these general randomized search
heuristics can be analyzed like "ordinary" randomized algorithms
and that such analyses of the expected optimization time yield deeper insights
into the functioning of evolutionary algorithms in the context of approximation
and optimization. This is an important research area where a lot of interesting
questions
are still open.
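As a concrete example of the kind of heuristic these runtime analyses target, the (1+1) EA on a pseudo-Boolean function can be written in a few lines; its expected optimization time on OneMax, for instance, is known to be Theta(n log n). The sketch below is ours, for illustration only; all names and the stopping criterion are assumptions.

    import random

    def one_plus_one_ea(f, n, max_evals=100000):
        """(1+1) EA maximizing a pseudo-Boolean function f over {0,1}^n."""
        x = [random.randint(0, 1) for _ in range(n)]
        fx = f(x)
        evals = 1
        while fx < n and evals < max_evals:      # stop at the OneMax optimum (or budget)
            # Standard bit mutation: flip each bit independently with probability 1/n.
            y = [1 - b if random.random() < 1.0 / n else b for b in x]
            fy = f(y)
            evals += 1
            if fy >= fx:                          # accept if not worse
                x, fx = y, fy
        return evals

    # Expected number of evaluations on OneMax grows as Theta(n * log n).
    print(one_plus_one_ea(sum, n=100))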
The tutorial enables attendees to analyze the computational complexity
of evolutionary algorithms and other search heuristics in a rigorous
way. An overview of the tools and methods developed within the last 15
years is given and practical examples of the application of these analytical
methods are presented.
Thomas Jansen
|
Biosketches:
Thomas Jansen, born 1969, studied Computer Science at the University
of Dortmund, Germany. He received his diploma (1996) and Ph.D. (2000)
there. From
September 2001 to August 2002 he stayed as a Post-Doc at Kenneth De Jong's
EClab at George Mason University in Fairfax, VA. Since September 2002
he has held a position as Juniorprofessor for Computational Intelligence at the
University of Dortmund. He is an associate editor of Evolutionary Computation
(MIT Press). His research is centered around the theoretical analysis
of evolutionary algorithms.
|
Frank Neumann studied
Technical Computer Science in Dortmund and Kiel, Germany. He received
his diploma (2002) and Ph.D. (2006) from the Christian-Albrechts-University
of Kiel. Since November 2006 he has been a Post-Doc in the research group "Algorithms
and Complexity" (headed by Kurt Mehlhorn)
of the Max-Planck-Institute for Computer Science in Saarbrücken, Germany.
A central topic of his work is the theoretical analysis of randomized search heuristics,
in particular for problems from combinatorial optimization. He is the co-chair
of a newly founded track on "Formal Theory" at GECCO 2007 which aims
to be a new forum for strong theoretical work dealing with evolutionary computation
methods.
|
Frank Neumann
|
Complex networks and evolutionary computation
Description of Tutorial
This tutorial will try to explain the main concepts in complex network analysis.
Networks (i.e. graphs) that have been created by, or constructed from, natural
phenomena, present some striking characteristics and similarities that make
them worthwhile researching. The initial interest in such graphs dates back to
the fifties, with work carried out by Erdős and collaborators, but it was
revived at the end of the nineties by researchers such as Mark Newman, Watts
and Strogatz, and Barabási, who described scale-free networks and small-world
networks. These concepts have been put to work in many disciplines, including
evolutionary computation. The tutorial will be illustrated with examples drawn
from the EC co-authorship networks, and how complex network analysis has been
used in several evolutionary computation applications.
Juan Julián Merelo
|
Biosketches:
Juan Julián Merelo is Associate
Professor at the University of Granada. His main research interests
are distributed evolutionary computation and complex systems. He has
been co-organizer or organizer of the last 4 PPSN conferences, and
participated in program committees of all major EC conferences.
|
Carlos Cotta is Associate Professor at the University of Málaga.
He received the MSc and PhD degrees in 1994 and 1998, respectively, in
Computer Science from the University of Málaga (Spain). His research
interests are primarily in evolutionary algorithms, especially memetic
algorithms, and cross-paradigm hybridization. He is also interested in
combinatorial optimization, and in bioinformatics.
|
Carlos Cotta
|
Fitness Landscapes and Problem Hardness
in Evolutionary Computation:
Description of Tutorial
The performance of searching agents, or metaheuristics,
like evolutionary algorithms (genetic algorithms,
genetic programming, etc.) or local search algorithms
(simulated annealing, tabu search, etc.) depends on
some properties of the search space structure. One
concept that allows us to analyse the search space
is the fitness landscape; in its most general definition,
a fitness landscape takes into account both the optimization
problem and some characteristics of the search algorithm
used to solve it. This tutorial presents some definitions
of fitness landscape. Subsequently, the concept of
landscape geometry will be introduced and two of the
most common landscape geometries (multimodal and neutral)
and the dynamics of the metaheuristics on those landscapes
will be discussed. After that, the link between
fitness landscapes and problem difficulty will be discussed,
and a set of measures that characterize the difficulty
a metaheuristic has in searching for solutions in a fitness
landscape will be analysed.
Among those measures, particular relevance will be
given to Fitness Distance Correlation (FDC), Negative
Slope Coefficient (NSC), a set of measures bound to
the concept of Neutrality and some distance metrics
and/or similarity measures that are consistent with
the most commonly used genetic operators. Finally,
some open questions about fitness landscapes are discussed.
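To make one of these measures concrete, fitness distance correlation (FDC) is simply the sample correlation between the fitness of sampled solutions and their distance to the nearest global optimum. The sketch below (ours, with illustrative names) computes it for binary landscapes using Hamming distance:

    import random

    def fitness_distance_correlation(fitness, optima, samples):
        """Sample correlation between fitness values and Hamming distance to the
        nearest global optimum, over a set of sampled genotypes."""
        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))
        f = [fitness(s) for s in samples]
        d = [min(hamming(s, o) for o in optima) for s in samples]
        n = len(samples)
        mf, md = sum(f) / n, sum(d) / n
        cov = sum((fi - mf) * (di - md) for fi, di in zip(f, d)) / n
        sf = (sum((fi - mf) ** 2 for fi in f) / n) ** 0.5
        sd = (sum((di - md) ** 2 for di in d) / n) ** 0.5
        return cov / (sf * sd)

    # Example: OneMax has its optimum at the all-ones string; its FDC is -1,
    # i.e. fitness increases exactly as the distance to the optimum decreases.
    n = 20
    samples = [[random.randint(0, 1) for _ in range(n)] for _ in range(500)]
    print(fitness_distance_correlation(sum, optima=[[1] * n], samples=samples))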
Leonardo Vanneschi
|
Biosketches:
Leonardo Vanneschi
Born in Florence (Italy) on October 3rd, 1970.
•
Laurea (maitrise) “summa cum laude” in
Computer Science at the University of Pisa, achieved
on February 23rd, 1996.
•
PhD in Computer Science at the University of Lausanne
(Switzerland), achieved on September 06th, 2004.
•
Current Position: Permanent Researcher at the Dipartimento
di Informatica, Sistemistica e Comunicazione (D.I.S.Co.)
of the Science Faculty of the University of Milano-Bicocca
(Italy).
|
Main scientific contributions:
• Problem Hardness in Genetic Programming
(GP). I have
proposed some new hardness indicators for GP
and other measures to characterize fitness landscapes.
• Parallel and Distributed GP. I have studied how the
island GP model helps maintain a higher
diversity in the whole GP system, thus producing better quality
solutions for many applications.
• Other Techniques to Improve GP Performance. The techniques
that I have studied are: (1) reducing the number
of test cases to evaluate fitness by means of statistics,
(2) using populations which dynamically adapt
their size according to some events happening during the
evolution, (3) coevolving GP terminal symbols
by means of Genetic Algorithms (GAs).
• Traffic Control Applications using Machine Learning. Using image sequences taken by a fixed camera,
I have built a system to recognize, classify and track
the vehicles traveling along a street. The techniques
used are Neural Networks, GAs and the IPAN Tracker. The system
has been embedded in a television camera system
built by Comerson Srl and is presently in use at some
city crossroads and on motorways.
• Genetic Algorithms for the Chemical Formulation Problem. This problem is very important for building new,
high-performing chemical compounds and drugs. I have formalized it
as an optimization problem and solved it efficiently
by means of Evolutionary Algorithms.
Web page: http://personal.disco.unimib.it/Vanneschi/
Sébastien Verel
• Born in Lisieux (France) on October 29th, 1976.
• PhD in Computer Science at the University of Nice Sophia-Antipolis (France), “Study
and exploitation of neutral networks in fitness landscapes for hard optimization”,
achieved on December 12th, 2005.
• Current position: Permanent Researcher since 2006 at the computer science
department of the Science Faculty of Nice Sophia-Antipolis (France).
|
Sébastien Verel
|
Research Interests:
• Theory of evolutionary computation: study and measurement of the hardness
of fitness landscapes, especially neutral fitness landscapes.
– Proposition of fitness/fitness correlation to study dynamics and hardness,
– Analysis of complex neutral fitness landscapes in GA and GP,
– Proposition of metaheuristics for neutral fitness landscapes.
• Cellular Genetic Algorithms: evolutionary algorithms where
the population is structured by a grid or a graph.
– Proposition of a new selection method in Cellular GA,
– Analysis of diversity and convergence time in Cellular GA.
• Complex Systems: systems where some "global" properties
come from a large number of "local" interactions.
– Study of the majority problem in cellular automata,
– Design of new GAs for the majority problem.
Web page:
http://www.i3s.unice.fr/~verel
An Information Perspective on Evolutionary Computation
Description of Tutorial
"Information is the tennis ball of communication…" this
sentence concluded Keith Devlin's talk, which tried to answer the question "Does Information Really Exist?". Whether information exists or not, the discussion about information surely does.
Even though the term information has formal definitions (e.g., Kolmogorov complexity, Shannon's information theory), in the EC community (and, as it seems, in many other fields) the term is used, more often than not, in an informal way (e.g., "the problem contains no information and hence it cannot be solved efficiently").
This tutorial focuses mainly on Kolmogorov's notion of information – that is, the information content of a binary string is the length of the shortest program that can produce this string and halt – but more importantly, it concentrates on the applicability of this notion to optimisation problems and black-box algorithms. For example, we will discuss how informal observations of the kind "ONEMAX contains good information" or "NIAH does not contain any" connect with the formal definition.
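To make the informal contrast concrete, here is a small Python sketch of our own (not part of the tutorial material): it defines ONEMAX and a needle-in-a-haystack (NIAH) fitness function and runs the same naive hill-climber on both, showing that the gradient-like "information" in ONEMAX can be exploited while NIAH offers none.

    # Our own illustration: ONEMAX rewards every correct bit, so fitness differences
    # guide a black-box search; NIAH gives no signal until the single optimum is hit.
    import random

    L = 30
    NEEDLE = tuple(1 for _ in range(L))

    def onemax(x):
        return sum(x)

    def niah(x):
        return 1 if tuple(x) == NEEDLE else 0

    def hill_climb(fitness, steps=2000):
        x = [random.randint(0, 1) for _ in range(L)]
        best = fitness(x)
        for _ in range(steps):
            i = random.randrange(L)
            y = x[:]
            y[i] ^= 1                  # flip one bit
            fy = fitness(y)
            if fy >= best:             # accept non-worsening moves
                x, best = y, fy
        return best

    print("ONEMAX best:", hill_climb(onemax))   # typically reaches L
    print("NIAH best:  ", hill_climb(niah))     # almost always stays at 0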
The tutorial covers the following major issues: decomposition of a fitness function, the entropy of a fitness function as a bound on the expected performance, Kolmogorov complexity (KC) and its relation to Shannon information theory, KC and problem hardness, the relation between KC and other (applicable) predictive measures of problem difficulty (e.g., auto-correlation, ruggedness), and KC vs. the no-free-lunch theorems.
Yossi Borenstein
|
Biosketch:
Yossi Borenstein is completing his PhD in Computer Science (Natural and Evolutionary Computation Group) at the University of Essex. He is a Harold-Hyam Wingate and AJA scholar, and his research is also supported by the ORS award scheme and the University of Essex. After spending a few years working both in industry and as a freelance lecturer, he turned his efforts to research. His PhD thesis, Information Landscapes, explores the connection between information theory and optimisation problems in the black-box scenario. During his PhD he has published 11 papers at international conferences and organized a workshop at PPSN, selected papers from which will be published in a special issue of the Evolutionary Computation Journal. In addition, he is a reviewer for IEEE TEC, ECJ, GECCO and IEEE FOCI.
|
to top |
Specialized Techniques and Applications
Experimental Research in EC
Description of Tutorial
The Future of Experimental Research
We present a comprehensive, effective and very efficient methodology for the
design and experimental analysis of search heuristics such as evolutionary
algorithms, differential evolution, pattern search or even classical methods
such as the Nelder-Mead simplex algorithm.
Our approach extends the sequential parameter optimization
(SPO) method that has been successfully applied as a
tuning procedure to numerous heuristics for practical
and theoretical optimization problems.
The benefit of combining modern and classical statistical methods is demonstrated.
Optimization practitioners receive valuable hints for choosing an adequate
heuristic for their optimization problems -- theoreticians receive guidelines
for testing results systematically on real problem instances.
We demonstrate how SPO significantly improves the performance of many search heuristics. However, this performance gain is not available for free; therefore, the costs of the tuning process are discussed.
Several examples from theory and practice are used to
illustrate typical pitfalls in experimentation. Software
tools implementing procedures described in this tutorial
are freely available.
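To give a flavour of the experimental discipline being advocated, the sketch below (our own minimal example, not the authors' SPO software) runs a stochastic heuristic several times per parameter setting and reports summary statistics rather than drawing conclusions from single runs; the (1+1)-ES, the sphere function and all names used here are our own choices for illustration.

    # Our own minimal sketch of the experimental idea (NOT the SPO tool):
    # repeat runs per parameter setting and compare samples, not single runs.
    import random, statistics

    def one_plus_one_es(step_size, dim=10, budget=2000):
        # (1+1) evolution strategy with a fixed mutation step size on the sphere function
        x = [random.uniform(-5, 5) for _ in range(dim)]
        fx = sum(v * v for v in x)
        for _ in range(budget):
            y = [v + random.gauss(0, step_size) for v in x]
            fy = sum(v * v for v in y)
            if fy <= fx:
                x, fx = y, fy
        return fx

    def experiment(step_size, repeats=20):
        results = [one_plus_one_es(step_size) for _ in range(repeats)]
        return statistics.mean(results), statistics.stdev(results)

    for s in (0.01, 0.5):
        mean, std = experiment(s)
        print("step size", s, "-> mean final fitness", round(mean, 6), "std", round(std, 6))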
Mike Preuss
|
Biosketches:
Mike Preuss is Research Associate at
the Computer Science Department, University
of Dortmund, Germany (since 2000), where he
also received his diploma degree in 1998.
His current research interests focus on the
field of evolutionary algorithms and include
structured populations, niching, multiobjective
optimization, and real-world applications.
Additionally, he has published on information distribution in peer-to-peer systems, experimental methodology and parameter tuning, simple EA hybrids, EAs for classification problems, and basic models of EA behavior on multimodal test problems.
|
Thomas Bartz-Beielstein
Dr. Bartz-Beielstein is Professor of Applied Mathematics at Cologne University of Applied Sciences. He studied Mathematics, Computer Science and Pedagogics, and worked as a researcher at the collaborative research center “Computational Intelligence” at the University of Dortmund, where he also received his PhD. Thomas is experienced in consulting for industry and in teaching university courses. His interests include optimization, modeling, simulation, statistical analysis, and the experimental design of complex real-world problems.
|
Thomas Bartz-Beielstein
|
to top
|
Evolutionary Neural Networks
Description of Tutorial
Neuroevolution, i.e. evolution of artificial neural networks,
has recently emerged as a powerful technique for solving
challenging
reinforcement learning problems. Compared to traditional
(e.g. value-function based) methods, neuroevolution is especially
strong in domains where the state of the world is not fully known: the
state can be disambiguated through recurrency, and novel situations
handled through pattern matching. In this tutorial, we will review (1)
neuroevolution methods that evolve fixed-topology networks, network
topologies, and network construction processes, (2) ways of combining
traditional neural network learning algorithms with evolutionary
methods, and (3) applications of neuroevolution to game playing, robot
control, resource optimization, and cognitive science.
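As a rough illustration of the first family of methods mentioned above, the following Python sketch (our own toy example, not any of the presenter's systems) evolves the weights of a fixed-topology 2-2-1 feed-forward network to solve XOR with simple truncation selection and Gaussian mutation.

    # Our own toy sketch of fixed-topology neuroevolution (not the presenter's software).
    import math, random

    CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    N_WEIGHTS = 9   # 2x2 hidden weights + 2 hidden biases + 2 output weights + 1 output bias

    def forward(w, x):
        h = [math.tanh(w[0] * x[0] + w[1] * x[1] + w[2]),
             math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])]
        return math.tanh(w[6] * h[0] + w[7] * h[1] + w[8])

    def fitness(w):
        # negative squared error over the four XOR cases: higher is better
        return -sum((forward(w, x) - t) ** 2 for x, t in CASES)

    def mutate(w, sigma=0.3):
        return [v + random.gauss(0, sigma) for v in w]

    pop = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)] for _ in range(50)]
    for gen in range(200):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]                                    # truncation selection
        pop = parents + [mutate(random.choice(parents)) for _ in range(40)]
    best = max(pop, key=fitness)
    print("final squared error:", -fitness(best))

Evolving network topologies as well as weights, as in the methods covered in the tutorial, requires richer genome representations than this flat weight vector.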
Risto Miikkulainen
|
Biosketch:
Risto Miikkulainen
is a Professor of Computer Sciences
at the University of Texas at Austin. He received
an M.S. in Engineering from the Helsinki University
of Technology, Finland, in 1986, and a Ph.D.
in Computer Science from UCLA in 1990. His
recent research focuses on methods for evolving
neural networks and applying these methods
to game playing, robotics, and intelligent
control. He is an author of over 200 articles
on neuroevolution, connectionist natural language
processing, and the computational neuroscience
of the visual cortex. He is an editor of the
Machine Learning Journal and Journal of Cognitive
Systems Research.
|
to top
|
Evolvable Hardware
Description of Tutorial
In this tutorial, fundamental concepts of evolutionary
circuit design and evolvable hardware will be introduced.
In particular, we will deal with evolutionary algorithms
and reconfigurable devices utilized for hardware
evolution as well as with their interactions in typical
applications of evolvable hardware. Traditionally,
evolvable hardware is interpreted from the perspective
of electrical engineering and computer engineering.
However, existing results in the field of evolvable hardware also have an impact on theoretical computer science. Hence the goal of this tutorial is also to present a non-traditional view of evolvable hardware that reflects these theoretical issues.
Lukas Sekanina
|
Biosketch:
Lukas Sekanina (MSc
- 1999, PhD - 2002) received all his degrees
from Brno University of Technology, Czech
Republic. He was awarded a Fulbright scholarship and worked on evolutionary circuit design at the NASA Jet Propulsion Laboratory in Pasadena in 2004. He was a visiting lecturer at Pennsylvania State University and a visiting researcher at the Department of Informatics, University of Oslo, in 2001. Lukas Sekanina is the author of the monograph Evolvable Components (Springer-Verlag, 2004). He has co-authored more than 50 papers, mainly on evolvable hardware.
He has served as a program committee member
of several conferences (including CEC,
ICES, AHS, DDECS, EvoHOT, EuroGP, WEAH
and GECCO) and received several awards
for his work.
|
Research interests:
theory, design and hardware implementation of bio-inspired
computational systems. Currently he is an associate
professor and deputy head of the Department of
Computer Systems at the Faculty of Information
Technology, Brno University of Technology. Member
of IEEE. For more information, see http://www.fit.vutbr.cz/~sekanina
to top
|
Evolutionary Games:
Description of Tutorial
1- Introduction to Standard Non-cooperative Game Theory
• Intelligent and rational agents and elements of decision theory: complete and incomplete information. Bayesian decision theory.
• Decision under conflicting situations. Basic terminology of game theory.
• Extended and normal forms of a game. Equivalence. Two-person games. Examples.
• Dominance and Nash equilibrium concepts. Pure and randomized strategies. Examples.
• Limitations of the Nash equilibrium concept. Iterated games.
2- Introduction to Evolutionary Game Theory
• Evolutionary processes. Evolutionary population games.
• The concept of evolutionarily stable strategies. Examples.
• The replicator dynamics (a standard form of the equation is reproduced after this outline).
• Replicator dynamics analysis of some simple two-person, symmetric games.
• Evolutionary games in spatially structured populations.
• Graph characterization of non-mixing populations.
• Evolutionary games on graphs.
• Evolutionary games on social networks: equilibrium states and stability. Examples.
3- Discussion and conclusions
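For reference, the replicator dynamics listed in part 2 (see the note in the outline above) is usually written in the following standard textbook form, where x_i is the frequency of strategy i and A is the payoff matrix; this is general background, not a slide from the tutorial.

    \dot{x}_i = x_i \left[ (A x)_i - x^{\top} A x \right], \qquad i = 1, \dots, n

A strategy's share grows when its expected payoff (A x)_i exceeds the population's average payoff x^T A x, and shrinks otherwise.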
Marco Tomassini
|
Biosketch:
Marco Tomassini is a professor of Computer
Science at the Information Systems Department
of the University of Lausanne,
Switzerland. He graduated in physical and chemical sciences in Mendoza, Argentina, and received a doctorate in theoretical chemistry from the University of Perugia, Italy, working on computer simulations of condensed matter
systems. His current research interests are centered
around the application of biological ideas to artificial systems.
He is active in evolutionary computation, especially
spatially structured systems, genetic programming, and the structure of
program
search
spaces.
He is also interested in machine learning, evolutionary games,
and the dynamical properties of networked complex systems. He has been
Program
Chairman of several international events and has published many
scientific papers and several authored and edited books in these fields.
|
to top
|
Evolutionary Design:
Description of Tutorial
The tutorial explores and describes the integration of evolutionary search, exploration and optimization across a wide spectrum of design activities, and thus investigates how evolutionary computation can be used to generate design concepts and achieve meaningful designs in addition to more standard evolutionary optimization processes. The support of innovation and creativity during preliminary design, and novel EC strategies and problem representations that best handle design complexity and support people-centred aspects, are also presented. Although the focus is upon the development and integration of appropriate evolutionary computing strategies in engineering design, the transfer of these technologies into other domains such as drug design and discovery and software design will also be included. Generic design aspects across several such domains that can particularly benefit from EC application/integration and that illustrate EC potential across these areas will be presented.
The tutorial will therefore include
the following aspects:
• The integration of both deterministic design approaches and computational intelligence techniques with evolutionary design environments
• The extraction and processing of evolutionary design data and its visualization within appropriate designer interfaces
• Human-centred aspects and interactive evolutionary design systems
• Evolutionary search and exploration across uncertain / poorly-defined design environments
• Supporting innovative and creative design
• Development and integration of subjective fitness measures
• Qualitative and quantitative multi-objective design satisfaction
• Search and optimization within heavily constrained design domains
• Reducing computational expense during detailed design, analysis and optimisation
• Evolutionary design systems involving high-performance and distributed computing, problem-solving environments and grid-based application
Ian Parmee
|
Biosketch:
Formerly Director of the EPSRC Engineering Design Centre at the University of Plymouth, Ian Parmee established the Advanced Computation in Design and Decision-making Lab (www.ad-comtech.co.uk/ACDDM_Group.htm) at UWE, Bristol, in 2001.
The research within the
Lab focuses upon the
integration of people-centred,
evolutionary and other
computational intelligence
technologies with complex
design and decision-making
processes. From an engineering
perspective his work
has extended into mechanical,
aerospace, electronic,
power system and marine
engineering design.
|
From this basis
his work now encompasses financial
systems, software design and
drug design. Enabling computational
technologies such as high performance,
distributed computing and Grid
technology; data-mining, analysis,
processing and visualisation
and aspects relating to human-computer
interaction also play a major
role. Experience across these
various domains has resulted
in an understanding of generic
issues, degrees of complexity
and the difficulties facing both
researchers and practitioners
when operating within them. Ian
has published extensively across
engineering and evolutionary
computing domains and has presented
plenary talks at leading conferences
in both areas. His own book ‘Evolutionary
and Adaptive Computing in Engineering
Design’ was published by
Springer-Verlag in 2001. He has
also organised and chaired the
biennial international conference ‘Adaptive
Computing in Design and Manufacture’ since
1994. The seventh event took
place in Bristol, UK in April
2006 (www.ad-comtech.co.uk/ACDM06).
He is also the Scientific Director of ACT (www.ad-comtech.co.uk) which specialises
in a range of computational intelligence strategies to provide search, optimisation
and modelling capabilities across complex industrial and commercial problem domains.
These areas of activity are underpinned by seventeen years' experience in the application of these technologies (especially evolutionary computation) in close collaboration with industry.
to top |
Evolutionary Multiobjective Combinatorial
Optimization:
Description of Tutorial
Discrete/combinatorial optimization has been an interesting and challenging area for researchers working on the development of algorithms and OR techniques for real-world applications. Most such problems, abstracted/taken from graph theory, are NP-complete in the single-objective case and NP-hard for multiple objectives. There are many applications of such problems in communication topology and VLSI design. For most such problems good heuristics/algorithms do not exist; therefore, black-box optimization techniques such as EAs are usually preferred for obtaining effective solutions.
In this tutorial, we will give a quick review of some of the standard combinatorial optimization problems, and then present techniques and the design of operators to effectively solve EMCO problems taken from real-world applications. This will be followed by some case studies taken from standard problems. This interdisciplinary tutorial provides a practical foundation for:
• an introduction to combinatorial problems and their multiobjective versions,
• problem challenges and their solution through EMO techniques,
• the need to design specialized genetic operators, representations and techniques for embedding (limited) problem-specific knowledge,
• a few case studies taken from well-known EMCO problems, and a comparison of EMO results with existing heuristics/approximation algorithms, and
• improvement of the EMO solutions through hybridization using local search and other methods.
Key considerations in designing operators/representations so as to effectively explore the search space will be examined. In general, the tutorial is aimed at quickly laying a foundation that can be used as the basis for further study, research and development in this emerging area.
Rajeev Kumar
|
Biosketch:
Rajeev Kumar is an Associate
Professor of Computer Science & Engineering
(CSE) at Indian Institute of Technology (IIT),
Kharagpur. He received his Ph.D. from University
of Sheffield, and M.Tech. from University
of Roorkee (now, IIT - Roorkee) both in Computer
Science & Engineering. He had been a
faculty of CSE at IIT Kanpur and BITS Pilani.
He had worked for National Semiconductors,
Germany and DRDO, India. His main research
interests include evolutionary algorithms & multiobjective
combinatorial optimization, embedded & multimedia
systems, and programming language & software
engineering. He has published over 100 research
articles in lead journals and conferences.
He is a member of ACM, senior member of IEEE
and a fellow of IETE. More info about him
can be found at http://www.facweb.iitkgp.ernet.in/~rkumar/.
|
Email:
Further Information: http://www.facweb.iitkgp.ernet.in/~rkumar/emco.html
to top |
Evolutionary Computer Vision:
Description of Tutorial
Evolutionary Computer Vision is a recent research subject in genetic and evolutionary algorithms. This tutorial aims to provide a survey of the main topics that have been developed in the literature. The goal is to introduce the genetic and evolutionary computation community to the research activity of endowing a machine with visual abilities by applying the paradigm of evolutionary algorithms. The tutorial will focus on giving examples of real working systems and on how the paradigm can be applied to solve problems such as recognition, feature extraction, 3D measurement, and segmentation.
The tutorial addresses all the above-mentioned issues and is articulated in three parts. The introduction summarizes the new idea of evolving structures that solve computer vision problems in terms of optimization, using the framework of evolutionary computation. The second part focuses on the details needed to successfully implement an evolutionary algorithm for computer vision problems. In particular, it illustrates the crucial role of encoding the solution, the definition of the fitness function, and how the structures are evolved by the system. Examples are provided to illustrate aspects related to multimodal fitness functions, synthesis of feature detectors, and problem decomposition. Finally, a discussion of the methodological issues addressed in evolutionary computer vision should give the participants a clear picture that a successful application requires a balance between computer vision and evolutionary algorithms knowledge.
Scientists, engineers, mathematicians, programmers, and technical managers will all benefit from learning how to evaluate this new computational research field.
Olague's
research focuses on the principles of computational intelligence applied to close-range
photogrammetry and computer vision. He is a member of the EvoNET, RSPSoc, ASPRS, SIGEVO, IEEE, IEEE
Computer Society, and is listed in Who's
Who in Science and Engineering. Dr. Olague is a recipient of the "2003 First Honorable Mention for the Talbert Abrams Award", offered by the American Society for Photogrammetry and Remote Sensing. In 2006 he was awarded the Bronze Medal at the Human-Competitive Awards for his work on the synthesis of interest points.
email:
Further information: http://cienciascomp.cicese.mx/evovision/
Expected
enrollment: 30 people
to top
|
Bio-inspired Telecommunications:
Description of Tutorial
The rapid advances in computing and transmission technologies are giving impetus to the large-scale deployment of interconnected systems for communication and transport of data, voice, video and resources. The global Internet, and cellular, satellite, and Wi-Fi networks, as well as power and logistic networks, just to mention a few remarkable examples, are at the same time ubiquitous and at the very heart of the functioning and success of modern societies. On the other hand, all these networks are increasingly heterogeneous, complex, and dynamic, such that they present a number of challenging issues concerning their analysis and design, management and control, robustness and security. Biological systems show a number of properties, such as self-organization, adaptivity, scalability, robustness, autonomy, locality of interactions, and distribution, which are highly desirable for dealing with the growing complexity of current and future networks.
Therefore, in recent years a growing number of effective solutions for problems
related to networked systems have been proposed by taking inspiration from the
observation of natural systems and processes such as natural selection, insect
societies, immune systems, cultural systems, collective behaviors of groups of
animals/cells, etc.
The aim of the tutorial is to present cutting-edge research on Bio/Nature-inspired approaches to network-related problems. The tutorial will also focus on protocol engineering, in order to introduce different frameworks that have been developed to realize such Bio/Nature-inspired protocols inside the network stack of the Linux kernel. The presentation will also introduce a comprehensive performance evaluation framework that is crucial for gaining insight into the behavior of a Bio/Nature-inspired routing algorithm over the wide operational landscape of a real network. The designers of routing protocols can use it to verify their Linux model by comparing important performance values obtained from the Linux model with those from the simulation model. The presentation will also introduce a novel testing framework for MANETs to compare and verify the results of MANET routing protocols in real-world MANETs. Last but not least, the tutorial will also introduce security threats that a designer has to be aware of while deploying such protocols in real-world fixed networks and MANETs. We believe that the tutorial will be instrumental in highlighting the potential of Bio/Nature-inspired protocols in real-world networks.
The tutorial is intended for telecommunication managers, protocol developers, network engineers, network software developers and optimization researchers who want to work on non-linear, real-time, dynamic problems.
Muddassar Farooq
|
Biosketch:
Muddassar Farooq is currently
working as an Assistant Professor at National
University of Sciences and Technology (NUST),
Pakistan.
He worked as a research associate at the University of Dortmund (Germany) from October 2001 to March 2006. His research interests include the application of Bio/Nature-inspired principles to telecommunication networks and the development of Nature-inspired engineering solutions to real-world problems without the need for additional hardware or software resources. He was the project manager for BeeAdHoc, which deals with designing a secure, scalable and energy-efficient routing protocol for MANETs (Mobile Ad-Hoc Networks).
|
He recently completed the BeeHive project, which dealt with designing a robust, fault-tolerant and efficient protocol for fixed telecommunication networks. His paper on BeeHive won the best paper award at the ANTS 2004 conference. He is a member of ACM and IEEE. He is the workshop chair for EvoCoMNet 2007, to be held in Valencia, Spain. He also acted as guest editor for a special issue on Nature Inspired Applied Systems (NIAS) of the Elsevier Journal of Systems Architecture (Volume 52, Numbers 8-9), August/September 2006.
to top
|
Distributed Evolutionary Computation for Fun and Profit
Description of Tutorial
Distributed evolutionary computation is almost a given nowadays. Computer networks are composed of heterogeneous nodes linked by connections of different bandwidths and latencies. Even off-the-shelf laptops include dual-core microprocessors, which, to be taken full advantage of, must be programmed using distributed computing techniques. Moreover, current processors such as the latest by Intel and AMD include hyperthreading architectures that allow several threads to run concurrently. All these things have to be taken into account to fully leverage the current computational environment.
In general, programming in this environment presents several challenges:
• Attracting as many participants as possible in a distributed computing experiment.
• Distributing the experiment in the best possible way.
• Scaling with the addition of new processing elements.
• Taking advantage of all participants in the experiment regardless of their capability and connection quality.
In general, this means that the usual, plain-vanilla generational-cum-island/farming model is not adequate for this kind of environment. Some improvement can be obtained by using several processors, but even on a single processor better performance can be achieved by adapting the algorithm to the architecture.
In this tutorial, we will first present the state of the art in distributed evolutionary computation, describing the issues in "normal" parallel/distributed EC and how they have been classically solved. Then we will present an overview of "modern" distributed computing, including P2P computing and more esoteric topics such as sneaky/parasitic/volunteer computation.
We will present and, if possible, let the audience run
examples using systems such as DREAM, a P2P system designed
with evolutionary computation (but not only) in mind,
and DCOR (Distributed Computation on Rails, available
from http://rubyforge.org/projects/dconrails/, a sneaky
distributed evolutionary computation system based on
Ruby on Rails). Then we will try to address the challenges of modern distributed evolutionary computation outlined above, and how different authors have approached them.
Finally, we will present open issues in distributed
evolutionary computation.
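As a baseline for the plain-vanilla island/farming model discussed above, here is a minimal Python sketch of our own (it is not DREAM or DCOR): several sub-populations evolve independently and periodically migrate their best individual to a neighbour on a ring.

    # Our own minimal island-model sketch (NOT the DREAM or DCOR systems).
    import random

    L, ISLANDS, POP, GENS, MIGRATE_EVERY = 40, 4, 30, 100, 10

    def fitness(x):
        return sum(x)                      # ONEMAX as a stand-in problem

    def evolve(pop):
        # one generation of binary tournament selection plus bit-flip mutation
        new = []
        for _ in range(len(pop)):
            a, b = random.sample(pop, 2)
            parent = a if fitness(a) >= fitness(b) else b
            new.append([bit ^ (random.random() < 1.0 / L) for bit in parent])
        return new

    islands = [[[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
               for _ in range(ISLANDS)]

    for gen in range(1, GENS + 1):
        islands = [evolve(pop) for pop in islands]
        if gen % MIGRATE_EVERY == 0:
            bests = [max(pop, key=fitness) for pop in islands]
            for i, pop in enumerate(islands):
                # each island receives the best of its ring predecessor
                pop[random.randrange(POP)] = bests[(i - 1) % ISLANDS][:]

    print("best fitness per island:", [fitness(max(p, key=fitness)) for p in islands])

Running the islands in separate processes or on separate machines, and making migration asynchronous, is where the challenges listed above actually begin.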
Juan Julián Merelo
|
Biosketches:
Juan Julián Merelo is an Associate Professor at the University of Granada. His main research interests are distributed evolutionary computation and complex systems. He has been co-organizer or organizer of the last four PPSN conferences, and has participated in the program committees of all major EC conferences.
|
Juan Luis Jiménez Laredo is a researcher and PhD student at the University of Granada. He is currently working on adaptive distributed evolutionary computation systems, and has also participated in the HPC program and been an Erasmus student at the University of Ulm, Germany.
|
Juan
Luis Jiménez
|
to top
|
|