Plenary Speakers

Wolfgang Banzhaf

Michigan State University, USA

Title

The Role of Neutrality in Genetic Programming, Machine Learning and Problem Solving

Abstract

Early in the development of Genetic Programming, the subfield of evolutionary computation concerned with the evolution of arbitrary structures, a curious phenomenon was observed: bloat. The term describes code or structures that do not contribute to the fitness or quality of a solution, yet persist and grow in proportion as the search goes on. It was later realized that this is an emergent phenomenon in Genetic Programming, now referred to as 'neutral code'. The neutrality of code has profound consequences for the search efficiency of evolution, as well as for the generalization ability and size of the resulting solutions. In this talk, I shall discuss the role of neutrality in Genetic Programming and point out related phenomena in other types of Machine Learning, such as neural networks, and in problem solving in general.
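To make the notion of neutral code concrete (a hypothetical illustration, not material from the talk), the small Python sketch below builds a toy expression-tree genome containing a subtree that always evaluates to zero; replacing that subtree with the additive identity leaves the program's outputs on all test cases unchanged, which is exactly what makes it neutral.

```python
# Minimal illustration of neutral ("intron") code in a toy GP genome.
# A subtree is treated as neutral if replacing it with a neutral element
# leaves the program's outputs on all test cases unchanged.
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(node, env):
    """Evaluate an expression tree given as nested tuples (op, left, right)."""
    if isinstance(node, tuple):
        op, left, right = node
        return OPS[op](evaluate(left, env), evaluate(right, env))
    return env.get(node, node)          # variable name or numeric constant

def outputs(tree, cases):
    return [evaluate(tree, env) for env in cases]

# Genome containing bloat: (y - y) is always 0, so the whole product
# contributes nothing to the program's behaviour or fitness.
genome = ("+", "x", ("*", ("-", "y", "y"), ("+", "x", "y")))
pruned = ("+", "x", 0)                  # suspect subtree replaced by identity

cases = [{"x": x, "y": y} for x in range(-3, 4) for y in range(-3, 4)]
print(outputs(genome, cases) == outputs(pruned, cases))   # True -> neutral code
```

Because such code does not affect fitness, selection exerts no pressure to remove it, and it can accumulate as the search proceeds.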

Biography

Wolfgang Banzhaf holds the first endowed chair dedicated to Evolutionary Computation in the United States, the John R. Koza Chair for Genetic Programming, in the Department of Computer Science and Engineering at Michigan State University. Previously, he was a University Research Professor in the Department of Computer Science, Memorial University of Newfoundland, where he also served as head of department from 2003 to 2009 and from 2012 to 2016. After a PhD and postdoc in Physics, he worked as a research scientist at Mitsubishi Electric in Japan and the US before joining academia in 1993 as associate professor of Applied Computer Science at the Technical University of Dortmund in Germany. His research interests are in the field of bio-inspired computing, notably evolutionary computing and complex adaptive systems. A recurrent theme of his work is Linear Genetic Programming and neutrality in evolution (particularly Genetic Programming). He recently co-edited Springer's "Handbook on Evolutionary Machine Learning".

Website: https://engineering.msu.edu/faculty/Wolfgang-Banzhaf




Günter Rudolph

TU Dortmund, Germany

Title

Theory of Evolutionary Computation: Impacts and Prospects

Abstract

The reputation of evolutionary algorithms (EAs) in the optimization community was, and still is, ambivalent. As with direct search methods, the main arguments against EAs were (1) that they were developed without a mathematical foundation, (2) that there is no proof of convergence, and (3) that they are slow in finding better solutions. The first point is not really an argument against EAs, since they deliberately use principles of biological evolution as a pool of inspiration in order to overcome traditional lines of thought and to obtain new classes of optimization algorithms. These new ideas may be good or bad, and there is a clear need to analyze them. The second point can be refuted by providing conditions for convergence, whereas the third point is countered by proving runtime bounds for certain problem classes.

Once it was known that a theoretical foundation of EAs is possible, many new questions arose that called for theoretically grounded answers. We will examine what impact the theoretical results have had on the design of evolutionary algorithms. However, this is only a partial success story, as the gap between theory and practice is widening. We will discuss the reasons behind this and what measures might be taken to counteract this trend.
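As a toy example of the kind of result meant above by "runtime bounds for certain problem classes" (a sketch under simple assumptions, not content of the talk), the (1+1)-EA with standard bit mutation solves OneMax in expected O(n log n) evaluations; the short simulation below only illustrates that order of growth.

```python
# Toy illustration of a runtime-bound result: the (1+1)-EA on OneMax has an
# expected optimization time of Theta(n log n). The simulation below only
# makes that order of growth plausible; it is not a proof.
import random

def one_max(bits):
    return sum(bits)

def one_plus_one_ea(n, rng):
    x = [rng.randint(0, 1) for _ in range(n)]
    fx, evals = one_max(x), 1
    while fx < n:
        # Standard bit mutation: flip each bit independently with prob. 1/n.
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fy = one_max(y)
        evals += 1
        if fy >= fx:                    # elitist (plus) selection
            x, fx = y, fy
    return evals

rng = random.Random(1)
for n in (50, 100, 200):
    avg = sum(one_plus_one_ea(n, rng) for _ in range(20)) / 20
    print(n, round(avg))                # grows roughly like n * ln(n)
```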

Biography

Günter Rudolph studied Computer Science at the Universities of Karlsruhe (now KIT) and Dortmund (now TU Dortmund University), Germany. He received the Diplom-Informatiker degree (equivalent to a Master's degree in Computer Science) and a doctoral degree (Dr. rer. nat.) at Dortmund in 1991 and 1996, respectively. After completing his studies, he worked at the Informatics Center Dortmund (ICD) until the end of 1996 and then returned to the University of Dortmund as a post-doc in the Collaborative Research Center on Computational Intelligence. From 2001 to 2005 he worked at Parsytec AG in Aachen, Germany, where he was involved in application and software development in the field of optical inspection systems for paper and strip-steel production. In 2005 Günter was appointed professor for Computational Intelligence in the computer science department of TU Dortmund University. His main research interests are the theoretical analysis and the systematic design of evolutionary algorithms.

Günter served as Associate Editor of the IEEE Transactions on Evolutionary Computation for ten years starting in 1998, and he currently serves on the editorial boards of the Evolutionary Computation journal (MIT Press) and the ACM Transactions on Evolutionary Learning and Optimization. He has been involved in the organization of many scientific conferences as general, program, and technical chair. Currently, he is the Chairman of the PPSN Steering Committee.

He received the Best Paper Award for theoretical work at the IEEE International Conference on Evolutionary Computation (ICEC) in 1996 and later several best paper awards together with his students at international conferences. Moreover, he is co-author of the paper on Exploratory Landscape Analysis (ELA) that received the SIGEVO Impact Award in 2021. He is the recipient of the 2025 IEEE CIS Evolutionary Computation Pioneer Award.

Website: https://ls11-www.cs.tu-dortmund.de/people/rudolph/index.jsp?userLanguage=en




Kate Smith-Miles

The University of Melbourne, Australia

Title

Optimization in the Darkness of Uncertainty: when you don't know what you don't know, and what you do know isn't much!

Abstract

How do we find the optimal solution for a constrained multi-objective optimisation problem when we have no analytical expression for the objective functions, and can afford only very limited function evaluations within a huge search space because measuring the objective functions is expensive? Calculus can't help you, and trial and error is not an option! This talk will describe a common practical optimisation problem found in many industrial settings with these challenges, and introduce some methods for expensive black-box optimisation. Finally, we will address the question of how best to evaluate the performance of such methods by generating new test instances with controllable characteristics.
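For readers unfamiliar with the setting, here is a deliberately generic sketch (not the methods of the talk) of budget-limited, surrogate-assisted black-box optimization: expensive evaluations are rationed, and a cheap surrogate, here a crude inverse-distance interpolator over the points evaluated so far, screens many candidate points so that only the most promising one is measured. The objective function and all parameters are placeholders.

```python
# Generic sketch of expensive black-box optimization under a tiny evaluation
# budget, using a crude surrogate (inverse-distance interpolation) to decide
# which candidate is worth an expensive evaluation. Everything here is a
# placeholder; real methods (e.g. Bayesian optimization) are far more refined.
import math
import random

def expensive_objective(x):             # stands in for a costly measurement
    return (x[0] - 0.3) ** 2 + (x[1] + 0.6) ** 2

def surrogate(x, archive):
    """Inverse-distance-weighted prediction from already-evaluated points."""
    num = den = 0.0
    for xi, fi in archive:
        d = math.dist(x, xi)
        if d < 1e-12:
            return fi
        w = 1.0 / d ** 2
        num, den = num + w * fi, den + w
    return num / den

def optimize(budget=20, dim=2, screened=200, seed=0):
    rng = random.Random(seed)
    sample = lambda: [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    archive = [(x, expensive_objective(x)) for x in (sample() for _ in range(5))]
    while len(archive) < budget:
        # Screen many cheap candidates on the surrogate, measure only the best.
        cand = min((sample() for _ in range(screened)),
                   key=lambda x: surrogate(x, archive))
        archive.append((cand, expensive_objective(cand)))
    return min(archive, key=lambda entry: entry[1])

best_x, best_f = optimize()
print(best_x, best_f)
```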

Biography

Kate Smith-Miles is a Melbourne Laureate Professor and Director of the ARC Training Centre in Optimisation Technologies, Integrated Methodologies and Applications (OPTIMA). She is also Pro Vice-Chancellor (Research Capability) at The University of Melbourne. Kate has published over 300 refereed journal and international conference papers in the areas of neural networks, optimisation, machine learning, and various applied mathematics topics. She has supervised to completion over 30 PhD students, and has been awarded over AUD$30 million in competitive grants, including a Georgina Sweet ARC Laureate Fellowship. She has received medals for her research from the Australian Mathematical Society (AustMS), the Australian and New Zealand Industrial and Applied Mathematics Society (ANZIAM), and the Australian Society for Operations Research (ASOR). Kate is a Fellow of the Australian Academy of Science, the Institute of Engineers Australia and the Australian Mathematical Society, and a past President of the Australian Mathematical Society. She is frequently invited as keynote speaker at leading international conferences, including IFORS, GECCO, and CPAIOR, to discuss her Instance Space Analysis methodology. In 2024, Kate was appointed an Officer of the Order of Australia (AO) for her distinguished service to tertiary education, applied mathematics research, and as a role model and advocate for women in STEM.

Website: https://findanexpert.unimelb.edu.au/profile/811703-kate-smith-miles




Zhi-Hua Zhou

Nanjing University, China

Title

Learnware: Small models do big

Abstract

"Learnware = Model + Specification". Let's consider the following questions: First, do we believe that in the future (A) there will be a big model that is able to cope with all possible learning tasks, or (B) it is crucial to have many models to collaborate? Second, are these models to be developed by (A) one developer (or one company), or (B) lots of developers all over the world? Third,  are training data used to train these models to be (A) freely shared, or (B) mostly not? If we choose (B) for the answers, it seems that we will encounter a mission impossible: how to identify helpful models from a growing huge pool of trained models developed by developers all over the world, and reuse or even reassemble the models to tackle new user's task, given that we could not touch developers' and users' training data?  "Learnware" makes this possible. A key ingredient is the specification which enables a trained model to be adequately identified to reuse according to the requirement of new user who knows nothing about the model, while model developers' training data are preserved. Learnwares are accommodated in a learnware dock system, which enables small models do big, and enables models do things even beyond their original development purposes.  This talk will briefly introduce some preliminary research advances in this direction.

Biography

Zhi-Hua Zhou is Professor of Computer Science and Artificial Intelligence and Vice President of Nanjing University. His research interests are mainly in machine learning and data mining, with significant contributions to ensemble learning, multi-label and weakly supervised learning, etc. He has authored the books "Ensemble Methods: Foundations and Algorithms", "Machine Learning", etc., and published more than 200 papers in top-tier journals and conferences. Many of his inventions have been successfully deployed in industry. He founded ACML (Asian Conference on Machine Learning), and serves as series editor of the Springer Lecture Notes in Artificial Intelligence, as an advisory board member of AI Magazine, as editor-in-chief of Frontiers of Computer Science, and as associate editor of AIJ, MLJ, etc. He has served as the Chair of the IEEE Computational Intelligence Society Data Mining Technical Committee. He is President of the IJCAI Board of Trustees, a Fellow of the ACM, AAAI, AAAS, and IEEE, a member of the Academia Europaea, and a recipient of the National Natural Science Award of China, the IEEE Computer Society Edward J. McCluskey Technical Achievement Award, the IEEE Computational Intelligence Society Outstanding Early Career Award, the CCF-ACM Artificial Intelligence Award, etc.

Website: https://cs.nju.edu.cn/zhouzh/
