Tutorials: Nov 13, 2011, Shanghai, China
Main Conference: Nov 14-17, 2011, Shanghai, China
Workshops: Nov 18, 2011, Hangzhou, China
   Final program of ICONIP2011 and book of abstracts are available now.  

Plenary Speakers

Kunihiko Fukushima

Fuzzy Logic Systems Institute, Japan.

Recent Advances in the Neocognitron: Robust Recognition of Visual Patterns


        The neocognitron is a neural network model proposed by Fukushima (1980). Its architecture was suggested by neurophysiological findings on the visual systems of mammals. It is a hierarchical multi-layered network that acquires the ability to recognize visual patterns robustly through learning. Although the neocognitron has a long history, modifications to improve its performance are still ongoing.

        In this talk, after a brief introduction of the neocognitron, we discuss several improvements applied recently to the neocognitron.

        They are: a) competitive learning with winner-kill-loser rule; b) subtractive inhibition for feature-extracting S-cells, which increases robustness against background noise; c) several methods for extracting oriented edges; d) blurring operation with root-mean-square by C-cells; and so on.
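        As a rough illustration of improvement (d), the blurring operation with root-mean-square performed by C-cells, the following sketch pools a 1-D array of S-cell responses with a sliding RMS window. The window size, zero padding, and 1-D setting are assumptions made for illustration, not Fukushima's actual parameters:

```python
import numpy as np

def c_cell_rms_pool(s_responses, window=3):
    """Blur a 1-D array of S-cell responses with a sliding
    root-mean-square window (zero-padded at the borders)."""
    s = np.asarray(s_responses, dtype=float)
    padded = np.pad(s, window // 2, mode="constant")
    out = np.empty_like(s)
    for i in range(len(s)):
        out[i] = np.sqrt(np.mean(padded[i:i + window] ** 2))
    return out

# A single S-cell response is blurred over neighbouring positions.
print(c_cell_rms_pool([0.0, 1.0, 0.0, 0.0]))
```

        Compared with linear averaging, RMS pooling weights strong responses more heavily while still blurring their exact position, which is the robustness property the C-cells provide.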

        We also show that several new functions can be realized by introducing top-down connections to the neocognitron: a) recognition and completion of partly occluded patterns, b) restoring occluded contours, and so on.

Biographical Sketch

        Kunihiko Fukushima received a B.Eng. degree in electronics in 1958 and a PhD degree in electrical engineering in 1966 from Kyoto University, Japan. He was a professor at Osaka University from 1989 to 1999, at the University of Electro-Communications from 1999 to 2001, at Tokyo University of Technology from 2001 to 2006, and a visiting professor at Kansai University from 2006 to 2010. Prior to his Professorship, he was a Senior Research Scientist at the NHK Science and Technical Research Laboratories. He is now a Senior Research Scientist at Fuzzy Logic Systems Institute.

        He is one of the pioneers in the field of neural networks and has been engaged in modeling neural networks of the brain since 1965.

        His special interests lie in modeling neural networks of the higher brain functions, especially the mechanism of the visual system.

        He was the founding President of JNNS (Japanese Neural Network Society) and a founding member on the Board of Governors of INNS (International Neural Network Society). He is a former President of APNNA (Asia-Pacific Neural Network Assembly).

AiKe Guo

Institute of Neuroscience, SIBS, CAS, and Institute of Biophysics, CAS, China

Dopamine Reveals Neural Circuit Mechanisms of Value-Based Decision Making in the Fruit Fly's Brain


        Why the fruit fly's brain? The Drosophila brain is the geometric mean between the simplest and the most complicated nervous systems. Drosophila transformed developmental genetics and cell biology; now it is poised to help biologists decipher how the brain works (Claude Desplan, 2007).

        Why decision making? "Life is an endless series of decisions, for Drosophila and human beings alike"; to survive is to decide. There are two conceptual frameworks of decision making: simple perceptual decisions and value-based decisions. Neuroeconomics explores the brain mechanisms responsible for these evaluative processes and an animal's internal valuation of competing alternatives (Sugrue et al., 2005). Here we show that even Drosophila can make clear-cut, salience-based decisions when faced with competing alternatives.

        Why the mushroom body? Where does decision making occur in the fly's brain? Mushroom bodies might be the centers endowing an insect with a degree of "free will" or "intelligent control" over instinctive actions (F. Dujardin, 1850). We found that genetic silencing of MBs impairs decision making.

        Why dopamine? The neurotransmitter dopamine (DA) plays a crucial role in motivational control: rewarding, aversive, and alerting (Bromberg-Martin et al., 2010). Using a binary expression system, "GAL4 enhancer trapping and region-specific gene expression", we revealed that decision making in Drosophila consists of two phases: an early phase requiring DA and MB activities and a late phase independent of these activities. Thus, we suggest that the DA-MB circuit regulates salience-based decision making in Drosophila by both gating inhibition and gain-control mechanisms (Zhang et al., 2007; Guo et al., 2009; Guo et al., 2010; Wu and Guo, 2011).

Biographical Sketch

        Dr. Aike Guo, biophysicist and neuroscientist, graduated from the Department of Biophysics of Moscow State University in 1965. He received his Dr. rer. nat. from Munich University, Germany, in 1979, and was a visiting scholar funded by the Max Planck Society at the MPI for Biological Cybernetics from 1982 to 1984. He became an academician of the Chinese Academy of Sciences (CAS) in 2003. Since 1999, he has been a senior investigator and head of the Laboratory of Learning and Memory at the Institute of Neuroscience (ION), Shanghai Institutes for Biological Sciences (SIBS). He is also a research professor at the Institute of Biophysics (IBP), CAS.

        From 1979 to 1992, Dr. Guo worked in the fields of biophysics, visual information processing, and computational neuroscience. For the past 17 years, he has studied learning and memory using the fruit fly (Drosophila) as a model organism from a gene-brain-behavior perspective.

        His current research interests focus on learning/memory and visual cognition in Drosophila. His long-term goal is to explore the "first principles" of high-level cognitive activities in the Drosophila brain from an evolutionary perspective, to shed new light on the "brain-mind problem".

Nikola Kasabov

Auckland University of Technology, New Zealand

EvoSpike: Evolving Probabilistic Spiking Neural Networks and Neuro-Genetic Systems for Spatio- and Spectro-Temporal Data Modelling and Pattern Recognition


        Spatio- and spectro-temporal data (SSTD) are the most common data in many domain areas, including bioinformatics, neuroinformatics, ecology, environmental science, medicine, engineering, and economics. Yet there are still no sufficient methods to model such data and to discover complex spatio-temporal patterns in it. The brain functions as a spatio-temporal information processing machine and deals brilliantly with spatio-temporal data, making it a natural inspiration for the development of new methods for SSTD. This research aims at the development of new methods for modelling and pattern recognition of SSTD, called evolving probabilistic spiking neural networks (epSNN), along with their applications.

        epSNN are built on the principles of evolving connectionist systems [1], and eSNN in particular [2,3], and on probabilistic neuronal models (e.g. [4]). The latter extend the popular leaky integrate-and-fire spiking model by introducing some biologically plausible probabilistic parameters. The epSNN are evolving structures that learn and adapt to new incoming data in a fast, incremental way.
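        A minimal sketch of the idea behind such a probabilistic extension is given below. The parameter values and the single spike-emission probability are invented for illustration; the models in [4] define several distinct probabilistic parameters for connections, synapses, and spike emission:

```python
import random

def prob_lif(inputs, tau=10.0, threshold=1.0, p_spike=0.8, seed=0):
    """Discrete-time leaky integrate-and-fire neuron with one invented
    probabilistic parameter: a threshold crossing emits a spike only
    with probability p_spike."""
    rng = random.Random(seed)
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v += -v / tau + current        # leaky integration of the input
        if v >= threshold and rng.random() < p_spike:
            spikes.append(t)           # probabilistic spike emission
            v = 0.0                    # reset after a spike
    return spikes

# A constant input drives regular spiking when emission is deterministic.
print(prob_lif([0.3] * 8, p_spike=1.0))
```

        Setting p_spike below 1 makes the spike train stochastic, which is the kind of biologically plausible variability the probabilistic neuronal models introduce.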

The presented research explores several approaches to creating epSNN for SSTD, from a single neuron, to reservoir computing and neuro-genetic systems. A single neuronal model can capture SSTD and it can also generate a precise spike time sequence in response to a SST pattern of spikes from hundreds and thousands of inputs/synapses [5]. The research explores different types of neuronal models and dynamic synapses, including a SPAN model [5], Fusi's algorithm implemented on the INI Zurich (www.ini.unizh.ch) SNN chip, and a novel stepSNN model that implements the time-to-first spike principle and probabilistic synapses [4].

The presented research further explores ensembles of neurons and neuronal structures that may be called 'reservoirs'. Here they are recurrent SNN that are evolving, deep-learning structures, capturing spatial and temporal components in their interaction and integration. The epSNN spatio-temporal states can be identified and classified for pattern recognition tasks, which is illustrated through some preliminary experiments on gesture and sign language recognition [6], moving object recognition [7], and EEG data recognition [8]. The epSNN can learn data in an on-line manner using a frame-based input representation or, alternatively, an event-address based representation (EAR), the latter implemented in the INI Zurich silicon retina chip and DVS camera and the silicon cochlea chip. The project also explores how epSNN can be used to implement finite automata models and associative memories.

A main problem in the EvoSpike model and system development is the optimization of numerous parameters. For this purpose three approaches are proposed: using evolutionary computation methods [9]; using a gene regulatory network (GRN) model [10,11], or using both in one system [10,11], depending on the application. Linking gene/protein expression to epSNN parameters may also lead to new types of neuron-synapse-astrocyte models inspired by new findings in neuroscience. Neurogenetic models are promising for modeling and prognosis of neurodegenerative diseases such as Alzheimer's disease and for personalized medicine in general [12]. Future research is expected to continue through tighter integration of knowledge and methods from information science, bioinformatics and neuroinformatics [13,14]. The research is relevant to the future development in the neuromorphic engineering area [15].

The research is funded by the EU FP7 Marie Curie project, the Knowledge Engineering and Discovery Research Institute KEDRI (www.kedri.info) of the Auckland University of Technology and the Institute for Neuroinformatics, University of Zurich and ETH (INI, www.ini.unizh.ch).


  1. N.Kasabov (2007) Evolving Connectionist Systems: The Knowledge Engineering Approach, Springer, London (www.springer.de) (first edition published in 2002)
  2. S.Wysoski, L.Benuskova, N.Kasabov, Evolving Spiking Neural Networks for Audio-Visual Information Processing, Neural Networks, vol 23, issue 7, pp 819-835, September 2010.
  3. S.Schliebs, M. Defoin-Platel, S. Worner and N. Kasabov, Integrated Feature and Parameter Optimization for Evolving Spiking Neural Networks: Exploring Heterogeneous Probabilistic Models, Neural Networks, 22, 623-632, 2009.
  4. N.Kasabov, To spike or not to spike: A probabilistic spiking neural model, Neural Networks, Volume 23, Issue 1, January 2010, Pages 16-19
  5. Mohemmed, A., Schliebs, S., & Kasabov, N. (2011) SPAN: Spike Pattern Association Neuron for Learning Spatio-Temporal Sequences, Int. J. Neural Systems, (to appear).
  6. Schliebs, S., Hamed, H. N. A., & Kasabov, N. (2011). A reservoir-based evolving spiking neural network for on-line spatio-temporal pattern learning and recognition. In: Proc. 18th Int. Conf. on Neural Information Processing, ICONIP, Shanghai, Springer LNCS.
  7. Kasabov, N., Dhoble, K., Nuntalid, N., & Mohemmed, A. (2011). Evolving probabilistic spiking neural networks for spatio-temporal pattern recognition: A preliminary study on moving object recognition. In: Proc. 18th Int. Conf. Neural Information Processing, ICONIP 2011, Shanghai, Springer LNCS.
  8. Nuntalid, N., Dhoble, K., & Kasabov, N. (2011). EEG Classification with BSA Spike Encoding Algorithm and Evolving Probabilistic Spiking Neural Network. in: Proc. 18th Int. Conf. on Neural Information Processing, Shanghai, Springer LNCS.
  9. S. Schliebs, M. Defoin-Platel, N. Kasabov, On the Probabilistic Optimization of Spiking Neural Networks, Int. J. of Neural Systems, Vol. 20, No. 6 (2010) 481-500, World Scientific Publ. Comp.
  10. L.Benuskova and N.Kasabov (2007) Computational Neurogenetic Modelling, Springer, New York
  11. N.Kasabov, R.Schliebs, H.Kojima (2011) Probabilistic Computational Neurogenetic Framework: From Modelling Cognitive Systems to Alzheimer's Disease, IEEE Transactions on Autonomous Mental Development, vol.3, No.3, September 2011, 1-12.
  12. N.Kasabov, Y. Hu (2010) Integrated optimisation method for personalised modelling and case study applications, Int. Journal of Functional Informatics and Personalised Medicine, vol.3,No.3,236-256.
  13. N.Kasabov (ed) (2012) The Springer Handbook of Bio- and Neuroinformatics, Springer, in print
  14. N. Kasabov (2012) Spiking neural networks and neurogenetic systems, Springer Series of Bio-and Neuroinformatics, Heidelberg, (to appear).
  15. G. Indiveri and T. Horiuchi (2011) Frontiers in Neuromorphic Engineering, Frontiers in Neuroscience, 5:118.

Biographical Sketch

        Professor Nikola Kasabov, FIEEE, FRSNZ, is the Director of the Knowledge Engineering and Discovery Research Institute (KEDRI), Auckland. He holds a Chair of Knowledge Engineering at the School of Computing and Mathematical Sciences at Auckland University of Technology. Currently he is an EU FP7 Marie Curie Visiting Professor at the Institute of Neuroinformatics, ETH and University of Zurich. Kasabov is a Past President of the International Neural Network Society (INNS) and of the Asia Pacific Neural Network Assembly (APNNA). He is a member of several technical committees of the IEEE Computational Intelligence Society and a Distinguished Lecturer of the IEEE CIS. He has served as Associate Editor of Neural Networks, the IEEE Transactions on Neural Networks, the IEEE Transactions on Fuzzy Systems, Information Sciences, the Journal of Theoretical and Computational Nanoscience, Applied Soft Computing, and other journals. Kasabov holds MSc and PhD degrees from the Technical University of Sofia, Bulgaria. His main research interests are in the areas of neural networks, intelligent information systems, soft computing, bioinformatics, and neuroinformatics. He has published more than 450 publications, including 15 books, 130 journal papers, 60 book chapters, 28 patents, and numerous conference papers. He has extensive experience at various academic and research organisations in Europe and Asia. Prof. Kasabov has received the AUT VC Individual Research Excellence Award (2010), the Bayer Science Innovation Award (2007), the APNNA Excellent Service Award (2005), the RSNZ Science and Technology Medal (2001), and others. He is an Invited Guest Professor at Shanghai Jiao Tong University (2010-2012). More information about Prof. Kasabov can be found on the KEDRI website: http://www.kedri.info.

Soo-Young Lee

Korea Advanced Institute of Science and Technology, Korea

Implicit Intention Recognition and Hierarchical Knowledge Development for Artificial Cognitive Systems


        An Artificial Cognitive System (ACS) is under development based on mathematical models of higher cognitive functions for human-like capabilities such as vision, audition, inference, and behaviour. Although the final goal is to provide human-like decision making and behaviour, we are currently focusing on hierarchical knowledge development and the recognition of both explicit and implicit human intention. In real-world applications these are the core components of robust situation awareness, which is directly related to decision making and behaviour. We propose to utilize multimodal cognitive neuroscience data such as fMRI, EEG, eye gaze, and GSR.

        Recognition of human intention is critical for awareness of situations involving people. Although current human-machine interfaces have been developed to utilize explicitly represented human intention such as keystrokes, gestures, and speech, the actual hidden human intention may differ from the explicit one. Also, people may not want to go through tedious processes to present their intentions explicitly, especially for routine sequential tasks and/or sensitive personal situations. Therefore, it is desirable to understand the hidden or unrepresented intention, i.e., 'implicit' intention, for the next-generation intelligent human-oriented user interface. We measured multimodal signals, i.e., EEG, ECG, GSR, video, and eye gaze, while subjects were asked both obvious and non-obvious questions. The latter include sensitive personal questions which may incur differences between the explicit and implicit intentions. The subjects gave a 'Yes' or 'No' answer to each question by speech. The measured signals for the obvious questions are regarded as references, which are used to understand the non-obvious cases. We separately trained SVM classifiers for each modality on the obvious questions, and tested the SVMs on the non-obvious questions. The agreement (or disagreement) among the different modalities and the explicitly represented intention was analyzed. This demonstrated the possibility of understanding human implicit intention, i.e., classifying it into categories, from brain and/or speech signals, which may be utilized in a next-generation human-machine interface.
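        The per-modality classification step can be sketched as follows. The feature dimensions, modality names, and random stand-in data below are invented for illustration; the study used real EEG, ECG, GSR, video, and eye-gaze recordings:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-ins for per-modality features recorded during the
# "obvious" questions (labels: 1 = yes-intention, 0 = no-intention).
n = 100
labels = rng.integers(0, 2, n)
modalities = {
    "EEG": rng.normal(labels[:, None], 0.5, (n, 4)),
    "GSR": rng.normal(labels[:, None], 0.5, (n, 2)),
}

# One SVM per modality, trained on the obvious-question references.
classifiers = {m: SVC().fit(X, labels) for m, X in modalities.items()}

# For a non-obvious question, compare the per-modality predictions:
# disagreement hints at a gap between explicit and implicit intention.
probe = {m: X[:1] for m, X in modalities.items()}
preds = {m: int(clf.predict(probe[m])[0]) for m, clf in classifiers.items()}
print(preds)
```

        The interesting signal in the study is exactly the cross-modality (dis)agreement computed from such per-modality predictions.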

        For robust situation awareness in unknown environments it is necessary to estimate a confidence measure on the situation awareness results and make decisions accordingly. If the confidence measure is not high enough, active learning should improve the system's knowledge by itself through the internet and other means of interaction. Hierarchical feature representation and knowledge development are the essential components of this active learning. Based on non-negative matrix factorization, we will show that a hierarchical feature representation can be learnt from images and speech, and that the extracted features resemble those found along the human visual and auditory pathways.
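        The non-negative matrix factorization step can be sketched as below, on random stand-in data; real experiments would use image patches or speech spectrogram frames, and the hierarchy would stack such factorizations:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Hypothetical non-negative "image patch" data: 200 patches of 64 pixels.
V = rng.random((200, 64))

# Non-negative matrix factorization V ~ W H: rows of H act as parts-based
# features, loosely analogous to the early-pathway features mentioned above.
model = NMF(n_components=8, init="random", random_state=0, max_iter=500)
W = model.fit_transform(V)   # per-patch feature activations
H = model.components_        # learnt basis features

print(W.shape, H.shape)
```

        The non-negativity constraint is what yields parts-based, additive features, which is the property that makes the learnt bases resemble receptive fields in sensory pathways.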

Biographical Sketch

        Soo-Young Lee received B.S., M.S., and Ph.D. degrees from Seoul National University in 1975, the Korea Advanced Institute of Science in 1977, and the Polytechnic Institute of New York in 1984, respectively. From 1982 to 1985 he also worked for General Physics Corporation in Columbia, MD, USA. In early 1986 he joined the Department of Electrical Engineering, Korea Advanced Institute of Science and Technology. In 1997 he established the Brain Science Research Center, which was the main research organization for the Korean Brain Neuroinformatics Research Program from 1998 to 2008. His research interests lie in the Artificial Brain, also known as Artificial Cognitive Systems, i.e., human-like intelligent systems based on the biological information processing mechanisms of the brain. In particular, he is interested in combining computational neuroscience and information theory for feature extraction, blind signal separation, and top-down attention. He is a Past President of the Asia-Pacific Neural Network Assembly. He received the Leadership Award and Presidential Award from the International Neural Network Society in 1994 and 2001, respectively, and the APNNA Service Award and APNNA Outstanding Achievement Award from the Asia-Pacific Neural Network Assembly in 2004 and 2009, respectively. From SPIE he also received the Biomedical Wellness Award and the ICA Unsupervised Learning Pioneer Award in 2008 and 2010, respectively.

De-Rong Liu

University of Illinois at Chicago, USA

Self-Learning Control of Nonlinear Systems Based on an Iterative Adaptive Dynamic Programming Approach


        Unlike the optimal control of linear systems, the optimal control of nonlinear systems often requires solving the nonlinear Hamilton-Jacobi-Bellman (HJB) equation instead of the Riccati equation. The discrete-time HJB (DTHJB) equation is more difficult to work with than the Riccati equation because it involves solving nonlinear partial difference equations. Though dynamic programming has been a useful computational technique in solving optimal control problems for many years, it is often computationally untenable to run it to obtain the optimal solution, due to the backward numerical process required for its solutions, i.e., the well-known "curse of dimensionality". A self-learning control scheme for unknown nonlinear discrete-time systems with a discount factor in the cost function is developed for this purpose. An iterative adaptive dynamic programming algorithm based on the globalized dual heuristic programming technique is developed to obtain the optimal controller, with convergence analysis. Neural networks are used as parametric structures to facilitate the implementation of the iterative algorithm; at each iteration they approximate the cost function, the optimal control law, and the unknown nonlinear system, respectively. Simulation examples are provided to verify the effectiveness of the presented self-learning control approach.
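        The flavor of the iterative cost-function update can be conveyed by a toy discretized example. The system, quadratic stage cost, discount factor, and grids below are invented; the actual approach uses neural networks to approximate the cost function, control law, and system dynamics rather than a lookup table:

```python
import numpy as np

# A toy discounted-cost dynamic programming iteration on a discretized
# one-dimensional system x' = x + u.
states = np.linspace(-1.0, 1.0, 21)
controls = np.linspace(-0.2, 0.2, 5)
gamma = 0.95                      # discount factor in the cost function

def cost(x, u):
    return x**2 + u**2            # quadratic stage cost (an assumption)

V = np.zeros_like(states)
for _ in range(200):              # iterative cost-function update
    V_new = np.empty_like(V)
    for i, x in enumerate(states):
        # For each control, step the system and look up the successor cost.
        q = [cost(x, u) + gamma * np.interp(x + u, states, V)
             for u in controls]
        V_new[i] = min(q)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

print(V[10])  # cost-to-go at x = 0
```

        The curse of dimensionality is visible even here: the table grows exponentially with the state dimension, which is why neural network approximators replace the table in the presented scheme.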

Biographical Sketch

        Derong Liu received the Ph.D. degree in electrical engineering from the University of Notre Dame in 1994. He was a Staff Fellow with General Motors R&D Center, Warren, MI, from 1993 to 1995. He was an Assistant Professor in the Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ, from 1995 to 1999. He joined the University of Illinois at Chicago in 1999, where he became a Full Professor of Electrical and Computer Engineering and of Computer Science in 2006. He was selected for the "100 Talents Program" by the Chinese Academy of Sciences in 2008. He has published nine books. Dr. Liu is the Editor-in-Chief of the IEEE Transactions on Neural Networks and an Associate Editor of the IEEE Transactions on Control Systems Technology, Neurocomputing, and the International Journal of Neural Systems. He was an elected AdCom member of the IEEE Computational Intelligence Society (2006-2008). He received the Harvey N. Davis Distinguished Teaching Award from Stevens Institute of Technology (1997), the Faculty Early Career Development (CAREER) Award from the National Science Foundation (1999), the University Scholar Award from the University of Illinois (2006), and the Overseas Outstanding Young Scholar Award from the National Natural Science Foundation of China (2008).

De-Liang Wang

The Ohio State University, USA

A Classification Approach to the Cocktail Party Problem


        The cocktail party problem, also known as the speech segregation problem, has evaded a solution for decades in speech and audio processing. Motivated by recent advances in psychoacoustics and computational auditory scene analysis, I will advocate a new formulation of this old problem: instead of aiming to extract the target speech, it classifies time-frequency units into two classes: those dominated by the target speech and the rest. This new formulation shifts the emphasis from signal estimation to signal classification, with the important implication that the cocktail party problem is now open to a plethora of binary classification techniques from neural networks and machine learning. I will discuss recent speech segregation algorithms that adopt the binary classification formulation; the segregation performance of these systems represents considerable progress towards solving the cocktail party problem.
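        The classification target can be made concrete with the ideal binary mask, which labels each time-frequency unit by whether the target dominates it. The energy maps below are random stand-ins for real short-time Fourier transform magnitudes, and the 0 dB criterion is one common choice:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical time-frequency energy maps for target speech and
# interference: 64 frequency channels x 100 frames.
target = rng.random((64, 100))
noise = rng.random((64, 100))

# Label each T-F unit by whether the target dominates it; the resulting
# binary map is often called the ideal binary mask.
local_snr_db = 10 * np.log10(target / noise)
ibm = (local_snr_db > 0).astype(int)   # 1 = target-dominated unit

print(ibm.shape, ibm.mean())
```

        A segregation system under this formulation is then a binary classifier that predicts this mask from acoustic features, rather than an estimator of the clean signal itself.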

Biographical Sketch

        DeLiang Wang received the B.S. degree in 1983 and the M.S. degree in 1986 from Peking (Beijing) University, Beijing, China, and the Ph.D. degree in 1991 from the University of Southern California, Los Angeles, CA, all in computer science.

        From July 1986 to December 1987 he was with the Institute of Computing Technology, Academia Sinica, Beijing. Since 1991, he has been with the Department of Computer Science & Engineering and the Center for Cognitive Science at The Ohio State University, Columbus, OH, where he is a Professor. From October 1998 to September 1999, he was a visiting scholar in the Department of Psychology at Harvard University, Cambridge, MA. From October 2006 to June 2007, he was a visiting scholar at Oticon A/S, Denmark.

        Dr. Wang's research interests include machine perception and neurodynamics. Among his recognitions are the Office of Naval Research Young Investigator Award in 1996, the 2005 Outstanding Paper Award from IEEE Transactions on Neural Networks, and the 2008 Helmholtz Award from the International Neural Network Society. He is an IEEE Fellow, and currently serves as Co-Editor-in-Chief of Neural Networks.

Jun Wang

Chinese University of Hong Kong, Hong Kong

The State of the Art of Neurodynamic Optimization


        As an important tool for scientific research and engineering applications, optimization is omnipresent in a wide variety of settings. It is computationally challenging when optimization procedures have to be performed in real time to optimize the performance of dynamical systems. For such applications, classical optimization techniques may not be competent due to the problem dimensionality and stringent requirements on computational time. New paradigms are needed. One very promising approach to dynamic optimization is to apply artificial neural networks. Because of the inherently parallel and distributed information processing in neural networks, the convergence rate of the solution process does not decrease as the size of the problem increases. This talk will present the state of the art of neurodynamic optimization models and selected applications. Specifically, starting from the motivation for neurodynamic optimization, we will review various recurrent neural network models for optimization. Theoretical results on the stability and optimality of the neurodynamic optimization models will be given, along with illustrative examples and simulation results. It will be shown that many computational problems can be readily solved using these neurodynamic optimization models.
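        A minimal example of the idea: a recurrent gradient-flow dynamical system whose equilibrium is the optimum of a convex quadratic. The specific Q, b, step size, and Euler integration below are illustrative assumptions; the models surveyed in the talk handle constrained problems with provable stability:

```python
import numpy as np

# A gradient-flow "neurodynamic" sketch: a recurrent dynamical system
# whose equilibrium is the minimizer of f(x) = 0.5 x'Qx - b'x.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])   # positive definite (assumption)
b = np.array([1.0, 1.0])

x = np.zeros(2)
dt = 0.01
for _ in range(5000):
    x += dt * -(Q @ x - b)    # neural dynamics: the state flows downhill

print(x)                       # approaches the optimum, Q^{-1} b
```

        In analog circuit or parallel hardware realizations, all state components evolve simultaneously, which is why the convergence rate of such models does not degrade with problem size.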

Biographical Sketch

        Jun Wang is a Professor in the Department of Mechanical and Automation Engineering at the Chinese University of Hong Kong. Prior to this position, he held various academic positions at Dalian University of Technology, Case Western Reserve University, and the University of North Dakota. He also held various short-term visiting positions at the USAF Armstrong Laboratory (1995), RIKEN Brain Science Institute (2001), Universite Catholique de Louvain (2001), the Chinese Academy of Sciences (2002), Huazhong University of Science and Technology (2006-2007), and Shanghai Jiao Tong University (2008-2011) as a Changjiang Chair Professor. He received a B.S. degree in electrical engineering and an M.S. degree in systems engineering from Dalian University of Technology, Dalian, China, and his Ph.D. degree in systems engineering from Case Western Reserve University, Cleveland, Ohio, USA. His current research interests include neural networks and their applications. He has published about 150 journal papers, 12 book chapters, 10 edited books, and numerous conference papers in these areas. He has been an Associate Editor of the IEEE Transactions on Systems, Man, and Cybernetics - Part B since 2003 and a member of the Editorial Advisory Board of the International Journal of Neural Systems since 2006. He also served as an Associate Editor of the IEEE Transactions on Neural Networks (1999-2009) and the IEEE Transactions on Systems, Man, and Cybernetics - Part C (2002-2005). He is an IEEE Fellow, an IEEE Distinguished Lecturer, and a recipient of the Outstanding Paper Award for a paper published in the IEEE Transactions on Neural Networks in 2008, the Research Excellence Award from the Chinese University of Hong Kong for 2008-2009, and the Shanghai Natural Science Award (first class) in 2009.

Lei Xu

Chinese University of Hong Kong, Hong Kong

Automatic Model Selection During Learning: A Comparative Overview


        A conventional implementation of model selection consists of two stages. One stage enumerates a set of candidate models, with the unknown parameters of each candidate estimated by maximum likelihood learning. The other stage selects the best candidate by one of the typical criteria, e.g., AIC, BIC/MDL, or HQC. This implementation incurs huge computational cost and provides unreliable parameter estimates for oversized candidates. One road to tackling this challenge is automatic model selection, i.e., an implementation of a learning principle that yields an intrinsic mechanism driving certain indicators of redundant substructures towards zero, by which these substructures are discarded. This talk gives a comparative overview of several streams of effort on this topic over the past two decades. One consists of heuristic learning rules featuring a mechanism of rival penalized competitive learning and its further extensions. Others are Bayesian approaches with the help of appropriate priors, including sparse learning, minimum message length, and variational Bayes. Another Bayesian-related framework is Bayesian Ying-Yang harmony learning, which is capable of automatic model selection even without imposing priors, and can be further improved with priors incorporated. Empirical comparisons are illustrated on simulated and real datasets for tasks of clustering analysis, image segmentation, speech recognition, and radar target recognition.
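        The conventional two-stage procedure can be sketched with a Gaussian mixture clustering task and the BIC criterion. The data and candidate set are invented; with two well-separated clusters the criterion should favour two components:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two well-separated clusters, so BIC should favour k = 2.
X = np.vstack([rng.normal(0, 0.3, (100, 2)),
               rng.normal(5, 0.3, (100, 2))])

# Stage one: fit every candidate by maximum likelihood.
# Stage two: pick the candidate minimizing BIC.
bics = {k: GaussianMixture(k, random_state=0).fit(X).bic(X)
        for k in (1, 2, 3, 4)}
best_k = min(bics, key=bics.get)
print(best_k)
```

        Note that every candidate, including the oversized ones, must be fully fitted before selection; automatic model selection avoids exactly this enumeration by shrinking redundant substructures during a single learning run.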

Biographical Sketch

        Lei Xu is a chair professor of the Chinese University of Hong Kong (CUHK), a Fellow of the IEEE (2001-), a Fellow of the International Association for Pattern Recognition (2002-), and an Academician of the European Academy of Sciences (2002-). He completed his Ph.D. thesis at Tsinghua University at the end of 1986, became a postdoc at Peking University in 1987, and was promoted to associate professor in 1988 and professor in 1992. During 1989-93 he was a research associate and postdoc in Finland, Canada, and the USA, including at Harvard and MIT. He joined CUHK as a senior lecturer in 1993, professor in 1996, and chair professor in 2002. He has published several well-cited papers on neural networks, statistical learning, and pattern recognition; his papers have received over 3400 citations (SCI) and over 6300 citations on Google Scholar (GS), with the top-10 papers scoring over 2100 (SCI) and 4100 (GS), and one paper scoring 790 (SCI) and 1351 (GS). He has served as a governor of the International Neural Network Society (INNS), a president of APNNA, and a member of the Fellow Committee of the IEEE Computational Intelligence Society. He has received several national and international academic awards (e.g., the 1993 National Natural Science Award, the 1995 INNS Leadership Award, and the 2006 APNNA Outstanding Achievement Award).

Xin Yao

University of Birmingham, UK

Evolving, Training and Designing Neural Network Ensembles


        Combining neural and evolutionary computation has always been an interesting research topic, as learning and evolution are two fundamental forms of adaptation in Nature. Previous work on evolving neural networks has focused on single neural networks. However, monolithic neural networks are often too complex to train or evolve for large and complex problems. It is usually better to design a collection of simpler neural networks that work cooperatively to solve a large and complex problem, which reflects the common problem solving strategy of "divide-and-conquer". The key issue here is how to design such a collection automatically so that it has the best generalisation in learning. This talk introduces some work on evolving neural network ensembles, negative correlation learning, and multi-objective approaches to ensemble learning. The links among different learning algorithms are discussed. Online/incremental learning using ensembles will also be mentioned briefly.
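        A sketch of negative correlation learning for a small ensemble of linear "networks" follows. The data, penalty strength, and learning rate are illustrative assumptions; the standard NCL gradient combines each member's own error with a term pulling it away from the ensemble mean, which discourages the members from collapsing onto identical solutions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, 200)

M, lam, lr = 4, 0.5, 0.01
W = rng.normal(0, 0.1, (M, 3))       # one linear "network" per row

for _ in range(500):
    F = X @ W.T                      # per-member predictions (200 x M)
    f_bar = F.mean(axis=1, keepdims=True)
    # Negative correlation learning: each member's error signal is its
    # own residual minus a penalty toward the ensemble mean.
    err = (F - y[:, None]) - lam * (F - f_bar)
    W -= lr * (err.T @ X) / len(X)

ensemble_pred = (X @ W.T).mean(axis=1)
mse = np.mean((ensemble_pred - y) ** 2)
print(mse)
```

        With lam = 0 this reduces to independently trained members; increasing lam trades individual accuracy for diversity, the bias-variance-covariance balance that ensemble learning exploits.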

        (Relevant background papers to this talk can be found from http://www.cs.bham.ac.uk/~xin/journal_papers.html.)

Biographical Sketch

        Xin Yao is a Professor (Chair) of Computer Science at the University of Birmingham, UK. He is the Director of CERCIA (the Centre of Excellence for Research in Computational Intelligence and Applications), University of Birmingham, UK, and of the Joint USTC-Birmingham Research Institute of Intelligent Computation and Its Applications. He is an IEEE Fellow and a Distinguished Lecturer of the IEEE Computational Intelligence Society (CIS). He won the 2001 IEEE Donald G. Fink Prize Paper Award, the IEEE Transactions on Evolutionary Computation Outstanding 2008 Paper Award, and many other best paper awards. He was a Changjiang Chair Professor, a Distinguished Visiting Professor (Grand Master Chair Professorship) of the University of Science and Technology of China (USTC) in Hefei, and a Distinguished Visiting Professor of Yuan Ze University, Taiwan. In his spare time, he volunteered as the Editor-in-Chief (2003-08) of the IEEE Transactions on Evolutionary Computation and Vice President for Publications of the IEEE CIS (2009-12). He has been invited to give more than 60 keynote/plenary speeches at international conferences in many different countries. His major research interests include evolutionary computation, neural network ensembles, and their applications. He has more than 350 refereed publications in international journals and conferences.

Copyright© 2010-2011 International Conference on Neural Information Processing. All rights reserved.