Bruges, Belgium, April 22-24, 1998
Content of the proceedings
Special session: Radial basis networks
Models and architectures
Special session: Neural networks for control
Learning I
Models and architectures
Special session: Self-organising maps for data analysis
Special session: ANN for speech processing
Learning II
Special session: Cellular neural networks
Statistical methods
Learning and models
Special session: ANN for the processing of facial information
Optimization and associative networks
Special session: Radial basis networks
ES1998-305
What are the main factors involved in the design of a Radial Basis Function Network?
I. Rojas, M. Anguita, E. Ros, H. Pomares, O. Valenzuela, A. Prieto
ES1998-301
A supervised radial basis function neural network
S. Wu, C. Van den Broeck
ES1998-303
A comparison between weighted radial basis functions and wavelet networks
M. Sgarbi, V. Colla, L. Reyneri
ES1998-304
An incremental local radial basis function network
A. Esposito, M. Marinaro, S. Scarpetta
ES1998-306
A Tikhonov approach to calculate regularisation matrices
C. Angulo, A. Catala
Models and architectures
ES1998-44
Analyses on the temporal patterns of spikes of auditory neurons by a neural network and tree-based models
T. Takahashi
ES1998-11
Output jitter diverges to infinity, converges to zero or remains constant
J. Feng, B. Tirozzi, D. Brown
ES1998-24
Wavelet interpolation networks
C. Bernard, S. Mallat, J.-J. Slotine
Abstract:
We describe a new approach to real-time learning of unknown functions based on an interpolating wavelet estimation. We choose a subfamily of a wavelet basis relying on nested hierarchical allocation and update our estimate of the unknown function in real time. Such an interpolation process can be used for real-time applications like neural network adaptive control, where learning an unknown function very fast is critical.
ES1998-36
Generating arbitrary rhythmic patterns with purely inhibitory neural networks
Z. Yang, F. França
Special session: Neural networks for control
ES1998-409
Brain-like intelligent control: from neural nets to larger-scale systems
P. Werbos
ES1998-403
Control of a subsonic electropneumatic acoustic generator with dynamic recurrent neural networks
J.-P. Draye, L. Blondel, G. Cheron
ES1998-406
Lazy learning for control design
G. Bontempi, M. Birattari, H. Bersini
Abstract:
This paper presents two local methods for the control of discrete-time unknown nonlinear dynamical systems when only a limited amount of input-output data is available. The modeling procedure adopts lazy learning, a query-based approach to local modeling inspired by memory-based approximators. In the first method, the lazy technique returns the forward and inverse models of the system, which are used to compute the control action to take. The second is an indirect method inspired by adaptive control, where the self-tuning identification module is replaced by a lazy approximator. Simulation examples of control of nonlinear systems starting from observed data are given.
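The query-based idea behind lazy learning can be sketched as follows: no global model is ever trained; for each query point, an affine model is fitted on the nearest stored input-output examples only. This is a minimal illustration of the idea, not the authors' exact procedure, and all function and parameter names are hypothetical:

```python
import numpy as np

def lazy_predict(X, y, query, k=10):
    """Query-based local modelling (lazy learning sketch): fit an
    affine model on the k stored examples nearest to the query and
    evaluate it at the query point."""
    d = np.linalg.norm(X - query, axis=1)       # distance to each stored input
    idx = np.argsort(d)[:k]                     # k nearest neighbours
    A = np.hstack([X[idx], np.ones((k, 1))])    # affine design matrix
    w, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return float(np.append(query, 1.0) @ w)     # local linear prediction

# toy usage: noiseless samples of a nonlinear map
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])
print(lazy_predict(X, y, np.array([0.2])))      # close to sin(0.6)
```

In a control setting, the same query mechanism would be applied to stored forward and inverse input-output pairs each time a control action is needed.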
ES1998-402
Neural networks for the solution of information-distributed optimal control problems
M. Baglietto, T. Parisini, R. Zoppoli
ES1998-404
Parsimonious learning feed-forward control
T. de Vries, L. Idema, W. Velthuis
Abstract:
We introduce the Learning Feed-Forward Control configuration. This configuration contains a B-spline neural network, which suffers from the curse of dimensionality. We propose a method to avoid this problem.
ES1998-408
Fast orienting movements to visual targets: neural field model of dynamic gaze control
A. Schierwagen, H. Werner
ES1998-401
Improved generalization ability of neurocontrollers by imposing NLq stability constraints
J. Suykens, J. Vandewalle
ES1998-31
A RNN based control architecture for generating periodic action sequences
T. Kolb, W. Ilg, J. Wille
Learning I
ES1998-7
NAR time-series prediction: a Bayesian framework and an experiment
M. Crucianu, Z. Uhry, J.-P. Asselin de Beauville, R. Bone
Abstract:
We extend the Bayesian framework to Multi-Layer Perceptron models of Non-linear Auto-Regressive time-series. The approach is evaluated on an artificial time-series and some common simplifications are discussed.
ES1998-8
Application of a neural net in classification and knowledge discovery
K. Schaedler, F. Wysotzki
ES1998-14
Extending the CMAC model: adaptive input quantization
G. P. Klebus
ES1998-17
One or two hidden layers perceptrons
M. Fernandez, C. Hernandez
ES1998-18
On the error function of interval arithmetic backpropagation
M. Fernandez, C. Hernandez
ES1998-302
A neural network for the identification of the dynamic behaviour of a wheelchair
L. Boquete, R. Barea, M. Mazo, I. Aranda
Models and architectures
ES1998-9
What is observable in a class of neurodynamics?
J. Feng, D. Brown
ES1998-12
Polyhedral mixture of linear experts for many-to-one mapping inversion
A. Karniel, R. Meir, G.F. Inbar
Abstract:
Feed-forward control schemes require an inverse mapping of the controlled system. In adaptive systems, as well as in biological modeling, this inverse mapping is learned from examples. Biological motor control is very redundant, as are many robotic systems, implying that the inverse problem is ill-posed. In this work a new architecture and algorithm for learning multiple inverses is proposed: the polyhedral mixture of linear experts (PMLE). The PMLE keeps all the possible solutions available to the controller in real time. The PMLE is a modified mixture-of-experts architecture, where each expert is linear and more than a single expert may be assigned to the same input region. The learning is implemented by the hinging hyperplanes algorithm. The proposed architecture is described and its operation is illustrated for some simple cases.
ES1998-4
On-off intermittency in small neural networks with synaptic noise
A. Krawiecki, R.A. Kosinski
Special session: Self-organising maps for data analysis
ES1998-201
Recurrent SOM with local linear models in time series prediction
T. Koskela, M. Varsta, J. Heikkonen, K. Kaski
Abstract:
The Recurrent Self-Organizing Map (RSOM) is studied in three different time series prediction cases. RSOM is used to cluster the series into local data sets, for which corresponding local linear models are estimated. RSOM includes a recurrent difference vector in each unit, which allows it to store context from past input vectors. A multilayer perceptron (MLP) network and an autoregressive (AR) model are used to compare the prediction results. In the studied cases RSOM shows promising results.
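The recurrent difference vector mechanism can be sketched as follows: each unit keeps a leaky average of its difference to past inputs, and the best-matching unit is chosen by the norm of that vector rather than by the current input alone. This is a simplified sketch (only the winner is updated, neighbourhood adaptation is omitted, and all names are hypothetical):

```python
import numpy as np

def rsom_step(W, Y, x, alpha=0.5, gamma=0.1):
    """One adaptation step of a Recurrent SOM sketch. Each unit i keeps
    a leaky difference vector Y[i] that accumulates temporal context
    from past inputs; the winner is the unit with the smallest |Y[i]|."""
    Y[:] = (1 - alpha) * Y + alpha * (x - W)        # leaky recurrent difference
    b = int(np.argmin(np.linalg.norm(Y, axis=1)))   # best-matching unit
    W[b] += gamma * Y[b]                            # move the winner along its Y
    return b
```

Feeding a sequence of input vectors through `rsom_step` makes units respond to temporal context, not just the current input; the full algorithm would also update neighbouring units with a shrinking neighbourhood, as in the standard SOM.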
ES1998-202
Self-organization and convergence of the one-dimensional Kohonen algorithm
A. Sadeghi
ES1998-203
Finding structure in text archives
A. Rauber, D. Merkl
ES1998-204
Methods for interpreting a self-organized map in data analysis
S. Kaski, J. Nikkila, T. Kohonen
ES1998-209
Magnification control in neural maps
T. Villmann, M. Herrmann
ES1998-205
Self-organizing ANNs for planetary surface composition research
E. Merényi
Abstract:
The mineralogic composition of planetary surfaces is often mapped from remotely sensed spectral images. Advanced hyperspectral sensors today provide more detailed and more voluminous measurements than traditional classification algorithms can efficiently exploit. ANNs, and specifically Self-Organizing Maps, have been used at the Lunar and Planetary Laboratory, University of Arizona, to address these challenges.
ES1998-206
A new dynamic LVQ-based classifier and its application to handwritten character recognition
S. Bermejo, J. Cabestany, M. Payeras
ES1998-207
The self-organising map, robustness, self-organising criticality and power laws
J.A. Flanagan
ES1998-208
Invariant feature maps for analysis of orientations in image data
S. McGlinchey, C. Fyfe
ES1998-210
Forecasting time-series by Kohonen classification
A. Lendasse, M. Verleysen, E. de Bodt, M. Cottrell, P. Grégoire
Abstract:
In this paper, we propose a generic non-linear approach for time series forecasting. The main feature of this approach is the use of a simple statistical forecast in small regions of an input space adequately chosen and quantized. The partition of the space is achieved by the Kohonen algorithm. The method is then applied to a widely known time series from the Santa Fe competition, and the results are compared with the best ones published for this competition.
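The two-stage idea (quantize the regressor space with the Kohonen algorithm, then apply a simple statistical forecast within each region) can be sketched roughly as follows. All function names, parameters, and the per-region mean forecast are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def kohonen_1d(data, n_units=8, iters=2000, seed=0):
    """Quantize regressor vectors with a 1-D Kohonen map (simplified:
    linearly shrinking learning rate and Gaussian neighbourhood)."""
    rng = np.random.default_rng(seed)
    W = data[rng.integers(len(data), size=n_units)].astype(float).copy()
    for t in range(iters):
        x = data[rng.integers(len(data))]
        b = np.argmin(np.linalg.norm(W - x, axis=1))        # winner unit
        lr = 0.5 * (1 - t / iters)
        radius = max(0.5, (n_units / 2) * (1 - t / iters))
        h = np.exp(-((np.arange(n_units) - b) ** 2) / (2 * radius ** 2))
        W += lr * h[:, None] * (x - W)                      # neighbourhood update
    return W

def fit_local_means(X, y, W):
    """Per Kohonen region, the 'simple statistical forecast' here is
    just the mean of the next values observed in that region."""
    regions = np.argmin(np.linalg.norm(X[:, None] - W[None], axis=2), axis=1)
    return np.array([y[regions == r].mean() if np.any(regions == r)
                     else y.mean() for r in range(len(W))])

def forecast(x, W, means):
    return means[np.argmin(np.linalg.norm(W - x, axis=1))]

# toy usage: lag-2 regressors of a sine series
series = np.sin(np.arange(300) * 0.3)
X = np.stack([series[:-2], series[1:-1]], axis=1)   # past two values
y = series[2:]                                      # value to forecast
W = kohonen_1d(X)
means = fit_local_means(X, y, W)
print(forecast(X[0], W, means))
```

A richer local model (e.g. a local linear regression per region, as in the RSOM paper above) would slot into `fit_local_means` without changing the quantization stage.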
Special session: ANN for speech processing
ES1998-456
Introduction to speech recognition using neural networks
C. Wellekens
Abstract:
As an introduction to a session dedicated to neural networks in speech processing, this paper describes the basic problems faced in automatic speech recognition (ASR). Representation of speech, classification problems, speech unit models, training procedures and criteria are discussed. Why and how neural networks lead to challenging results in ASR is explained.
ES1998-453
Self-organization in mixture densities of HMM based speech recognition
M. Kurimo
ES1998-452
Speech recognition with a new hybrid architecture combining neural networks and continuous HMM
D. Willett, G. Rigoll
ES1998-454
Hierarchies of neural networks for connectionist speech recognition
J. Fritsch, A. Waibel
Learning II
ES1998-1
Training a sigmoidal network is difficult
B. Hammer
Abstract:
In this paper we show that the loading problem for a $3$-node architecture with sigmoidal activation is NP-hard if the input dimension varies, if the classification is performed with a certain accuracy, and if the output weights are restricted.
ES1998-6
Weight saliency regularization in augmented networks
P. Edwards, A. Murray
ES1998-21
Selecting among candidate basis functions by crosscorrelations
A. Poncet, A. Deiss, S. Holles
ES1998-29
A multistage on-line learning rule for multilayer neural network
P. Thomas, G. Bloch, C. Humbert
ES1998-37
Parameter-estimation-based learning for feedforward neural networks: convergence and robustness analysis
A. Alessandri, M. Maggiore, M. Sanguineti
Special session: Cellular neural networks
ES1998-352
On the robust design of uncoupled CNNs
B. Mirzai, D. Lim, G.S. Moschyts
ES1998-355
To stop learning using the evidence
Y. Moreau, J. Vandewalle
ES1998-351
Cellular neural networks: from chaos generation to complexity modelling
P. Arena, L. Fortuna
ES1998-353
Ultrasound medical image processing using cellular neural networks
I. Aizenberg, N. Aizenberg, E. Gotko, J. Vandewalle
Statistical methods
ES1998-28
Separation of sources in a class of post-nonlinear mixtures
C.G. Puntonet, M.R. Alvarez, A. Prieto, B. Prieto
ES1998-50
Improving neural network estimation in presence of non i.i.d. noise
S. Hosseini, C. Jutten
Learning and models
ES1998-19
A self-organising neural network for modelling cortical development
M. Spratling, G. Hayes
Abstract:
This paper presents a novel self-organising neural network. It has been developed for use as a simplified model of cortical development. Unlike many other models of topological map formation, all synaptic weights start at zero strength (so that synaptogenesis might be modelled). In addition, the algorithm works with the same format of encoding for both inputs to and outputs from the network (so that the transfer and recoding of information between cortical regions might be modelled).
ES1998-20
Learning sensory-motor cortical mappings without training
M. Spratling, G. Hayes
Abstract:
This paper shows how the relationship between two arrays of artificial neurons, representing different cortical regions, can be learned. The algorithm enables each neural network to self-organise into a topological map of the domain it represents at the same time as the relationship between these maps is found. Unlike previous methods, learning is achieved without a separate training phase; the algorithm which learns the mapping is also that which performs the mapping.
ES1998-32
Perception and action selection by anticipation of sensorimotor consequences
T. Seiler, V. Stephan, H.-M. Gross
ES1998-33
Neural networks for financial forecast
G. Rotundo, B. Tirozzi, M. Valente
ES1998-39
A neural approach to a sensor fusion problem
V. Colla, M. Sgarbi, L.M. Reyneri, A.M. Sabatini
ES1998-46
Canonical correlation analysis using artificial neural networks
Pei Ling Lai, C. Fyfe
Special session: ANN for the processing of facial information
ES1998-254
ANN for facial information processing: a review of recent approaches
R. Raducanu, M. Grana, A. D'Anjou, F.X. Albizuri
ES1998-251
Hybrid Hidden Markov model / neural network models for speechreading
A. Rogozan, P. Deleglise
Abstract:
This paper describes a new approach to visual speech recognition (also called speechreading) using hybrid HMM/NN models. First, we use the Self-Organising Map (SOM) to merge phonemes that appear visually similar into visemes. Then we develop a hybrid speechreading system with two communicating components, an HMM and an NN, to take advantage of the qualities of both. The first component is a classical continuous HMM, while the second one is the Time Delay Neural Network (TDNN) or the Jordan partially recurrent Neural Network (JNN). At the beginning of the recognition process, the HMM component segments and labels the visual data. For visemes which are often confused by the HMM, but rarely by the NN, we use the NN component to label the corresponding boundaries. For the other visemes, the final response is given by the HMM component. Finally, we evaluate the hybrid system on a continuous spelling task and show that it outperforms both an HMM system and an NN one.
ES1998-252
Face recognition: pre-processing techniques for linear autoassociators
E. Drege, F. Yang, M. Paindavoine, H. Abdi
ES1998-253
Facial image retrieval using sequential classifiers
S. Gutta, H. Wechsler
ES1998-40
Grouping complex face parts by nonlinear oscillations
S. Oka, M. Kitabata, Y. Ajioka, Y. Takefuji
Optimization and associative networks
ES1998-13
On a Hopfield net arising in the modelling and control of over-saturated signalized intersections
F. Maghrebi
ES1998-48
Construction of an interactive and competitive artificial neural network for the solution of path planning problems
E. Mulder, H.A.K. Mastebroeck
ES1998-38
Learning associative mappings from few examples
J.A. Walter