
Combining Artificial Neural Nets: Ensemble and Modular Multi-Net Systems

by Amanda J.C. Sharkey

Paperback | January 22, 1999


About

The past decade could be seen as the heyday of neurocomputing, in which the capabilities of monolithic nets have been well explored and exploited. The question then is: where do we go from here? A logical next step is to examine the potential offered by combinations of artificial neural nets, and it is that step that the chapters in this volume represent. Intuitively, it makes sense to look at combining ANNs. Clearly, complex biological systems and brains rely on modularity. Similarly, the principles of modularity, and of reliability through redundancy, can be found in many disparate areas, from the idea of decision by jury, through hardware redundancy in aeroplanes, to the advantages of modular design and reuse advocated by object-oriented programmers. It is not surprising to find that the same principles can be usefully applied in the field of neurocomputing as well, although finding the best way of adapting them is a subject of ongoing research.
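The recurring technical theme in the chapters listed below is the ensemble: several independently trained nets whose outputs are combined, typically by averaging or voting, so that the errors of individual members tend to cancel. As a rough illustration of that idea (not code from the book), here is a minimal sketch in Python for a toy regression task; the tiny one-hidden-layer nets, the bootstrap resampling, and the names train_member and ensemble_predict are all invented for illustration.

```python
# Minimal ensemble-of-nets sketch: train several small nets on bootstrap
# resamples of the data, then combine them by simple output averaging.
# All names and the toy network are illustrative, not from the book.
import numpy as np

def train_member(X, y, seed, hidden=8, epochs=200, lr=0.1):
    """Train one small one-hidden-layer net on a bootstrap sample of (X, y)."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X), len(X))          # bootstrap resample (as in bagging)
    Xb, yb = X[idx], y[idx].reshape(-1, 1)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))  # input -> hidden weights
    W2 = rng.normal(0.0, 0.5, (hidden, 1))           # hidden -> output weights
    for _ in range(epochs):
        H = np.tanh(Xb @ W1)                       # hidden activations
        err = H @ W2 - yb                          # linear output minus target
        W2 -= lr * H.T @ err / len(Xb)             # gradient step on output layer
        W1 -= lr * Xb.T @ ((err @ W2.T) * (1 - H**2)) / len(Xb)  # and on hidden layer
    return W1, W2

def ensemble_predict(members, X):
    """Combine the members by simple averaging of their outputs."""
    preds = [np.tanh(X @ W1) @ W2 for W1, W2 in members]
    return np.mean(preds, axis=0)

# Usage: train several members from different seeds/resamples, then average.
X = np.random.rand(100, 2)
y = np.sin(X[:, 0]) + X[:, 1]
members = [train_member(X, y, seed=s) for s in range(5)]
yhat = ensemble_predict(members, X)
```

Averaging only helps to the extent that the members make different errors, which is why the early chapters devote so much attention to methods for creating diverse ensemble members, such as bagging, boosting, and genetic search.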
Title: Combining Artificial Neural Nets: Ensemble and Modular Multi-Net Systems
Format: Paperback
Dimensions: 298 pages
Published: January 22, 1999
Publisher: Springer-Verlag/Sci-Tech/Trade
Language: English

The following ISBNs are associated with this title:

ISBN-10: 185233004X

ISBN-13: 9781852330040


Table of Contents

1. Multi-Net Systems
1.0.1 Different Forms of Multi-Net System
1.1 Ensembles
1.1.1 Why Create Ensembles?
1.1.2 Methods for Creating Ensemble Members
1.1.3 Methods for Combining Nets in Ensembles
1.1.4 Choosing a Method for Ensemble Creation and Combination
1.2 Modular Approaches
1.2.1 Why Create Modular Systems?
1.2.2 Methods for Creating Modular Components
1.2.3 Methods for Combining Modular Components
1.3 The Chapters in this Book
1.4 References

2. Combining Predictors
2.1 Combine and Conquer
2.2 Regression
2.2.1 Bias and Variance
2.2.2 Bagging - The Pseudo-Fairy Godmother
2.2.3 Results of Bagging
2.3 Classification
2.3.1 Bias and Spread
2.3.2 Examples
2.3.3 Bagging Classifiers
2.4 Remarks
2.4.1 Pruning
2.4.2 Randomising the Construction
2.4.3 Randomising the Outputs
2.5 Adaboost and Arcing
2.5.1 The Adaboost Algorithm
2.5.2 What Makes Adaboost Work?
2.6 Recent Research
2.6.1 Margins
2.6.2 Using Simple Classifiers
2.6.3 Instability is Needed
2.7 Coda
2.7.1 Heisenberg's Principle for Statistical Prediction
2.8 References

3. Boosting Using Neural Networks
3.1 Introduction
3.2 Bagging
3.2.1 Classification
3.2.2 Regression
3.2.3 Remarks
3.3 Boosting
3.3.1 Introduction
3.3.2 A First Implementation: Boost1
3.3.3 AdaBoost.M1
3.3.4 AdaBoost.M2
3.3.5 AdaBoost.R2
3.4 Other Ensemble Techniques
3.5 Neural Networks
3.5.1 Classification
3.5.2 Early Stopping
3.5.3 Regression
3.6 Trees
3.6.1 Training Classification Trees
3.6.2 Pruning Classification Trees
3.6.3 Training Regression Trees
3.6.4 Pruning Regression Trees
3.7 Trees vs. Neural Nets
3.8 Experiments
3.8.1 Experiments Using Boost1
3.8.2 Experiments Using AdaBoost
3.8.3 Experiments Using AdaBoost.R2
3.9 Conclusions
3.10 References

4. A Genetic Algorithm Approach for Creating Neural Network Ensembles
4.1 Introduction
4.2 Neural Network Ensembles
4.3 The ADDEMUP Algorithm
4.3.1 ADDEMUP's Top-Level Design
4.3.2 Creating and Crossing-Over KNNs
4.4 Experimental Study
4.4.1 Generalisation Ability of ADDEMUP
4.4.2 Lesion Study of ADDEMUP
4.5 Discussion and Future Work
4.6 Additional Related Work
4.7 Conclusions
4.8 References

5. Treating Harmful Collinearity in Neural Network Ensembles
5.1 Introduction
5.2 Overview of Optimal Linear Combinations (OLC) of Neural Networks
5.3 Effects of Collinearity on Combining Neural Networks
5.3.1 Collinearity in the Literature on Combining Estimators
5.3.2 Testing the Robustness of NN Ensembles
5.3.3 Collinearity, Correlation, and Ensemble Ambiguity
5.3.4 The Harmful Effects of Collinearity
5.4 Improving the Generalisation of NN Ensembles by Treating Harmful Collinearity
5.4.1 Two Algorithms for Selecting the Component NNs in the Ensemble
5.4.2 Modification to the Algorithms
5.5 Experimental Results
5.5.1 Problem I
5.5.2 Problem II
5.5.3 Discussion of the Experimental Results
5.6 Concluding Remarks
5.7 References

6. Linear and Order Statistics Combiners for Pattern Classification
6.1 Introduction
6.2 Class Boundary Analysis and Error Regions
6.3 Linear Combining
6.3.1 Linear Combining of Unbiased Classifiers
6.3.2 Linear Combining of Biased Classifiers
6.4 Order Statistics
6.4.1 Introduction
6.4.2 Background
6.4.3 Combining Unbiased Classifiers Through OS
6.4.4 Combining Biased Classifiers Through OS
6.5 Correlated Classifier Combining
6.5.1 Introduction
6.5.2 Combining Unbiased Correlated Classifiers
6.5.3 Combining Biased Correlated Classifiers
6.5.4 Discussion
6.6 Experimental Combining Results
6.6.1 Oceanic Data Set
6.6.2 Proben1 Benchmarks
6.7 Discussion
6.8 References

7. Variance Reduction via Noise and Bias Constraints
7.1 Introduction
7.2 Theoretical Considerations
7.3 The Bootstrap Ensemble with Noise Algorithm
7.4 Results on the Two-Spirals Problem
7.4.1 Problem Description
7.4.2 Feed-Forward Network Architecture
7.5 Discussion
7.6 References

8. A Comparison of Visual Cue Combination Models
8.1 Introduction
8.2 Stimulus
8.3 Tasks
8.4 Models of Cue Combination
8.5 Simulation Results
8.6 Summary
8.7 References

9. Model Selection of Combined Neural Nets for Speech Recognition
9.1 Introduction
9.2 The Acoustic Mapping
9.3 Network Architectures
9.3.1 Combining Networks for Acoustic Mapping
9.3.2 Linear Mappings
9.3.3 RBF-Linear Networks
9.3.4 Multilayer Perceptron Networks
9.4 Experimental Environment
9.4.1 System Architecture
9.4.2 Acoustic Analysis
9.4.3 The Speech Recogniser
9.4.4 Generation of the Training Set
9.4.5 Application 1: Datasets and Recognition Task
9.4.6 WER and MSE
9.5 Bootstrap Estimates and Model Selection
9.5.1 Bootstrap Error Estimates
9.5.2 The Bootstrap and Model Selection
9.5.3 The Number of Bootstrap Replicates
9.5.4 Bootstrap Estimates: Evaluation
9.6 Normalisation Results
9.7 Continuous Digit Recognition Over the Telephone Network
9.8 Conclusions
9.9 References

10. Self-Organised Modular Neural Networks for Encoding Data
10.1 Introduction
10.1.1 An Image Processing Problem
10.1.2 Vector Quantisers
10.1.3 Curved Manifolds
10.1.4 Structure of this Chapter
10.2 Basic Theoretical Framework
10.2.1 Objective Function
10.2.2 Stationarity Conditions
10.2.3 Joint Encoding
10.2.4 Factorial Encoding
10.3 Circular Manifold
10.3.1 2 Overlapping Posterior Probabilities
10.3.2 3 Overlapping Posterior Probabilities
10.4 Toroidal Manifold: Factorial Encoding
10.4.1 2 Overlapping Posterior Probabilities
10.4.2 3 Overlapping Posterior Probabilities
10.5 Asymptotic Results
10.6 Approximate the Posterior Probability
10.7 Joint Versus Factorial Encoding
10.8 Conclusions
10.9 References

11. Mixtures of X
11.1 Introduction
11.2 Mixtures of X
11.2.1 Mixtures of Distributions from the Exponential Family
11.2.2 Hidden Markov Models
11.2.3 Mixtures of Experts
11.2.4 Mixtures of Marginal Models
11.2.5 Mixtures of Cox Models
11.2.6 Mixtures of Factor Models
11.2.7 Mixtures of Trees
11.3 Summary
11.4 References