Markov Models For Pattern Recognition: From Theory To Applications

by Gernot A. Fink

Hardcover | November 16, 2007

Pricing and Purchase Info

$116.95


About

This comprehensive introduction to the Markov modeling framework describes the underlying theoretical concepts of Markov models as used for sequential data, covering hidden Markov models and Markov chain models. It also presents the techniques needed to build successful systems for practical applications. In addition, the book demonstrates how the technology is actually used in the three main application areas of Markov-model-based pattern recognition: speech recognition, handwriting recognition, and the analysis of biological sequences. The book is suitable for experts as well as practitioners.
Gernot A. Fink earned his diploma in computer science from the University of Erlangen-Nuremberg, Erlangen, Germany, in 1991. He received a Ph.D. degree in computer science in 1995 and the venia legendi in applied computer science in 2002, both from Bielefeld University, Germany. Currently, he is professor for Pattern Recognition in Embe...
Title: Markov Models For Pattern Recognition: From Theory To Applications
Format: Hardcover
Dimensions: 260 pages, 9.25 × 6.1 × 0.1 in
Published: November 16, 2007
Publisher: Springer
Language: English

The following ISBNs are associated with this title:

ISBN-10: 3540717668

ISBN-13: 9783540717669


Table of Contents

1. Introduction
   1.1 Thematic Context
   1.2 Capabilities of Markov Models
   1.3 Goal and Structure
2. Application Areas
   2.1 Speech
   2.2 Handwriting
   2.3 Biological Sequences
   2.4 Outlook
Part I: Theory
3. Foundations of Mathematical Statistics
   3.1 Experiment, Event, and Probability
   3.2 Random Variables and Probability Distributions
   3.3 Parameters of Probability Distributions
   3.4 Normal Distributions and Mixture Density Models
   3.5 Stochastic Processes and Markov Chains
   3.6 Principles of Parameter Estimation
   3.7 Bibliographical Remarks
4. Vector Quantisation
   4.1 Definition
   4.2 Optimality
   4.3 Algorithms for Vector Quantiser Design (Lloyd, LBG, k-means)
   4.4 Estimation of Mixture Density Models
   4.5 Bibliographical Remarks
5. Hidden Markov Models
   5.1 Definition
   5.2 Modeling of Output Distributions
   5.3 Use Cases
   5.4 Notation
   5.5 Scoring (forward algorithm)
   5.6 Decoding (Viterbi algorithm)
   5.7 Parameter Estimation (forward-backward algorithm, Baum-Welch, Viterbi, and segmental k-means training)
   5.8 Model Variants
   5.9 Bibliographical Remarks
6. n-Gram Models
   6.1 Definition
   6.2 Use Cases
   6.3 Notation
   6.4 Scoring
   6.5 Parameter Estimation (discounting, interpolation, and backing-off)
   6.6 Model Variants (categorial models, long-distance dependencies)
   6.7 Bibliographical Remarks
Part II: Practical Aspects
7. Computations with Probabilities
   7.1 Logarithmic Probability Representation
   7.2 Flooring of Probabilities
   7.3 Codebook Evaluation in Tied-Mixture Models
   7.4 Likelihood Ratios
8. Configuration of Hidden Markov Models
   8.1 Model Topologies
   8.2 Sub-Model Units
   8.3 Compound Models
   8.4 Profile HMMs
   8.5 Modelling of Output Probability Densities
9. Robust Parameter Estimation
   9.1 Optimization of Feature Representations (principal component analysis, whitening, linear discriminant analysis)
   9.2 Tying (of model parameters, especially mixture tying)
   9.3 Parameter Initialization
10. Efficient Model Evaluation
   10.1 Efficient Decoding of Mixture Densities
   10.2 Beam Search
   10.3 Efficient Parameter Estimation (forward-backward pruning, segmental Baum-Welch, training of model hierarchies)
   10.4 Tree-Based Model Representations
11. Model Adaptation
   11.1 Foundations of Adaptation
   11.2 Adaptation of Hidden Markov Models (maximum-likelihood linear regression)
   11.3 Adaptation of n-Gram Models (cache models, dialog-step-dependent models, topic-based language models)
12. Integrated Search
   12.1 HMM Networks
   12.2 Multi-Pass Search Strategies
   12.3 Search-Space Copies (context- and time-based tree copying strategies, language model look-ahead)
   12.4 Time-Synchronous Integrated Decoding
Part III: Putting It All Together
13. Speech Recognition
   13.1 Application-Specific Processing (feature extraction, vocal tract length normalization, ...)
   13.2 Systems (e.g. BBN Byblos, SPHINX III, ...)
14. Text Recognition
   14.1 Application-Specific Processing (linearization of data representation for off-line applications, preprocessing, normalization, feature extraction)
   14.2 Systems for On-line Handwriting Recognition
   14.3 Systems for Off-line Handwriting Recognition
15. Analysis of Biological Sequences
   15.1 Representation of Biological Sequences
   15.2 Systems (HMMer, SAM, Meta-MEME)
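
To give a flavor of the material covered in Chapter 5 (Scoring and Decoding), the following minimal Python sketch implements Viterbi decoding for a discrete-output HMM. It is an illustration only, not taken from the book; the states, observations, and probabilities are invented toy values.

# Minimal Viterbi decoding sketch for a discrete-output HMM.
# All parameters below (states, observations, probabilities) are toy
# assumptions for illustration; they do not come from the book.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely state sequence for an observation sequence."""
    # delta[t][s]: probability of the best path ending in state s at time t
    delta = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    backptr = [{}]

    for t in range(1, len(obs)):
        delta.append({})
        backptr.append({})
        for s in states:
            # Best predecessor state for reaching s at time t
            best_prev = max(states, key=lambda p: delta[t - 1][p] * trans_p[p][s])
            delta[t][s] = delta[t - 1][best_prev] * trans_p[best_prev][s] * emit_p[s][obs[t]]
            backptr[t][s] = best_prev

    # Backtrack from the best final state
    last = max(states, key=lambda s: delta[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.insert(0, backptr[t][path[0]])
    return path


if __name__ == "__main__":
    states = ("Rainy", "Sunny")
    observations = ("walk", "shop", "clean")
    start_p = {"Rainy": 0.6, "Sunny": 0.4}
    trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
               "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
    emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
              "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
    print(viterbi(observations, states, start_p, trans_p, emit_p))
    # prints ['Sunny', 'Rainy', 'Rainy']

In practice (and as discussed in Part II of the book), such probabilities are kept in the logarithmic domain to avoid numerical underflow on long observation sequences; the sketch above omits this for brevity.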

Editorial Reviews

"The practice part makes the book unique among many other pattern recognition textbooks. It discusses implementation details that are often ignored in the literature, but are important in constructing a working system. . Overall, the book is well written and clear ... It is suited not to those who want to learn the basics of pattern recognition, but to those who want to learn the state of the art of speech, character, and DNA sequence recognition problems from the perspective of the practitioner and designer. . The depth and breadth of the treatment is right for the intent of the book."(T. Kubota, Lewisburg, PA, in: Computing Reviews, May 2009)