Contents
Foreword
Preface
Contents
Acronyms
Symbols

1 Introduction
1.1 Automatic Speech Recognition: A Bridge for Better Communication
1.1.1 Human–Human Communication
1.1.2 Human–Machine Communication
1.2 Basic Architecture of ASR Systems
1.3 Book Organization
1.3.1 Part I: Conventional Acoustic Models
1.3.2 Part II: Deep Neural Networks
1.3.3 Part III: DNN-HMM Hybrid Systems for ASR
1.3.4 Part IV: Representation Learning in Deep Neural Networks
1.3.5 Part V: Advanced Deep Models
References

Part I: Conventional Acoustic Models

2 Gaussian Mixture Models
2.1 Random Variables
2.2 Gaussian and Gaussian-Mixture Random Variables
2.3 Parameter Estimation
2.4 Mixture of Gaussians as a Model for the Distribution of Speech Features
References

3 Hidden Markov Models and the Variants
3.1 Introduction
3.2 Markov Chains
3.3 Hidden Markov Sequences and Models
3.3.1 Characterization of a Hidden Markov Model
3.3.2 Simulation of a Hidden Markov Model
3.3.3 Likelihood Evaluation of a Hidden Markov Model
3.3.4 An Algorithm for Efficient Likelihood Evaluation
3.3.5 Proofs of the Forward and Backward Recursions
3.4 EM Algorithm and Its Application to Learning HMM Parameters
3.4.1 Introduction to EM Algorithm
3.4.2 Applying EM to Learning the HMM – Baum-Welch Algorithm
3.5 Viterbi Algorithm for Decoding HMM State Sequences
3.5.1 Dynamic Programming and Viterbi Algorithm
3.5.2 Dynamic Programming for Decoding HMM States
3.6 The HMM and Variants for Generative Speech Modeling and Recognition
3.6.1 GMM-HMMs for Speech Modeling and Recognition
3.6.2 Trajectory and Hidden Dynamic Models for Speech Modeling and Recognition
3.6.3 The Speech Recognition Problem Using Generative Models of HMM and Its Variants
References

Part II: Deep Neural Networks

4 Deep Neural Networks
4.1 The Deep Neural Network Architecture
4.2 Parameter Estimation with Error Backpropagation
4.2.1 Training Criteria
4.2.2 Training Algorithms
4.3 Practical Considerations
4.3.1 Data Preprocessing
4.3.2 Model Initialization
4.3.3 Weight Decay
4.3.4 Dropout
4.3.5 Batch Size Selection
4.3.6 Sample Randomization
4.3.7 Momentum
4.3.8 Learning Rate and Stopping Criterion
4.3.9 Network Architecture
4.3.10 Reproducibility and Restartability
References

5 Advanced Model Initialization Techniques
5.1 Restricted Boltzmann Machines
5.1.1 Properties of RBMs
5.1.2 RBM Parameter Learning
5.2 Deep Belief Network Pretraining
5.3 Pretraining with Denoising Autoencoder
5.4 Discriminative Pretraining
5.5 Hybrid Pretraining
5.6 Dropout Pretraining
References

Part III: Deep Neural Network-Hidden Markov Model Hybrid Systems for Automatic Speech Recognition

6 Deep Neural Network-Hidden Markov Model Hybrid Systems
6.1 DNN-HMM Hybrid Systems
6.1.1 Architecture
6.1.2 Decoding with CD-DNN-HMM
