Contents

Intro
Preface
Contents
Acronyms
1 Introduction and Motivation
  1.1 Introduction to Neural Networks
    1.1.1 Brief History
    1.1.2 Tasks Where Neural Networks Succeed
  1.2 Theoretical Contributions to Neural Networks
    1.2.1 Universal Approximation Properties
    1.2.2 Vanishing and Exploding Gradients
    1.2.3 Wasserstein GAN
  1.3 Mathematical Representations
  1.4 Book Layout
  References
2 Mathematical Preliminaries
  2.1 Linear Maps, Bilinear Maps, and Adjoints
  2.2 Derivatives
    2.2.1 First Derivatives
    2.2.2 Second Derivatives
  2.3 Parameter-Dependent Maps
    2.3.1 First Derivatives
    2.3.2 Higher-Order Derivatives
  2.4 Elementwise Functions
    2.4.1 Hadamard Product
    2.4.2 Derivatives of Elementwise Functions
    2.4.3 The Softmax and Elementwise Log Functions
  2.5 Conclusion
  References
3 Generic Representation of Neural Networks
  3.1 Neural Network Formulation
  3.2 Loss Functions and Gradient Descent
    3.2.1 Regression
    3.2.2 Classification
    3.2.3 Backpropagation
    3.2.4 Gradient Descent Step Algorithm
  3.3 Higher-Order Loss Function
    3.3.1 Gradient Descent Step Algorithm
  3.4 Conclusion
  References
4 Specific Network Descriptions
  4.1 Multilayer Perceptron
    4.1.1 Formulation
    4.1.2 Single-Layer Derivatives
    4.1.3 Loss Functions and Gradient Descent
  4.2 Convolutional Neural Networks
    4.2.1 Single-Layer Formulation
      Cropping and Embedding Operators
      Convolution Operator
      Max-Pooling Operator
      The Layerwise Function
    4.2.2 Multiple Layers
    4.2.3 Single-Layer Derivatives
    4.2.4 Gradient Descent Step Algorithm
  4.3 Deep Auto-Encoder
    4.3.1 Weight Sharing
    4.3.2 Single-Layer Formulation
    4.3.3 Single-Layer Derivatives
    4.3.4 Loss Functions and Gradient Descent
  4.4 Conclusion
  References
5 Recurrent Neural Networks
  5.1 Generic RNN Formulation
    5.1.1 Sequence Data
    5.1.2 Hidden States, Parameters, and Forward Propagation
    5.1.3 Prediction and Loss Functions
    5.1.4 Loss Function Gradients
      Prediction Parameters
      Real-Time Recurrent Learning
      Backpropagation Through Time
  5.2 Vanilla RNNs
    5.2.1 Formulation
    5.2.2 Single-Layer Derivatives
    5.2.3 Backpropagation Through Time
    5.2.4 Real-Time Recurrent Learning
      Evolution Equation
      Loss Function Derivatives
      Gradient Descent Step Algorithm
  5.3 RNN Variants
    5.3.1 Gated RNNs
    5.3.2 Bidirectional RNNs
    5.3.3 Deep RNNs
  5.4 Conclusion
  References
6 Conclusion and Future Work
  References
Glossary
