Intro
Preface
Contents
1 Different Views of Interpretability
1.1 Introduction
1.2 Interpretability: In Praise of Transparent Models
1.2.1 What Happened?
1.2.2 What Will Happen?
1.2.3 What Shall Be Done to Make It Happen?
1.2.4 Patterns and Models
1.3 Generalizability and Interpretability with Industry 4.0 Implications
1.3.1 Introduction to Interpretable AI
1.3.2 A Wide Angle Perspective of Generalizability
1.3.3 Statistical Generalizability
1.4 Connections Between Interpretability in Machine Learning and Sensitivity Analysis of Model Outputs
1.4.1 Machine Learning and Uncertainty Quantification
1.4.2 Basics on Sensitivity Analysis and Its Main Settings
1.4.3 A Brief Taxonomy of Interpretability in Machine Learning
1.4.4 A Review of Sensitivity Analysis Powered Interpretability Methods
References
2 Model Interpretability, Explainability and Trust for Manufacturing 4.0
2.1 Manufacturing 4.0: Driving Trends for Data Mining
2.1.1 Process Monitoring in Manufacturing 4.0
2.1.2 Design of Experiments in Manufacturing 4.0
2.1.3 Increasing Trust in AI Models for Manufacturing 4.0: Interpretability, Explainability and Robustness
2.2 Additive Manufacturing as a Paradigmatic Example of Manufacturing 4.0
2.3 Increase Trust in Additive Manufacturing: Robust Functional Analysis of Variance in Video-Image Analysis
2.3.1 The RoFANOVA Approach
2.3.2 An Additive Manufacturing Application
References
3 Interpretability via Random Forests
3.1 Introduction
3.2 Interpretable Rule-Based Models
3.2.1 Literature Review
3.2.1.1 Definitions and Origins of Rule Models
3.2.1.2 Decision Trees
3.2.1.3 Tree-Based Rule Learning
3.2.1.4 Modern Rule Learning
3.2.2 SIRUS: Stable and Interpretable RUle Set
3.2.2.1 SIRUS Algorithm
3.2.2.2 Theoretical Analysis
3.2.2.3 Experiments
3.2.3 Discussion
3.3 Post-Processing of Black-Box Algorithms via Variable Importance
3.3.1 Literature Review
3.3.1.1 Model-Specific Variable Importance
3.3.1.2 Global Sensitivity Analysis
3.3.1.3 Local Interpretability
3.3.2 Sobol-MDA
3.3.2.1 Sobol-MDA Algorithm
3.3.2.2 Sobol-MDA Properties
3.3.2.3 Experiments
3.3.3 SHAFF: SHApley eFfects Estimates via Random Forests
3.3.3.1 SHAFF Algorithm
3.3.3.2 SHAFF Consistency
3.3.3.3 Experiments
3.3.4 Discussion
References
4 Interpretability in Generalized Additive Models
4.1 GAMs: A Basic Framework for Flexible Interpretable Regression
4.1.1 Flexibility Can Be Important
4.1.2 Making the Model Computable
4.1.3 Estimation and Inference
4.1.4 Checking, Effective Degrees of Freedom and Model Selection
4.1.5 GAM Computation with mgcv in R
4.1.6 Smooths of Several Predictors
4.1.7 Further Interpretable Structure
4.2 From GAM to GAMLSS: Interpretability for Model Building
