Intro
Preface
Acknowledgement
Author biography
Shajulin Benedict
Chapter 1 Deep learning for social good-an introduction
1.1 Deep learning-a subset of AI
1.2 History of deep learning
1.3 Trends-deep learning for social good
1.3.1 Increasing data, increasing machines
1.3.2 Increasing publications
1.3.3 International trends
1.4 Motivations
1.5 Deep learning for social good-a need
1.6 Intended audience
1.6.1 Students
1.6.2 Researchers
1.6.3 Practitioners/startup entrepreneurs
1.6.4 Government/smart city officials
1.7 Chapters and descriptions
1.7.1 About deep learning
1.7.2 Social-good applications
1.7.3 Computing architectures-base technologies
1.7.4 Convolutional neural network techniques
1.7.5 Object detection techniques and algorithms
1.7.6 Sentiment analysis-algorithms and frameworks
1.7.7 Autoencoders and variational autoencoders
1.7.8 Generative adversarial networks and disentangled mechanisms
1.7.9 Deep reinforcement learning architectures
1.7.10 Facial recognition and applications
1.7.11 Data security and platforms
1.7.12 Performance monitoring and analysis
1.7.13 Deep learning-future perspectives
1.8 Reading flow
References
Chapter 2 Applications for social good
2.1 Characteristics of social-good applications
2.2 Generic architecture-entities
2.2.1 User interface
2.2.2 Sensor connectivity
2.2.3 Hierarchical intelligence
2.2.4 Data security-immutability
2.2.5 Notification/visualization
2.3 Applications for social good
2.3.1 Economic forecasting
2.3.2 Personal assistants
2.3.3 Language assistance
2.3.4 Speech, text, and object recognition
2.3.5 Smart transportation
2.3.6 Wildlife conservation
2.3.7 Healthcare applications
2.4 Technologies and techniques
2.5 Technology-blockchain
2.5.1 Types of transaction
2.6 AI/machine learning/deep learning techniques
2.7 The Internet of things/sensor technology
2.7.1 The industrial IoT
2.7.2 The consumer IoT
2.7.3 The social IoT
2.7.4 The semantic IoT
2.7.5 The productization IoT
2.8 Robotic technology
2.9 Computing infrastructures-a needy technology
2.10 Security-related techniques
References
Chapter 3 Computing architectures-base technologies
3.1 History of computing
3.2 Types of computing
3.3 Hardware support for deep learning
3.3.1 CPUs versus GPUs
3.3.2 CPUs versus TPUs
3.4 Microcontrollers, microprocessors, and FPGAs
3.5 Cloud computing-an environment for deep learning
3.6 Virtualization-a base for cloud computing
3.6.1 Virtualization-an analogous example
3.6.2 Objectives of virtualization
3.6.3 VMs-comparison to physical machines
3.6.4 VMs-case studies
3.7 Hypervisors-impact on deep learning
3.7.1 Bare-metal hypervisor
3.7.2 Hosted hypervisors
3.7.3 Full virtualization
3.7.4 Paravirtualization
3.7.5 Hardware-assisted virtualization
3.8 Containers and Dockers
3.8.1 Docker instances
3.8.2 Docker building blocks
3.8.3 Docker storage characteristics
3.8.4 Docker working model
3.8.5 Docker tools
3.9 Cloud execution models
3.9.1 Serverless cloud execution model
3.9.2 Kubernetes solutions
3.9.3 DL-as-a-service
3.10 Programming deep learning tasks-libraries
3.10.1 Features of TensorFlow
3.10.2 TensorFlow components
3.11 Sensor-enabled data collection for DLs
3.11.1 Required mechanisms
3.11.2 Sensors to DL services-data connectivity
3.11.3 Application-layer protocols
3.11.4 Lower-layer protocols
3.12 Edge-level deep learning systems
3.12.1 About the ESP32
3.12.2 Programming ESP boards
References
Chapter 4 CNN techniques
4.1 CNNs-introduction
4.1.1 Analogy with human brains/eyes
4.1.2 Characteristics of the human brain
4.1.3 CNN principles-in a nutshell
4.1.4 Comparison between CNNs and ML
4.1.5 Advantages of CNNs
4.2 CNNs-nuts and bolts
4.2.1 Object recognition-the computer's perspective
4.2.2 Neurons and CNN connections
4.2.3 CNN building blocks
4.2.4 Pooling layers
4.2.5 Fully connected layers
4.3 Social-good applications-a CNN perspective
4.4 CNN use case-climate change problem
4.4.1 Reasons for climate change
4.4.2 Ways to reduce climate change
4.4.3 Forest fire prediction
4.4.4 TensorFlow-based CNN code snippets
4.4.5 TensorFlow-based single-perceptron code snippets
4.4.6 Scikit-learn-based extra-tree classifier
4.4.7 Scikit-learn-based K-neighbors classifier
4.4.8 Scikit-learn-based support vector machine
4.4.9 Scikit-learn-based logistic regression
4.4.10 Flood prediction
4.4.11 Scikit-learn-based stochastic gradient descent
4.4.12 Scikit-learn-based linear regression
4.4.13 Scikit-learn-based Bayesian regression
4.4.14 Scikit-learn-based ridge regression
4.4.15 Scikit-learn-based lasso regression
4.4.16 Scikit-learn-based elastic net regression
4.4.17 Scikit-learn-based LARS lasso regression
4.4.18 Scikit-learn-based online one-class SVM regression
4.4.19 Scikit-learn-based random forest regression
4.4.20 Scikit-learn-based multilayer perceptron
4.5 CNN challenges
4.5.1 Insufficient data
4.5.2 Low speed
4.5.3 Hidden layers
4.5.4 Missing coordinates
4.5.5 Inaccurate datasets
4.5.6 Black-box approach
4.5.7 Overfitting/underfitting problems
References
Chapter 5 Object detection techniques and algorithms
5.1 Computer vision-taxonomy
5.2 Object detection-objectives
5.2.1 Locating targets
5.2.2 Semantic representations
5.2.3 Robust algorithms
5.3 Object detection-challenges
5.4 Object detection-major steps or processes
5.4.1 Step 1-requirement identification
5.4.2 Step 2-image processing
5.4.3 Convolutions and object detection
5.5 Object detection methods
5.5.1 R-CNN
5.5.2 Fast R-CNN
5.5.3 Faster R-CNN
5.5.4 You Only Look Once
5.5.5 YOLO variants
5.6 Applications
5.6.1 Tracking
5.6.2 Geo-classification
5.6.3 Healthcare solutions
5.6.4 E-learning solutions
5.7 Exam proctoring-YOLOv5
5.7.1 Crucial applications
5.7.2 Requirements
5.8 Proctoring system-implementation stages
5.8.1 Interface
5.8.2 Screen recording
5.8.3 Image categorization
References
Chapter 6 Sentiment analysis-algorithms and frameworks
6.1 Sentiment analysis-an introduction
6.1.1 History-sentiment analysis
6.1.2 Trends and objectives
6.2 Levels and approaches
6.2.1 Levels of sentiment analysis
6.2.2 Approaches to and techniques used for sentiment analysis
6.2.3 Processing stages
6.3 Sentiment analysis-processes
6.4 Recommendation system-sentiment analysis
6.5 Movie recommendation-a case study
6.5.1 Convolutional neural networks-sentiments
6.5.2 Recurrent neural networks-sentiments
6.5.3 Long short-term memory-sentiments
6.6 Metrics
6.7 Tools and frameworks
6.7.1 The necessity for sentiment analysis tools and frameworks
6.7.2 HubSpot tool
6.7.3 Repustate sentiment analyzer
6.7.4 Lexanalytics
6.7.5 Critical Mention
6.7.6 Brandwatch
6.7.7 Social Searcher
6.7.8 MonkeyLearn
6.8 Sentiment analysis-sarcasm detection
6.8.1 News headline data set
6.8.2 Data processing using TensorFlow
6.8.3 Training using neural networks
6.8.4 Training using long short-term memory
References
Chapter 7 Autoencoders and variational autoencoders
7.1 Introduction-autoencoders
7.1.1 Advantages of compression
7.1.2 Unsupervised learning
7.1.3 Principal component analysis versus autoencoders
7.1.4 Autoencoders-transfer learning
7.1.5 Autoencoders versus traditional compression
7.2 Autoencoder architectures
7.2.1 Training phase
7.2.2 Loss functions
7.3 Types of autoencoder
7.3.1 Convolutional autoencoders
7.3.2 Sparse autoencoders
7.3.3 Deep autoencoders
7.3.4 Contractive autoencoders
7.3.5 Denoising autoencoders
7.3.6 Undercomplete autoencoders
7.3.7 Variational autoencoders
7.4 Applications of autoencoders
7.4.1 Image reconstruction
7.4.2 Image colorization
7.4.3 High-resolution image generation
7.4.4 Recommendation systems via feature extraction
7.4.5 Image compression
7.4.6 Image segmentation
7.5 Variational autoencoders
7.5.1 Variational autoencoder vectors
7.5.2 Variational autoencoder loss functions
7.5.3 A pictorial way of understanding the variational autoencoder
7.5.4 Variational autoencoder-use cases
7.6 Autoencoder implementation-code snippet explanation
7.6.1 Importing packages
7.6.2 Initialization
7.6.3 Layer definition
7.6.4 Feature extraction
7.6.5 Modeling and testing
References
Chapter 8 GANs and disentangled mechanisms
8.1 Introduction to GANs
8.2 Concept-generative and descriptive
8.2.1 Generative models
8.2.2 Discriminative models
8.3 Major steps involved
8.3.1 Load and prepare
8.3.2 Modeling
8.3.3 Loss and optimizers
8.3.4 Training step
8.3.5 Testing step
8.4 GAN architecture
8.4.1 Generators
8.4.2 Discriminator
8.5 Types of GAN
8.5.1 DCGANs
8.5.2 CGANs
8.5.3 ESRGAN
8.5.4 GFP GANs
8.6 StyleGAN
8.7 A simple implementation of a GAN
8.7.1 Importing libraries and data sets
8.7.2 Generator models
8.7.3 Discriminator models