Intro
Preface
Contents
Artificial Intelligence Accelerators
1 Introduction
1.1 Introduction to Artificial Intelligence (AI)
1.1.1 AI Applications
1.1.2 AI Algorithms
1.2 Hardware Accelerators
2 Requirements of AI Accelerators
2.1 Hardware Accelerator Designs
2.2 Domain-Specific Accelerators
2.3 Performance Metrics in Accelerators
2.3.1 Instructions Per Second (IPS)
2.3.2 Floating Point Operations Per Second (FLOPS, flops, or flop/s)
2.3.3 Trillion/Tera of Operations Per Second (TOPS)
2.3.4 Throughput Per Cost (Throughput/$)
2.4 Key Metrics and Design Objectives
3 Classifications of AI Accelerators
4 Organization of this Book
5 Popular Design Approaches in AI Acceleration
6 Bottleneck of AI Accelerator and In-Memory Processing
7 A Few State-of-the-Art AI Accelerators
8 Conclusions
References
AI Accelerators for Standalone Computer
1 Introduction to Standalone Compute
2 Hardware Accelerators for Standalone Compute
2.1 Inference and Training of DNNs
2.2 Accelerating DNN Computation
2.3 Considerations in Hardware Design
2.4 Deep Learning Frameworks
3 Hardware Accelerators in GPU
3.1 History and Overview
3.2 GPU Architecture
3.3 GPU Acceleration Techniques
3.4 CUDA-Related Libraries
4 Hardware Accelerators in NPU
4.1 History and Overview: Hardware
4.2 Standalone Accelerating System Characteristics
4.3 Architectures of Hardware Accelerator in NPU
4.4 SOTA Architectures
5 Summary
References
AI Accelerators for Cloud and Server Applications
1 Introduction
2 Background
3 Hardware Accelerators in Clouds
4 Hardware Accelerators in Data Centers
4.1 Design of HW Accelerator for Data Centers
4.1.1 Batch Processing Applications
4.1.2 Streaming Processing Applications
4.2 Design Consideration for HW Accelerators in the Data Center
4.2.1 HW Accelerator Architecture
4.2.2 Programmable HW Accelerators
4.2.3 AI Design Ecosystem
4.2.4 Hardware Accelerator IPs
4.2.5 Energy and Power Efficiency
5 Heterogeneous Parallel Architectures in Data Centers and Cloud
5.1 Heterogeneous Computing Architectures in Data Centers and Cloud
6 Hardware Accelerators for Distributed In-Network and Edge Computing
6.1 HW Accelerator Model for In-Network Computing
6.2 HW Accelerator Model for Edge Computing
7 Infrastructure for Deploying FPGAs
8 Infrastructure for Deploying ASIC
8.1 Tensor Processing Unit (TPU) Accelerators
8.2 Cloud TPU
8.3 Edge TPU
9 SOTA Architectures for Cloud and Edge
9.1 Advances in Cloud and Edge Accelerator
9.1.1 Cloud TPU System Architecture
9.1.2 Cloud TPU VM Architecture
9.2 Staggering Cost of Training SOTA AI Models
10 Security and Privacy Issues
11 Summary
References
Overviewing AI-Dedicated Hardware for On-Device AI in Smartphones
1 Introduction