Details

Preface; Origin; Scope; Organization; Audience; Acknowledgements; Contents; Acronyms

1 Polar Codes; 1.1 Construction; 1.2 Tree Representation; 1.3 Systematic Coding; 1.4 Successive-Cancellation Decoding; 1.5 Simplified Successive-Cancellation Decoding; 1.5.1 Rate-0 Nodes; 1.5.2 Rate-1 Nodes; 1.5.3 Rate-R Nodes; 1.6 Fast-SSC Decoding; 1.6.1 Repetition Codes; 1.6.2 SPC Codes; 1.6.3 Repetition-SPC Codes; 1.6.4 Other Operations; 1.7 Other SC-Based Decoding Algorithms; 1.7.1 ML-SSC Decoding; 1.7.2 Hybrid ML-SC Decoding; 1.8 Other Decoding Algorithms; 1.8.1 Belief-Propagation Decoding; 1.8.2 List-Based Decoding; 1.9 SC-Based Decoder Hardware Implementations; 1.9.1 Processing Element for SC Decoding; 1.9.2 Semi-Parallel Decoder; 1.9.3 Two-Phase Decoder; 1.9.4 Processor-Like Decoder or the Original Fast-SSC Decoder; 1.9.5 Implementation Results

2 Fast Low-Complexity Hardware Decoders for Low-Rate Polar Codes; 2.1 Introduction; 2.2 Altering the Code Construction; 2.2.1 Original Construction; 2.2.2 Altered Polar Code Construction; 2.2.3 Proposed Altered Construction; 2.2.3.1 Human-Guided Criteria; 2.2.3.2 Example Results; 2.3 New Constituent Decoders; 2.4 Implementation; 2.4.1 Quantization; 2.4.2 Rep1 Node; 2.4.3 High-Level Architecture; 2.4.4 Processing Unit or Processor; 2.5 Results; 2.5.1 Verification Methodology; 2.5.2 Comparison with State-of-the-Art Decoders; 2.6 Conclusion

3 Low-Latency Software Polar Decoders; 3.1 Introduction; 3.2 Implementation on x86 Processors; 3.2.1 Instruction-Based Decoder; 3.2.1.1 Using Fixed-Point Numbers; 3.2.1.2 Vectorizing the Decoding of Constituent Codes; 3.2.1.3 Data Representation; 3.2.1.4 Architecture-Specific Optimizations; 3.2.1.5 Implementation Comparison; 3.2.2 Unrolled Decoder; 3.2.2.1 Generating an Unrolled Decoder; 3.2.2.2 Eliminating Superfluous Operations on β-Values; 3.2.2.3 Improved Layout of the α-Memory; 3.2.2.4 Compile-Time Specialization; 3.2.2.5 Architecture-Specific Optimizations; 3.2.2.6 Memory Footprint; 3.2.2.7 Implementation Comparison; 3.3 Implementation on Embedded Processors; 3.4 Implementation on Graphical Processing Units; 3.4.1 Overview of the GPU Architecture and Terminology; 3.4.2 Choosing an Appropriate Number of Threads per Block; 3.4.3 Choosing an Appropriate Number of Blocks per Kernel; 3.4.4 On the Constituent Codes Implemented; 3.4.5 Shared Memory and Memory Coalescing; 3.4.6 Asynchronous Memory Transfers and Multiple Streams; 3.4.7 On the Use of Fixed-Point Numbers on a GPU; 3.4.8 Results; 3.5 Energy Consumption Comparison; 3.6 Further Discussion; 3.6.1 On the Relevance of the Instruction-Based Decoders; 3.6.2 On the Relevance of Software Decoders in Comparison to Hardware Decoders; 3.6.3 Comparison with LDPC Codes; 3.7 Conclusion

4 Unrolled Hardware Architectures for Polar Decoders; 4.1 Introduction; 4.2 State-of-the-Art Architectures with Implementations; 4.3 Architecture, Operations and Processing Nodes
