001476516 000__ 07436cam\\22007217a\4500
001476516 001__ 1476516
001476516 003__ OCoLC
001476516 005__ 20231003174424.0
001476516 006__ m\\\\\o\\d\\\\\\\\
001476516 007__ cr\cn\nnnunnun
001476516 008__ 230902s2023\\\\si\\\\\\ob\\\\000\0\eng\d
001476516 019__ $$a1395537849$$a1396897115
001476516 020__ $$a9789819932801$$q(electronic bk.)
001476516 020__ $$a9819932807$$q(electronic bk.)
001476516 020__ $$z9819932793
001476516 020__ $$z9789819932795
001476516 0247_ $$a10.1007/978-981-99-3280-1$$2doi
001476516 035__ $$aSP(OCoLC)1396064430
001476516 040__ $$aEBLCP$$beng$$cEBLCP$$dYDX$$dGW5XE$$dQGK
001476516 049__ $$aISEA
001476516 050_4 $$aTL152.8
001476516 08204 $$a629.046$$223/eng/20230914
001476516 1001_ $$aZhang, Xinyu.
001476516 24510 $$aMulti-sensor fusion for autonomous driving /$$cXinyu Zhang, Jun Li, Zhiwei Li, Huaping Liu, Mo Zhou, Li Wang, Zhenhong Zou.
001476516 260__ $$aSingapore :$$bSpringer,$$c2023.
001476516 300__ $$a1 online resource (237 p.)
001476516 336__ $$atext$$btxt$$2rdacontent
001476516 337__ $$acomputer$$bc$$2rdamedia
001476516 338__ $$aonline resource$$bcr$$2rdacarrier
001476516 504__ $$aIncludes bibliographical references.
001476516 5050_ $$aIntro -- Foreword -- Preface -- Acknowledgments -- Contents -- Part I Basic -- 1 Introduction -- 1.1 Autonomous Driving -- 1.2 Sensors -- 1.3 Perception -- 1.4 Multi-Sensor Fusion -- 1.5 Public Datasets -- 1.6 Challenges -- 1.7 Summary -- References -- 2 Overview of Data Fusion in Autonomous Driving Perception -- 2.1 A Brief Review of Deep Learning -- 2.2 Fusion in Depth Completion -- 2.3 Fusion in Dynamic Object Detection -- 2.4 Fusion in Stationary Road Object Detection -- 2.5 Fusion in Semantic Segmentation -- 2.6 Fusion in Object Tracking -- 2.7 Summary -- References -- Part II Method
001476516 5058_ $$a3 Multi-Sensor Calibration -- 3.1 Introduction -- 3.2 Line-Based Multi-Sensor Calibration -- 3.2.1 Methodology -- 3.2.2 Experiment -- 3.3 Challenges and Prospect -- 3.4 Summary -- References -- 4 Multi-Sensor Object Detection -- 4.1 Introduction -- 4.2 LiDAR-Image Fusion Object Detection -- 4.2.1 RI-Fusion Framework -- 4.2.1.1 Data Preprocessing -- 4.2.1.2 RI-Attention Network -- 4.2.1.3 Point Cloud Recovery -- 4.2.2 Experiment -- 4.2.2.1 Dataset and Evaluation Metrics -- 4.2.2.2 Implementation Details -- 4.2.2.3 Results -- 4.2.2.4 Ablation Studies -- 4.3 RaDAR-LiDAR Fusion Object Detection
001476516 5058_ $$a4.3.1 Preprocessing of 4D RaDAR Point Clouds -- 4.3.2 Interaction-Based Multimodal Fusion (IMMF) -- 4.3.3 Center-Based Multi-Scale Fusion (CMSF) -- 4.3.4 Experiments -- 4.3.4.1 Dataset -- 4.3.4.2 Implementation Details -- 4.3.4.3 Training -- 4.3.4.4 3D Object Detection on Astyx HiRes 2019 Dataset -- 4.3.4.5 Ablation Studies with M2-Fusion -- 4.3.4.6 Accuracy Comparison Experiments at Different Ranges -- 4.3.4.7 Parameter Comparison Experiment -- 4.3.4.8 Visualization Experiments -- 4.4 Challenges and Prospect -- 4.5 Summary -- References -- 5 Multi-Sensor Scene Segmentation -- 5.1 Background
001476516 5058_ $$a5.2 Introduction -- 5.3 Attention in Multimodal Fusion Segmentation -- 5.3.1 Network Architectures -- 5.3.2 Experiments and Discussion -- 5.4 Adaptive Strategies in Multimodal Fusion Segmentation -- 5.4.1 MIMF Network -- 5.4.2 Experiment -- 5.5 Video Multimodal Fusion Segmentation -- 5.5.1 Method -- 5.5.2 Experiments -- 5.6 Summary -- 5.7 Challenges and Prospect -- References -- 6 Multi-Sensor Fusion Localization -- 6.1 Introduction -- 6.2 GF-SLAM -- 6.2.1 Methodology -- 6.2.2 Experiment -- 6.3 Lifelong Localization in Semi-Dynamic Environment -- 6.3.1 Methodology -- 6.3.2 Experiment
001476516 5058_ $$a6.4 Challenges and Prospect -- 6.5 Summary -- References -- Part III Advance -- 7 OpenMPD: An Open Multimodal Perception Dataset -- 7.1 Introduction -- 7.2 Automated Driving-Related Datasets -- 7.2.1 Comprehensive Datasets -- 7.2.2 Characteristic Datasets -- 7.2.3 Our Dataset -- 7.3 OpenMPD -- 7.3.1 Platform Setup -- 7.3.2 Calibration -- 7.3.3 Collecting Route -- 7.3.4 Combine Annotation -- 7.4 Data Analysis -- 7.4.1 Complexity -- 7.4.2 Occlusion -- 7.4.3 Scale -- 7.4.4 Position -- 7.5 Experiment -- 7.5.1 Object Detection -- 7.5.2 Semantic Segmentation -- 7.6 Summary -- References
001476516 5058_ $$a8 Vehicle-Road Multi-View Interactive Data Fusion
001476516 506__ $$aAccess limited to authorized users.
001476516 520__ $$aAlthough sensor fusion is an essential prerequisite for autonomous driving, it entails a number of challenges and potential risks. For example, the commonly used deep fusion networks lack interpretability and robustness. To address these fundamental issues, this book introduces the mechanism of deep fusion models from the perspective of uncertainty and models the initial risks in order to create a robust fusion architecture. This book reviews the multi-sensor data fusion methods applied in autonomous driving, and the main body is divided into three parts: Basic, Method, and Advance. Starting from the mechanism of data fusion, it comprehensively reviews the development of automatic perception technology and data fusion technology, and gives a broad overview of various perception tasks based on multimodal data fusion. The book then proposes a series of innovative algorithms for various autonomous driving perception tasks to effectively improve the accuracy and robustness of autonomous driving-related tasks and to provide ideas for solving the challenges in multi-sensor fusion methods. Furthermore, to transition from technical research to intelligent connected collaboration applications, it proposes a series of exploratory topics such as practical fusion datasets, vehicle-road collaboration, and fusion mechanisms. In contrast to the existing literature on data fusion and autonomous driving, this book focuses more on deep fusion methods for perception-related tasks, emphasizes the theoretical explanation of the fusion methods, and fully considers the relevant scenarios in engineering practice. Helping readers acquire an in-depth understanding of fusion methods and theories in autonomous driving, it can be used as a textbook for graduate students and scholars in related fields or as a reference guide for engineers who wish to apply deep fusion methods.
001476516 588__ $$aOnline resource; title from PDF title page (SpringerLink, viewed September 14, 2023).
001476516 650_0 $$aAutomated vehicles.
001476516 650_0 $$aMultisensor data fusion.
001476516 655_0 $$aElectronic books.
001476516 7001_ $$aLi, Jun.
001476516 7001_ $$aLi, Zhiwei.
001476516 7001_ $$aLiu, Huaping,$$d1976-
001476516 7001_ $$aZhou, Mo.
001476516 7001_ $$aWang, Li.
001476516 7001_ $$aZou, Zhenhong.
001476516 77608 $$iPrint version:$$aZhang, Xinyu$$tMulti-Sensor Fusion for Autonomous Driving$$dSingapore : Springer,c2023$$z9789819932795
001476516 852__ $$bebk
001476516 85640 $$3Springer Nature$$uhttps://univsouthin.idm.oclc.org/login?url=https://link.springer.com/10.1007/978-981-99-3280-1$$zOnline Access$$91397441.1
001476516 909CO $$ooai:library.usi.edu:1476516$$pGLOBAL_SET
001476516 980__ $$aBIB
001476516 980__ $$aEBOOK
001476516 982__ $$aEbook
001476516 983__ $$aOnline
001476516 994__ $$a92$$bISE