TY - BOOK
AU - Jiang, Jiawei
AU - Cui, Bin
AU - Zhang, Ce
TI - Distributed machine learning and gradient optimization
PY - 2022
CY - Singapore
PB - Springer
SN - 9789811634208
SN - 9811634203
DO - 10.1007/978-981-16-3420-8
CN - Q325.5
ID - 1444772
KW - Machine learning
KW - Mathematical optimization
KW - Apprentissage automatique
KW - Optimisation mathématique
UR - https://univsouthin.idm.oclc.org/login?url=https://link.springer.com/10.1007/978-981-16-3420-8
AB - This book presents the state of the art in distributed machine learning algorithms based on gradient optimization methods. In the big data era, large-scale datasets pose enormous challenges for existing machine learning systems. Implementing machine learning algorithms in a distributed environment has therefore become a key technology, and recent research has shown gradient-based iterative optimization to be an effective solution. Focusing on methods that speed up large-scale gradient optimization through both algorithmic improvements and careful system implementation, the book introduces three essential techniques for designing a gradient optimization algorithm to train a distributed machine learning model: parallel strategy, data compression, and synchronization protocol. Written in a tutorial style, it covers a range of topics, from fundamental knowledge to a number of carefully designed algorithms and systems for distributed machine learning. It will appeal to a broad audience in the fields of machine learning, artificial intelligence, big data, and database management.
ER -