001461295 000__ 06126cam\a2200685\i\4500 001461295 001__ 1461295 001461295 003__ OCoLC 001461295 005__ 20230503003345.0 001461295 006__ m\\\\\o\\d\\\\\\\\ 001461295 007__ cr\un\nnnunnun 001461295 008__ 230316s2023\\\\si\a\\\\ob\\\\000\0\eng\d 001461295 019__ $$a1372397495$$a1373231884 001461295 020__ $$a9789811989346$$q(electronic bk.) 001461295 020__ $$a9811989346$$q(electronic bk.) 001461295 020__ $$z9789811989339 001461295 020__ $$z9811989338 001461295 0247_ $$a10.1007/978-981-19-8934-6$$2doi 001461295 035__ $$aSP(OCoLC)1372322960 001461295 040__ $$aYDX$$beng$$erda$$epn$$cYDX$$dGW5XE$$dEBLCP$$dUKAHL$$dOCLCF 001461295 049__ $$aISEA 001461295 050_4 $$aTK5105.5 001461295 08204 $$a004.6$$223/eng/20230316 001461295 1001_ $$aWu, Hao,$$eauthor. 001461295 24510 $$aDynamic network representation based on latent factorization of tensors /$$cHao Wu, Xuke Wu, Xin Luo. 001461295 264_1 $$aSingapore :$$bSpringer,$$c[2023] 001461295 264_4 $$c©2023 001461295 300__ $$a1 online resource (viii, 80 pages) :$$billustrations (chiefly color). 001461295 336__ $$atext$$btxt$$2rdacontent 001461295 337__ $$acomputer$$bc$$2rdamedia 001461295 338__ $$aonline resource$$bcr$$2rdacarrier 001461295 4901_ $$aSpringerBriefs in computer science 001461295 504__ $$aIncludes bibliographical references. 001461295 5050_ $$aIntro -- Preface -- Contents -- Chapter 1: Introduction -- 1.1 Overview -- 1.2 Formulating a Dynamic Network into an HDI Tensor -- 1.3 Latent Factorization of Tensor -- 1.4 Book Organization -- References -- Chapter 2: Multiple Biases-Incorporated Latent Factorization of Tensors -- 2.1 Overview -- 2.2 MBLFT Model -- 2.2.1 Short-Term Bias -- 2.2.2 Preprocessing Bias -- 2.2.3 Long-Term Bias -- 2.2.4 Parameter Learning Via SGD -- 2.3 Performance Analysis of MBLFT Model -- 2.3.1 MBLFT Algorithm Design -- 2.3.2 Effect of Short-Term Bias -- 2.3.3 Effect of Preprocessing Bias 001461295 5058_ $$a2.3.4 Effect of Long-Term Bias -- 2.3.5 Comparison with State-of-the-Art Models -- 2.4 Summary -- References -- Chapter 3: PID-Incorporated Latent Factorization of Tensors -- 3.1 Overview -- 3.2 PLFT Model -- 3.2.1 A PID Controller -- 3.2.2 Objective Function -- 3.2.3 Parameter Learning Scheme -- 3.3 Performance Analysis of PLFT Model -- 3.3.1 PLFT Algorithm Design -- 3.3.2 Effects of Hyper-Parameters -- 3.3.3 Comparison with State-of-the-Art Models -- 3.4 Summary -- References -- Chapter 4: Diverse Biases Nonnegative Latent Factorization of Tensors -- 4.1 Overview -- 4.2 DBNT Model 001461295 5058_ $$a4.2.1 Extended Linear Biases -- 4.2.2 Preprocessing Bias -- 4.2.3 Parameter Learning Via SLF-NMU -- 4.3 Performance Analysis of DBNT Model -- 4.3.1 DBNT Algorithm Design -- 4.3.2 Effects of Biases -- 4.3.3 Comparison with State-of-the-Art Models -- 4.4 Summary -- References -- Chapter 5: ADMM-Based Nonnegative Latent Factorization of Tensors -- 5.1 Overview -- 5.2 ANLT Model -- 5.2.1 Objective Function -- 5.2.2 Learning Scheme -- 5.2.3 ADMM-Based Learning Sequence -- 5.3 Performance Analysis of ANLT Model -- 5.3.1 ANLT Algorithm Design -- 5.3.2 Comparison with State-of-the-Art Models 001461295 5058_ $$a5.4 Summary -- References -- Chapter 6: Perspectives and Conclusion -- 6.1 Perspectives -- 6.2 Conclusion -- References 001461295 506__ $$aAccess limited to authorized users. 001461295 520__ $$aA dynamic network is frequently encountered in various real industrial applications, such as the Internet of Things. 
It is composed of numerous nodes and large-scale dynamic real-time interactions among them, where each node indicates a specified entity, each directed link indicates a real-time interaction, and the strength of an interaction can be quantified as the weight of a link. As the involved nodes increase drastically, it becomes impossible to observe their full interactions at each time slot, making the resultant dynamic network High-Dimensional and Incomplete (HDI). Despite its HDI nature, an HDI dynamic network with directed and weighted links contains rich knowledge regarding the involved nodes' various behavior patterns. Therefore, it is essential to study how to build efficient and effective representation learning models for acquiring useful knowledge. In this book, we first model a dynamic network into an HDI tensor and present the basic latent factorization of tensors (LFT) model. Then, we propose four representative LFT-based network representation methods. The first method integrates the short-term bias, long-term bias, and preprocessing bias to precisely represent the volatility of network data. The second method utilizes a proportional-integral-derivative (PID) controller to construct an adjusted instance error and thereby achieve a higher convergence rate. The third method addresses the non-negativity of fluctuating network data by constraining latent features to be non-negative and incorporating the extended linear bias. The fourth method adopts an alternating direction method of multipliers (ADMM) framework to build a learning model that represents dynamic networks with high precision and efficiency. 001461295 588__ $$aOnline resource; title from PDF title page (SpringerLink, viewed March 16, 2023). 001461295 650_0 $$aComputer networks$$xMathematical models. 001461295 650_0 $$aCalculus of tensors. 001461295 655_0 $$aElectronic books. 001461295 7001_ $$aWu, Xuke,$$eauthor. 001461295 7001_ $$aLuo, Xin,$$eauthor. 001461295 77608 $$iPrint version: $$z9811989338$$z9789811989339$$w(OCoLC)1351730437 001461295 830_0 $$aSpringerBriefs in computer science. 001461295 852__ $$bebk 001461295 85640 $$3Springer Nature$$uhttps://univsouthin.idm.oclc.org/login?url=https://link.springer.com/10.1007/978-981-19-8934-6$$zOnline Access$$91397441.1 001461295 909CO $$ooai:library.usi.edu:1461295$$pGLOBAL_SET 001461295 980__ $$aBIB 001461295 980__ $$aEBOOK 001461295 982__ $$aEbook 001461295 983__ $$aOnline 001461295 994__ $$a92$$bISE
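
Note: the 520 abstract above describes a pipeline of casting a dynamic network as a sparse node x node x time tensor and learning low-rank latent factors from observed links only. The following is a minimal, self-contained Python sketch of that basic LFT-via-SGD idea; the dimensions, hyper-parameters, and synthetic data are illustrative assumptions, not the authors' implementation, which adds the bias, PID, non-negativity, and ADMM refinements the abstract lists.

import numpy as np

# Sketch (assumed, not from the book): rank-R CP-style latent
# factorization of an HDI (sparse) node x node x time tensor,
# trained by SGD on observed entries only.

rng = np.random.default_rng(0)
I = J = 50          # source and target nodes
K = 10              # time slots
R = 4               # latent feature dimension

# Synthetic observed entries: (i, j, k, weight) tuples of a sparse tensor.
obs = [(rng.integers(I), rng.integers(J), rng.integers(K), rng.random())
       for _ in range(2000)]

U = rng.random((I, R)) * 0.1   # source-node latent factors
V = rng.random((J, R)) * 0.1   # target-node latent factors
W = rng.random((K, R)) * 0.1   # time-slot latent factors

eta, lam = 0.05, 0.01          # learning rate, L2 regularization
for epoch in range(50):
    for i, j, k, y in obs:
        pred = np.sum(U[i] * V[j] * W[k])   # CP-style reconstruction
        err = y - pred                       # instance error on one entry
        # SGD update touching only the factors of the observed entry
        U[i] += eta * (err * V[j] * W[k] - lam * U[i])
        V[j] += eta * (err * U[i] * W[k] - lam * V[j])
        W[k] += eta * (err * U[i] * V[j] - lam * W[k])

rmse = np.sqrt(np.mean([(y - np.sum(U[i] * V[j] * W[k])) ** 2
                        for i, j, k, y in obs]))
print(f"training RMSE on observed entries: {rmse:.4f}")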