001453761 000__ 06629cam\a22005777i\4500 001453761 001__ 1453761 001453761 003__ OCoLC 001453761 005__ 20230314003443.0 001453761 006__ m\\\\\o\\d\\\\\\\\ 001453761 007__ cr\cn\nnnunnun 001453761 008__ 230106t20232023nyua\\\\ob\\\\001\0\eng\d 001453761 019__ $$a1356794111 001453761 020__ $$a9781484289310$$q(electronic bk.) 001453761 020__ $$a1484289315$$q(electronic bk.) 001453761 020__ $$z9781484289303 001453761 020__ $$z1484289307 001453761 0247_ $$a10.1007/978-1-4842-8931-0$$2doi 001453761 035__ $$aSP(OCoLC)1356976064 001453761 040__ $$aORMDA$$beng$$erda$$epn$$cORMDA$$dEBLCP$$dYDX$$dGW5XE$$dUKAHL 001453761 049__ $$aISEA 001453761 050_4 $$aQ325.5$$b.P37 2023 001453761 08204 $$a006.3/1$$223/eng/20230106 001453761 1001_ $$aPattanayak, Santanu,$$eauthor. 001453761 24510 $$aPro deep learning with TensorFlow 2.0 :$$ba mathematical approach to advanced artificial intelligence in Python /$$cSantanu Pattanayak. 001453761 250__ $$aSecond edition. 001453761 264_1 $$aNew York, NY :$$bApress,$$c[2023] 001453761 264_4 $$c©2023 001453761 300__ $$a1 online resource (667 pages) :$$billustrations 001453761 336__ $$atext$$btxt$$2rdacontent 001453761 337__ $$acomputer$$bc$$2rdamedia 001453761 338__ $$aonline resource$$bcr$$2rdacarrier 001453761 504__ $$aIncludes bibliographical references and index. 
001453761 5050_ $$aIntro -- Table of Contents -- About the Author -- About the Technical Reviewer -- Introduction -- Chapter 1: Mathematical Foundations -- Linear Algebra -- Vector -- Scalar -- Matrix -- Tensor -- Matrix Operations and Manipulations -- Addition of Two Matrices -- Subtraction of Two Matrices -- Product of Two Matrices -- Transpose of a Matrix -- Dot Product of Two Vectors -- Matrix Working on a Vector -- Linear Independence of Vectors -- Rank of a Matrix -- Identity Matrix or Operator -- Determinant of a Matrix -- Interpretation of Determinant -- Inverse of a Matrix -- Norm of a Vector 001453761 5058_ $$aPseudo-Inverse of a Matrix -- Unit Vector in the Direction of a Specific Vector -- Projection of a Vector in the Direction of Another Vector -- Eigen Vectors -- Characteristic Equation of a Matrix -- Power Iteration Method for Computing Eigen Vector -- Calculus -- Differentiation -- Gradient of a Function -- Successive Partial Derivatives -- Hessian Matrix of a Function -- Maxima and Minima of Functions -- Rules for Maxima and Minima for a Univariate Function -- Local Minima and Global Minima -- Positive Semi-definite and Positive Definite -- Convex Set -- Convex Function -- Non-convex Function 001453761 5058_ $$aMultivariate Convex and Non-convex Functions Examples -- Taylor Series -- Probability -- Unions, Intersection, and Conditional Probability -- Chain Rule of Probability for Intersection of Event -- Mutually Exclusive Events -- Independence of Events -- Conditional Independence of Events -- Bayes Rule -- Probability Mass Function -- Probability Density Function -- Expectation of a Random Variable -- Variance of a Random Variable -- Skewness and Kurtosis -- Covariance -- Correlation Coefficient -- Some Common Probability Distribution -- Uniform Distribution -- Normal Distribution 001453761 5058_ $$aMultivariate Normal Distribution -- Bernoulli Distribution -- Binomial Distribution -- Poisson Distribution -- Beta Distribution -- Dirichlet 
Distribution -- Gamma Distribution -- Likelihood Function -- Maximum Likelihood Estimate -- Hypothesis Testing and p Value -- Formulation of Machine-Learning Algorithm and Optimization Techniques -- Supervised Learning -- Linear Regression as a Supervised Learning Method -- Linear Regression Through Vector Space Approach -- Classification -- Hyperplanes and Linear Classifiers -- Unsupervised Learning -- Reinforcement Learning 001453761 5058_ $$aOptimization Techniques for Machine Learning -- Gradient Descent -- Gradient Descent for a Multivariate Cost Function -- Contour Plot and Contour Lines -- Steepest Descent -- Stochastic Gradient Descent -- Newton's Method -- Linear Curve -- Negative Curvature -- Positive Curvature -- Constrained Optimization Problem -- A Few Important Topics in Machine Learning -- Dimensionality-Reduction Methods -- Principal Component Analysis -- When Will PCA Be Useful in Data Reduction? -- How Do You Know How Much Variance Is Retained by the Selected Principal Components? -- Singular Value Decomposition 001453761 506__ $$aAccess limited to authorized users. 001453761 520__ $$aThis book builds upon the foundations established in its first edition, with updated chapters and the latest code implementations to bring it up to date with TensorFlow 2.0. Pro Deep Learning with TensorFlow 2.0 begins with the mathematical and core technical foundations of deep learning. Next, you will learn about convolutional neural networks, including new convolutional methods such as dilated convolution, depth-wise separable convolution, and their implementation. You'll then gain an understanding of natural language processing in advanced network architectures such as transformers and various attention mechanisms relevant to natural language processing and neural networks in general. 
As you progress through the book, you'll explore unsupervised learning frameworks that reflect the current state of deep learning methods, such as autoencoders and variational autoencoders. The final chapter covers the advanced topic of generative adversarial networks and their variants, such as cycle consistency GANs, and graph neural network techniques such as graph attention networks and GraphSAGE. Upon completing this book, you will understand the mathematical foundations and concepts of deep learning, and be able to use the prototypes demonstrated to build new deep learning applications. What You Will Learn: Understand full-stack deep learning using TensorFlow 2.0 -- Gain an understanding of the mathematical foundations of deep learning -- Deploy complex deep learning solutions in production using TensorFlow 2.0 -- Understand generative adversarial networks, graph attention networks, and GraphSAGE. Who This Book Is For: Data scientists and machine learning professionals, software developers, graduate students, and open source enthusiasts. 001453761 588__ $$aDescription based on online resource; title from digital title page (viewed on January 17, 2023). 001453761 63000 $$aTensorFlow (Electronic resource) 001453761 650_0 $$aMachine learning. 001453761 650_0 $$aArtificial intelligence. 001453761 655_0 $$aElectronic books. 001453761 77608 $$iPrint version: $$z1484289307$$z9781484289303$$w(OCoLC)1345459702 001453761 852__ $$bebk 001453761 85640 $$3Springer Nature$$uhttps://univsouthin.idm.oclc.org/login?url=https://link.springer.com/10.1007/978-1-4842-8931-0$$zOnline Access$$91397441.1 001453761 909CO $$ooai:library.usi.edu:1453761$$pGLOBAL_SET 001453761 980__ $$aBIB 001453761 980__ $$aEBOOK 001453761 982__ $$aEbook 001453761 983__ $$aOnline 001453761 994__ $$a92$$bISE