
Hands-On Mathematics for Deep Learning

Build a solid mathematical foundation for training efficient deep neural networks
eBook (purchase to own) list price: ₩23,000
Sale price: ₩23,000
About the Book

▶What You Will Learn
- Understand the key mathematical concepts for building neural network models
- Discover core multivariable calculus concepts
- Improve the performance of deep learning models using optimization techniques
- Cover optimization algorithms, from basic stochastic gradient descent (SGD) to the advanced Adam optimizer
- Understand computational graphs and their importance in DL
- Explore the backpropagation algorithm to reduce output error
- Cover DL algorithms such as convolutional neural networks (CNNs), sequence models, and generative adversarial networks (GANs)

▶Key Features
- Understand linear algebra, calculus, gradient algorithms, and other concepts essential for training deep neural networks
- Learn the mathematical concepts needed to understand how deep learning models function
- Use deep learning for solving problems related to vision, image, text, and sequence applications

▶Who This Book Is For
This book is for data scientists, machine learning developers, aspiring deep learning developers, or anyone who wants to understand the foundation of deep learning by learning the math behind it. Working knowledge of the Python programming language and machine learning basics is required.

▶What this book covers
- Chapter 1, Linear Algebra, will give you an understanding of the inner workings of linear algebra, which is essential for understanding how deep neural networks work. In particular, you will learn about multi-dimensional linear equations, how matrices are multiplied together, and various methods of decomposing/factorizing matrices. These concepts will be critical for developing an intuition for how forward propagation works in neural networks.
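
A minimal NumPy sketch (illustrative only, not the book's code) of two of these ideas, the matrix product behind forward propagation and a matrix factorization:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    x = np.array([1.0, -1.0])

    # Forward propagation through a linear layer is a matrix-vector product.
    y = A @ x

    # Singular value decomposition: A = U @ diag(S) @ Vt.
    U, S, Vt = np.linalg.svd(A)
    assert np.allclose(A, U @ np.diag(S) @ Vt)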

- Chapter 2, Vector Calculus, will cover all the main concepts of calculus, where you will start by learning the fundamentals of single variable calculus and build toward an understanding of multi-variable and ultimately vector calculus. The concepts of this chapter will help you better understand the math that underlies the training process of neural networks, particularly how backpropagation works.
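
As a small illustration of the calculus-to-backpropagation link (not the book's code), an analytic gradient can be checked against a finite-difference approximation:

    import numpy as np

    def f(x):
        return np.sum(x ** 2)      # f(x) = ||x||^2

    def grad_f(x):
        return 2 * x               # analytic gradient from basic calculus rules

    x = np.array([1.0, -2.0, 0.5])
    eps = 1e-6
    numeric = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                        for e in np.eye(len(x))])
    assert np.allclose(numeric, grad_f(x), atol=1e-4)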

- Chapter 3, Probability and Statistics, will teach you the essentials of both probability and statistics and how they are related to each other. In particular, the focus will be on understanding different types of distributions, the importance of the central limit theorem, and how estimations are made. This chapter is critical to developing an understanding of what exactly it is that neural networks are learning.
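
A quick demonstration of the central limit theorem mentioned here (an illustrative sketch, not taken from the book): means of many uniform samples cluster normally around the true mean:

    import numpy as np

    rng = np.random.default_rng(0)
    # 10,000 experiments, each averaging n = 50 draws from Uniform(0, 1).
    sample_means = rng.uniform(0, 1, size=(10_000, 50)).mean(axis=1)

    # Uniform(0, 1) has mean 0.5 and variance 1/12, so the sample mean
    # should be approximately N(0.5, 1/(12 * 50)).
    print(sample_means.mean())                          # ~0.5
    print(sample_means.std(), (1 / (12 * 50)) ** 0.5)   # both ~0.041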

- Chapter 4, Optimization, will explain what exactly optimization is and introduce several methods used in practice, such as least squares, gradient descent, Newton's method, and genetic algorithms. The methods covered in this chapter are essential to understanding how neural networks learn during their training phase.
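
For a flavor of one of these methods, here is a minimal gradient-descent sketch on a least-squares objective (illustrative only; the learning rate and data are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true

    w = np.zeros(3)
    for _ in range(500):
        grad = (2 / len(y)) * X.T @ (X @ w - y)  # gradient of the mean squared error
        w -= 0.1 * grad                          # step downhill

    print(w)  # approaches w_true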

- Chapter 5, Graph Theory, will teach you about graph theory, which is used to model relationships between objects, and will also help in your understanding of the different types of neural network architectures. Later in the book, the concepts from this chapter will be very useful for understanding how graph neural networks work.
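
A small sketch of the basic representation involved (not from the book): a graph stored as an adjacency matrix, the same object graph neural networks operate on:

    import numpy as np

    # Undirected 4-cycle with edges (0,1), (1,2), (2,3), (3,0).
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    A = np.zeros((4, 4))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0

    degrees = A.sum(axis=1)                  # node degrees from row sums
    walks2 = np.linalg.matrix_power(A, 2)    # entry (i, j) counts length-2 walks
    print(degrees, walks2[0, 2])             # all degrees are 2; two walks 0 -> 2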

- Chapter 6, Linear Neural Networks, will cover the most basic type of neural network and teach you how a model learns to find linear relationships from data through regression. You will also learn that this type of model has limitations, which is where the need for neural networks arises.
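
As a sketch of the regression idea (synthetic data, not the book's example), a linear model can be fit directly with least squares:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=100)
    y = 3.0 * x + 1.0 + rng.normal(scale=0.1, size=100)  # linear data plus noise

    X = np.column_stack([x, np.ones_like(x)])  # append a bias column
    slope, intercept = np.linalg.lstsq(X, y, rcond=None)[0]
    print(slope, intercept)  # close to the true values 3.0 and 1.0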

- Chapter 7, Feedforward Neural Networks, will show you how all the concepts covered in the previous chapters are brought together to form modern-day neural networks, including coverage of how they are structured, how and what they learn, and what makes them so powerful.
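
A minimal sketch of what this looks like in code (layer sizes are arbitrary, and this is not the book's implementation): one forward pass through a two-layer network with a ReLU nonlinearity:

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # layer 1: 3 inputs -> 4 hidden
    W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # layer 2: 4 hidden -> 2 outputs

    def forward(x):
        h = np.maximum(0, W1 @ x + b1)  # affine map followed by ReLU
        return W2 @ h + b2              # output layer

    print(forward(np.array([1.0, -0.5, 2.0])))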

- Chapter 8, Regularization, will show you the various methods of regularization, such as dropout and norm penalties, that are used extensively in practice to help our models to generalize to test data so that they work well once deployed.
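
Both regularizers named here fit in a few lines; the following is an illustrative sketch (the hyperparameters are placeholders, not the book's values):

    import numpy as np

    rng = np.random.default_rng(0)

    def l2_penalty(weights, lam=1e-3):
        # Norm penalty added to the loss to discourage large weights.
        return lam * sum(np.sum(W ** 2) for W in weights)

    def dropout(h, p=0.5, training=True):
        # Inverted dropout: zero units with probability p, rescale the rest.
        if not training:
            return h
        mask = rng.random(h.shape) > p
        return h * mask / (1 - p)

    print(dropout(np.ones(8)))  # about half the units zeroed, survivors doubled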

- Chapter 9, Convolutional Neural Networks, will explain CNNs, which are a variant of feedforward neural networks and are particularly effective for tasks related to computer vision, as well as time series analysis.
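
Written straight from the definition (a sketch, not the book's code), the core CNN operation is a small kernel slid over an input:

    import numpy as np

    def conv2d(image, kernel):
        # Valid 2D cross-correlation: no padding, stride 1.
        kh, kw = kernel.shape
        oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
        out = np.zeros((oh, ow))
        for i in range(oh):
            for j in range(ow):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    edge_kernel = np.array([[1.0, -1.0]])           # responds to horizontal change
    image = np.tile([0.0, 0.0, 1.0, 1.0], (4, 1))   # dark-to-bright edge
    print(conv2d(image, edge_kernel))               # nonzero only at the edge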

- Chapter 10, Recurrent Neural Networks, will explain RNNs, another variant of feedforward neural networks whose recurrent connections give them the ability to learn relationships in sequences such as those in time series and language.
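
A minimal sketch of the recurrence itself (sizes and weights are arbitrary, not the book's): the same weights are reused at every time step, and the hidden state carries context forward:

    import numpy as np

    rng = np.random.default_rng(0)
    Wx = rng.normal(scale=0.5, size=(4, 3))  # input -> hidden
    Wh = rng.normal(scale=0.5, size=(4, 4))  # hidden -> hidden (the recurrent part)

    h = np.zeros(4)
    sequence = rng.normal(size=(5, 3))       # five time steps of 3-d inputs
    for x_t in sequence:
        h = np.tanh(Wx @ x_t + Wh @ h)       # new state depends on the old state
    print(h)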

- Chapter 11, Attention Mechanisms, will show a relatively recent breakthrough in deep learning known as attention. Attention has led to the creation of transformer models, which have achieved phenomenal results in tasks related to natural language processing.
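
The operation at the center of this breakthrough is compact enough to sketch (illustrative only): scaled dot-product attention, softmax(QK^T / sqrt(d)) V:

    import numpy as np

    def attention(Q, K, V):
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                   # query-key similarities
        scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
        return weights @ V                              # weighted sum of values

    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(3, 8)) for _ in range(3))
    print(attention(Q, K, V).shape)  # (3, 8)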

- Chapter 12, Generative Models, is where the focus will be switched from neural networks that learn to predict classes given data to models that learn to synthetically create data. You will learn about various models, such as autoencoders, GANs, and flow-based networks.
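
The simplest of these, the autoencoder, can be sketched in a few lines (random untrained weights, purely illustrative): compress the input to a small code, reconstruct it, and measure the error that training would minimize:

    import numpy as np

    rng = np.random.default_rng(0)
    W_enc = rng.normal(scale=0.1, size=(2, 8))  # encoder: 8 -> 2
    W_dec = rng.normal(scale=0.1, size=(8, 2))  # decoder: 2 -> 8

    x = rng.normal(size=8)
    code = np.tanh(W_enc @ x)            # low-dimensional latent representation
    x_hat = W_dec @ code                 # reconstruction of the input
    loss = np.mean((x - x_hat) ** 2)     # reconstruction error
    print(code.shape, loss)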

- Chapter 13, Transfer and Meta Learning, will teach you about two separate but related concepts known as transfer learning and meta learning. Their respective goals are to transfer what one model has learned to another so that it can work on a similar task, and to create networks that can use existing knowledge to learn new tasks, or learn how to learn.

- Chapter 14, Geometric Deep Learning, will explain another relatively new concept in DL, which extends the power of deep neural networks from the Euclidean domain to the non-Euclidean domain.


Publisher's Review

▶ Preface
Most programmers and data scientists struggle with mathematics, having either overlooked or forgotten core mathematical concepts. This book uses Python libraries to help you understand the math required to build deep learning (DL) models.

You'll begin by learning about core mathematical and modern computational techniques used to design and implement DL algorithms. This book will cover essential topics, such as linear algebra, eigenvalues and eigenvectors, singular value decomposition (SVD), and gradient algorithms, to help you understand how to train deep neural networks. Later chapters focus on important neural networks, such as the linear neural network and multilayer perceptrons, with a primary focus on helping you learn how each model works. As you advance, you will delve into the math used for regularization, multi-layered DL, forward propagation, optimization, and backpropagation techniques to understand what it takes to build full-fledged DL models. Finally, you'll explore CNN, recurrent neural network (RNN), and GAN models and their applications.

By the end of this book, you'll have built a strong foundation in neural networks and DL mathematical concepts, which will help you to confidently research and build custom models in DL.


About the Author

▶About the Author
- Jay Dawani
Jay Dawani is a former professional swimmer turned mathematician and computer scientist. He is also a Forbes 30 Under 30 Fellow. At present, he is the Director of Artificial Intelligence at Geometric Energy Corporation (NATO CAGE) and the CEO of Lemurian Labs, a startup he founded that is developing the next generation of autonomy, intelligent process automation, and driver intelligence. Previously, he was the technology and R&D advisor to Spacebit Capital. He has spent the last three years researching at the frontiers of AI, with a focus on reinforcement learning, open-ended learning, deep learning, quantum machine learning, human-machine interaction, multi-agent and complex systems, and artificial general intelligence.

Table of Contents

▶TABLE of CONTENTS
▷ Section 1: Essential Mathematics for Deep Learning
1. Linear Algebra
2. Vector Calculus
3. Probability and Statistics
4. Optimization
5. Graph Theory
▷ Section 2: Essential Neural Networks
6. Linear Neural Networks
7. Feedforward Neural Networks
8. Regularization
9. Convolutional Neural Networks
10. Recurrent Neural Networks
▷ Section 3: Advanced Deep Learning Concepts Simplified
11. Attention Mechanisms
12. Generative Models
13. Transfer and Meta Learning
14. Geometric Deep Learning

