
Machine Learning Quick Reference

Quick and essential machine learning hacks for training smart data models

  • List price (ebook): 12,000 KRW
  • Sale price: 12,000 KRW
  • Published: 2019.01.31 (ebook)
  • TTS (read-aloud): supported
  • File: PDF, 283 pages, 15.2 MB
  • Supported on: PC viewer, PAPER
  • ISBN: 9781788831611
  • ECN: -

About the Book

▶Book Description
Machine learning makes it possible to learn about the unknown and gain hidden insights into your datasets by mastering many tools and techniques. This book guides you to do just that, in a very compact manner.

After giving a quick overview of what machine learning is all about, Machine Learning Quick Reference jumps right into its core algorithms and demonstrates how they can be applied to real-world scenarios. From model evaluation to performance optimization, this book introduces you to the best practices in machine learning. You will also look at more advanced aspects, such as training neural networks and working with different kinds of data, including text, time-series, and sequential data. Advanced methods and techniques such as causal inference and deep Gaussian processes are also covered.

By the end of this book, you will be able to train fast, accurate machine learning models and keep this guide at your fingertips as a point of reference.

▶What You Will Learn
⦁ Get a quick rundown of model selection, statistical modeling, and cross-validation
⦁ Choose the best machine learning algorithm to solve your problem
⦁ Explore kernel learning, neural networks, and time-series analysis
⦁ Train deep learning models and optimize them for maximum performance
⦁ Apply Bayesian techniques and sentiment analysis in your NLP solutions
⦁ Implement probabilistic graphical models and causal inferences
⦁ Measure and optimize the performance of your machine learning models

▶Key Features
⦁ Your guide to learning efficient machine learning processes from scratch
⦁ Explore expert techniques and hacks for a variety of machine learning concepts
⦁ Write effective code in R, Python, Scala, and Spark to solve all your machine learning problems

▶Who This Book Is For
If you're a machine learning practitioner, data scientist, machine learning developer, or engineer, this book will serve as a reference point in building machine learning solutions. You will also find this book useful if you're an intermediate machine learning developer or data scientist looking for a quick, handy reference to all the concepts of machine learning. You'll need some exposure to machine learning to get the best out of this book.

▶What this book covers
⦁ Chapter 1, Quantification of Learning, builds the foundation for later chapters. First, we are going to understand the meaning of a statistical model, and we'll also discuss Leo Breiman's thoughts on statistical modeling. Later, we will discuss curves and why they are so important: curve fitting, one of the typical ways to find associations between variables, is introduced in this chapter.
One of the steps in building a model is partitioning the data. We will discuss the reasoning behind this and examine an approach to carry it out. Building a model is more often than not a bumpy ride, and we run into several issues along the way. We frequently encounter overfitting and underfitting, for several reasons, and we need to understand why they occur and learn how to overcome them. We will also discuss how overfitting and underfitting are connected to bias and variance, with these concepts framed in terms of neural networks. Regularization strength is one of the hyperparameters that is an integral part of the model-building process, and we will understand why it is required. Cross-validation, model selection, and the 0.632+ bootstrap are also covered in this chapter, as they help data scientists fine-tune a model.
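The partitioning and cross-validation ideas summarized above fit in a few lines of Python. This is an illustrative sketch, not code from the book: a plain k-fold splitter that rotates a held-out fold through the data, so every sample is tested exactly once.

```python
import random

def k_fold_indices(n_samples, k=5, seed=42):
    """Shuffle sample indices and split them into k disjoint folds."""
    indices = list(range(n_samples))
    random.Random(seed).shuffle(indices)
    fold_size, remainder = divmod(n_samples, k)
    folds, start = [], 0
    for i in range(k):
        end = start + fold_size + (1 if i < remainder else 0)
        folds.append(indices[start:end])
        start = end
    return folds

def cross_validation_splits(n_samples, k=5):
    """Yield (train_indices, test_indices) pairs, one per fold."""
    folds = k_fold_indices(n_samples, k)
    for i in range(k):
        test = folds[i]
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        yield train, test

# Each of the 10 samples appears in exactly one test fold.
splits = list(cross_validation_splits(10, k=5))
```

Averaging a model's score over the k test folds gives a less noisy estimate of generalization error than a single train/test split.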

⦁ Chapter 2, Evaluating Kernel Learning, explains how support vector machines (SVMs) have been among the most sophisticated models and have grabbed a lot of attention in the areas of classification and regression. Practitioners still find them difficult to grasp because they involve a lot of mathematics; however, we have tried to keep the treatment simple without losing the mathematics, so that you can understand the tricks behind SVMs. We'll also look at the kernel trick, which took SVMs to another level by simplifying the computation to an extent, and we will study the different types of kernels and their usage.
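The kernel trick mentioned above can be made concrete with the RBF (Gaussian) kernel: rather than mapping points into an infinite-dimensional feature space explicitly, we only ever compute pairwise similarities. A minimal illustrative sketch (not the book's code):

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """K(x, y) = exp(-gamma * ||x - y||^2): an inner product in an implicit
    infinite-dimensional feature space, computed without ever mapping there."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def kernel_matrix(points, gamma=0.5):
    """Gram matrix of pairwise kernel values -- all an SVM solver needs."""
    return [[rbf_kernel(p, q, gamma) for q in points] for p in points]

points = [(0.0, 0.0), (1.0, 0.0), (3.0, 4.0)]
K = kernel_matrix(points)
```

The Gram matrix is symmetric with ones on the diagonal, and similarity decays with distance, which is why the RBF kernel behaves like a smooth nearest-neighbour notion of closeness.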

⦁ Chapter 3, Performance in Ensemble Learning, explains how to build models based on the concepts of bagging and boosting, which are ruling the world of hackathons. These concepts have led to the creation of many strong algorithms, such as random forest and gradient boosting. We will discuss each in detail with the help of a use case so that you can understand the difference between the two. An important part of this chapter also deals with the optimization of hyperparameters.
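Bagging as described above reduces to two steps: bootstrap-resample the training set and fit one learner per resample, then aggregate predictions by majority vote. A bare-bones illustrative sketch (not from the book), with trivial threshold "stumps" standing in for real decision trees:

```python
import random
from collections import Counter

def bootstrap_sample(data, rng):
    """Sample len(data) points *with replacement* -- the 'B' in bagging."""
    return [rng.choice(data) for _ in data]

def majority_vote(predictions):
    """Aggregate the ensemble's predictions for one sample."""
    return Counter(predictions).most_common(1)[0][0]

def bag_predict(x, models):
    return majority_vote([m(x) for m in models])

# Toy ensemble: threshold stumps trained on bootstrap resamples of 1-D data.
rng = random.Random(0)
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.8, 1), (0.9, 1), (1.0, 1)]

def train_stump(sample):
    threshold = sum(x for x, _ in sample) / len(sample)  # crude split point
    return lambda x: 0 if x < threshold else 1

models = [train_stump(bootstrap_sample(data, rng)) for _ in range(25)]
```

Each stump sees a slightly different resample and learns a slightly different threshold; averaging them out is what reduces the variance of the ensemble.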

⦁ Chapter 4, Training Neural Networks, covers neural networks, which have always been deemed black-box algorithms that take a lot of effort to understand. We have tried to unbox the complexities surrounding NNs, starting with how NNs are analogous to the human brain. This chapter also covers what parameters such as weights and biases are, and how an NN learns. An NN's learning process involves network initialization, a feedforward pass, and cost calculation; once a cost is calculated, backpropagation kicks off.
Next come the challenges in the model, such as exploding gradients, vanishing gradients, and overfitting. This chapter encompasses all such problems, helps us understand why they occur, and explains how to overcome them.
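The initialize → feed forward → compute cost → backpropagate loop described above fits in a few lines of NumPy. This is an illustrative single-hidden-layer network (not code from the book) trained on XOR data; the hyperparameters are arbitrary choices for the demo, and the point is to show the loop, not to be state of the art.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR is not linearly separable, so a hidden layer is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 1. Network initialization: random weights, zero biases.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for step in range(5000):
    # 2. Feedforward pass through the hidden layer to the output.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # 3. Cost calculation (mean squared error, for simplicity).
    losses.append(float(np.mean((out - y) ** 2)))
    # 4. Backpropagation: apply the chain rule layer by layer.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= 1.0 * h.T @ d_out
    b2 -= 1.0 * d_out.sum(axis=0)
    W1 -= 1.0 * X.T @ d_h
    b1 -= 1.0 * d_h.sum(axis=0)
```

Watching `losses` fall over the iterations is the simplest sanity check that the gradients are wired up correctly.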

⦁ Chapter 5, Time-Series Analysis, covers different time-series models for demand forecasting, be it stock prices, sales, or anything else; almost every industry runs into such use cases. There are multiple approaches to these problems, and we cover autoregressive models, ARMA, ARIMA, and others. We start with the concept of autoregression. Then comes stationarity, an important property for such models: this chapter examines what stationarity is and how we can detect it. Assessment of the model is covered too, and anomaly detection in econometrics is discussed at length with the help of a use case.
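The autoregression idea above is easy to make concrete: an AR(1) model predicts each value from the previous one, y_t = φ·y_{t-1} + ε_t, and φ can be estimated by ordinary least squares. A small illustrative sketch (not the book's code) that simulates a stationary series and recovers the coefficient:

```python
import random

def simulate_ar1(phi, n, seed=1, noise=0.1):
    """Generate y_t = phi * y_{t-1} + eps_t, stationary when |phi| < 1."""
    rng = random.Random(seed)
    y = [rng.gauss(0, 1)]
    for _ in range(n - 1):
        y.append(phi * y[-1] + rng.gauss(0, noise))
    return y

def fit_ar1(y):
    """Least-squares estimate of phi: regress y_t on y_{t-1} (no intercept)."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

series = simulate_ar1(phi=0.7, n=2000)
phi_hat = fit_ar1(series)
```

With |φ| < 1 the series forgets its starting point and fluctuates around a fixed mean and variance, which is exactly the stationarity property the chapter tests for.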

⦁ Chapter 6, Natural Language Processing, explains how natural language processing makes textual data talk. A number of algorithms make this work. We cannot work with textual data as it is; it needs to be vectorized and embedded, and this chapter covers various ways of doing this, such as TF-IDF and bag-of-words methods. We will also talk about how sentiment analysis can be done with the help of such approaches, and compare the results of the different methods. We then move on to topic modeling, wherein the prime motive is to extract the main topics from a corpus. Later, we will examine a use case and solve it with a Bayesian approach.
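The vectorization step described above can be shown with a tiny TF-IDF implementation (illustrative only, not from the book). A term that appears in every document carries no discriminative weight, which is exactly what the IDF factor encodes.

```python
import math
from collections import Counter

def tfidf(docs):
    """Return one {term: weight} vector per tokenized document.

    tf  = raw count of the term in the document
    idf = log(N / df), which is zero for a term present in every document
    """
    n_docs = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency counts each doc once
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log(n_docs / df[t]) for t in tf})
    return vectors

docs = [
    "the movie was great".split(),
    "the movie was terrible".split(),
    "the plot was great".split(),
]
vectors = tfidf(docs)
```

Stop-word-like terms ("the", "was") get zero weight automatically, while a rare, opinionated word like "terrible" is weighted up, which is why TF-IDF vectors feed sentiment classifiers so well.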

⦁ Chapter 7, Temporal and Sequential Pattern Discovery, focuses on why it is necessary to study frequent itemsets and how we can deal with them. We cover the use of the Apriori and Frequent Pattern Growth algorithms to uncover findings in transactional data.
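The core of Apriori as summarized above is: count item supports, prune items below a minimum support, and only then count candidate pairs built from the survivors. A compact illustrative sketch (not the book's implementation) covering the first two passes:

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support=2):
    """Apriori passes 1 and 2: frequent single items, then frequent pairs.

    The pruning step is the key idea: a pair can only be frequent if both
    of its items are frequent on their own (anti-monotonicity of support).
    """
    item_counts = Counter(item for t in transactions for item in set(t))
    frequent_items = {i for i, c in item_counts.items() if c >= min_support}

    pair_counts = Counter()
    for t in transactions:
        candidates = sorted(set(t) & frequent_items)
        for pair in combinations(candidates, 2):
            pair_counts[pair] += 1
    frequent_pairs = {p: c for p, c in pair_counts.items() if c >= min_support}
    return frequent_items, frequent_pairs

baskets = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "cola"},
]
items, pairs = frequent_itemsets(baskets, min_support=2)
```

"cola" is pruned in pass 1, so no pair containing it is ever counted; this pruning is what keeps Apriori tractable on large transactional datasets.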

⦁ Chapter 8, Probabilistic Graphical Models, covers Bayesian networks and how they are making a difference in machine learning. We will look at Bayesian networks (trees) constructed on conditional probability tables.
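A Bayesian network built from conditional probability tables can be illustrated with the classic rain/sprinkler/wet-grass example (the numbers below are hypothetical, not from the book). The joint probability factorizes along the graph: P(R, S, W) = P(R)·P(S|R)·P(W|R, S).

```python
# CPTs for the graph Rain -> Sprinkler, (Rain, Sprinkler) -> WetGrass.
P_RAIN = {True: 0.2, False: 0.8}
P_SPRINKLER = {True: {True: 0.01, False: 0.99},   # given rain
               False: {True: 0.40, False: 0.60}}  # given no rain
P_WET = {(True, True): 0.99, (True, False): 0.80,
         (False, True): 0.90, (False, False): 0.00}

def joint(rain, sprinkler, wet):
    """P(R, S, W) = P(R) * P(S | R) * P(W | R, S), read off the CPTs."""
    p_w = P_WET[(rain, sprinkler)]
    return P_RAIN[rain] * P_SPRINKLER[rain][sprinkler] * (p_w if wet else 1 - p_w)

def p_rain_given_wet():
    """Bayes' rule by brute-force enumeration over the hidden variable."""
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
    return num / den
```

Enumeration like this is exponential in the number of variables; the point of the graphical-model machinery the chapter covers is to exploit the factorization so inference stays tractable.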

⦁ Chapter 9, Selected Topics in Deep Learning, explains that as the world transitions from simple business analytics to deep learning, we have a lot to catch up on. This chapter explores weight initialization, layer formation, the calculation of cost, and backpropagation. Subsequently, we will introduce Hinton's capsule networks and look at how they work.

⦁ Chapter 10, Causal Inference, discusses algorithms that provide a directional view of causality in a time series. Stakeholders often ask about the causality behind a target variable, so we address it using the Granger causality model for time series, and we also discuss Bayesian techniques that enable us to reason about causality.

⦁ Chapter 11, Advanced Methods, explains that there are a number of state-of-the-art models in the pipeline that deserve a special mention, and this chapter should help you understand and apply them. We talk about independent component analysis and how it differs from principal component analysis. Subsequently, we discuss the Bayesian technique of multiple imputation and its importance. We will also get an understanding of self-organizing maps and why they are important. Lastly, we touch upon compressed sensing.
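The PCA side of the ICA-versus-PCA comparison above can be sketched quickly: PCA finds orthonormal directions of maximum variance via the SVD of the centered data, whereas ICA instead looks for statistically independent, generally non-orthogonal components. An illustrative NumPy sketch of PCA (not the book's code; the toy data is made up for the demo):

```python
import numpy as np

def pca(X, n_components):
    """PCA via SVD of the centered data matrix.

    Rows of `components` are orthonormal directions of decreasing variance;
    `variances` holds the variance captured along each one.
    """
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    variances = (S ** 2) / (len(X) - 1)
    return components, variances[:n_components]

rng = np.random.default_rng(0)
# Correlated 2-D data: variance concentrated along the (1, 0.5) direction.
latent = rng.normal(0, 1, (500, 1))
X = np.hstack([latent, 0.5 * latent]) + rng.normal(0, 0.1, (500, 2))
components, variances = pca(X, n_components=2)
```

The first component recovers the dominant (1, 0.5) direction; ICA would be the tool of choice when the latent sources are non-Gaussian and mixed, as in blind source separation.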

About the Author

⦁ Rahul Kumar
Rahul Kumar has more than 10 years of experience in the space of data science and artificial intelligence. His expertise lies in the machine learning and deep learning arena. He is known as a seasoned professional in the area of business consulting and business problem solving, fuelled by his proficiency in machine learning and deep learning. He has been associated with organizations such as Mercedes-Benz Research and Development (India), Fidelity Investments, and the Royal Bank of Scotland, among others, and has accumulated diverse exposure across industries such as BFSI, telecom, and automotive. Rahul has also published papers in IIM and IISc journals.

