Machine Learning Quick Reference

Quick and essential machine learning hacks for training smart data models

E-book list price
12,000 KRW
Sale price
12,000 KRW
Publication info
  • E-book published 2019.01.31
Listening feature
TTS (text-to-speech) supported
File info
  • PDF
  • 283 pages
  • 15.2 MB
Supported environments
  • PC viewer
  • PAPER
ISBN
9781788831611
UCI
-

Book Information

▶Book Description
Machine learning makes it possible to learn about the unknowns and gain hidden insights into your datasets by mastering many tools and techniques. This book guides you to do just that in a very compact manner.

After giving a quick overview of what machine learning is all about, Machine Learning Quick Reference jumps right into its core algorithms and demonstrates how they can be applied to real-world scenarios. From model evaluation to optimizing model performance, this book will introduce you to the best practices in machine learning. Furthermore, you will also look at more advanced aspects such as training neural networks and working with different kinds of data, including text, time-series, and sequential data. Advanced methods and techniques, such as causal inference, deep Gaussian processes, and more, are also covered.

By the end of this book, you will be able to train fast, accurate machine learning models and will have a handy point of reference at your fingertips.

▶What You Will Learn
⦁ Get a quick rundown of model selection, statistical modeling, and cross-validation
⦁ Choose the best machine learning algorithm to solve your problem
⦁ Explore kernel learning, neural networks, and time-series analysis
⦁ Train deep learning models and optimize them for maximum performance
⦁ Briefly cover Bayesian techniques and sentiment analysis in your NLP solution
⦁ Implement probabilistic graphical models and causal inferences
⦁ Measure and optimize the performance of your machine learning models

▶Key Features
⦁ Your guide to learning efficient machine learning processes from scratch
⦁ Explore expert techniques and hacks for a variety of machine learning concepts
⦁ Write effective code in R, Python, Scala, and Spark to solve all your machine learning problems

▶Who This Book Is For
If you're a machine learning practitioner, data scientist, machine learning developer, or engineer, this book will serve as a reference point in building machine learning solutions. You will also find this book useful if you're an intermediate machine learning developer or data scientist looking for a quick, handy reference to all the concepts of machine learning. You'll need some exposure to machine learning to get the best out of this book.

▶What this book covers
⦁ Chapter 1, Quantification of Learning, builds the foundation for later chapters. First, we are going to understand the meaning of a statistical model. We'll also discuss Leo Breiman's views on statistical modeling. Later, we will discuss curves and why they are so important. Curve fitting, one of the typical ways to model the association between variables, is introduced in this chapter.
To build a model, one of the steps is to partition the data. We will discuss the reasoning behind this and examine an approach to carrying it out. Building a model is, more often than not, not a smooth ride, and we run into several issues. We often encounter overfitting and underfitting, for several reasons, and we need to understand why they occur and how to overcome them. We will also discuss how overfitting and underfitting are connected to bias and variance, and this chapter covers these concepts with respect to neural networks. Regularization is one of the hyperparameters that are an integral part of the model-building process; we will see why it is required. Cross-validation, model selection, and the 0.632+ bootstrap are also covered in this chapter, as they help data scientists fine-tune a model.
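
A minimal sketch of the data-partitioning, regularization, and cross-validation ideas described above, assuming Python with scikit-learn and a synthetic dataset (illustrative assumptions, not the book's own example):

```python
# Illustrative sketch only: scikit-learn, a synthetic dataset, and ridge
# regularization are assumptions, not the book's own code.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic data stands in for any tabular dataset.
X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

# Partition the data so the model is scored on observations it never saw.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# The regularization strength (alpha) is the kind of hyperparameter the
# chapter discusses: a larger alpha trades variance for bias.
for alpha in (0.01, 1.0, 100.0):
    model = Ridge(alpha=alpha)
    cv_scores = cross_val_score(model, X_train, y_train, cv=5)  # 5-fold CV
    model.fit(X_train, y_train)
    print(f"alpha={alpha:>6}: CV R^2={cv_scores.mean():.3f}, "
          f"test R^2={model.score(X_test, y_test):.3f}")
```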

⦁ Chapter 2, Evaluating Kernel Learning, explains how support vector machines (SVMs) have been among the most sophisticated models and have grabbed a lot of attention in the areas of classification and regression. Practitioners still find them difficult to grasp, as they involve a lot of mathematics, but we have tried to keep the treatment simple yet mathematically sound, so that you can understand the tricks of SVMs. We'll also look at the kernel trick, which took SVMs to another level by simplifying the computation to an extent. We will study the different types of kernel and their usage.
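
A minimal sketch of swapping kernels in an SVM, assuming Python with scikit-learn and the two-moons toy dataset (assumptions, not the book's example):

```python
# Illustrative sketch only: scikit-learn and the two-moons dataset are
# assumptions, not the book's own example.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The kernel trick: changing the kernel changes the implicit feature space
# without ever computing that space explicitly.
for kernel in ("linear", "poly", "rbf"):
    clf = SVC(kernel=kernel, C=1.0).fit(X_train, y_train)
    print(f"{kernel:>6} kernel accuracy: {clf.score(X_test, y_test):.3f}")
```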

⦁ Chapter 3, Performance in Ensemble Learning, explains how to build models based on the concepts of bagging and boosting, which are ruling the world of hackathons. We will discuss bagging and boosting in detail. They have led to the creation of many good algorithms, such as random forest and gradient boosting. We will discuss each in detail with the help of a use case so that you can understand the difference between these two. Also, an important part of this chapter deals with the optimization of hyperparameters.
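
A minimal bagging-versus-boosting sketch, assuming Python with scikit-learn, a synthetic classification dataset, and a small grid search standing in for the hyperparameter optimization step (none of this is the book's own use case):

```python
# Illustrative sketch only: scikit-learn and synthetic data are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagging: a random forest averages many decorrelated trees.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Boosting: gradient boosting fits trees sequentially to the remaining errors;
# the grid search illustrates hyperparameter optimization.
grid = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"learning_rate": [0.05, 0.1], "n_estimators": [100, 200]},
    cv=3,
).fit(X_train, y_train)

print("random forest accuracy   :", rf.score(X_test, y_test))
print("gradient boosting accuracy:", grid.score(X_test, y_test))
print("best boosting parameters  :", grid.best_params_)
```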

⦁ Chapter 4, Training Neural Networks, covers neural networks, which have always been deemed black-box algorithms that take a lot of effort to understand. We try to unbox the complexities surrounding NNs, starting with how NNs are analogous to the human brain. This chapter also covers what parameters such as weights and biases are, and how an NN learns. An NN's learning process involves network initialization, a feedforward pass, and cost calculation. Once the cost is calculated, backpropagation kicks off.
Next come the challenges in the model, such as exploding gradients, vanishing gradients, and overfitting. This chapter covers all of these problems, helps us understand why they occur, and explains how to overcome them.
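
A minimal NumPy sketch of the learning loop described above (initialization, feedforward, cost calculation, backpropagation); the toy data and network shape are assumptions, not the book's code:

```python
# Illustrative sketch only: a tiny NumPy network on toy data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                       # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]  # XOR-like target

# Network initialization: small random weights, zero biases.
W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    # Feedforward pass.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Cost calculation: mean cross-entropy.
    cost = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    # Backpropagation: chain rule from the output back to every parameter.
    dz2 = (p - y) / len(X)
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0, keepdims=True)
    dz1 = (dz2 @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0, keepdims=True)
    # Gradient-descent update.
    lr = 0.5
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final cost: {cost:.3f}, accuracy: {((p > 0.5) == y).mean():.3f}")
```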

⦁ Chapter 5, Time-Series Analysis, covers different time-series models for demand forecasting, be it stock prices, sales, or anything else. Almost every industry runs into such use cases. There are multiple approaches to them; we cover autoregressive models, ARMA, ARIMA, and others. We start with the concepts of autoregression. Then comes stationarity, an important property for such models; this chapter examines what stationarity is and how to detect it. Assessment of the model is covered too, and anomaly detection in econometrics is discussed at length with the help of a use case.
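
A minimal stationarity-and-ARIMA sketch, assuming Python with statsmodels and a simulated random-walk series (not the book's forecasting use case):

```python
# Illustrative sketch only: statsmodels and simulated data are assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=300))  # random walk: non-stationary

# Stationarity check: the ADF test's null hypothesis is a unit root, so a
# large p-value suggests the series needs differencing.
for name, data in (("raw", series), ("differenced", np.diff(series))):
    print(f"ADF p-value on {name} series: {adfuller(data)[1]:.3f}")

# ARIMA(1, 1, 1) applies that differencing internally (the middle 1 is d).
result = ARIMA(series, order=(1, 1, 1)).fit()
print(result.forecast(steps=5))  # five-step-ahead forecast
```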

⦁ Chapter 6, Natural Language Processing, explains how natural language processing makes textual data talk. There are a number of algorithms that make this work. We cannot work with textual data as it is; it needs to be vectorized and embedded. This chapter covers various ways of doing this, such as TF-IDF and bag-of-words methods. We also talk about how sentiment analysis can be done with the help of such approaches, and compare the results of the different methods. We then move on to topic modeling, where the prime motive is to extract the main topics from a corpus. Later, we examine a use case and solve it with a Bayesian approach.
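
A minimal vectorization-and-sentiment sketch, assuming Python with scikit-learn and a made-up four-document corpus (not the book's own example):

```python
# Illustrative sketch only: the toy corpus and labels are made up.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["great product, loved it", "terrible, waste of money",
        "really useful reference", "poor quality and boring"]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

# Compare two vectorizations of the same corpus: raw counts vs TF-IDF weights.
for name, vectorizer in (("bag-of-words", CountVectorizer()),
                         ("TF-IDF", TfidfVectorizer())):
    model = make_pipeline(vectorizer, LogisticRegression()).fit(docs, labels)
    print(name, "->", model.predict(["loved this useful book"]))
```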

⦁ Chapter 7, Temporal and Sequential Pattern Discovery, focuses on why it is necessary to study frequent itemsets and how we can deal with them. We cover the use of the Apriori and Frequent Pattern Growth algorithms to uncover findings in transactional data.
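
A minimal frequent-itemset sketch in plain Python, using a made-up transaction list and a hand-rolled support count rather than an Apriori/FP-Growth library:

```python
# Illustrative sketch only: toy transactions and a hand-rolled support count.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"butter", "milk"},
    {"bread", "butter"},
]
min_support = 0.5  # an itemset must appear in at least half the transactions

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

items = sorted(set().union(*transactions))
frequent_1 = [frozenset({i}) for i in items if support({i}) >= min_support]

# Apriori principle: candidate pairs are built only from frequent single items.
candidates = {a | b for a in frequent_1 for b in frequent_1 if a != b}
frequent_2 = [c for c in candidates if support(c) >= min_support]

print("frequent 1-itemsets:", [set(s) for s in frequent_1])
print("frequent 2-itemsets:", [set(s) for s in frequent_2])
```
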
⦁ Chapter 8, Probabilistic Graphical Models, covers Bayesian networks and how they are making a difference in machine learning. We will look at Bayesian networks (trees) constructed on conditional probability tables.
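
A minimal conditional-probability-table sketch in plain Python, using the classic rain/sprinkler/wet-grass network with made-up numbers (an assumption, not the book's example):

```python
# Illustrative sketch only: made-up CPTs, inference by brute-force enumeration.
from itertools import product

p_rain = {True: 0.2, False: 0.8}
p_sprinkler = {True: {True: 0.01, False: 0.99},   # P(Sprinkler | Rain)
               False: {True: 0.4, False: 0.6}}
p_wet = {(True, True): 0.99, (True, False): 0.9,  # P(Wet | Sprinkler, Rain)
         (False, True): 0.8, (False, False): 0.0}

def joint(rain, sprinkler, wet):
    """Chain rule over the network: P(R) * P(S | R) * P(W | S, R)."""
    p = p_rain[rain] * p_sprinkler[rain][sprinkler]
    p_w = p_wet[(sprinkler, rain)]
    return p * (p_w if wet else 1 - p_w)

# Query P(Rain = True | Wet = True) by summing out the sprinkler.
numerator = sum(joint(True, s, True) for s in (True, False))
evidence = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(f"P(rain | grass wet) = {numerator / evidence:.3f}")
```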

⦁ Chapter 9, Selected Topics in Deep Learning, explains that, as the world transitions from simple business analytics to deep learning, we have a lot to catch up on. This chapter explores weight initialization, layer formation, the calculation of cost, and backpropagation. Subsequently, we introduce Hinton's capsule networks and look at how they work.
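
A minimal weight-initialization sketch in NumPy, comparing Xavier/Glorot and He initialization for a single hypothetical layer (an illustration, not the book's code):

```python
# Illustrative sketch only: layer sizes and data are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out = 512, 256  # layer formation: incoming and outgoing units

# Xavier/Glorot initialization balances activation variance for tanh/sigmoid.
xavier = rng.normal(0.0, np.sqrt(2.0 / (fan_in + fan_out)), size=(fan_in, fan_out))
# He initialization scales up for ReLU layers, which zero out half the inputs.
he = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

x = rng.normal(size=(1000, fan_in))
for name, W in (("xavier", xavier), ("he", he)):
    print(f"{name:>6}: pre-activation std = {(x @ W).std():.3f}")
```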

⦁ Chapter 10, Causal Inference, discusses algorithms that provide a directional view of causality in a time series. Stakeholders often ask about the causality behind the target variable, so we address it using the Granger causality model for time series, and we also discuss Bayesian techniques that enable us to reason about causality.
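
A minimal Granger-causality sketch, assuming Python with statsmodels and simulated series in which one variable genuinely leads the other (not the book's own use case):

```python
# Illustrative sketch only: statsmodels and simulated data are assumptions.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
cause = rng.normal(size=300)
# "effect" depends on the previous value of "cause", so cause should
# Granger-cause effect, but not the other way around.
effect = 0.8 * np.roll(cause, 1) + 0.2 * rng.normal(size=300)

# Column order matters: the test asks whether the 2nd column helps predict
# the 1st. The call also prints a summary for each lag up to maxlag.
data = np.column_stack([effect, cause])[1:]  # drop the wrapped-around row
results = grangercausalitytests(data, maxlag=2)
for lag, res in results.items():
    print(f"lag {lag}: F-test p-value = {res[0]['ssr_ftest'][1]:.4f}")
```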

⦁ Chapter 11, Advanced Methods, explains that there are a number of state-of-the-art models in the pipeline that deserve a special mention in this book, and this chapter should help you understand and apply them. We talk about independent component analysis and how it differs from principal component analysis. Subsequently, we discuss the Bayesian technique of multiple imputation and its importance, get an understanding of self-organizing maps and why they are important, and finally touch upon compressed sensing.
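
A minimal PCA-versus-ICA sketch, assuming Python with scikit-learn and a toy two-source mixing problem (an assumption, not the book's example):

```python
# Illustrative sketch only: toy mixed signals, recovered with PCA and FastICA.
import numpy as np
from sklearn.decomposition import PCA, FastICA

t = np.linspace(0, 8, 2000)
sources = np.column_stack([np.sin(2 * t), np.sign(np.sin(3 * t))])  # independent
mixing = np.array([[1.0, 0.5], [0.5, 2.0]])
observed = sources @ mixing.T  # what we actually measure: mixtures

# PCA finds orthogonal directions of maximum variance; ICA instead looks for
# statistically independent components, which can unmix the sources.
pca_est = PCA(n_components=2).fit_transform(observed)
ica_est = FastICA(n_components=2, random_state=0).fit_transform(observed)

for name, est in (("PCA", pca_est), ("ICA", ica_est)):
    corr = np.corrcoef(est.T, sources.T)[:2, 2:]  # components vs true sources
    print(f"{name}: |correlation with sources| =\n{np.round(np.abs(corr), 2)}")
```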

About the Author

⦁ Rahul Kumar
Rahul Kumar has more than 10 years of experience in data science and artificial intelligence. His expertise lies in machine learning and deep learning, and he is a seasoned professional in business consulting and business problem solving, fuelled by his proficiency in machine learning and deep learning. He has been associated with organizations such as Mercedes-Benz Research and Development (India), Fidelity Investments, and Royal Bank of Scotland, among others, and has accumulated diverse exposure across industries such as BFSI, telecom, and automotive. Rahul has also had papers published in IIM and IISc journals.
