
Reinforcement Learning Algorithms with Python

Learn, understand, and develop smart algorithms for addressing AI challenges

Purchase (own)
E-book list price
19,000 KRW
Sale price
19,000 KRW
Publication info
  • E-book published 2019.10.18
Listening
  • TTS (text-to-speech) supported
File info
  • PDF
  • 356 pages
  • 22.1 MB
Supported devices
  • PC viewer
  • PAPER
ISBN
9781789139709
ECN
-

Book Description

▶What You Will Learn
- Develop an agent to play CartPole using the OpenAI Gym interface
- Discover the model-based reinforcement learning paradigm
- Solve the Frozen Lake problem with dynamic programming
- Explore Q-learning and SARSA with a view to playing a taxi game
- Apply Deep Q-Networks (DQNs) to Atari games using Gym
- Study policy gradient algorithms, including Actor-Critic and REINFORCE
- Understand and apply PPO and TRPO in continuous locomotion environments
- Get to grips with evolution strategies for solving the lunar lander problem

▶Key Features
- Learn, develop, and deploy advanced reinforcement learning algorithms to solve a variety of tasks
- Understand and develop model-free and model-based algorithms for building self-learning agents
- Work with advanced reinforcement learning concepts and algorithms, such as imitation learning and evolution strategies

▶Who This Book Is For
If you are an AI researcher, deep learning user, or anyone who wants to learn reinforcement learning from scratch, this book is for you. You'll also find this reinforcement learning book useful if you want to learn about the advancements in the field. Working knowledge of Python is necessary.

▶What this book covers
- Chapter 1, The Landscape of Reinforcement Learning, gives you an insight into RL. It describes the problems that RL is good at solving and the applications where RL algorithms are already in use. It also introduces the tools, libraries, and setup needed to complete the projects in the following chapters.

- Chapter 2, Implementing RL Cycle and OpenAI Gym, describes the main cycle of RL algorithms, the toolkit used to develop them, and the different types of environments. You will develop a random agent that plays CartPole through the OpenAI Gym interface, as sketched below, and learn how to run other environments through the same interface.
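
A minimal sketch of that random agent, assuming the classic Gym API of the book's era (env.reset returning the observation and env.step returning a 4-tuple; newer Gym and Gymnasium releases changed these signatures):

```python
import gym

env = gym.make('CartPole-v0')
for episode in range(10):
    obs = env.reset()
    done, total_reward = False, 0.0
    while not done:
        action = env.action_space.sample()          # pick a random action
        obs, reward, done, info = env.step(action)  # advance the environment one step
        total_reward += reward
    print(f'Episode {episode}: reward = {total_reward}')
env.close()
```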

- Chapter 3, Solving Problems with Dynamic Programming, introduces the core ideas, terminology, and approaches of RL. You will learn about the main building blocks of RL and develop a general idea of how RL algorithms can be created to solve a problem. You will also learn the differences between model-based and model-free algorithms and how reinforcement learning algorithms are categorized. Dynamic programming will be used to solve the game FrozenLake, as sketched below.
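
A compact value-iteration sketch for FrozenLake, assuming the Gym environment exposes its transition model as env.unwrapped.P, a table of (probability, next_state, reward, done) tuples per state-action pair; the discount and tolerance values are illustrative:

```python
import numpy as np
import gym

env = gym.make('FrozenLake-v0')
P = env.unwrapped.P                       # transition model: P[s][a] -> [(prob, s2, r, done), ...]
n_states, n_actions = env.observation_space.n, env.action_space.n
V = np.zeros(n_states)
gamma, theta = 0.99, 1e-8

while True:
    delta = 0.0
    for s in range(n_states):
        # Bellman optimality backup: best expected one-step return over actions
        q = [sum(p * (r + gamma * V[s2] * (not done)) for p, s2, r, done in P[s][a])
             for a in range(n_actions)]
        best = max(q)
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < theta:                     # values have converged
        break
```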

- Chapter 4, Q-Learning and SARSA Applications, talks about value-based methods, in particular Q-learning and SARSA, two algorithms that differ from dynamic programming and scale well to large problems. To become confident with these algorithms, you will apply them to the FrozenLake game and study how they differ from dynamic programming.
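
The heart of tabular Q-learning is a one-line update rule; SARSA differs only in bootstrapping from the action it actually takes next rather than the greedy one. A minimal sketch, with illustrative hyperparameters:

```python
import numpy as np
import gym

env = gym.make('FrozenLake-v0')
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, eps = 0.1, 0.99, 0.1        # illustrative hyperparameters

for episode in range(5000):
    s, done = env.reset(), False
    while not done:
        # epsilon-greedy: explore with probability eps, otherwise act greedily
        a = env.action_space.sample() if np.random.rand() < eps else int(np.argmax(Q[s]))
        s2, r, done, _ = env.step(a)
        # off-policy target: bootstrap from the best next action (Q-learning)
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) * (not done) - Q[s, a])
        s = s2
```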

- Chapter 5, Deep Q-Networks, describes how neural networks and convolutional neural networks (CNNs) in particular are applied to Q-learning. You'll learn why the combination of Q-learning and neural networks produces incredible results and how its use can open the door to a much larger variety of problems. Furthermore, you'll apply the DQN to an Atari game using the OpenAI Gym interface.
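
At its core, the DQN update regresses the online network's Q-values toward a bootstrapped target computed by a periodically synced target network. A sketch of that loss in PyTorch (the framework choice and the q_net/target_net handles are assumptions for illustration):

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    obs, actions, rewards, next_obs, dones = batch
    # Q-value of the action actually taken in each stored transition
    q = q_net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # the frozen target network keeps the regression target stable
        next_q = target_net(next_obs).max(dim=1).values
        target = rewards + gamma * next_q * (1 - dones)
    return F.mse_loss(q, target)
```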

- Chapter 6, Learning Stochastic and PG Optimization, introduces a new family of model-free algorithms: policy gradient methods. You will learn the differences between policy gradient and value-based methods, along with their respective strengths and weaknesses. Then you will implement the REINFORCE and Actor-Critic algorithms to solve a new game called LunarLander.
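
REINFORCE in a nutshell: weight each action's log-probability by the discounted return that followed it, and ascend that objective; Actor-Critic replaces the raw return with an advantage computed from a learned critic. A minimal sketch (PyTorch assumed):

```python
import torch

def discounted_returns(rewards, gamma=0.99):
    # G_t = r_t + gamma * G_{t+1}, computed backwards over one episode
    G, out = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G
        out.append(G)
    return torch.tensor(list(reversed(out)))

def reinforce_loss(log_probs, returns):
    # log_probs: log pi(a_t | s_t) at each step of the episode
    # negative sign because optimizers minimize while REINFORCE ascends
    return -(log_probs * returns).sum()
```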

- Chapter 7, TRPO and PPO Implementation, proposes a modification of policy gradient methods using new mechanisms to control the improvement of the policy. These mechanisms are used to improve the stability and convergence of policy gradient algorithms. In particular, you'll learn and implement two main policy gradient methods that use these techniques, namely TRPO and PPO. You will implement them on Roboschool, an environment with a continuous action space.
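
PPO's control mechanism is the clipped surrogate objective: the probability ratio between the new and old policy is clipped so that a single update cannot move the policy too far. A sketch (PyTorch assumed; eps=0.2 is the conventional clip range):

```python
import torch

def ppo_clip_loss(new_log_probs, old_log_probs, advantages, eps=0.2):
    ratio = torch.exp(new_log_probs - old_log_probs)  # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # pessimistic bound: take the smaller of the two surrogates
    return -torch.min(unclipped, clipped).mean()
```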

- Chapter 8, DDPG and TD3 Applications, introduces a new category of algorithms called deterministic policy algorithms that combine both policy gradient and Q-learning. You will learn about the underlying concepts and implement DDPG and TD3, two deep deterministic algorithms, on a new environment.
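
TD3 refines DDPG with two tricks that are visible in its target computation: clipped double-Q (take the minimum of two target critics) and target-policy smoothing (add clipped noise to the target action). A sketch with assumed network handles (PyTorch assumed):

```python
import torch

def td3_target(q1_target, q2_target, actor_target, next_obs, rewards, dones,
               gamma=0.99, noise_std=0.2, noise_clip=0.5, max_action=1.0):
    with torch.no_grad():
        a = actor_target(next_obs)
        # target-policy smoothing: clipped Gaussian noise on the target action
        noise = (torch.randn_like(a) * noise_std).clamp(-noise_clip, noise_clip)
        a = (a + noise).clamp(-max_action, max_action)
        # clipped double-Q: the minimum of two critics curbs overestimation
        q_next = torch.min(q1_target(next_obs, a), q2_target(next_obs, a))
        return rewards + gamma * (1 - dones) * q_next
```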

- Chapter 9, Model-Based RL, illustrates RL algorithms that learn a model of the environment in order to plan future actions or to learn a policy. You will learn how they work, their strengths, and why they are preferred in many situations. To master them, you will implement a model-based algorithm on Roboschool.

- Chapter 10, Imitation Learning with the DAgger Algorithm, explains how imitation learning works and how it can be applied and adapted to a problem. You will learn about the most well-known imitation learning algorithm, DAgger. To become confident with it, you will implement it to speed up the learning process of an agent on FlappyBird.
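
The DAgger loop in outline (expert.act, learner.act, and learner.fit are hypothetical placeholders, not the book's API): let the learner drive, relabel the states it visits with the expert's actions, and retrain on everything gathered so far:

```python
def dagger(env, expert, learner, iterations=10, steps_per_iter=1000):
    dataset = []                                 # aggregated (state, expert action) pairs
    for _ in range(iterations):
        obs, done = env.reset(), False
        for _ in range(steps_per_iter):
            action = learner.act(obs)               # the learner picks the action...
            dataset.append((obs, expert.act(obs)))  # ...the expert provides the label
            obs, reward, done, _ = env.step(action)
            if done:
                obs, done = env.reset(), False
        learner.fit(dataset)                     # supervised training on the aggregate
    return learner
```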

- Chapter 11, Understanding Black-Box Optimization Algorithms, explores evolutionary algorithms, a class of black-box optimization algorithms that don't rely on backpropagation. These algorithms are gaining interest because of their fast training and easy parallelization across hundreds or thousands of cores. This chapter provides a theoretical and practical background of these algorithms by focusing particularly on the Evolution Strategy algorithm, a type of evolutionary algorithm.
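
The basic Evolution Strategy update, sketched in NumPy: perturb the parameters with Gaussian noise, score each perturbation with the fitness function, and step along the fitness-weighted average of the noise. No backpropagation is involved, and every fitness evaluation is independent, which is what makes the method easy to parallelize (hyperparameters are illustrative):

```python
import numpy as np

def es_step(theta, fitness_fn, pop_size=50, sigma=0.1, lr=0.01):
    noise = np.random.randn(pop_size, theta.size)
    rewards = np.array([fitness_fn(theta + sigma * n) for n in noise])
    # normalize so the update is invariant to the reward scale
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    return theta + lr / (pop_size * sigma) * noise.T @ rewards
```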

- Chapter 12, Developing ESBAS Algorithm, introduces the important exploration-exploitation dilemma, which is specific to RL. The dilemma is demonstrated using the multi-armed bandit problem and is solved using approaches such as UCB and UCB1. Then, you will learn about the problem of algorithm selection and develop a meta-algorithm called ESBAS. This algorithm uses UCB1 to select the most appropriate RL algorithm for each situation.
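
UCB1 in brief: play each arm once, then always pick the arm whose empirical mean plus exploration bonus is largest; the bonus shrinks as an arm accumulates pulls. A minimal sketch (pull_arm is a hypothetical stand-in for the bandit environment):

```python
import numpy as np

def ucb1(pull_arm, n_arms, total_steps):
    counts = np.zeros(n_arms)
    means = np.zeros(n_arms)
    for t in range(1, total_steps + 1):
        if t <= n_arms:
            arm = t - 1                                # try every arm once first
        else:
            bonus = np.sqrt(2.0 * np.log(t) / counts)  # optimism under uncertainty
            arm = int(np.argmax(means + bonus))
        r = pull_arm(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]   # incremental mean update
    return means, counts
```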

- Chapter 13, Practical Implementations to Resolve RL Challenges, takes a look at the major challenges in this field and explains some practices and methods for overcoming them. You will also learn about some of the challenges of applying RL to real-world problems, the future developments of deep RL, and their social impact.

About the Author

- Andrea Lonza
Andrea Lonza is a deep learning engineer with a great passion for artificial intelligence and a desire to create machines that act intelligently. He has acquired expert knowledge in reinforcement learning, natural language processing, and computer vision through academic and industrial machine learning projects. He has also competed in several Kaggle competitions, placing highly. He is always looking for compelling challenges and loves to prove himself.
