Hands-On GPU Programming with Python and CUDA

Explore high-performance parallel computing with CUDA
E-book (purchase to own) · List price: ₩19,000 · Sale price: ₩19,000

About the Book

▶Book Description
Hands-On GPU Programming with Python and CUDA hits the ground running: you'll start by learning how to apply Amdahl's Law, use a code profiler to identify bottlenecks in your Python code, and set up an appropriate GPU programming environment. You'll then see how to “query” the GPU's features and copy arrays of data to and from the GPU's own memory.
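
As a taste of what this looks like in practice, here is a minimal sketch (assuming PyCUDA is installed and a CUDA-capable GPU is available) that queries a few device properties and copies a NumPy array to the GPU and back:

    import numpy as np
    import pycuda.autoinit            # creates a context on the default GPU
    import pycuda.driver as drv
    import pycuda.gpuarray as gpuarray

    # Query some of the GPU's features
    dev = drv.Device(0)
    print(dev.name())
    print("Total memory: %d MB" % (dev.total_memory() // (1024 ** 2)))
    print("Multiprocessors:", dev.get_attributes()[drv.device_attribute.MULTIPROCESSOR_COUNT])

    # Copy an array to the GPU's memory and back
    host_data = np.float32(np.random.random(1024))
    device_data = gpuarray.to_gpu(host_data)   # host -> device
    round_trip = device_data.get()             # device -> host
    print(np.allclose(host_data, round_trip))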

As you make your way through the book, you'll launch code directly onto the GPU and write full-blown GPU kernels and device functions in CUDA C. You'll get to grips with profiling GPU code effectively, and fully test and debug your code using the Nsight IDE. Next, you'll explore some of the more well-known NVIDIA libraries, such as cuFFT and cuBLAS.

With a solid background in place, you will now apply your new-found knowledge to develop your very own GPU-based deep neural network from scratch. You'll then explore advanced topics, such as warp shuffling, dynamic parallelism, and PTX assembly. In the final chapter, you'll see some topics and applications related to GPU programming that you may wish to pursue, including AI, graphics, and blockchain.

By the end of this book, you will be able to apply GPU programming to problems related to data science and high-performance computing.

▶What You Will Learn
⦁ Launch GPU code directly from Python
⦁ Write effective and efficient GPU kernels and device functions
⦁ Use libraries such as cuFFT, cuBLAS, and cuSolver
⦁ Debug and profile your code with Nsight and Visual Profiler
⦁ Apply GPU programming to data science problems
⦁ Build a GPU-based deep neural network from scratch
⦁ Explore advanced GPU hardware features, such as warp shuffling

▶Key Features
⦁ Expand your background in GPU programming―PyCUDA, scikit-cuda, and Nsight
⦁ Effectively use CUDA libraries such as cuBLAS, cuFFT, and cuSolver
⦁ Apply GPU programming to modern data science applications

▶Who This Book Is For
Hands-On GPU Programming with Python and CUDA is for developers and data scientists who want to learn the basics of effective GPU programming to improve performance using Python code. You should have an understanding of first-year college or university-level engineering mathematics and physics, and have some experience with Python as well as with a C-based programming language such as C, C++, Go, or Java.

▶What this book covers
⦁ Chapter 1, Why GPU Programming?, gives some motivation for learning this field and shows how to apply Amdahl's Law to estimate the potential performance improvement from porting a serial program to a GPU.
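
By way of illustration, a minimal sketch of the kind of estimate Amdahl's Law gives (the numbers below are invented for the example):

    # Amdahl's Law: overall speedup = 1 / ((1 - p) + p / s), where p is the
    # fraction of the runtime that can be parallelized and s is the speedup
    # achieved on that portion (for example, by moving it onto a GPU).
    def amdahl_speedup(p, s):
        return 1.0 / ((1.0 - p) + p / s)

    # Hypothetical case: 75% of the runtime is parallelizable and the GPU runs
    # that portion 50x faster -> roughly a 3.8x overall speedup.
    print(amdahl_speedup(0.75, 50.0))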

⦁ Chapter 2, Setting Up Your GPU Programming Environment, explains how to set up an appropriate Python and C++ development environment for CUDA under both Windows and Linux.

⦁ Chapter 3, Getting Started with PyCUDA, shows the most essential skills we will need for programming GPUs from Python. We will notably see how to transfer data to and from a GPU using PyCUDA's gpuarray class, and how to compile simple CUDA kernels with PyCUDA's ElementwiseKernel function.
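
A minimal sketch of both ideas, assuming PyCUDA is installed (the kernel here simply doubles each element):

    import numpy as np
    import pycuda.autoinit
    import pycuda.gpuarray as gpuarray
    from pycuda.elementwise import ElementwiseKernel

    host_data = np.float32(np.random.random(1024))
    device_data = gpuarray.to_gpu(host_data)        # transfer to the GPU

    # Compile a simple elementwise kernel: out[i] = 2 * in[i]
    double_kernel = ElementwiseKernel(
        "float *in, float *out",
        "out[i] = 2.0f * in[i];",
        "double_kernel")

    out_device = gpuarray.empty_like(device_data)
    double_kernel(device_data, out_device)
    print(np.allclose(out_device.get(), 2 * host_data))  # transfer back and check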

⦁ Chapter 4, Kernels, Threads, Blocks, and Grids, teaches the fundamentals of writing effective CUDA kernels, which are parallel functions that are launched on the GPU. We will see how to write CUDA device functions ("serial" functions called directly by CUDA kernels), and learn about CUDA's abstract grid/block structure and the role it plays in launching kernels.
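
For a flavor of what this looks like, here is a minimal sketch (assuming PyCUDA) that compiles a CUDA C kernel together with a device function and launches it over an explicit block/grid configuration:

    import numpy as np
    import pycuda.autoinit
    import pycuda.gpuarray as gpuarray
    from pycuda.compiler import SourceModule

    mod = SourceModule("""
    __device__ float square(float x)    // device function, called from the kernel
    {
        return x * x;
    }

    __global__ void square_kernel(float *vec, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
        if (i < n)
            vec[i] = square(vec[i]);
    }
    """)

    square_kernel = mod.get_function("square_kernel")
    vec = gpuarray.to_gpu(np.float32(np.arange(1000)))
    # 1000 elements with 256 threads per block -> 4 blocks are enough
    square_kernel(vec, np.int32(1000), block=(256, 1, 1), grid=(4, 1, 1))
    print(vec.get()[:5])    # [0, 1, 4, 9, 16]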

⦁ Chapter 5, Streams, Events, Contexts, and Concurrency, covers the notion of CUDA Streams, which is a feature that allows us to launch and synchronize many kernels onto a GPU concurrently. We will see how to use CUDA Events to time kernel launches, and how to create and use CUDA Contexts.
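
As a small taste, a sketch (assuming PyCUDA) that creates a stream, issues an asynchronous host-to-device copy on it, and times the work with a pair of events:

    import numpy as np
    import pycuda.autoinit
    import pycuda.driver as drv
    import pycuda.gpuarray as gpuarray

    stream = drv.Stream()                     # an independent queue of GPU operations
    start, end = drv.Event(), drv.Event()

    host_data = np.float32(np.random.random(1 << 22))

    start.record(stream)
    device_data = gpuarray.to_gpu_async(host_data, stream=stream)
    end.record(stream)

    end.synchronize()                         # block until the copy has finished
    print("copy took %.3f ms" % start.time_till(end))
    # (Page-locked host memory would be needed for the copy to be fully asynchronous.)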

⦁ Chapter 6, Debugging and Profiling Your CUDA Code, fills in some of the gaps we have in terms of pure CUDA C programming, and shows us how to use the NVIDIA Nsight IDE for debugging and development, as well as how to use the NVIDIA profiling tools.

⦁ Chapter 7, Using the CUDA Libraries with Scikit-CUDA, gives us a brief tour of some of the important standard CUDA libraries by way of the Python Scikit-CUDA module, including cuBLAS, cuFFT, and cuSOLVER.
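
A minimal sketch of the flavor of this, assuming scikit-cuda (imported as skcuda) and PyCUDA are installed; linalg.dot performs the matrix multiplication with cuBLAS on the GPU:

    import numpy as np
    import pycuda.autoinit
    import pycuda.gpuarray as gpuarray
    import skcuda.linalg as linalg

    linalg.init()     # set up the underlying library handles

    a = gpuarray.to_gpu(np.random.random((4, 4)).astype(np.float32))
    b = gpuarray.to_gpu(np.random.random((4, 4)).astype(np.float32))

    c = linalg.dot(a, b)                       # GEMM executed by cuBLAS
    print(np.allclose(c.get(), a.get() @ b.get(), atol=1e-4))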

⦁ Chapter 8, The CUDA Device Function Libraries and Thrust, shows us how to use the cuRAND and CUDA Math API libraries in our code, as well as how to use CUDA Thrust C++ containers.
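
For instance, a sketch (assuming PyCUDA) that uses the cuRAND-backed XORWOW generator exposed through pycuda.curandom to fill a GPU array with uniform random numbers:

    import numpy as np
    import pycuda.autoinit
    import pycuda.gpuarray as gpuarray
    from pycuda.curandom import XORWOWRandomNumberGenerator

    rng = XORWOWRandomNumberGenerator()        # generator built on the cuRAND device API
    samples = gpuarray.empty((1 << 20,), dtype=np.float32)
    rng.fill_uniform(samples)                  # uniform floats in [0, 1)

    print(samples.get().mean())                # should be close to 0.5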

⦁ Chapter 9, Implementation of a Deep Neural Network, serves as a capstone in which we learn how to build an entire deep neural network from scratch, applying many of the ideas we have learned in the text.

⦁ Chapter 10, Working with Compiled GPU Code, shows us how to interface our Python code with pre-compiled GPU code, using both PyCUDA and Ctypes.
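
A sketch of the Ctypes side of this, assuming a hypothetical shared library libmygpu.so (built with nvcc; the name and exported function are inventions for illustration) that exposes void double_array(float *data, int n), which copies the array to the GPU, doubles it, and copies it back:

    import ctypes
    import numpy as np

    # Hypothetical pre-compiled GPU library; the file name and the exported
    # function are assumptions made for this example.
    lib = ctypes.CDLL("./libmygpu.so")
    lib.double_array.argtypes = [ctypes.POINTER(ctypes.c_float), ctypes.c_int]
    lib.double_array.restype = None

    data = np.float32(np.random.random(1024))
    lib.double_array(data.ctypes.data_as(ctypes.POINTER(ctypes.c_float)),
                     ctypes.c_int(data.size))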

⦁ Chapter 11, Performance Optimization in CUDA, teaches some very low-level performance optimization tricks, especially in relation to CUDA, such as warp shuffling, vectorized memory access, using inline PTX assembly, and atomic operations.
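
As a taste of the warp-level techniques, a sketch (assuming PyCUDA and a GPU of compute capability 3.0 or later) that sums 32 values within a single warp using __shfl_down_sync and then writes the result with an atomic add:

    import numpy as np
    import pycuda.autoinit
    import pycuda.gpuarray as gpuarray
    from pycuda.compiler import SourceModule

    mod = SourceModule("""
    __global__ void warp_sum(float *in, float *out)
    {
        int lane = threadIdx.x;                 // a single warp: lanes 0..31
        float val = in[lane];

        // butterfly-style reduction using warp shuffles
        for (int offset = 16; offset > 0; offset /= 2)
            val += __shfl_down_sync(0xFFFFFFFF, val, offset);

        if (lane == 0)
            atomicAdd(out, val);                // lane 0 now holds the warp's total
    }
    """)

    warp_sum = mod.get_function("warp_sum")
    in_gpu = gpuarray.to_gpu(np.float32(np.arange(32)))
    out_gpu = gpuarray.zeros((1,), dtype=np.float32)
    warp_sum(in_gpu, out_gpu, block=(32, 1, 1), grid=(1, 1, 1))
    print(out_gpu.get()[0])    # 0 + 1 + ... + 31 = 496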

⦁ Chapter 12, Where to Go from Here, is an overview of some of the educational and career paths that will build upon your now-solid foundation in GPU programming.


Publisher's Review

▶ Preface
Greetings and salutations! This text is an introductory guide to GPU programming with Python and CUDA. GPU may stand for Graphics Processing Unit, but we should be clear that this book is not about graphics programming; it is essentially an introduction to General-Purpose GPU Programming, or GPGPU Programming for short. Over the last decade, it has become clear that GPUs are well suited for computations besides rendering graphics, particularly for parallel computations that require a great deal of computational throughput. To this end, NVIDIA released the CUDA Toolkit, which has made the world of GPGPU programming all the more accessible to just about anyone with some C programming knowledge.

The aim of Hands-On GPU Programming with Python and CUDA is to get you started in the world of GPGPU programming as quickly as possible. We have strived to come up with fun and interesting examples and exercises for each chapter; in particular, we encourage you to type in these examples and run them from your favorite Python environment as you go along (Spyder, Jupyter, and PyCharm are all suitable choices). This way, you will eventually learn all of the requisite functions and commands, as well as gain an intuition of how a GPGPU program should be written.

Initially, GPGPU parallel programming seems very complex and daunting, especially if you've only done CPU programming in the past. There are so many new concepts and conventions you have to learn that it may seem like you're starting all over again at zero. During these times, you'll have to have some faith that your efforts to learn this field are not for naught. With a little bit of initiative and discipline, this subject will seem like second nature to you by the time you reach the end of the text.

Happy programming!


About the Author

⦁ Dr. Brian Tuomanen
Dr. Brian Tuomanen has been working with CUDA and General-Purpose GPU Programming since 2014. He received his Bachelor of Science in Electrical Engineering from the University of Washington in Seattle, and briefly worked as a software engineer before switching to mathematics for graduate school. He completed his Ph.D. in Mathematics at the University of Missouri in Columbia, where he first encountered GPU programming as a means for studying scientific problems. Dr. Tuomanen has spoken at the US Army Research Lab about General-Purpose GPU programming, and has recently led GPU integration and development at a Maryland-based start-up company. He currently lives and works in the Seattle area.

Table of Contents

▶TABLE of CONTENTS
1: WHY GPU PROGRAMMING?
2: SETTING UP YOUR GPU PROGRAMMING ENVIRONMENT
3: GETTING STARTED WITH PYCUDA
4: KERNELS, THREADS, BLOCKS, AND GRIDS
5: STREAMS, EVENTS, CONTEXTS, AND CONCURRENCY
6: DEBUGGING AND PROFILING YOUR CUDA CODE
7: USING THE CUDA LIBRARIES WITH SCIKIT-CUDA
8: THE CUDA DEVICE FUNCTION LIBRARIES AND THRUST
9: IMPLEMENTATION OF A DEEP NEURAL NETWORK
10: WORKING WITH COMPILED GPU CODE
11: PERFORMANCE OPTIMIZATION IN CUDA
12: WHERE TO GO FROM HERE

