
Advanced Natural Language Processing with TensorFlow 2

Build effective real-world NLP applications using NER, RNNs, seq2seq models, Transformers, and more
eBook (to own), list price: ₩21,000
Sale price: ₩21,000

Book Introduction

Advanced Natural Language Processing with TensorFlow 2: a one-stop solution for NLP practitioners, ML developers, and data scientists who want to build effective NLP systems that can perform complicated real-world tasks.

▶Book Description
Recently, there have been tremendous advances in NLP, and these techniques are now moving from research labs into practical applications. This book blends the theoretical and practical aspects of trending, complex NLP techniques.

The book is focused on innovative applications in the fields of NLP, language generation, and dialogue systems. It shows you how to pre-process text using techniques such as tokenization, part-of-speech (POS) tagging, and lemmatization with popular libraries such as Stanford NLP and spaCy. You will build Named Entity Recognition (NER) from scratch using Conditional Random Fields and Viterbi decoding on top of RNNs.
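
As a taste of that pre-processing pipeline, here is a minimal sketch using spaCy; it is illustrative only, not the book's code, and assumes the small English model has been downloaded.

```python
# Minimal pre-processing sketch with spaCy (not the book's exact code).
# Setup assumption: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The striped bats were hanging on their feet.")

for token in doc:
    # token.text: raw token; token.pos_: part-of-speech tag;
    # token.lemma_: the token's dictionary (lemmatized) form
    print(f"{token.text:10} {token.pos_:6} {token.lemma_}")
```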

The book covers key emerging areas such as generating text for use in sentence completion and text summarization, bridging images and text by generating captions for images, and managing dialogue aspects of chatbots. You will learn how to apply transfer learning and fine-tuning using TensorFlow 2.

Further, it covers practical techniques that can simplify the labelling of textual data. For each technique, the book also provides working code that you can adapt to your own use cases.

By the end of the book, you will have advanced knowledge of the tools, techniques, and deep learning architectures used to solve complex NLP problems.

▶What You Will Learn
-Grasp important pre-processing steps in building NLP applications, such as POS tagging
-Apply transfer learning and weakly supervised learning using libraries like Snorkel
-Perform sentiment analysis using BERT
-Apply encoder-decoder neural network architectures and beam search for summarizing texts
-Use Transformer models with attention to bring images and text together
-Build apps that generate captions and answer questions about images using custom Transformers
-Use advanced TensorFlow techniques like learning rate annealing, custom layers, and custom loss functions to build the latest deep NLP models

▶Key Features
-Apply deep learning algorithms and techniques such as BiLSTMs, CRFs, BPE, and more using TensorFlow 2
-Explore applications like text generation, summarization, weakly supervised labelling, and more
-Read cutting-edge material, with seminal papers provided in the GitHub repository along with full working code

▶Who This Book Is For
This is not an introductory book. It assumes the reader is familiar with the basics of NLP and has fundamental Python skills, as well as basic knowledge of machine learning and undergraduate-level calculus and linear algebra.

The readers who can benefit the most from this book include intermediate ML developers who are familiar with the basics of supervised learning and deep learning techniques, and professionals who already use TensorFlow/Python for purposes such as data science, ML, research, and analysis.

▶What this book covers
- Chapter 1, Essentials of NLP, provides an overview of various topics in NLP such as tokenization, stemming, lemmatization, POS tagging, and vectorization. It also surveys common NLP libraries like spaCy, Stanford NLP, and NLTK, covering their key capabilities and use cases. We will also build a simple spam classifier.
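
To make the spam classifier concrete, here is a hypothetical minimal version in TensorFlow 2; the `texts` and `labels` variables stand in for a real SMS spam dataset, and the book's own implementation may differ.

```python
# Hypothetical minimal spam classifier: multi-hot bag-of-words features
# feeding a single sigmoid unit (logistic regression in Keras form).
import tensorflow as tf

texts = ["win a free prize now", "are we still on for lunch?"]  # toy data
labels = [1, 0]  # 1 = spam, 0 = ham

vectorizer = tf.keras.layers.TextVectorization(max_tokens=1000,
                                               output_mode="multi_hot")
vectorizer.adapt(texts)

model = tf.keras.Sequential([
    vectorizer,                                      # strings -> multi-hot vectors
    tf.keras.layers.Dense(1, activation="sigmoid"),  # spam probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(tf.constant(texts), tf.constant(labels), epochs=5, verbose=0)
```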

- Chapter 2, Understanding Sentiment in Natural Language with BiLSTMs, covers the NLU use case of sentiment analysis with an overview of Recurrent Neural Networks (RNNs), LSTMs, and BiLSTMs, which are the basic building blocks of modern NLP models. We will also use tf.data for efficient use of CPUs and GPUs to speed up data pipelines and model training.
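
The following sketch shows the shape of such a tf.data pipeline, with toy in-memory data standing in for the chapter's dataset.

```python
# A typical tf.data input pipeline: shuffle, batch, and prefetch so the CPU
# prepares the next batch while the GPU trains on the current one.
import tensorflow as tf

texts = tf.constant(["great movie", "terrible plot"])  # placeholder data
labels = tf.constant([1, 0])

dataset = (tf.data.Dataset.from_tensor_slices((texts, labels))
           .shuffle(buffer_size=1024)      # randomize example order
           .batch(32)                      # group examples into batches
           .prefetch(tf.data.AUTOTUNE))    # overlap data prep with training
```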

- Chapter 3, Named Entity Recognition (NER) with BiLSTMs, CRFs, and Viterbi Decoding, focuses on the key NLU problem of NER, which is a basic building block of task-oriented chatbots. We will build a custom layer for CRFs to improve the accuracy of NER, along with the Viterbi decoding scheme, which is often applied on top of a deep model to improve the quality of its output.
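
The book builds the CRF layer itself; purely as an illustration of what Viterbi decoding computes, here is a sketch using the tensorflow_addons implementation, with random scores standing in for BiLSTM outputs.

```python
# Viterbi decoding over per-token tag scores, via tensorflow_addons
# (pip install tensorflow-addons). Random tensors stand in for a trained model.
import tensorflow as tf
import tensorflow_addons as tfa

batch, seq_len, num_tags = 2, 5, 4
potentials = tf.random.normal([batch, seq_len, num_tags])  # emission scores
transitions = tf.random.normal([num_tags, num_tags])       # tag-transition scores
lengths = tf.constant([5, 3])                              # true sequence lengths

# Returns the highest-scoring tag sequence under the CRF and its score.
tags, best_scores = tfa.text.crf_decode(potentials, transitions, lengths)
print(tags.shape)  # (2, 5): one tag id per token
```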

- Chapter 4, Transfer Learning with BERT, covers a number of important concepts in modern deep NLP such as types of transfer learning, pre-trained embeddings, an overview of Transformers, and BERT and its application in improving the sentiment analysis task introduced in Chapter 2, Understanding Sentiment in Natural Language with BiLSTMs.
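
One common way to fine-tune BERT for the sentiment task in TensorFlow is via the Hugging Face transformers library; the book's own approach may differ, so treat this as an illustrative sketch.

```python
# Fine-tuning a pre-trained BERT classifier head (pip install transformers).
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Toy batch: one positive review. Real training would use a full dataset.
enc = tokenizer(["what a wonderful film"], padding=True, return_tensors="tf")
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),  # small LR for fine-tuning
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.fit(dict(enc), tf.constant([1]), epochs=1, verbose=0)
```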

- Chapter 5, Generating Text with RNNs and GPT-2, focuses on generating text with a custom character-based RNN and improving it with Beam Search. We will also cover the GPT-2 architecture and touch upon GPT-3.
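
Beam search itself is model-agnostic; this toy sketch shows the idea, with `log_probs_fn` as a hypothetical stand-in for a trained character RNN that scores the next character given a prefix.

```python
# Toy beam search: keep the beam_width highest-scoring prefixes at each step.
import math

def beam_search(log_probs_fn, vocab, start, steps, beam_width=3):
    beams = [(start, 0.0)]  # each beam: (sequence, cumulative log-probability)
    for _ in range(steps):
        candidates = []
        for seq, score in beams:
            log_probs = log_probs_fn(seq)  # one log-probability per vocab entry
            for ch, lp in zip(vocab, log_probs):
                candidates.append((seq + ch, score + lp))
        # Prune to the best beam_width hypotheses.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]

# Fake "model" that always prefers 'a', just to exercise the function.
vocab = ["a", "b"]
fake_model = lambda seq: [math.log(0.9), math.log(0.1)]
print(beam_search(fake_model, vocab, start="", steps=4))  # -> "aaaa"
```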

- Chapter 6, Text Summarization with Seq2seq Attention and Transformer Networks, takes on the challenging task of abstractive text summarization. BERT and GPT are the two halves of the full encoder-decoder model; we put them together to build a seq2seq model for summarizing news articles by generating headlines for them. We also cover how ROUGE metrics are used to evaluate summaries.
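
ROUGE scoring is typically a few lines with Google's rouge-score package; the headlines below are invented examples, and the book's evaluation code may differ.

```python
# Computing ROUGE-1 and ROUGE-L F1 between a reference and a generated
# headline (pip install rouge-score).
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(
    "police arrest gunman after mall standoff",        # reference headline
    "gunman arrested by police after standoff at mall" # generated headline
)
print(scores["rouge1"].fmeasure, scores["rougeL"].fmeasure)
```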

- Chapter 7, Multi-Modal Networks and Image Captioning with ResNets and Transformers, combines computer vision and NLP to see if a picture is indeed worth a thousand words! We will build a custom Transformer model from scratch and train it to generate captions for images.
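
The chapter's Transformer is built from scratch; as a compact illustration of its central operation, the sketch below uses Keras' built-in MultiHeadAttention to let caption tokens attend over image-region features (the shapes are illustrative assumptions).

```python
# Cross-attention between text queries and image-region keys/values.
import tensorflow as tf

attention = tf.keras.layers.MultiHeadAttention(num_heads=8, key_dim=64)

caption_tokens = tf.random.normal([1, 12, 512])  # [batch, words, features]
image_regions = tf.random.normal([1, 49, 512])   # e.g. a 7x7 CNN feature map

# Each caption position attends over all 49 image regions.
attended = attention(query=caption_tokens, value=image_regions)
print(attended.shape)  # (1, 12, 512)
```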

- Chapter 8, Weakly Supervised Learning for Classification with Snorkel, focuses on a key problem: labeling data. While NLP has a lot of unlabeled data available, labeling it is quite an expensive task. This chapter introduces the Snorkel library and shows how massive amounts of data can be labeled quickly.
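
The core Snorkel abstraction is the labeling function: a noisy heuristic that votes on each example or abstains. The sketch below shows the shape of this workflow (pip install snorkel); the data and heuristics are invented for illustration.

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

SPAM, HAM, ABSTAIN = 1, 0, -1

@labeling_function()
def lf_contains_free(x):
    # Heuristic: messages advertising something "free" are likely spam.
    return SPAM if "free" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_short_message(x):
    # Heuristic: very short messages are usually ham.
    return HAM if len(x.text.split()) < 5 else ABSTAIN

df = pd.DataFrame({"text": ["Free prize, click now!", "See you at noon"]})
L = PandasLFApplier([lf_contains_free, lf_short_message]).apply(df)

# The label model reconciles the noisy, overlapping votes into
# probabilistic training labels.
label_model = LabelModel(cardinality=2)
label_model.fit(L)
probs = label_model.predict_proba(L)
```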

- Chapter 9, Building Conversational AI Applications with Deep Learning, combines the various techniques covered throughout the book to show how different types of chatbots, such as question-answering or slot-filling bots, can be built.

- Chapter 10, Installation and Setup Instructions for Code, walks through all the instructions required to install and configure a system for running the code supplied with the book.


Publisher's Review

▶ Preface
2017 was a watershed moment for Natural Language Processing (NLP), with Transformer- and attention-based networks coming to the fore. The past few years have been as transformational for NLP as AlexNet was for computer vision in 2012. Tremendous advances in NLP have been made, and we are now moving from research labs into applications.

These advances span the domains of Natural Language Understanding (NLU), Natural Language Generation (NLG), and Natural Language Interaction (NLI). With so much research in all of these domains, it can be a daunting task to understand the exciting developments in NLP.

This book is focused on cutting-edge applications in the fields of NLP, language generation, and dialog systems. It covers the concepts of pre-processing text using techniques such as tokenization, parts-of-speech (POS) tagging, and lemmatization using popular libraries such as Stanford NLP and spaCy. Named Entity Recognition (NER) models are built from scratch using Bi-directional Long Short-Term Memory networks (BiLSTMs), Conditional Random Fields (CRFs), and Viterbi decoding.

Taking a very practical, application-focused perspective, the book covers key emerging areas such as generating text for use in sentence completion and text summarization, multi-modal networks that bridge images and text by generating captions for images, and managing the dialog aspects of chatbots. It covers one of the most important reasons behind the recent advances in NLP: transfer learning and fine-tuning. Unlabeled textual data is easily available, but labeling this data is costly. This book covers practical techniques that can simplify the labeling of textual data.

By the end of the book, I hope you will have advanced knowledge of the tools, techniques, and deep learning architectures used to solve complex NLP problems. The book will cover encoder-decoder networks, Long Short-Term Memory networks (LSTMs) and BiLSTMs, CRFs, BERT, GPT-2, GPT-3, Transformers, and other key technologies using TensorFlow.

The advanced TensorFlow techniques required to build such models are also covered (a brief sketch follows the list):

• Building custom models and layers
• Building custom loss functions
• Implementing learning rate annealing
• Using tf.data for loading data efficiently
• Checkpointing models to enable long training times (usually several days)
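
A compact sketch tying several of these together (a custom loss, learning rate annealing via a cosine schedule, and checkpointing); the model and data are placeholders, not the book's code.

```python
import tensorflow as tf

def smoothed_bce(y_true, y_pred):
    # Custom loss: binary cross-entropy with simple label smoothing.
    y_true = tf.cast(y_true, tf.float32) * 0.9 + 0.05
    return tf.keras.losses.binary_crossentropy(y_true, y_pred)

# Learning rate annealing: cosine decay from 1e-3 over 10k steps.
schedule = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=1e-3, decay_steps=10_000)

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer=tf.keras.optimizers.Adam(schedule), loss=smoothed_bce)

# Checkpointing so multi-day training runs can resume after interruption.
ckpt = tf.keras.callbacks.ModelCheckpoint("model.weights.h5",
                                          save_weights_only=True)

x = tf.random.normal([64, 8])                              # placeholder inputs
y = tf.cast(tf.random.uniform([64, 1]) > 0.5, tf.float32)  # placeholder labels
model.fit(x, y, epochs=2, callbacks=[ckpt], verbose=0)
```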

This book contains working code that can be adapted to your own use cases. I hope that you will even be able to do novel state-of-the-art research using the skills you'll gain as you progress through the book.


About the Author

▶About the Author
- Ashish Bansal
Ashish is an AI/ML leader, a well-known speaker, and an astute technologist with over 20 years of experience in the field. He holds a Bachelor of Technology from IIT BHU and an MBA in marketing from Kellogg School of Management. He is currently the Director of Recommendations at Twitch, where he works on building scalable recommendation systems across a variety of product surfaces, connecting content to people. He has worked on recommendation systems at multiple organizations, most notably Twitter, where he led Trends and Events recommendations, and Capital One, where he worked on B2B and B2C products. Ashish was also a co-founder of GALE Partners, a full-service digital agency in Toronto, and spent over 9 years at SapientNitro, a leading digital agency.

Table of Contents

▶Table of Contents
-Chapter 1: Essentials of NLP
-Chapter 2: Understanding Sentiment in Natural Language with BiLSTMs
-Chapter 3: Named Entity Recognition (NER) with BiLSTMs, CRFs, and Viterbi Decoding
-Chapter 4: Transfer Learning with BERT
-Chapter 5: Generating Text with RNNs and GPT-2
-Chapter 6: Text Summarization with Seq2seq Attention and Transformer Networks
-Chapter 7: Multi-Modal Networks and Image Captioning with ResNets and Transformer Networks
-Chapter 8: Weakly Supervised Learning for Classification with Snorkel
-Chapter 9: Building Conversational AI Applications with Deep Learning
-Chapter 10: Installation and Setup Instructions for Code

