Hands-On Explainable AI (XAI) with Python

Interpret, visualize, explain, and integrate reliable AI for fair, secure, and trustworthy AI apps

E-book list price
30,000 KRW
Sale price
30,000 KRW
Publication
  • E-book released 2020.07.31
Audio
TTS (text-to-speech) supported
File info
  • PDF
  • 455 pages
  • 8.7MB
Supported devices
  • PC viewer
  • PAPER
ISBN
9781800202764
ECN
-

Book Details

Resolve the black box models in your AI applications to make them fair, trustworthy, and secure. Familiarize yourself with the basic principles and tools to deploy Explainable AI (XAI) into your apps and reporting interfaces.

▶Book Description
Effectively translating AI insights to business stakeholders requires careful planning, design, and visualization choices. The problem, the model, and the relationships among variables and their findings are often subtle, surprising, and technically complex to describe.

Hands-On Explainable AI (XAI) with Python will see you work with specific hands-on machine learning Python projects that are strategically arranged to enhance your grasp of AI results analysis. You will be building models, interpreting results with visualizations, and integrating XAI reporting tools into different applications.

You will build XAI solutions in Python, TensorFlow 2, Google Cloud's XAI platform, Google Colaboratory, and other frameworks to open up the black box of machine learning models. The book will introduce you to several open-source XAI tools for Python that can be used throughout the machine learning project life cycle.

You will learn how to explore machine learning model results, review key influencing variables and variable relationships, detect and handle bias and ethics issues, integrate predictions using Python, and support the visualization of machine learning models through user-explainable interfaces.

By the end of this AI book, you will possess an in-depth understanding of the core concepts of XAI.

▶What You Will Learn
⦁Plan for XAI through the different stages of the machine learning life cycle
⦁Estimate the strengths and weaknesses of popular open-source XAI applications
⦁Examine how to detect and handle bias issues in machine learning data
⦁Review ethics considerations and tools to address common problems in machine learning data
⦁Share XAI design and visualization best practices
⦁Integrate explainable AI results using Python models
⦁Use XAI toolkits for Python in machine learning life cycles to solve business problems

▶Key Features
⦁Learn explainable AI tools and techniques to process trustworthy AI results
⦁Understand how to detect, handle, and avoid common issues with AI ethics and bias
⦁Integrate fair AI into popular apps and reporting tools to deliver business value using Python and associated tools

▶Who This Book Is For
This book is not an introduction to Python programming or machine learning concepts. You must have some foundational knowledge and/or experience with machine learning libraries such as scikit-learn to make the most out of this book.

Some of the potential readers of this book include:

⦁Professionals who already use Python for data science, machine learning, research, and analysis
⦁Data analysts and data scientists who want an introduction to explainable AI tools and techniques
⦁AI project managers who must meet the contractual and legal obligations of AI explainability during the acceptance phase of their applications

▶What this book covers
⦁ Chapter 1, Explaining Artificial Intelligence with Python
Explainable AI (XAI) cannot be summed up in a single method for all participants in a project. When a patient shows signs of COVID-19, West Nile virus, or any other virus, how can a general practitioner and an AI system work together as a cobot to determine the origin of the disease? The chapter describes a case study and an AI solution built from scratch to trace the origins of a patient's infection, using a Python program based on k-nearest neighbors and Google Location History.
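
As a minimal, hedged sketch of the k-nearest neighbors idea the chapter builds on (the feature names and values below are illustrative stand-ins, not the book's case-study data):

```python
# Minimal sketch of explaining an infection origin with k-nearest neighbors.
# The features and labels are hypothetical, not the book's dataset.
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical per-patient features: [mosquito_exposure, crowd_exposure]
X_train = [[0.9, 0.2], [0.8, 0.3], [0.1, 0.9], [0.2, 0.8]]
y_train = ["West Nile virus", "West Nile virus", "COVID-19", "COVID-19"]

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

# The explanation is the neighbors themselves: which known cases does
# this patient's exposure profile most resemble?
patient = [[0.85, 0.25]]
print(knn.predict(patient))        # predicted origin of the infection
print(knn.kneighbors(patient))     # distances and indices of the nearest cases
```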

⦁ Chapter 2, White Box XAI for AI Bias and Ethics
Artificial intelligence might sometimes have to make life or death decisions. When the autopilot of an autonomous vehicle detects pedestrians suddenly crossing a road, what decision should be made when there is no time to stop?
Can the vehicle change lanes without hitting other pedestrians or vehicles? The chapter describes MIT's Moral Machine experiment and builds a Python program that uses decision trees to make real-life decisions.
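
A minimal sketch of the decision-tree step, with hypothetical features standing in for the chapter's dataset:

```python
# Sketch of a white-box decision tree for the lane-change dilemma.
# Features and labels are illustrative, not the book's data.
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["pedestrians_ahead", "pedestrians_other_lane", "can_brake"]
X = [[2, 0, 0], [2, 3, 0], [0, 0, 1], [1, 0, 1]]
y = ["change_lane", "stay", "brake", "brake"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text prints the learned rules, which is what makes the
# tree a white-box, explainable model.
print(export_text(tree, feature_names=feature_names))
```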

⦁ Chapter 3, Explaining Machine Learning with Facets
Machine learning is a data-driven training process. Yet, companies rarely provide clean data or even all of the data required to start a project. Furthermore, the data often comes from different sources and formats. Machine learning models involve complex mathematics, even when the data seems acceptable. A project can rapidly become a nightmare from the start.
This chapter implements Facets in Python in a Jupyter Notebook on Google Colaboratory. Facets provides multiple views and tools to track the variables that distort the ML model's results. Finding counterfactual data points, and identifying the causes, can save hours of otherwise tedious classical analysis.
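
A hedged sketch of embedding Facets Overview in a notebook, following the pattern in the Facets README (the toy DataFrames stand in for the chapter's dataset):

```python
# Build Facets Overview statistics for two toy DataFrames and render them
# in a Jupyter/Colab cell. Requires the facets-overview pip package.
import base64
import pandas as pd
from facets_overview.generic_feature_statistics_generator import (
    GenericFeatureStatisticsGenerator)
from IPython.display import display, HTML

train_df = pd.DataFrame({"age": [22, 35, 58], "income": [20, 60, 90]})
test_df = pd.DataFrame({"age": [25, 40], "income": [25, 70]})

# Serialize the feature statistics that the facets-overview element visualizes.
proto = GenericFeatureStatisticsGenerator().ProtoFromDataFrames(
    [{"name": "train", "table": train_df}, {"name": "test", "table": test_df}])
protostr = base64.b64encode(proto.SerializeToString()).decode("utf-8")

HTML_TEMPLATE = """
<script src="https://cdnjs.cloudflare.com/ajax/libs/webcomponentsjs/1.3.3/webcomponents-lite.js"></script>
<link rel="import" href="https://raw.githubusercontent.com/PAIR-code/facets/master/facets-dist/facets-jupyter.html">
<facets-overview id="elem"></facets-overview>
<script>document.querySelector("#elem").protoInput = "{protostr}";</script>
"""
display(HTML(HTML_TEMPLATE.format(protostr=protostr)))
```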

⦁ Chapter 4, Microsoft Azure Machine Learning Model Interpretability with SHAP
Artificial intelligence designers and developers spend days searching for the right ML model that fits the specifications of a project. Explainable AI provides valuable time-saving information. However, nobody has the time to develop an explainable AI solution for every single ML model on the market!
This chapter introduces model-agnostic explainable AI through a Python program that implements Shapley values with SHAP based on Microsoft Azure's research. This game theory approach provides explanations no matter which ML model it faces. The Python program provides explainable AI graphs showing which variables influence the outcome of a specific result.
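
A hedged sketch of the SHAP pattern on a public scikit-learn dataset (the chapter's notebook uses its own data and model):

```python
# Model-agnostic Shapley explanations with the shap library.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# KernelExplainer only needs predict_proba, which is what makes the
# game-theory approach model-agnostic.
explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X, 50))
shap_values = explainer.shap_values(X.iloc[:5, :])

# Which variables influenced these predictions, and in which direction.
shap.summary_plot(shap_values, X.iloc[:5, :])
```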

⦁ Chapter 5, Building an Explainable AI Solution from Scratch
Artificial intelligence has progressed so fast in the past few years that moral obligations have sometimes been overlooked. Eradicating bias has become critical to the survival of AI. Machine learning decisions based on racial or ethnic criteria were once accepted in the United States; however, it has now become an obligation to track bias and eliminate those features in datasets that could be using discrimination as information.
This chapter shows how to eradicate bias and build an ethical ML system in Python with Google's What-If Tool and Facets. The program will take moral, legal, and ethical parameters into account from the very beginning.
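
A minimal sketch of the first step the chapter takes, removing features that could carry discrimination as information (column names are hypothetical):

```python
# Drop sensitive features before training so the model cannot use them.
import pandas as pd

df = pd.DataFrame({
    "age": [25, 47, 33],
    "ethnic_origin": ["A", "B", "A"],   # hypothetical sensitive feature
    "priors": [0, 2, 1],
})

SENSITIVE = ["ethnic_origin"]            # features that encode discrimination
features = df.drop(columns=SENSITIVE)    # train only on the remaining columns
print(features.columns.tolist())         # ['age', 'priors']
```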


⦁ Chapter 6, AI Fairness with Google's What-If Tool (WIT)
Google's PAIR (People + AI Research – https://research.google/teams/brain/pair/) team designed the What-If Tool (WIT) to investigate the fairness of an AI model. This chapter takes us deeper into Explainable AI, introducing a Python program that creates a deep neural network (DNN) with TensorFlow, uses a SHAP explainer, and creates a WIT instance.
The WIT will provide ground truth, cost ratio fairness, and PR curve visualizations. The Python program shows how ROC curves, AUC, slicing, and PR curves can pinpoint the variables that produced a result, using AI fairness and ethical tools to make predictions.
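
A hedged sketch of instantiating WIT in a notebook with the witwidget package; the predict function below is a placeholder for the chapter's trained DNN:

```python
# Create a What-If Tool instance from tf.train.Example records.
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def make_example(features, label):
    # Serialize one record in the format WIT expects.
    return tf.train.Example(features=tf.train.Features(feature={
        "x": tf.train.Feature(float_list=tf.train.FloatList(value=features)),
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }))

examples = [make_example([0.1, 0.5], 0), make_example([0.9, 0.3], 1)]

def predict_fn(examples_to_infer):
    # Placeholder for the chapter's TensorFlow DNN:
    # return [P(class 0), P(class 1)] per example.
    return [[0.5, 0.5] for _ in examples_to_infer]

config = (WitConfigBuilder(examples)
          .set_custom_predict_fn(predict_fn)
          .set_label_vocab(["rejected", "accepted"]))
WitWidget(config, height=600)
```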

⦁ Chapter 7, A Python Client for Explainable AI Chatbots
The future of artificial intelligence will increasingly involve bots and chatbots. This chapter shows how chatbots can provide XAI through a conversational user interface (CUI) with Google Dialogflow. A Google Dialogflow Python client will be implemented with an API that communicates with Google Dialogflow.
The goal is to simulate user interactions for decision-making XAI based on the Markov Decision Process (MDP). The XAI dialog is simulated in a Jupyter Notebook, and the agent is tested on Google Assistant.
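
A hedged sketch of such a client using the google-cloud-dialogflow package's v2 API (project and session IDs are placeholders):

```python
# Send a user utterance to a Dialogflow agent and return its reply.
from google.cloud import dialogflow

def detect_intent(project_id, session_id, text, language_code="en"):
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    text_input = dialogflow.TextInput(text=text, language_code=language_code)
    query_input = dialogflow.QueryInput(text=text_input)

    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input})
    # The fulfillment text carries the agent's explanation back to the user.
    return response.query_result.fulfillment_text

print(detect_intent("my-gcp-project", "session-001", "Why was this predicted?"))
```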

⦁ Chapter 8, Local Interpretable Model-Agnostic Explanations (LIME)
This chapter takes model agnosticism further with Local Interpretable Model-agnostic Explanations (LIME). The chapter shows how to create a model-agnostic explainable AI Python program that can explain the results of random forests, k-nearest neighbors, gradient boosting, decision trees, and extra trees.
The Python program creates a unique LIME explainer with visualizations no matter which ML model produces the results.
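
A minimal sketch of that pattern with the lime package on a public dataset:

```python
# Explain one prediction of a random forest with a LIME tabular explainer.
import lime.lime_tabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(data.data, data.target)

explainer = lime.lime_tabular.LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification")

# LIME fits a local surrogate around one instance; it only needs
# predict_proba, which keeps the explainer model-agnostic.
exp = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())   # (feature condition, local weight) pairs
```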

⦁ Chapter 9, The Counterfactual Explanations Method
It is sometimes impossible to determine why a data point has not been classified as expected. No matter how we look at it, we cannot pinpoint which feature or features generated the error.
Visualizing counterfactual explanations can display the features of a data point that was classified in the wrong category right next to the closest data point that was classified in the right category. An explanation can be rapidly tracked down with the Python program created in this chapter with a WIT.
The WIT created in this chapter's Python program can define the belief, truth, justification, and sensitivity of a prediction.
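
A from-scratch sketch of the nearest-counterfactual idea (the chapter itself relies on the WIT's built-in counterfactual view):

```python
# Find the closest training point that did land in the expected class.
import numpy as np

def nearest_counterfactual(x, X, y, wanted_class):
    # L2 distance from the misclassified point to every point of the
    # desired class; the nearest one is the counterfactual explanation.
    candidates = X[y == wanted_class]
    distances = np.linalg.norm(candidates - x, axis=1)
    return candidates[np.argmin(distances)]

X = np.array([[0.1, 0.2], [0.9, 0.8], [0.85, 0.9], [0.2, 0.1]])
y = np.array([0, 1, 1, 0])
x_misclassified = np.array([0.7, 0.75])   # expected class 1, predicted 0

cf = nearest_counterfactual(x_misclassified, X, y, wanted_class=1)
print("Closest correctly classified point:", cf)
print("Feature deltas to inspect:", cf - x_misclassified)
```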

⦁ Chapter 10, Contrastive XAI
Sometimes, even the most potent XAI tools cannot pinpoint the reason an ML program made a decision. The Contrastive Explanation Method (CEM), implemented in Python in this chapter, will find precisely how a data point crossed the line into another class.
The program created in this chapter prepares an MNIST dataset for CEM, defines a CNN, tests the accuracy of the CNN, and defines and trains an autoencoder. From there, the program creates a CEM explainer that provides visual explanations of pertinent negatives and positives.
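
A from-scratch toy sketch of the pertinent-negative idea behind CEM, using a linear model instead of the chapter's CNN and autoencoder:

```python
# Find a minimal perturbation that pushes a point over the decision
# boundary into another class (a toy "pertinent negative").
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.2, 0.1], [0.3, 0.2], [0.8, 0.9], [0.7, 0.8]])
y = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y)

x = np.array([0.35, 0.30])                       # currently classified as 0
direction = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# Step toward the boundary until the predicted class flips; the
# accumulated delta is what must minimally change.
delta = np.zeros_like(x)
while clf.predict([x + delta])[0] == clf.predict([x])[0]:
    delta += 0.01 * direction

print("Pertinent negative (minimal change):", delta)
print("New class:", clf.predict([x + delta])[0])
```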

⦁ Chapter 11, Anchors XAI
Rule-based explanations have often been associated with hard-coded expert system rules. But what if an XAI tool could generate rules automatically to explain a result? Anchors are high-precision rules that are produced automatically.
This chapter's Python program creates anchors for text classification and images. The program pinpoints the precise pixels of an image that made a model change its mind and select a class.
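
A hedged sketch using alibi's AnchorTabular, one open-source implementation of the technique (the chapter also applies anchors to text and images):

```python
# Generate a high-precision IF-THEN rule (an anchor) for one prediction.
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(data.data, data.target)

explainer = AnchorTabular(clf.predict, feature_names=data.feature_names)
explainer.fit(data.data)                 # discretizes features into rule bins

# An anchor "locks" the prediction: while the rule holds, the model's
# output stays the same with high precision.
explanation = explainer.explain(data.data[0], threshold=0.95)
print("Anchor rule:", " AND ".join(explanation.anchor))
print("Precision:", explanation.precision)
```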

⦁ Chapter 12, Cognitive XAI
Human cognition has provided the framework for the incredible technical progress made by humanity in the past few centuries, including artificial intelligence. This chapter puts human cognition to work to build cognitive rule bases for XAI.
The chapter explains how to build a cognitive dictionary and a cognitive sentiment analysis function. A Python program shows how to measure marginal cognitive contributions.
This chapter sums up the essence of XAI, preparing the reader to build the future of artificial intelligence, grounded in real human intelligence and ethics.
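
A minimal sketch of a cognitive dictionary and a marginal-contribution measure; the words and weights are illustrative, not the book's:

```python
# Score a sentence with a hand-built cognitive dictionary and report each
# matched word's marginal share of the explanation.
COGNITIVE_DICT = {
    "excellent": 1.0, "good": 0.5, "acceptable": 0.1,
    "poor": -0.5, "unfair": -1.0,
}

def cognitive_sentiment(text):
    words = text.lower().split()
    hits = [(w, COGNITIVE_DICT[w]) for w in words if w in COGNITIVE_DICT]
    score = sum(v for _, v in hits)
    total = sum(abs(v) for _, v in hits)
    # Marginal cognitive contribution: each word's normalized weight.
    contributions = {w: v / total for w, v in hits} if total else {}
    return score, contributions

score, contributions = cognitive_sentiment("The decision was good but unfair")
print(score)           # 0.5 + (-1.0) = -0.5
print(contributions)   # {'good': 0.333..., 'unfair': -0.666...}
```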

Author

▶About the Author
- Denis Rothman
Denis Rothman graduated from Sorbonne University and Paris-Diderot University, writing one of the very first word2vector embedding solutions. He began his career authoring one of the first AI cognitive natural language processing (NLP) chatbots, applied as a language teacher for Moet et Chandon and other companies. He has also authored an AI resource optimizer for IBM and apparel producers. He then authored an advanced planning and scheduling (APS) solution that is used worldwide. Denis is an expert in explainable AI (XAI), having added interpretable, mandatory, acceptance-based explanation data and explanation interfaces to the solutions implemented for major corporate aerospace, apparel, and supply chain projects.

Development/Programming Bestsellers

  • 멀티패러다임 프로그래밍 (유인동)
  • 조코딩의 AI 비트코인 자동 매매 시스템 만들기 (조동근)
  • 랭체인 & 랭그래프로 AI 에이전트 개발하기 (서지영)
  • 요즘 우아한 AI 개발 (우아한형제들)
  • 윌 라슨의 엔지니어링 리더십 (윌 라슨, 임백준)
  • 랭체인과 RAG로 배우는 실전 LLM 애플리케이션 개발 (양기빈, 조국일)
  • 한 권으로 끝내는 실전 LLM 파인튜닝 (강다솔)
  • 주니어 백엔드 개발자가 반드시 알아야 할 실무 지식 (최범균)
  • 이펙티브 소프트웨어 설계 (토마스 레렉, 존 스키트)
  • MCP 혁신: 클로드로 엑셀, 한글, 휴가 등록부터 결재문서 자동화까지 with python (이호준, 차경림)
  • LLM을 활용한 실전 AI 애플리케이션 개발 (허정준, 정진호)
  • 챗GPT로 만드는 주식 & 암호화폐 자동매매 시스템 (설근민)
  • 혼자 공부하는 컴퓨터 구조+운영체제 (강민철)
  • 카프카 커넥트 (미카엘 메종, 케이트 스탠리)
  • 개정판 | 혼자 공부하는 머신러닝+딥러닝 (박해선)
  • 플랫폼 엔지니어링 (이언 놀런드, 카미유 푸르니에)
  • 개정판 | <소문난 명강의> 레트로의 유니티 6 게임 프로그래밍 에센스 (이제민)
  • 이지 러스트 (데이브 매클라우드, 이지호)
  • 이펙티브 소프트웨어 아키텍처 (올리버 골드만, 최희철)
  • 소프트웨어 엔지니어 가이드북 (게르겔리 오로스, 이민석)
