Implement various state-of-the-art architectures, such as GANs and autoencoders, for image generation using TensorFlow 2.x from scratch
▶Book Description
The emerging field of Generative Adversarial Networks (GANs) has made it possible to generate new images that are indistinguishable from those in existing datasets. With this hands-on book, you'll not only develop image generation skills but also gain a solid understanding of the underlying principles.
Starting with an introduction to the fundamentals of image generation using TensorFlow, this book covers Variational Autoencoders (VAEs) and GANs. You'll discover how to build models for different applications as you get to grips with performing face swaps using deepfakes, neural style transfer, image-to-image translation, turning simple images into photorealistic images, and much more. You'll also understand how and why to construct state-of-the-art deep neural networks using advanced techniques such as spectral normalization and self-attention layers before working with advanced models for face generation and editing. Later chapters introduce photo restoration, text-to-image synthesis, video retargeting, and neural rendering. Throughout the book, you'll learn to implement models from scratch in TensorFlow 2.x, including PixelCNN, VAE, DCGAN, WGAN, pix2pix, CycleGAN, StyleGAN, GauGAN, and BigGAN.
By the end of this book, you'll be well versed in TensorFlow and be able to implement image generation techniques confidently.
▶What You Will Learn
-Train models on face datasets and use them to explore latent spaces for editing new faces
-Get to grips with swapping faces with deepfakes
-Perform style transfer to convert a photo into a painting
-Build and train pix2pix, CycleGAN, and BicycleGAN for image-to-image translation
-Use iGAN to understand manifold interpolation and GauGAN to turn simple images into photorealistic images
-Become well versed in attention-based generative models such as SAGAN and BigGAN
-Generate high-resolution photos with Progressive GAN and StyleGAN
▶Key Features
-Understand the different architectures for image generation, including autoencoders and GANs
-Build models that can edit an image of your face, turn photos into paintings, and generate photorealistic images
-Discover how you can build deep neural networks with advanced TensorFlow 2.x features
▶Who This Book Is For
The Hands-On Image Generation with TensorFlow book is for deep learning engineers, practitioners, and researchers who have basic knowledge of convolutional neural networks and want to learn various image generation techniques using TensorFlow 2.x. You'll also find this book useful if you are an image processing professional or computer vision engineer looking to explore state-of-the-art architectures to improve and enhance images and videos. Knowledge of Python and TensorFlow will help you to get the best out of this book.
▶What this book covers
- Chapter 1, Getting Started with Image Generation Using TensorFlow, walks through the basics of pixel probability and uses it to build our first model to generate handwritten digits.
- Chapter 2, Variational Autoencoder, explains how to build a variational autoencoder (VAE) and use it to generate and edit faces (a minimal VAE sampling sketch appears after this chapter list).
- Chapter 3, Generative Adversarial Network, introduces the fundamentals of GANs and builds a DCGAN to generate photorealistic images (a minimal DCGAN generator sketch also appears after this chapter list). We'll then learn about newer adversarial losses that stabilize training.
- Chapter 4, Image-to-Image Translation, covers several models and interesting applications. We will first implement pix2pix to convert sketches into photorealistic photos. Then we'll use CycleGAN to transform horses into zebras. Lastly, we will use BicycleGAN to generate a variety of shoes.
- Chapter 5, Style Transfer, explains how to extract the style from a painting and transfer it into a photo. We'll also learn advanced techniques to make neural style transfer run faster at runtime, and see how it is used in state-of-the-art GANs.
- Chapter 6, AI Painter, goes through the underlying principles of image editing and transformation using interactive GAN (iGAN) as an example. Then we will build a GauGAN to create photorealistic building facades from a simple segmentation map.
- Chapter 7, High Fidelity Face Generation, shows how to build a StyleGAN using techniques from style transfer. Before that, we will learn to grow network layers progressively by building a Progressive GAN.
- Chapter 8, Self-Attention for Image Generation, shows how to build self-attention into a Self-Attention GAN (SAGAN) and a BigGAN for conditional image generation.
- Chapter 9, Video Synthesis, demonstrates how to use autoencoders to create a deepfake video. Along the way, we'll learn how to use OpenCV and dlib for face processing.
- Chapter 10, Road Ahead, reviews and summarizes the generative techniques we have learned. Then we will look at how they are used as the basis of up-and-coming applications, including text-to-image synthesis, video compression, and video retargeting.
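As a taste of the from-scratch style used throughout the book, here is a minimal sketch of the VAE sampling (reparameterization) step that Chapter 2 builds on. This is not code from the book: the Sampling layer name and the toy shapes are illustrative assumptions, using only standard TensorFlow 2.x Keras APIs.

    import tensorflow as tf
    from tensorflow.keras import layers

    class Sampling(layers.Layer):
        """Draws z from N(z_mean, exp(z_log_var)) using the reparameterization trick."""
        def call(self, inputs):
            z_mean, z_log_var = inputs
            epsilon = tf.random.normal(shape=tf.shape(z_mean))
            # z = mean + std * noise keeps the sampling step differentiable
            # with respect to the encoder outputs
            return z_mean + tf.exp(0.5 * z_log_var) * epsilon

    # Toy usage: a batch of 4 latent distributions with 2 dimensions each
    z_mean = tf.zeros((4, 2))
    z_log_var = tf.zeros((4, 2))
    z = Sampling()((z_mean, z_log_var))  # shape (4, 2)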
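Similarly, here is a minimal sketch of a DCGAN-style generator of the kind Chapter 3 builds. Again, this is not the book's code: the build_dcgan_generator name, the layer widths, and the 64x64 output resolution are assumptions chosen for illustration.

    import tensorflow as tf
    from tensorflow.keras import layers

    def build_dcgan_generator(latent_dim=100):
        """Maps a latent vector to a 64x64 RGB image by repeated upsampling."""
        return tf.keras.Sequential([
            layers.Input(shape=(latent_dim,)),
            layers.Dense(4 * 4 * 512, use_bias=False),
            layers.BatchNormalization(),
            layers.ReLU(),
            layers.Reshape((4, 4, 512)),
            # Each transposed convolution doubles the resolution: 4 -> 8 -> 16 -> 32 -> 64
            layers.Conv2DTranspose(256, 4, strides=2, padding='same', use_bias=False),
            layers.BatchNormalization(),
            layers.ReLU(),
            layers.Conv2DTranspose(128, 4, strides=2, padding='same', use_bias=False),
            layers.BatchNormalization(),
            layers.ReLU(),
            layers.Conv2DTranspose(64, 4, strides=2, padding='same', use_bias=False),
            layers.BatchNormalization(),
            layers.ReLU(),
            # tanh output matches training images scaled to the [-1, 1] range
            layers.Conv2DTranspose(3, 4, strides=2, padding='same', activation='tanh'),
        ])

    # Usage: generate a batch of fake images from random noise
    generator = build_dcgan_generator()
    noise = tf.random.normal([8, 100])
    fake_images = generator(noise)  # shape (8, 64, 64, 3)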