▶Book Description
Core ML is a popular framework from Apple, with APIs designed to support various machine learning tasks. It allows you to take trained machine learning models and integrate them into your iOS apps.
Machine Learning with Core ML is a fun and practical guide that not only demystifies Core ML but also sheds light on machine learning. In this book, you’ll walk through realistic and interesting examples of machine learning in the context of mobile platforms (specifically iOS). You’ll learn to implement Core ML for visual-based applications using the principles of transfer learning and neural networks. Having got to grips with the basics, you’ll discover a series of seven examples, each providing a new use case that shows how machine learning can be applied, along with the related concepts.
By the end of the book, you will have the skills required to put machine learning to work in your own applications, using the Core ML APIs.
▶What You Will Learn
⦁ Understand components of an ML project using algorithms, problems, and data
⦁ Master Core ML by obtaining and importing machine learning models, and generating classes
⦁ Prepare data for a machine learning model and interpret results for optimized solutions
⦁ Create and optimize custom layers for operations that Core ML doesn’t support out of the box
⦁ Apply Core ML to image and video data using CNNs
⦁ Learn how RNNs can be used to recognize sketches and augment drawing
⦁ Use transfer learning with Core ML to perform style transfer on images
▶Key Features
⦁ Explore the concepts of machine learning and Apple’s Core ML APIs
⦁ Use Core ML to understand and transform images and videos
⦁ Exploit the power of CNNs and RNNs in iOS applications
▶Who This Book Is For
Machine Learning with Core ML is for you if you are an intermediate iOS developer interested in applying machine learning to your mobile apps. This book is also for machine learning developers and deep learning practitioners who want to bring the power of neural networks to their iOS apps. Some exposure to machine learning concepts would be beneficial but not essential, as this book acts as a launchpad into the world of machine learning for mobile developers.
▶What this book covers
⦁ Chapter 1, Introduction to Machine Learning, provides a brief introduction to ML, including some explanation of the core concepts, the types of problems, algorithms, and the general workflow of creating and using an ML model. The chapter concludes by exploring some examples where ML is being applied.
⦁ Chapter 2, Introduction to Apple Core ML, introduces Core ML, discussing what it is, what it is not, and the general workflow for using it.
⦁ Chapter 3, Recognizing Objects in the World, walks through building a Core ML application from start to finish. By the end of the chapter, you will have been through the whole process of obtaining a model, importing it into the project, and making use of it.
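The obtain-import-use workflow described above can be sketched as follows. This is a minimal illustration, not the chapter's actual code: it assumes a model file (for example, MobileNet.mlmodel) has been added to the Xcode project, for which Xcode generates a typed wrapper class (here hypothetically named `MobileNet`) exposing an `image` input and a `classLabel` output.

```swift
import CoreML

// Sketch of the basic Core ML workflow. `MobileNet` stands in for whatever
// class Xcode generates when a .mlmodel file is added to the project.
func classify(image pixelBuffer: CVPixelBuffer) {
    // 1. Instantiate the generated model class
    let model = MobileNet()
    do {
        // 2. Run inference; input/output names come from the model's metadata
        let output = try model.prediction(image: pixelBuffer)
        // 3. Use the result, e.g. the top class label
        print("Predicted: \(output.classLabel)")
    } catch {
        print("Prediction failed: \(error)")
    }
}
```

The generated class wraps the lower-level `MLModel` API, so you get compile-time checking of the model's input and output names.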
⦁ Chapter 4, Emotion Detection with CNNs, explores the possibilities of computers understanding us better, specifically our mood. We start by building our intuition of how ML can learn to infer your mood, and then put this into practice by building an application that does just that. We also use this as an opportunity to introduce the Vision framework and see how it complements Core ML.
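To illustrate how Vision complements Core ML, here is a hedged sketch of the typical pairing: Vision takes care of image scaling and format conversion before handing the data to the Core ML model. `EmotionClassifier` is a hypothetical generated model class standing in for the chapter's model.

```swift
import Vision
import CoreML

// Sketch: wrap a Core ML model in Vision so that Vision handles image
// preprocessing. `EmotionClassifier` is an assumed Xcode-generated class.
func detectEmotion(in cgImage: CGImage) {
    guard let visionModel = try? VNCoreMLModel(for: EmotionClassifier().model) else { return }
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Vision returns classification observations sorted by confidence
        if let top = (request.results as? [VNClassificationObservation])?.first {
            print("\(top.identifier): \(top.confidence)")
        }
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```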
⦁ Chapter 5, Locating Objects in the World, goes beyond recognizing a single object to being able to recognize and locate multiple objects within a single image through object detection. After building our understanding of how it works, we move on to applying it to a visual search application that filters not only by object but also by composition of objects. In this chapter, we'll also get an opportunity to extend Core ML by implementing custom layers.
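Extending Core ML with custom layers is done by conforming to the `MLCustomLayer` protocol. The sketch below shows the protocol's shape with a trivial elementwise ReLU; the layer name and behavior are illustrative, not the chapter's actual layer.

```swift
import CoreML

// Minimal sketch of Core ML's MLCustomLayer protocol, used to support layers
// a converter can't map to built-in operations. The implementation here is a
// simple elementwise ReLU for illustration only.
@objc(ReLULayer) class ReLULayer: NSObject, MLCustomLayer {
    required init(parameters: [String: Any]) throws {
        super.init()
    }

    func setWeightData(_ weights: [Data]) throws {
        // ReLU has no learned weights
    }

    // For an elementwise op, output shapes match input shapes
    func outputShapes(forInputShapes inputShapes: [[NSNumber]]) throws -> [[NSNumber]] {
        return inputShapes
    }

    func evaluate(inputs: [MLMultiArray], outputs: [MLMultiArray]) throws {
        for (input, output) in zip(inputs, outputs) {
            for i in 0..<input.count {
                output[i] = NSNumber(value: max(0, input[i].floatValue))
            }
        }
    }
}
```

A production layer would typically also vectorize `evaluate` (for example, with Accelerate) or provide a GPU path, but the protocol conformance above is the core of the technique.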
⦁ Chapter 6, Creating Art with Style Transfer, uncovers the secrets behind the popular photo effects application, Prisma. We start by discussing how a model can be taught to differentiate between the style and content of an image, and then go on to build a version of Prisma that applies a style from one image to another. We wrap up this chapter by looking at ways to optimize the model.
⦁ Chapter 7, Assisted Drawing with CNNs, walks through building an application that can recognize a user's sketch using the same concepts that have been introduced in previous chapters. Once what the user is trying to sketch has been recognized, we look at how we can find similar substitutes using the feature vectors from a CNN.
⦁ Chapter 8, Assisted Drawing with RNNs, builds on the previous chapter and explores replacing the convolutional neural network (CNN) with a recurrent neural network (RNN) for sketch classification, thus introducing RNNs and showing how they can be applied to images. Along with a discussion on learning sequences, we will also delve into the details of how to download and compile Core ML models remotely.
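Downloading and compiling models at runtime, as mentioned above, hinges on one Core ML API: a raw .mlmodel file fetched from the network must be compiled into a .mlmodelc bundle on-device before it can be loaded. A minimal sketch, assuming `modelURL` points at a file already downloaded to local storage:

```swift
import CoreML

// Sketch of loading a Core ML model downloaded at runtime. The raw .mlmodel
// must first be compiled on-device into a .mlmodelc directory.
func loadRemoteModel(from modelURL: URL) throws -> MLModel {
    // Compile the .mlmodel; the result lands in a temporary location
    let compiledURL = try MLModel.compileModel(at: modelURL)
    // In a real app you would move compiledURL somewhere permanent
    // (e.g. Application Support) to avoid recompiling on every launch
    return try MLModel(contentsOf: compiledURL)
}
```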
⦁ Chapter 9, Object Segmentation Using CNNs, walks through building an ActionShot photography application. In doing so, we introduce another model and its accompanying concepts, and get some hands-on experience of preparing and processing data.
⦁ Chapter 10, An Introduction to Create ML, is the last chapter. We introduce Create ML, a framework for creating and training Core ML models within Xcode using Swift. By the end of this chapter, you will know how to quickly create, train, and deploy custom models.
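To give a flavor of the Create ML workflow described above, here is a hedged sketch of training an image classifier in a macOS playground, assuming a directory of training images organized into one folder per label. The paths and metadata values are placeholders.

```swift
import CreateML
import Foundation

// Sketch: train an image classifier with Create ML from labeled folders,
// where each subdirectory name is treated as a class label.
let trainingDir = URL(fileURLWithPath: "/path/to/TrainingData")
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir))

// Export the trained model as a .mlmodel for use with Core ML in an iOS app
let metadata = MLModelMetadata(
    author: "Your Name",
    shortDescription: "A custom image classifier",
    version: "1.0")
try classifier.write(
    to: URL(fileURLWithPath: "/path/to/Classifier.mlmodel"),
    metadata: metadata)
```

The exported .mlmodel file can then be dropped into an Xcode project and used exactly like the pre-trained models covered in the earlier chapters.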