
Generative models

Another unsupervised approach is based on generative models. The concept is not very different from the one already discussed for supervised algorithms but, in this case, the data generating process doesn't contain any labels. Hence, the goal is to model a parametrized distribution and to optimize its parameters so that the distance between the candidate distribution and the data generating process is minimized:
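In symbols, denoting the data generating process by p_data(x) and the candidate parametrized distribution by p(x; θ) (this notation and the argmin formulation are introduced here for clarity; d(·, ·) stands for a generic distance or divergence between distributions):

\hat{\theta} = \underset{\theta}{\operatorname{argmin}} \; d\left(p_{data}(x), \, p(x; \theta)\right)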

The process is generally based on the Kullback-Leibler divergence or other similar measures:
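For a discrete sample space, the Kullback-Leibler divergence takes its standard form (the loss L mentioned below is assumed to coincide with this quantity):

L = D_{KL}(p_{data} \,\|\, p) = \sum_{x} p_{data}(x) \log \frac{p_{data}(x)}{p(x; \theta)}

The divergence is always non-negative and vanishes only when the two distributions coincide (note that it is not symmetric, so it is not a true metric).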

At the end of the training phase, we assume L → 0, so that p ≈ p_data. In this way, the analysis is not limited to a subset of possible samples but extends to the entire distribution. Using a generative model, it's possible to draw new samples that can be very different from the ones selected for the training process, yet they always belong to the same distribution and are therefore (potentially) always acceptable.
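As a minimal sketch of this idea (not taken from the text: the dataset, the choice of a Gaussian mixture, and all parameters are assumptions made for illustration), the following Python snippet fits a parametrized distribution to synthetic two-dimensional data and then draws brand-new samples from it. Fitting maximizes the likelihood of the training data, which is equivalent to minimizing the Kullback-Leibler divergence from the empirical distribution:

import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic data generating process: two Gaussian blobs in 2D
# (an assumption made for this example)
rng = np.random.RandomState(1000)
X = np.concatenate([
    rng.normal(loc=-2.0, scale=0.5, size=(500, 2)),
    rng.normal(loc=2.0, scale=0.8, size=(500, 2))
])

# Candidate parametrized distribution p(x; theta): a Gaussian mixture.
# fit() maximizes the likelihood via EM, driving the divergence between
# the model and the empirical distribution toward a minimum
gm = GaussianMixture(n_components=2, random_state=1000)
gm.fit(X)

# Draw new samples: none of them is a training point, but they all
# belong (approximately) to the same learned distribution
X_new, _ = gm.sample(n_samples=10)
print(X_new)

The same workflow (fit a parametrized model, then sample from it) underlies far more expressive generative models, including the deep architectures mentioned below.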

For example, a Generative Adversarial Network (GAN) is a particular deep learning model that is able to learn the distribution of an image set, producing new samples that are almost indistinguishable (from a visual semantic point of view) from the training ones. As unsupervised learning is the main topic of this book, we won't dwell further on GANs in this introduction. All these concepts will be extensively discussed (with practical examples) in the following chapters.