Generative Models
Formulate generative modeling as a density estimation problem:
- Explicit density estimation: explicitly define and solve for \(p_{\mathrm{model}}(x)\)
- Implicit density estimation: learn model that can sample from \(p_{\mathrm{model}}(x)\) without explicitly defining it.
1. Explicit density
1.1 PixelRNN
Use the chain rule to decompose the likelihood of an image \(x\) into a product of per-pixel conditionals: \(p(x) = \prod_{i=1}^{n} p(x_i \mid x_1, \ldots, x_{i-1})\), where each pixel is predicted from the previously generated pixels.
Note that there are no labels; the input data alone is used to train the probability model.
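The chain-rule factorization can be sketched with a toy example. The weights below are hypothetical stand-ins for a trained PixelRNN; the point is only how the joint likelihood of an image decomposes into per-pixel conditionals in raster order.

```python
import itertools
import math

import numpy as np

# Toy autoregressive model over a flattened 2x2 binary image.
# Hypothetical fixed weights stand in for a trained PixelRNN; what matters
# is the chain-rule factorization p(x) = prod_i p(x_i | x_1..x_{i-1}).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))   # W[i, j] weights pixel j when predicting pixel i
b = rng.normal(size=4)

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def log_likelihood(x):
    """Sum of log p(x_i | x_<i) over pixels in raster order."""
    ll = 0.0
    for i in range(4):
        # Conditional Bernoulli parameter from previously generated pixels only.
        p = sigmoid(float(W[i, :i] @ x[:i]) + b[i])
        ll += math.log(p if x[i] == 1 else 1.0 - p)
    return ll

# Sanity check: because each conditional is normalized, the probabilities
# of all 2^4 possible images must sum to 1.
total = sum(math.exp(log_likelihood(np.array(x)))
            for x in itertools.product([0, 1], repeat=4))
```

Sampling from such a model is sequential for the same reason: each pixel is drawn from its conditional given the pixels already generated, which is why PixelRNN generation is slow.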
1.2 PixelCNN
1.3 Summary
2. Implicit Density
2.1 Background: Autoencoders
The decoder and the reconstructed input are used only to compute the reconstruction loss that trains the autoencoder.
Significance: the encoder can be used to initialize a supervised model.
We can use a large amount of unlabeled data to train an unsupervised model that learns general-purpose features.
Then we can use its encoder to initialize a supervised model.
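A minimal sketch of this idea, assuming a linear encoder/decoder and synthetic unlabeled data (both illustrative choices, not from the notes): train on reconstruction loss alone, then reuse the encoder weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unlabeled data: 200 examples in 10-D lying near a 2-D subspace.
basis = rng.normal(size=(2, 10))
X = rng.normal(size=(200, 2)) @ basis + 0.01 * rng.normal(size=(200, 10))

# Linear autoencoder: encoder W_e (10 -> 2), decoder W_d (2 -> 10).
W_e = 0.1 * rng.normal(size=(10, 2))
W_d = 0.1 * rng.normal(size=(2, 10))

lr = 0.01
for _ in range(1000):
    Z = X @ W_e              # latent features z
    X_hat = Z @ W_d          # reconstruction
    err = X_hat - X          # used only to form the loss; no labels anywhere
    # Gradients of the squared reconstruction error (up to a constant factor).
    grad_Wd = Z.T @ err / len(X)
    grad_We = X.T @ (err @ W_d.T) / len(X)
    W_d -= lr * grad_Wd
    W_e -= lr * grad_We

loss = float(np.mean((X @ W_e @ W_d - X) ** 2))
# The trained encoder W_e can now initialize the first layer of a
# supervised model, with a new classifier head trained on labeled data.
```

The design point: nothing in the training loop touches a label, yet the encoder ends up with features (here, the dominant subspace of the data) that a downstream supervised model can start from.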
2.2 Variational Autoencoders (VAE)
Autoencoders can reconstruct data and can learn features to initialize a supervised model.
The features capture factors of variation in the training data.
But we can't generate new images from a plain autoencoder, because we don't know the distribution of the latent code \(z\) and so cannot sample it.
How do we turn the autoencoder into a generative model?
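The VAE answer, sketched below with hypothetical untrained weights (a real VAE learns them by maximizing the evidence lower bound): the encoder outputs a distribution \(q(z \mid x)\) rather than a point, a KL term pulls it toward a known prior \(\mathcal{N}(0, I)\), and generation then just samples \(z\) from that prior and decodes.

```python
import numpy as np

rng = np.random.default_rng(0)

D, H, Z = 8, 16, 2   # input, hidden, latent sizes (illustrative)

# Hypothetical (untrained) weights; a real VAE learns these by
# maximizing the ELBO = reconstruction term - KL term.
enc_W, enc_b = 0.1 * rng.normal(size=(D, H)), np.zeros(H)
mu_W, lv_W = 0.1 * rng.normal(size=(H, Z)), 0.1 * rng.normal(size=(H, Z))
dec_W, dec_b = 0.1 * rng.normal(size=(Z, D)), np.zeros(D)

def encode(x):
    h = np.tanh(x @ enc_W + enc_b)
    return h @ mu_W, h @ lv_W          # mean and log-variance of q(z|x)

def decode(z):
    return np.tanh(z @ dec_W + dec_b)  # reconstruction from a latent code

def elbo_terms(x):
    mu, log_var = encode(x)
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps        # reparameterization trick
    recon = np.mean((decode(z) - x) ** 2)       # reconstruction term
    # KL divergence of diagonal Gaussian q(z|x) from the prior N(0, I).
    kl = -0.5 * np.mean(1 + log_var - mu ** 2 - np.exp(log_var))
    return recon, kl

x = rng.normal(size=(4, D))
recon, kl = elbo_terms(x)

# Generation needs no encoder: sample z from the prior and decode.
z_prior = rng.normal(size=(5, Z))
new_x = decode(z_prior)    # five "generated" samples
```

Because training forces \(q(z \mid x)\) toward \(\mathcal{N}(0, I)\), the space of \(z\) is known at generation time, which is exactly what the plain autoencoder was missing.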