
Generative Models


Generative modeling is formulated as a density estimation problem:

  • Explicit density estimation: explicitly define and solve for \(p_{\mathrm{model}}(x)\)
  • Implicit density estimation: learn model that can sample from \(p_{\mathrm{model}}(x)\) without explicitly defining it.

1. Explicit density

1.1 PixelRNN

Use the chain rule to decompose the likelihood of an image \(x\) into a product of 1-d conditional distributions, where each pixel depends on all previous pixels in raster order:

\[p(x) = \prod_{i=1}^{n} p(x_i \mid x_1, \ldots, x_{i-1})\]

Train by maximizing the likelihood of the training data. In PixelRNN each conditional is modeled with an RNN (LSTM), so generation proceeds one pixel at a time.

Note that there are no labels: the raw input data alone is used to train the probability model.

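The factorization can be sketched with a toy model. The pixel count and logistic weights below are made up for illustration; a real PixelRNN uses an LSTM over full images, but the chain-rule structure is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" of 4 binary pixels. A hypothetical model gives each
# conditional p(x_i = 1 | x_1..x_{i-1}) as a logistic function of the
# already-generated pixels (weights are arbitrary, just for illustration).
W = rng.normal(size=(4, 4))

def cond_prob(i, x):
    """p(x_i = 1 | x_1, ..., x_{i-1}) under the toy model."""
    logit = W[i, :i] @ x[:i]          # depends only on previous pixels
    return 1.0 / (1.0 + np.exp(-logit))

def sample():
    """Generate pixels one at a time, each conditioned on the previous ones."""
    x = np.zeros(4)
    for i in range(4):
        x[i] = float(rng.random() < cond_prob(i, x))
    return x

def log_likelihood(x):
    """log p(x) = sum_i log p(x_i | x_<i): the chain-rule factorization."""
    total = 0.0
    for i in range(4):
        p = cond_prob(i, x)
        total += np.log(p if x[i] == 1 else 1 - p)
    return total

img = sample()
```

Note that `log_likelihood` is exactly the quantity maximized during training, and `sample` makes the cost of generation visible: one sequential step per pixel.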

1.2 PixelCNN

PixelCNN keeps the same autoregressive factorization but models each conditional with a CNN over a masked context region instead of an RNN. Training parallelizes over pixels, since the full context is available in the training image, but generation is still sequential and therefore slow.
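A minimal sketch of the masking idea (a hand-rolled mask, not the actual PixelCNN layer): zero out the kernel weights at the center pixel and everything after it in raster order, so a convolution at pixel \(i\) only ever sees \(x_{<i}\).

```python
import numpy as np

def causal_mask(k):
    """k x k convolution mask: 1 only at positions strictly before the
    center pixel in raster order (rows above, or to its left in its row)."""
    mask = np.ones((k, k))
    c = k // 2
    mask[c, c:] = 0      # center pixel and everything to its right
    mask[c + 1:, :] = 0  # every row below the center
    return mask

# Multiplying a conv kernel elementwise by this mask before applying it
# enforces the autoregressive ordering: kernel = kernel * causal_mask(k)
m = causal_mask(3)
```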

1.3 Summary

  • Pro: the likelihood \(p(x)\) can be computed explicitly, which gives a good evaluation metric.
  • Con: sequential pixel-by-pixel generation is slow.

2. Implicit Density

2.1 Background: Autoencoders

An encoder maps the input \(x\) to a lower-dimensional feature vector \(z\); a decoder maps \(z\) back to a reconstruction \(\hat{x}\). Training minimizes the reconstruction loss \(\|x - \hat{x}\|^2\).

The decoder and the reconstructed input are used only to compute the loss that trains the autoencoder; after training, the decoder can be discarded.

Significance: the encoder can be used to initialize a supervised model.

We can use a large amount of unlabeled data to train an unsupervised model that learns some general-purpose features, then use the trained encoder to initialize a supervised model and fine-tune it on the labeled data.


2.2 Variational Autoencoders (VAE)

Autoencoders can reconstruct data and can learn features to initialize a supervised model.

Features capture factors of variation in training data.

But a plain autoencoder cannot generate new images, because we don't know the distribution of the latent space \(z\) and therefore can't sample meaningful new codes.

How do we make the autoencoder a generative model?
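The VAE's answer, sketched with placeholder numbers rather than a trained model: make the encoder output a distribution over \(z\) (a mean and a variance) and push it toward a known prior \(\mathcal{N}(0, I)\). New images then come from sampling \(z \sim \mathcal{N}(0, I)\) and decoding it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder output for one input: parameters of q(z|x).
mu = np.array([0.5, -0.2])
log_var = np.array([-1.0, -0.5])

# Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
# The randomness is moved into eps, so gradients can flow through
# mu and log_var during training.
eps = rng.normal(size=2)
z = mu + np.exp(0.5 * log_var) * eps

# KL divergence between q(z|x) = N(mu, sigma^2) and the prior N(0, I).
# Adding this term to the reconstruction loss is what pins down the space
# of z, so that sampling z ~ N(0, I) at test time gives valid codes.
kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
```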

2.3 Generative Adversarial Networks

A GAN gives up on writing down \(p_{\mathrm{model}}(x)\) and instead learns to sample from it. A generator network \(G\) maps random noise \(z\) (drawn from a simple prior) to images; a discriminator network \(D\) tries to distinguish generated images from real training images. The two networks are trained jointly as a two-player minimax game:

\[\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p(z)}\left[\log\left(1 - D(G(z))\right)\right]\]

Training alternates gradient ascent on the discriminator and gradient descent on the generator. In practice, the generator instead maximizes \(\log D(G(z))\), which gives stronger gradients early in training when the discriminator easily rejects the samples. After training, the discriminator is discarded and new images are generated by sampling \(z\) and running it through \(G\).
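A tiny numeric sketch of the minimax objective, using fixed toy 1-d samples and a one-parameter discriminator (all values illustrative; nothing here is trained):

```python
import numpy as np

rng = np.random.default_rng(0)

real = rng.normal(loc=3.0, size=1000)  # samples from p_data
fake = rng.normal(loc=0.0, size=1000)  # generator output G(z), off-target

def D(x, thresh):
    """Toy one-parameter discriminator: p(x is real) = sigmoid(x - thresh)."""
    return 1.0 / (1.0 + np.exp(-(x - thresh)))

def d_objective(thresh):
    """E_x[log D(x)] + E_z[log(1 - D(G(z)))]: what the discriminator maximizes."""
    return np.mean(np.log(D(real, thresh))) + np.mean(np.log(1 - D(fake, thresh)))

# The discriminator does best with a decision boundary between the two
# distributions; the generator, in turn, wants to move `fake` toward `real`
# until no threshold separates them and D is forced to output ~0.5.
best = max([-3.0, 1.5, 6.0], key=d_objective)
```

The game ends (in theory) when the generated distribution matches the data distribution, at which point the discriminator's objective is the same for every threshold.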


Last update: June 16, 2023
Authors: Colin