Casey Chu
- Perspectives on the variational autoencoder
- There are many ways of looking at the variational autoencoder, or VAE, of Kingma and Welling (2014), and the evidence lower bound, or ELBO, used to train it. The goal of this post is to concisely catalog these perspectives for quick reference.
- In the VAE, there are two probability distributions: $$q(x, z) = q(x)\, q(z \mid x) \qquad \text{and} \qquad p(x, z) = p(z)\, p(x \mid z).$$
- Conceptually, $q(x)$ is the data distribution, making $q(z \mid x)$ an encoder, and $p(z)$ is a latent prior, making $p(x \mid z)$ a decoder. These terms are assumed to be tractable, whereas reversed terms like $q(x \mid z)$ and $p(z \mid x)$ and marginalized terms like $q(z)$ and $p(x)$ are intractable.
- The variational autoencoder is trained by maximizing the ELBO: $$\mathrm{ELBO} = \mathbb{E}_{q(x, z)}\left[\log \frac{p(x, z)}{q(z \mid x)}\right].$$
- Maximum likelihood. We can think of the VAE as training a generative model using maximum likelihood, by attempting to maximize $\mathbb{E}_{q(x)}[\log p(x)]$. Since $p(x)$ is intractable, we instead maximize a lower bound $$\mathbb{E}_{q(x)}[\log p(x)] \geq \mathbb{E}_{q(x, z)}\left[\log \frac{p(x, z)}{q(z \mid x)}\right] = \mathrm{ELBO},$$ a lower bound that is tight when $q(z \mid x) = p(z \mid x)$.
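- To make the gap in this bound explicit, we can write the standard decomposition (in the notation above): $$\mathbb{E}_{q(x)}[\log p(x)] = \mathrm{ELBO} + \mathbb{E}_{q(x)}\big[D_{\mathrm{KL}}\big(q(z \mid x) \,\|\, p(z \mid x)\big)\big].$$ Since the KL divergence is nonnegative, the ELBO is indeed a lower bound, and it is tight exactly when $q(z \mid x) = p(z \mid x)$.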
- Variational Bayes. From the perspective of Bayesian inference, $p(z)$ is a prior, and $p(x \mid z)$ is a likelihood. This makes $p(z \mid x)$ the posterior; unfortunately, this is intractable, so we approximate it with a variational posterior $q(z \mid x)$. Ideally, we would minimize $D_{\mathrm{KL}}(q(z \mid x) \,\|\, p(z \mid x))$, but this is intractable, so we instead maximize the ELBO, which by the decomposition above differs from this KL divergence only by a sign and the term $\mathbb{E}_{q(x)}[\log p(x)]$. Note that from this perspective, $p$ has no optimizable parameters, so that $\log p(x)$ is a constant. Inference is amortized, being done for every $x$; if we only care about one observed data point $x_0$, then we can set $q(x) = \delta_{x_0}$.
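- Concretely, with $q(x) = \delta_{x_0}$, the ELBO reduces to the familiar single-datapoint bound $$\log p(x_0) \geq \mathbb{E}_{q(z \mid x_0)}\left[\log \frac{p(x_0, z)}{q(z \mid x_0)}\right],$$ which is the usual objective of non-amortized variational inference.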
- Autoencoder. We can view the VAE as an autoencoder by writing the ELBO as $$\mathrm{ELBO} = \mathbb{E}_{q(x)}\Big[\mathbb{E}_{q(z \mid x)}[\log p(x \mid z)] - D_{\mathrm{KL}}\big(q(z \mid x) \,\|\, p(z)\big)\Big].$$ Suppose $p(x \mid z) = \mathcal{N}(x; f(z), \sigma^2 I)$, where $f$ is a deterministic function (the decoder network). The first term is a reconstruction error, proportional to $-\mathbb{E}_{q(z \mid x)}\big[\|x - f(z)\|^2\big]$. The second is a KL term that matches the variational posterior with the prior. In practice, this is the objective that is trained with SGD, with the KL term either estimated via Monte Carlo or analytically integrated.
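- To make this concrete, here is a minimal sketch of this objective in PyTorch, assuming a Gaussian encoder $q(z \mid x) = \mathcal{N}(z; \mu(x), \operatorname{diag}(\sigma^2(x)))$, a standard normal prior, and a Gaussian decoder with fixed variance; the network architectures and hyperparameters below are placeholders, not taken from any particular implementation:

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Minimal sketch: Gaussian encoder, standard normal prior, Gaussian decoder."""

    def __init__(self, x_dim=784, z_dim=16, h_dim=256):
        super().__init__()
        # Placeholder architectures; any encoder/decoder networks would do.
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, 2 * z_dim))  # outputs (mu, log_var) of q(z|x)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))      # outputs f(z), the mean of p(x|z)

    def negative_elbo(self, x, sigma2=1.0):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        # Reparameterization: z = mu + sigma * eps with eps ~ N(0, I).
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        x_hat = self.dec(z)
        # Reconstruction term: -log p(x|z) for p(x|z) = N(x; f(z), sigma2 * I),
        # dropping additive constants that do not depend on the parameters.
        recon = ((x - x_hat) ** 2).sum(dim=-1) / (2 * sigma2)
        # KL(q(z|x) || p(z)) between a diagonal Gaussian and N(0, I), in closed form.
        kl = 0.5 * (mu ** 2 + log_var.exp() - log_var - 1).sum(dim=-1)
        return (recon + kl).mean()  # minimize this, i.e. maximize the ELBO
```

- Minimizing `negative_elbo` with any stochastic optimizer corresponds to the SGD training described above; the decoder variance `sigma2` sets the relative weight of the reconstruction and KL terms.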
- Importance sampling. Motivated by the observation that $$p(x) = \mathbb{E}_{q(z \mid x)}\left[\frac{p(x, z)}{q(z \mid x)}\right],$$ Burda et al. (2016) proposed the importance-weighted autoencoder, which maximizes $$\mathcal{L}_K = \mathbb{E}_{q(x)}\, \mathbb{E}_{z_1, \ldots, z_K \sim q(z \mid x)}\left[\log \frac{1}{K} \sum_{k=1}^K \frac{p(x, z_k)}{q(z_k \mid x)}\right].$$ Jensen’s inequality shows that $$\mathrm{ELBO} \leq \mathcal{L}_K \leq \mathbb{E}_{q(x)}[\log p(x)].$$
- Thus we may achieve a tighter bound by replacing the ELBO with the IWAE bound. Here, $q(z \mid x)$ loses its interpretation as a variational posterior, but Bachman and Precup (2015) and Cremer et al. (2017) reinterpret the IWAE bound as the usual ELBO where $q(z \mid x)$ is replaced with an implicitly defined distribution $\tilde{q}(z \mid x)$, one that converges to the true posterior $p(z \mid x)$ as $K \to \infty$. This approximate posterior can be sampled from by first sampling $z_1, \ldots, z_K \sim q(z \mid x)$ and returning $z_k$ with probability proportional to the importance weight $p(x, z_k) / q(z_k \mid x)$.
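- As an illustration, here is a sketch of how the IWAE bound and this resampling scheme could be computed, reusing the hypothetical model from the earlier sketch (the helper name `iwae_bound` and all details are assumptions, not from Burda et al.):

```python
import math
import torch

def iwae_bound(model, x, K, sigma2=1.0):
    """Monte Carlo estimate of the IWAE bound L_K for a batch x (sketch only).

    Assumes the hypothetical VAE module above: Gaussian encoder, standard
    normal prior, Gaussian decoder with variance sigma2. Log-densities drop
    additive constants, so the bound is correct only up to a constant.
    """
    mu, log_var = model.enc(x).chunk(2, dim=-1)                 # each (B, z_dim)
    std = torch.exp(0.5 * log_var)
    z = mu + std * torch.randn(K, *mu.shape, device=x.device)   # (K, B, z_dim)
    x_hat = model.dec(z)                                        # (K, B, x_dim)
    log_px_given_z = -((x - x_hat) ** 2).sum(-1) / (2 * sigma2)
    log_pz = -0.5 * (z ** 2).sum(-1)
    log_qz_given_x = -0.5 * (((z - mu) / std) ** 2 + log_var).sum(-1)
    log_w = log_px_given_z + log_pz - log_qz_given_x            # (K, B) importance log-weights
    # L_K = E[ log (1/K) sum_k p(x, z_k) / q(z_k | x) ].
    bound = (torch.logsumexp(log_w, dim=0) - math.log(K)).mean()
    # Sample from the implicit posterior: return z_k with probability proportional to w_k.
    idx = torch.distributions.Categorical(logits=log_w.T).sample()   # (B,)
    z_tilde = z[idx, torch.arange(x.shape[0])]
    return bound, z_tilde
```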
- Expectation-maximization. Expectation-maximization is an iterative algorithm for computing the maximum likelihood estimator. In the language of VAEs, it maximizes the ELBO by coordinate ascent, alternately on $q$ (the “E-step”) and on $p$ (the “M-step”). In the E-step, the ELBO is maximized when $q(z \mid x)$ is set to the exact posterior $p(z \mid x)$. (This is called the E-step because the ELBO then becomes, up to a term not depending on $p$, the expected complete-data log-likelihood $\mathbb{E}_{q(x)}\,\mathbb{E}_{p(z \mid x)}[\log p(x, z)]$, where $z$ represents unobserved data.) In the M-step, the ELBO is maximized when $p$ is set to $p_{\theta^*}$, where $\theta^*$ maximizes the expected likelihood $\mathbb{E}_{q(x, z)}[\log p_{\theta}(x, z)]$.
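- As a classical special case where both steps are tractable, here is a sketch of EM for a Gaussian mixture in NumPy (illustrative only, not from this post): the E-step computes the exact posterior $p(z \mid x)$ over the component label, and the M-step maximizes the expected complete-data log-likelihood in closed form.

```python
import numpy as np

def em_gmm(x, K, iters=100, seed=0):
    """EM for a K-component Gaussian mixture with a shared scalar variance (sketch).

    E-step: set q(z|x) to the exact posterior p(z|x) under the current parameters.
    M-step: maximize E_q[log p(x, z)] over the mixture parameters in closed form.
    """
    rng = np.random.default_rng(seed)
    n, d = x.shape
    pi = np.full(K, 1.0 / K)                    # mixture weights p(z = k)
    mu = x[rng.choice(n, K, replace=False)]     # component means, shape (K, d)
    var = x.var()                               # shared scalar variance

    for _ in range(iters):
        # E-step: responsibilities r[i, k] = p(z = k | x_i).
        log_lik = (-0.5 * ((x[:, None, :] - mu[None, :, :]) ** 2).sum(-1) / var
                   - 0.5 * d * np.log(2 * np.pi * var))
        log_post = np.log(pi)[None, :] + log_lik
        log_post -= np.logaddexp.reduce(log_post, axis=1, keepdims=True)
        r = np.exp(log_post)                    # shape (n, K)

        # M-step: maximize sum_i E_{r_i}[log p(x_i, z_i)] in closed form.
        nk = r.sum(axis=0)                      # effective counts per component
        pi = nk / n
        mu = (r.T @ x) / nk[:, None]
        var = (r * ((x[:, None, :] - mu[None, :, :]) ** 2).sum(-1)).sum() / (n * d)

    return pi, mu, var
```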
- Representation learning. The encoder in a VAE can be seen as a way to learn a compact representation of the data. However, the ELBO on its own does not seem to promote this objective particularly well. See, for example, Higgins et al. (2017) and Alemi et al. (2018).