
Hands-on experiments with Variational Autoencoders (VAEs)

The goal of this blog post is to demonstrate how the formulation of the Variational Autoencoder (VAE) translates to empirical observations on the MNIST dataset. First, we examine how VAEs handle the tasks that their formulation dictates, i.e. reconstruction of their input and generation of samples using the decoder. Then, we study the output distribution of the encoder in the latent space. Last, we use our observations of the latent space to strategically choose the latent variables from which we generate examples, in order to see how the latent space affects the pixel space and the final image. We assume the reader is familiar with VAEs and with how their formulation is derived and interpreted. If not, we have an in-depth study on the matter.

Run in Google Colab | Open in GitHub | Download IPython notebook

Let's start by examining the loss function of the VAE:

\[
\mathcal{L}(\theta, \phi; \mathbf{x}) = -\,\mathbb{E}_{q_{\phi}(\mathbf{z} \mid \mathbf{x})}\left[\log p_{\theta}(\mathbf{x} \mid \mathbf{z})\right] + D_{\mathrm{KL}}\left(q_{\phi}(\mathbf{z} \mid \mathbf{x}) \,\|\, p(\mathbf{z})\right)
\]

The first term is the expected reconstruction error of the decoder, and the second term pulls the approximate posterior \(q_{\phi}(\mathbf{z} \mid \mathbf{x})\) toward the prior \(p(\mathbf{z})\).
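As a concrete reference, here is a minimal sketch of how this loss is typically computed for a VAE trained on binarized MNIST (a standard PyTorch formulation with a Gaussian encoder and a Bernoulli decoder; the function name and tensor shapes are illustrative, not taken from the notebook):

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, logvar):
    """Negative ELBO, summed over the batch.

    x, x_recon : (batch, 784) tensors; x_recon holds the decoder's
                 per-pixel Bernoulli means.
    mu, logvar : (batch, latent_dim) parameters of q(z|x).
    """
    # Reconstruction term: negative log-likelihood of x under the
    # decoder's Bernoulli distribution (binary cross-entropy).
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    # KL term: closed-form divergence between the diagonal Gaussian
    # q(z|x) = N(mu, diag(exp(logvar))) and the prior p(z) = N(0, I).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

Minimizing this quantity is equivalent to maximizing the ELBO; the reparameterization trick, `z = mu + torch.exp(0.5 * logvar) * eps` with `eps` drawn from a standard normal, is what keeps the reconstruction term differentiable with respect to the encoder's parameters.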