A VAE is a probabilistic take on the autoencoder, a model that compresses high-dimensional input data into a smaller representation space.
Unlike a traditional autoencoder, which maps the input to a single latent vector, a VAE maps the input to the parameters of a probability distribution, such as the mean and variance of a Gaussian. Sampling from this distribution produces a continuous, structured latent space, which is useful for image generation.
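The two ideas above can be sketched in a few lines: the encoder outputs a mean and log-variance, a sample is drawn with the reparameterization trick, and the KL divergence to a standard normal prior is computed in closed form. This is a minimal NumPy sketch, not the tutorial's actual TensorFlow 2 code; the toy `encode` function is a hypothetical stand-in for a learned neural-network encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x):
    # Hypothetical stand-in for a learned encoder: maps each input to the
    # mean and log-variance of a diagonal Gaussian over a 2-D latent space.
    mean = x[:, :2] * 0.1
    log_var = x[:, 2:4] * 0.1
    return mean, log_var

def reparameterize(mean, log_var):
    # z = mean + sigma * eps: sampling is expressed as a deterministic
    # function of (mean, log_var) plus external noise, so gradients can
    # flow through the sampling step during training.
    eps = rng.standard_normal(mean.shape)
    return mean + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mean, log_var):
    # Closed-form KL( N(mean, sigma^2) || N(0, I) ), summed over latent dims:
    # 0.5 * sum( sigma^2 + mean^2 - 1 - log(sigma^2) )
    return 0.5 * np.sum(np.exp(log_var) + mean**2 - 1.0 - log_var, axis=1)

x = rng.standard_normal((8, 4))          # batch of 8 toy inputs
mean, log_var = encode(x)
z = reparameterize(mean, log_var)        # latent samples, shape (8, 2)
kl = kl_to_standard_normal(mean, log_var)  # one KL value per example, shape (8,)
print(z.shape, kl.shape)
```

The KL term is what regularizes the latent space: it pulls every per-input Gaussian toward the standard normal prior, which is what makes the latent space continuous and sampleable.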
Official link and Colab notebook:
[ Link ]
[ Link ]
A funny explanation of KL-divergence:
[ Link ]
#ai
#deeplearning
#tensorflow2 #vae