In this talk I discuss our recent paper "Pulling back information geometry". More broadly, I introduce our general line of work on learning geometries in the latent spaces of Variational Autoencoders (VAEs), using the decoder as a stochastic embedding and pulling back geometries through it. Such latent space geometries allow us to measure distances between latent codes in a way that is invariant to reparametrization, and they have proven useful in domains like robotics, protein modelling, and procedural content generation. In "Pulling back information geometry" we introduce a way of defining these latent space geometries for VAEs that decode to (almost) any distribution. This talk introduces the background and applications of latent space geometries, motivating our contribution.
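The core construction can be sketched in a few lines: given a decoder f mapping latent codes to data space, the pulled-back Riemannian metric at a latent point z is G(z) = J(z)ᵀJ(z), where J is the Jacobian of f, and curve lengths in latent space are measured under this metric. The following is a minimal sketch with a hypothetical toy decoder standing in for a trained VAE decoder's mean function; the finite-difference Jacobian and the midpoint curve-length quadrature are illustrative choices, not the paper's implementation.

```python
import numpy as np

def decoder(z):
    # Hypothetical toy "decoder": maps a 2-D latent code to 3-D data space.
    x, y = z
    return np.array([x, y, x**2 + y**2])

def jacobian(f, z, eps=1e-6):
    # Finite-difference Jacobian of f at z (in practice, use autodiff).
    z = np.asarray(z, dtype=float)
    f0 = f(z)
    J = np.zeros((f0.size, z.size))
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        J[:, i] = (f(z + dz) - f0) / eps
    return J

def pullback_metric(f, z):
    # Riemannian metric pulled back through the decoder: G(z) = J^T J.
    J = jacobian(f, z)
    return J.T @ J

def curve_length(f, zs):
    # Approximate the length of a latent-space curve (a list of codes)
    # under the pulled-back metric, using midpoint quadrature.
    length = 0.0
    for a, b in zip(zs[:-1], zs[1:]):
        G = pullback_metric(f, 0.5 * (a + b))
        d = b - a
        length += np.sqrt(d @ G @ d)
    return length

# A straight line in latent space; its length under G equals the
# length of its decoded image in data space.
zs = [np.array([t, 0.0]) for t in np.linspace(0.0, 1.0, 50)]
print(round(curve_length(decoder, zs), 3))
```

Because distances are measured through the decoder, they depend only on the decoded outputs, which is what makes them invariant to reparametrizations of the latent space.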
Speaker: Miguel González-Duque
About the speaker: Miguel González-Duque is a Ph.D. student at the IT University of Copenhagen, supervised by Sebastian Risi and co-supervised by Søren Hauberg (DTU). His work lies at the intersection of representation learning, differential geometry, and Bayesian optimization, with applications to procedural content generation. During his Ph.D., Miguel did an internship at the Bosch Center for AI, working with Leonel Rozo on applying Gaussian processes over manifold-valued data to robotics.