In this talk, I will discuss our work on symmetry and structure in single- and multi-agent reinforcement learning. I will first discuss MDP Homomorphic Networks (NeurIPS 2020), a class of networks that ties transformations of observations to transformations of decisions. Such symmetries are ubiquitous in deep reinforcement learning, but were often ignored in earlier approaches. Building this prior knowledge into policy and value networks allows us to reduce the size of the solution space, a necessity in problems with large numbers of possible observations. I will showcase the benefits of our approach on agents in virtual environments. Building on the foundations of MDP Homomorphic Networks, I will also discuss our recent multi-agent works, Multi-Agent MDP Homomorphic Networks (ICLR 2022) and Equivariant Networks for Zero-Shot Coordination (NeurIPS 2022), which consider symmetries in multi-agent systems. This forms a basis for my vision for reinforcement learning in complex virtual environments, as well as for problems with intractable search spaces. Finally, I will briefly discuss AI4Science.
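To give a flavor of what "tying transformations of observations to transformations of decisions" means, here is a minimal sketch (not the paper's architecture) of a policy that is equivariant to a left/right mirror symmetry, as in CartPole: negating the state must swap the probabilities of the two actions. The weight tying in `policy` is a hypothetical illustration of how such a constraint can hold by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=4)  # one shared weight vector for both actions

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def policy(s):
    # Weight tying enforces the equivariance pi(-s) = swap(pi(s)):
    # the "left" logit on s equals the "right" logit on -s, and vice versa.
    logits = np.array([w @ s, -(w @ s)])
    return softmax(logits)

s = rng.normal(size=4)
p, p_flip = policy(s), policy(-s)
assert np.allclose(p, p_flip[::-1])  # mirrored state -> swapped action probabilities
```

An unconstrained network would have to learn this symmetry from data for every state; enforcing it structurally removes that redundancy from the solution space.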
Speaker: Elise van der Pol
About the speaker: Elise van der Pol is a Senior Researcher at Microsoft Research AI4Science Amsterdam, working on reinforcement learning and deep learning for molecular simulation. Additionally, she works on symmetry, structure, and equivariance in single and multi-agent reinforcement learning and machine learning.
Before joining MSR, she did a Ph.D. in the Amsterdam Machine Learning Lab, working with Max Welling (UvA), Frans Oliehoek (TU Delft), and Herke van Hoof (UvA). During her Ph.D., she spent time in DeepMind’s multi-agent team. Elise was an invited speaker at the BeneRL 2022 workshop and the Self-Supervision for Reinforcement Learning workshop at ICLR 2021. She was also a co-organizer of the workshop on Ecological/Data-Centric Reinforcement Learning at NeurIPS 2021.