Generative Modeling

- Demystifying MMD GANs — A Maximum Mean Discrepancy loss for Generative Adversarial Networks. ICLR 2018. Code | ArXiv
- On Gradient Regularizers for MMD-GANs — A gradient penalty method for regularizing MMD-GANs with state-of-the-art performance. NeurIPS 2018. Code | ArXiv
- Generalized Energy-Based Models — A model combining implicit and explicit modeling for learning distributions with small intrinsic dimension. ICLR 2021. Code | ArXiv

High Dimensional Sampling

- Annealed Flow Transport Monte Carlo — A sampling algorithm combining Sequential Monte Carlo and Normalizing Flows. ICML 2021. Code | ArXiv

Kernel Methods

- Kernel Conditional Exponential Family — An algorithm for conditional density estimation using kernel exponential families. AISTATS 2018. Code | ArXiv
- Efficient and Principled Score Estimation with Nyström Kernel Exponential Families — An efficient algorithm for density estimation. AISTATS 2018. Code | ArXiv
- Patches in Deep Convolutional Kernels Methods — Code to reproduce the experiments of the paper. ICLR 2021. Code | ArXiv

Optimization

- Maximum Mean Discrepancy Gradient Flow — An algorithm based on noise injection for optimizing the Maximum Mean Discrepancy gradient flow. NeurIPS 2019. Code | ArXiv
- Kernelized Wasserstein Natural Gradient — An efficient and scalable algorithm for estimating the Wasserstein natural gradient. ICLR 2020. Code | ArXiv
- Synchronizing Probability Measures on Rotations via Optimal Transport — A simple algorithm based on optimal transport for synchronizing rotations. CVPR 2020. Code | ArXiv

Reinforcement Learning

- Efficient Wasserstein Natural Gradients for Reinforcement Learning — An efficient algorithm incorporating behavioral geometry into policy gradients in RL. ICLR 2021. Code | ArXiv
- Tactical Optimism and Pessimism for Deep Reinforcement Learning — A simple algorithm for dynamically trading off between optimism and pessimism in RL. NeurIPS 2021. Code | ArXiv
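Several of the entries above (the MMD-GAN papers and the MMD gradient flow work) are built on the Maximum Mean Discrepancy between two sample sets. As a point of reference, here is a minimal NumPy sketch of the standard unbiased estimator of the squared MMD with a Gaussian kernel — an illustration of the underlying quantity, not code from any of the linked repositories, and the bandwidth value is an arbitrary choice for the example:

```python
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    """Gaussian RBF kernel matrix between samples x (n, d) and y (m, d)."""
    sq_dists = (np.sum(x**2, axis=1)[:, None]
                + np.sum(y**2, axis=1)[None, :]
                - 2.0 * x @ y.T)
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd2_unbiased(x, y, bandwidth=1.0):
    """Unbiased U-statistic estimate of the squared MMD between x and y."""
    n, m = len(x), len(y)
    k_xx = rbf_kernel(x, x, bandwidth)
    k_yy = rbf_kernel(y, y, bandwidth)
    k_xy = rbf_kernel(x, y, bandwidth)
    # Diagonal terms k(x_i, x_i) are excluded to remove the estimator's bias.
    term_xx = (k_xx.sum() - np.trace(k_xx)) / (n * (n - 1))
    term_yy = (k_yy.sum() - np.trace(k_yy)) / (m * (m - 1))
    return term_xx + term_yy - 2.0 * k_xy.mean()
```

On two sample sets drawn from the same distribution this statistic concentrates around zero, while a mean shift between the sets drives it positive — the signal that the MMD-GAN losses above optimize.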