Title: A Geometric View of the GAN Model

Abstract: The Generative Adversarial Net (GAN) is a powerful machine learning model that has become extremely successful in recent years. The generator and the discriminator in a GAN compete with each other until they reach a Nash equilibrium. A GAN can generate samples automatically, thereby reducing the need for large amounts of training data, and it can model distributions directly from data samples. In spite of its popularity, the GAN model lacks a theoretical foundation. In this talk, we give a geometric interpretation of optimal mass transportation theory and apply it to the GAN model. We try to answer the following fundamental questions:

1. Does a GAN model learn a function, a mapping, or a probability distribution? Is the solution unique, or are there infinitely many? What are the dimension and the structure of the solution space?
2. Does a GAN model really learn, or does it just memorize?
3. Is the competition between the generator and the discriminator really necessary? Can we simplify the neural networks and avoid the competition?
4. Why can a machine learning model sometimes be fooled so easily?
5. Can we replace the black box in the GAN model with a transparent model?

Bio: David Gu is an associate professor (with tenure) in the Department of Computer Science, Stony Brook University. He received his Ph.D. degree from the Department of Computer Science, Harvard University, in 2003 and his B.S. degree from Tsinghua University, Beijing, China, in 1995.
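As a minimal illustration of the optimal mass transport idea the abstract refers to (a toy sketch, not the speaker's construction): in one dimension, under the quadratic cost, the optimal transport map between two empirical distributions with equal sample counts is monotone, so it simply pairs sorted source samples with sorted target samples. The distributions below (two Gaussians) are an assumed example.

```python
import numpy as np

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=1000)  # samples from the source distribution
target = rng.normal(3.0, 0.5, size=1000)  # samples from the target distribution

# In 1D, for the quadratic cost c(x, y) = (x - y)^2, the optimal transport
# plan is monotone: match the i-th smallest source sample with the i-th
# smallest target sample.
src_sorted = np.sort(source)
tgt_sorted = np.sort(target)

# Empirical transport cost of the monotone matching (an estimate of the
# squared 2-Wasserstein distance between the two distributions).
cost = np.mean((src_sorted - tgt_sorted) ** 2)

# Any other pairing, e.g. a random permutation, costs at least as much.
perm = rng.permutation(len(target))
random_cost = np.mean((src_sorted - tgt_sorted[perm]) ** 2)
assert cost <= random_cost
```

For Gaussians the squared 2-Wasserstein distance has a closed form, (mu1 - mu2)^2 + (sigma1 - sigma2)^2, so here `cost` should come out near 9.25; in higher dimensions no such sorting trick exists, which is where the geometric theory discussed in the talk comes in.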