Equivariant Neural Fields: A Roadmap Towards Generalizable Neural Representation and Inference
Ge Yang, IAIFI and MIT
Generalization is a central problem in deep learning research because it directly affects how much data and compute it costs to achieve good performance. In other words, better generalization makes good performance more accessible. In this talk, we begin by looking at a few interesting situations where modern neural networks fail to generalize. We discuss the components responsible for these failures and ways to fix them. We then introduce continuous neural representations and neural fields as a unifying theme. As part of the roadmap, I will lay out key technical milestones and specific applications in control, reinforcement learning, and scene understanding.
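To make the unifying theme concrete: a neural field represents a signal not as a discrete grid of samples but as a continuous function from coordinates to values, parameterized by a neural network. The following is a minimal illustrative sketch in NumPy (not code from the talk); the two-layer MLP, its sizes, and the untrained random weights are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of a neural field f: R^2 -> R^3, e.g. continuous image
# coordinates (x, y) mapped to RGB values. The weights below are
# randomly initialized and untrained -- purely illustrative.
W1 = rng.normal(size=(2, 64))
b1 = np.zeros(64)
W2 = rng.normal(size=(64, 3))
b2 = np.zeros(3)

def neural_field(coords):
    """Evaluate the field at continuous coordinates, shape (N, 2)."""
    h = np.tanh(coords @ W1 + b1)  # hidden features per coordinate
    return h @ W2 + b2             # one output value per coordinate

def grid(n):
    """Sample n x n coordinates in [0, 1]^2 as an (n*n, 2) array."""
    xs = np.linspace(0.0, 1.0, n)
    return np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)

# Because the representation is a function of continuous coordinates,
# the same field can be queried at any resolution -- no fixed grid
# is baked into the representation.
coarse = neural_field(grid(4))   # shape (16, 3)
fine = neural_field(grid(32))    # shape (1024, 3)
print(coarse.shape, fine.shape)
```

The resolution-independence shown here is what distinguishes such continuous representations from conventional grid-based ones, and it is one reason they serve as a natural substrate for the generalization questions the talk addresses.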