Dr. Gal Vardi
Gal Vardi has been researching machine learning and deep learning theory for the past five years. Motivated by the tremendous practical success of neural networks (machine learning programs that make decisions in a manner loosely inspired by the human brain), his lab in the Weizmann Institute’s Department of Computer Science and Applied Mathematics aims to establish theoretical foundations for understanding these networks and for the theory of deep learning more broadly. Bridging the gap between theory and practice may be crucial for providing principled guidance and for ensuring the long-term advancement of deep learning.
After earning a PhD in computer science from The Hebrew University of Jerusalem, where he worked on temporal logic and automata theory, Dr. Vardi transitioned to machine learning theory, a field in which he has since made significant contributions.
During his first postdoc, in the Department of Computer Science and Applied Mathematics at Weizmann, Dr. Vardi focused on depth separations in neural networks, that is, results showing that deeper networks can efficiently express functions that shallower networks cannot. He proved that establishing certain depth separations would require overcoming classical natural-proofs barriers from computational complexity theory, suggesting that such results are beyond the reach of current proof techniques.
He then examined the implicit bias of the gradient methods used to train neural networks: the tendency of these optimization methods to implicitly prefer solutions with properties conducive to good generalization, which may help explain why neural networks perform well in practice. In one project, Dr. Vardi demonstrated a remarkable and unexpected implication of implicit bias: it is possible to reconstruct a significant portion of the data used to train a given neural network. This finding has serious security implications, since networks are frequently trained on sensitive data such as medical records or user files.
Dr. Vardi completed a second postdoc as part of the National Science Foundation-Simons Research Collaboration on the Theoretical Foundations of Deep Learning, a fellowship that supports collaborative research on some of the most challenging questions in the mathematical and scientific foundations of deep learning. In this position, a collaboration between the Toyota Technological Institute at Chicago, an academic computer science institute, and the Hebrew University’s School of Computer Science and Engineering, Dr. Vardi continued his work on implicit bias and expanded into new topics such as benign overfitting, the phenomenon in which networks that fit noisy training data perfectly can nonetheless generalize well.
Beyond the substantial intellectual and mathematical challenges his work offers, Dr. Vardi believes that a stronger theoretical understanding of deep learning has the potential to significantly benefit the many domains that rely on this technology.