Francesco’s research interests focus on the foundations of optimization algorithms and on how their properties can be exploited for inference. He proved the convergence of several algorithms for which convergence was not previously known, in particular projection-free algorithms over conic constraints and boosting variational inference. Furthermore, he and his collaborator Anant Raj were among the first to show accelerated convergence of Greedy Coordinate Descent.
He has also been working on non-smooth optimization, proving that a variant of stochastic Frank-Wolfe can handle indicator-function constraints at scale, in addition to the usual convex constraints. This covers scalable stochastic optimization of semidefinite programs. Methodological advances of this type have implications for the whole range of applications that rely on semidefinite programming, such as kernel learning, online covariance matrix estimation, and streaming PCA.
Recently, he has been working on representation learning in collaboration with Google AI in Zürich, focusing on unsupervised learning of disentangled representations, which are considered fundamental for representation learning. Disentangled representations should capture all the information about the observations in a compact yet interpretable structure that makes learning any downstream task easy. His work highlighted severe limitations of state-of-the-art approaches, proving not only that unsupervised disentanglement is theoretically impossible but also that many of the promises of disentanglement are not kept in practice. With his collaborators at Google AI, he developed an open-source library for fair and reproducible research on disentanglement and is currently pushing the field further with semi-supervised approaches. He is co-organizing a challenge at NeurIPS 2019 in Vancouver to bring disentanglement learning to real-world datasets, and he will also present his latest work at ICML 2019 and ICLR 2019.