Manuel Burger, MSc
"Questions you cannot answer are usually far better for you than answers you cannot question." - Yuval Noah Harari
PhD Student
- manuel.burger@inf.ethz.ch
- Address: ETH Zürich, Department of Computer Science, Biomedical Informatics Group, Universitätsstrasse 6, 8092 Zürich
- Room: CAB F53.1
With my research, I want to leverage the potential of machine learning to improve health care. My focus is on representation learning in the clinical and biomedical domain.
I obtained my Bachelor's degree in Computer Science, followed by a Master's degree in Data Science, at ETH Zürich. I have always been fascinated by the capabilities of modern computer hardware, which led me into the world of HPC during my Bachelor's studies. There I discovered my fascination with machine learning and the powerful applications we can build with the computing capabilities available to us, and I began to direct my path towards data science in the biomedical and health-care domain. Aiming to contribute to improved health care, I joined the Biomedical Informatics Group at ETH as a Ph.D. student in 2022.
My fields of interest center around representation learning. I am especially excited about Geometric Deep Learning and the recent success of Graph Neural Networks. Learning from structure enables more flexible and expressive machine learning solutions, and at the same time helps us develop more interpretable and robust models. Further, I am interested in self-supervised approaches applied to time-series data, as well as in learning generalizable representations of prior knowledge.
Find out more on my homepage: manuelburger.ch
Latest Publications
Abstract: Clinicians are increasingly looking towards machine learning to gain insights about patient evolutions. We propose a novel approach named Multi-Modal UMLS Graph Learning (MMUGL) for learning meaningful representations of medical concepts using graph neural networks over knowledge graphs based on the Unified Medical Language System (UMLS). These representations are aggregated to represent entire patient visits and then fed into a sequence model to perform predictions at the granularity of multiple hospital visits of a patient. We improve performance by incorporating prior medical knowledge and considering multiple modalities. We compare our method to existing architectures proposed to learn representations at different granularities on the MIMIC-III dataset and show that our approach outperforms these methods. The results demonstrate the significance of multi-modal medical concept representations based on prior medical knowledge.
Authors: Manuel Burger, Gunnar Rätsch, Rita Kuznetsova
Submitted: arXiv Preprints
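To make the pipeline described in the abstract more concrete, here is a minimal sketch in PyTorch with PyTorch Geometric: a GNN encodes medical concepts over a knowledge-graph structure, per-visit concept embeddings are mean-pooled into visit representations, and a GRU runs over the visit sequence to produce a prediction. All module names, dimensions, and data shapes are illustrative assumptions, not the released MMUGL implementation.

```python
# Minimal sketch (assumptions only, not the MMUGL code):
# GNN over a concept graph -> per-visit pooling -> sequence model over visits.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv  # assumes PyTorch Geometric is installed


class PatientModelSketch(nn.Module):
    def __init__(self, num_concepts: int, dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.concept_emb = nn.Embedding(num_concepts, dim)  # initial concept features
        self.gnn1 = GCNConv(dim, dim)                        # message passing over the concept graph
        self.gnn2 = GCNConv(dim, dim)
        self.seq = nn.GRU(dim, dim, batch_first=True)        # sequence model over hospital visits
        self.head = nn.Linear(dim, num_classes)              # prediction head

    def forward(self, edge_index, visits):
        # edge_index: [2, num_edges] graph over all concepts (e.g. derived from UMLS)
        # visits: list of LongTensors, each holding the concept ids observed in one visit
        x = self.concept_emb.weight
        x = torch.relu(self.gnn1(x, edge_index))
        x = self.gnn2(x, edge_index)                                    # contextualized concepts
        visit_reprs = torch.stack([x[v].mean(dim=0) for v in visits])   # aggregate concepts per visit
        out, _ = self.seq(visit_reprs.unsqueeze(0))                     # run over the visit sequence
        return self.head(out[:, -1])                                    # predict from the last visit state


# Toy usage with random data (hypothetical graph and patient)
edge_index = torch.randint(0, 100, (2, 400))               # random concept graph over 100 concepts
visits = [torch.tensor([1, 5, 42]), torch.tensor([5, 7])]  # two visits of one patient
logits = PatientModelSketch(num_concepts=100)(edge_index, visits)
```

The sketch only illustrates the order of the components; the paper additionally incorporates multiple modalities and prior medical knowledge, which are omitted here.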