BMI Contributions at NeurIPS 2018

Date: 10 December 2018 | Author: Martina Baumann | Categories: News

Congratulations to our PhD students, whose work made for a strong lab presence at NeurIPS 2018 in Montreal!

At the BMI group we cover a broad spectrum of machine learning, from basic research to applied methods in (bio-)medicine. The cross-talk between our researchers and the sometimes fluid project boundaries shape the ML profile of our lab. For a glimpse of these different shades of ML, we summarize our contributions below.

SPOTLIGHT PRESENTATION

Boosting Black Box Variational Inference

Francesco Locatello*, Gideon Dresdner*, Rajiv Khanna, Isabel Valera and Gunnar Rätsch (* equal contribution)

Approximating a probability density in a tractable manner is a central task in Bayesian statistics. Variational Inference (VI) is a popular technique that achieves tractability by choosing a relatively simple variational family. Borrowing ideas from the classic boosting framework, recent approaches attempt to boost VI by replacing the selection of a single density with a greedily constructed mixture of densities. To guarantee convergence, however, previous works impose stringent assumptions that require significant effort from practitioners. Our work removes these obstacles with novel theoretical and algorithmic insights. [Paper: https://papers.nips.cc/paper/7600-boosting-black-box-variational-inference]
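
To give a flavor of the greedy mixture-building idea, here is a heavily simplified toy sketch in our own notation (not the algorithm from the paper): each round moment-matches a new Gaussian to the reweighted residual and mixes it in with a Frank-Wolfe-style step size, whereas the paper's method fits each new component by optimizing a residual objective with black-box gradients.

```python
# Toy sketch of boosting-style VI: greedily grow a mixture of Gaussians
# towards an (unnormalized) 1-D target. Heavily simplified illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def log_target(x):
    # Bimodal target density: mixture of two Gaussians.
    return np.logaddexp(stats.norm(-2, 0.5).logpdf(x) + np.log(0.5),
                        stats.norm(2, 0.8).logpdf(x) + np.log(0.5))

def mixture_logpdf(x, comps, weights):
    lps = np.stack([stats.norm(m, s).logpdf(x) for m, s in comps])
    return np.logaddexp.reduce(lps + np.log(np.array(weights))[:, None], axis=0)

comps, weights = [(0.0, 3.0)], [1.0]              # broad initial component
for t in range(1, 20):
    # Reweight samples from the current mixture towards the residual p/q,
    # then moment-match a new Gaussian component to the weighted samples.
    k = rng.choice(len(comps), size=4000, p=weights)
    x = np.array([rng.normal(*comps[i]) for i in k])
    logw = log_target(x) - mixture_logpdf(x, comps, weights)
    w = np.exp(logw - logw.max()); w /= w.sum()
    mu = np.sum(w * x)
    sd = np.sqrt(np.sum(w * (x - mu) ** 2)) + 1e-3
    gamma = 2.0 / (t + 2.0)                       # Frank-Wolfe style step size
    comps.append((mu, sd))
    weights = [wt * (1 - gamma) for wt in weights] + [gamma]

print([(round(m, 2), round(s, 2)) for m, s in comps])
```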

WORKSHOP CONTRIBUTIONS

Improving Clinical Predictions through Unsupervised Time Series Representation Learning (Spotlight Award, Machine Learning for Healthcare (ML4H) workshop)

Xinrui Lyu, Matthias Hüser, Stephanie L. Hyland, George Zerveas and Gunnar Rätsch

Unsupervised time series representations, which abstract away from the exact definition of individual clinical tasks and rely only on the inherent structure of the time series, are of general utility. In this work, we propose an unsupervised Seq2Seq autoencoder architecture with an attention mechanism, which inductively biases the representations towards the prediction of future events. In our experiments, we investigate the limited-labeled-data regime and show that, for predicting patient outcomes in the next 24 hours, our model outperforms purely supervised architectures and several unsupervised baselines. [arXiv: https://arxiv.org/abs/1812.00490]
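
As a rough illustration of the forecasting-as-representation idea, here is a minimal PyTorch sketch (not the authors' architecture; all names and sizes are made up): a GRU encoder summarizes the observed window, and a GRU decoder with dot-product attention over the encoder states predicts the next steps.

```python
# Minimal Seq2Seq "forecasting autoencoder" with dot-product attention.
# Illustrative only; the paper's architecture and objective differ in detail.
import torch
import torch.nn as nn

class Seq2SeqForecaster(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.decoder = nn.GRUCell(n_features, hidden)
        self.out = nn.Linear(2 * hidden, n_features)  # [state; context] -> x_t

    def forward(self, past, horizon):
        enc_states, h = self.encoder(past)            # (B, T, H), (1, B, H)
        h = h.squeeze(0)                              # representation of the past
        x_t = past[:, -1]                             # last observed step
        preds = []
        for _ in range(horizon):
            h = self.decoder(x_t, h)
            # Dot-product attention over the encoder states.
            scores = torch.bmm(enc_states, h.unsqueeze(-1)).squeeze(-1)
            attn = torch.softmax(scores, dim=-1)
            context = torch.bmm(attn.unsqueeze(1), enc_states).squeeze(1)
            x_t = self.out(torch.cat([h, context], dim=-1))
            preds.append(x_t)
        return torch.stack(preds, dim=1), h           # forecasts and embedding

# Train by regressing the next 24 steps from the preceding window:
model = Seq2SeqForecaster(n_features=8)
past, future = torch.randn(32, 48, 8), torch.randn(32, 24, 8)
pred, embedding = model(past, horizon=24)
loss = nn.functional.mse_loss(pred, future)
loss.backward()
```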


Deep Self-Organization: Interpretable Discrete Representation Learning on Medical Time Series (Machine Learning for Healthcare (ML4H) workshop)

Vincent Fortuin, Matthias Hüser, Francesco Locatello, Heiko Strathmann and Gunnar Rätsch

We propose a novel unsupervised method for learning interpretable discrete representations of time series, combining ideas from deep representation learning, self-organizing maps, and probabilistic modeling. [arXiv: https://arxiv.org/abs/1806.02199]
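
The self-organizing-map ingredient can be sketched in a few lines (simplified and in our own notation; the full model additionally couples this with an autoencoder via a gradient-copying trick and a probabilistic model over latent transitions): each latent code is assigned to its nearest node on a 2-D grid, and that node and its grid neighbours are pulled towards the code.

```python
# Sketch of SOM-style quantization of latent codes. Simplified illustration.
import torch

H, W, D = 8, 8, 16                      # SOM grid size and latent dimension
som = torch.randn(H * W, D, requires_grad=True)

def quantize(z):
    # z: (B, D). Find the nearest SOM node for each latent code.
    d = torch.cdist(z, som)             # (B, H*W)
    idx = d.argmin(dim=1)
    return som[idx], idx

def som_loss(z, idx):
    # Pull the winner and its 4-neighbourhood on the grid towards z.
    r, c = idx // W, idx % W
    loss = 0.0
    for dr, dc in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
        rn, cn = (r + dr).clamp(0, H - 1), (c + dc).clamp(0, W - 1)
        loss = loss + ((som[rn * W + cn] - z.detach()) ** 2).mean()
    return loss

z = torch.randn(32, D)                  # stand-in for encoder outputs
zq, idx = quantize(z)
loss = som_loss(z, idx)
loss.backward()
```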


Unsupervised Phenotype Identification from Clinical Notes for Association Studies in Cancer (Machine Learning for Healthcare (ML4H) workshop)

Stefan G. Stark, Stephanie L. Hyland, Julia E. Vogt and Gunnar Rätsch

We extract phenotypes from a large text corpus of Electronic Health Records of cancer patients for use in an association study against somatic mutations. In addition to a phenotype based on a simple ontological vocabulary, we derive phenotypes by clustering sentences both with and without the ontology. We recover known and novel associations across all phenotypes, including associations that were first reported only after the notes had been collected.
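
An illustrative toy pipeline for this kind of analysis might look as follows (our own sketch with hypothetical data, not the paper's exact method): cluster note sentences into candidate phenotypes, then test each phenotype for association with a mutation.

```python
# Toy phenotype-extraction-and-association sketch. Data are made up.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from scipy.stats import fisher_exact

sentences = ["patient reports fatigue", "tumor in left lung",
             "fatigue and weight loss", "lesion in right lung"]
patient_of = np.array([0, 0, 1, 1])          # which patient each sentence is from
has_mutation = np.array([True, False])       # per-patient somatic mutation status

X = TfidfVectorizer().fit_transform(sentences)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for k in range(2):
    # Phenotype k: patient has at least one sentence in cluster k.
    pheno = np.array([np.any(labels[patient_of == p] == k) for p in range(2)])
    table = [[np.sum(pheno & has_mutation), np.sum(pheno & ~has_mutation)],
             [np.sum(~pheno & has_mutation), np.sum(~pheno & ~has_mutation)]]
    print(k, fisher_exact(table))
```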


Scalable Gaussian Processes on Discrete Domains (Bayesian Nonparametrics (BNP) workshop)

Vincent Fortuin, Gideon Dresdner, Heiko Strathmann and Gunnar Rätsch

We explore different methods for fitting sparse Gaussian Process models on discrete input domains, using insights from discrete optimization to overcome the lack of gradient information for inducing point selection. [arXiv: https://arxiv.org/abs/1810.10368]
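
One gradient-free selection strategy of this kind, sketched here on toy bit strings (an illustration, not necessarily the variant from the paper), greedily picks the candidate input that most reduces the Nyström residual trace tr(K - Q):

```python
# Greedy inducing point selection on a discrete domain via pivoted Cholesky.
import numpy as np

def hamming_kernel(A, B, ls=2.0):
    # Valid kernel on bit strings: exponential of negative Hamming distance.
    d = (A[:, None, :] != B[None, :, :]).sum(-1)
    return np.exp(-d / ls)

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 10))        # discrete inputs (bit strings)
K = hamming_kernel(X, X)

chosen, diag = [], np.diag(K).copy()
V = np.zeros((0, len(X)))                     # rows: residual Cholesky factors
for _ in range(15):                           # pick 15 inducing points
    j = int(np.argmax(diag))                  # largest remaining variance
    chosen.append(j)
    v = (K[j] - V.T @ V[:, j]) / np.sqrt(diag[j])
    V = np.vstack([V, v])
    diag = np.maximum(diag - v ** 2, 0.0)
print("inducing points:", chosen, " residual trace:", diag.sum())
```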


Learning Mean Functions for Gaussian Processes (Bayesian Nonparametrics (BNP) workshop)

Vincent Fortuin

We show analytically and empirically that mean functions can be superior to kernel functions for learning Gaussian Process priors from data in a transfer learning setting, especially in the low-data regime. [BNP website: https://drive.google.com/file/d/1PuB1MTbw8ATgXkdvc71d-nI01pKeMJ2w/view]
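
A toy version of the transfer setting (our own sketch with a made-up task family): learn the prior mean by averaging over source tasks, then plug it into exact GP regression on a new task with very few observations.

```python
# Learned prior mean vs. zero mean in few-shot GP regression. Toy sketch.
import numpy as np

rng = np.random.default_rng(1)
f = lambda x, a: a * np.sin(x) + 0.1 * x      # task family; a varies per task

# Learn the mean as the empirical average over source tasks.
xs = np.linspace(-3, 3, 50)
mean_curve = np.mean([f(xs, a) for a in rng.normal(1.0, 0.2, size=20)], axis=0)
prior_mean = lambda x: np.interp(x, xs, mean_curve)

def gp_posterior_mean(x_tr, y_tr, x_te, m, ls=1.0, noise=0.1):
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)
    K = k(x_tr, x_tr) + noise * np.eye(len(x_tr))
    return m(x_te) + k(x_te, x_tr) @ np.linalg.solve(K, y_tr - m(x_tr))

# New task with only two observations: the learned mean helps far from the data.
x_tr = np.array([-1.0, 0.5]); y_tr = f(x_tr, a=1.1)
x_te = np.linspace(-3, 3, 7)
print(gp_posterior_mean(x_tr, y_tr, x_te, prior_mean))
print(gp_posterior_mean(x_tr, y_tr, x_te, lambda x: np.zeros_like(x)))
```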


On the Connection between Neural Processes and Gaussian Processes with Deep Kernels (Bayesian Deep Learning (BDL) workshop)

Tim Georg Johann Rudner, Vincent Fortuin, Yee Whye Teh and Yarin Gal

We present an exact equivalence between the recently proposed Neural Processes (NPs) and a family of parametric kernel Gaussian Processes (GPs), in the case where the NP decoder uses an affine transformation and the latent GP prior is informed by the context data. [BDL website: http://bayesiandeeplearning.org/2018/papers/128.pdf]
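
The direction from affine decoder to GP can be seen in one line (our notation, not the paper's: a(x) and b(x) are the decoder's input-dependent weights and offsets, and z the latent variable). Any finite collection of outputs is an affine transformation of a Gaussian vector and hence jointly Gaussian:

```latex
\[
f(x) = a(x)^\top z + b(x), \quad z \sim \mathcal{N}(\mu, \Sigma)
\;\Longrightarrow\;
f \sim \mathcal{GP}(m, k), \quad
m(x) = a(x)^\top \mu + b(x), \quad
k(x, x') = a(x)^\top \Sigma\, a(x').
\]
```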