Representation Learning

Structured time series are common in biomedical datasets and present particular challenges: variables with different periodicities, conditioning on static data, and so on. Our laboratory has developed considerable expertise in patient state-space models, including interpretable clustering [1-4], time series imputation [5], and sepsis prediction [6]. These approaches typically use un-, semi-, or self-supervised learning techniques that extract information from a patient's past (up to a specific time) and compute a representation summarising all relevant information, such that the representation is predictive of future data, diseases, or outcomes. These representations are often coupled with supervised learning techniques in specific applications, using labels for selected medical events available for a subset of the patients or time points. Developing these methods further and combining them for maximum performance is an exciting avenue of research.

Our current and future research on sparse multivariate time series for the early prediction of rare events focuses on several key challenges:

  1. Early prediction tasks are currently treated as simple binary classification, with few bespoke methods that leverage the temporal dependency across contiguous samples. One apparent observation is that the further in time we are from an event, the harder it is to predict, and thus the less confident a model should be. We therefore want to extend the well-known label smoothing technique with a temporal dependency, increasing the smoothing as the prediction time moves further from the event (see the sketch after this list). This prevents the network from fitting known-noisy samples around the prediction horizon and thus improves recall for events closer in time.
  2. Motivated by the recent success of various deep learning frameworks for embedding learning, we aim to combine these advances into a model that is more robust to the high heterogeneity of real-world time series.
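
As a minimal sketch of the first idea, the hard "event within horizon" label can be replaced by a soft target whose confidence decays with the time remaining until the event. The sigmoid schedule, parameter names, and default values below are illustrative assumptions, not the final formulation:

    import numpy as np

    def temporally_smoothed_labels(time_to_event, horizon=12.0, temperature=2.0):
        """Soft targets for 'event within `horizon` hours' prediction.

        A sample recorded just before the event gets a target near 1,
        a sample exactly `horizon` hours away gets 0.5, and samples far
        from any event (time_to_event = np.inf) get a target near 0.
        """
        t = np.asarray(time_to_event, dtype=float)
        # Confidence decays smoothly with distance to the event instead
        # of jumping from 1 to 0 at the horizon boundary.
        return 1.0 / (1.0 + np.exp((t - horizon) / temperature))

Training with binary cross-entropy against these soft targets, rather than hard 0/1 labels, keeps the model from being penalised for appropriate uncertainty on samples far from the event.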

Graph Neural Networks

Clinical data comes from a heterogeneous mix of sources and is full of complex relationships. We combine medical ontologies, biomedical databases, and text from EHR records, and encode them in a learnt, comprehensive, and dynamic representation. The distilled information can then be provided to different predictive models.
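
As an illustration of the kind of building block involved, the sketch below implements one round of message passing over a graph of medical concepts in PyTorch; the graph construction, layer design, and names are our assumptions rather than a description of a specific model:

    import torch
    import torch.nn as nn

    class ConceptGraphLayer(nn.Module):
        """One message-passing step over a medical-concept graph, e.g.
        ontology edges plus co-occurrence links mined from EHR text."""

        def __init__(self, dim):
            super().__init__()
            self.self_proj = nn.Linear(dim, dim)
            self.neigh_proj = nn.Linear(dim, dim)

        def forward(self, x, adj):
            # x:   (num_concepts, dim) node embeddings
            # adj: (num_concepts, num_concepts) row-normalised adjacency
            neigh = adj @ x  # average over each node's neighbours
            return torch.relu(self.self_proj(x) + self.neigh_proj(neigh))

Stacking a few such layers yields concept embeddings that summarise a node's graph context and can then be consumed by downstream predictive models.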

Multi-modal models

Previous research has shown improved performance when adding medical notes to ICU time series data on the MIMIC-III benchmark tasks. However, these studies have yet to explore where the improvement from adding clinical notes comes from. Our research therefore investigates which note types, and which content within the notes, contribute to the enhanced performance.

Self- and semi-supervised learning

Our aim in this line of work is to develop a novel contrastive objective that is robust to confounders. As previously mentioned, in the early prediction of rare events, contiguous samples are temporally dependent. In addition, highly impactful application domains such as Electronic Health Records exhibit confounding factors at various scales: samples can come from different hospitals or acquisition devices, for instance. Supervised methods can learn the required degree of invariance to these shifts, but state-of-the-art contrastive learning methods are not designed to handle confounding factors in the data. To achieve this goal, we build on prior work developed for the specific case where representations should be identical or independent across known sources. By designing a negative-sampling approach that accounts for such confounding effects, we aim to surpass both previous self-supervised methods and their supervised counterparts, notably as the proportion of labelled data decreases. We proposed Neighborhood Contrastive Learning (NCL) [7], a self-supervised learning objective for data suffering from confounders. By redefining negative sampling around the notion of neighbouring samples, this objective learns representations with a tunable dependence on identified confounding factors. In particular, we show superiority over existing contrastive learning methods for the early prediction of organ failure.
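
To make the idea concrete, here is a heavily simplified PyTorch sketch of a neighbourhood-aware contrastive loss. The function name, the handling of the neighbour mask, and the single alpha trade-off are our simplifications for illustration; they do not reproduce the exact NCL objective of [7]:

    import torch
    import torch.nn.functional as F

    def ncl_style_loss(z1, z2, neighbor_mask, temperature=0.1, alpha=0.3):
        """Neighbourhood-aware contrastive loss (illustrative sketch).

        z1, z2        : (n, d) embeddings of two augmented views of a batch.
        neighbor_mask : (n, n) bool tensor, True where samples i and j share
                        an identified confounder (same patient, hospital, ...).
        alpha         : 0 treats neighbours as ordinary negatives (plain
                        instance discrimination); larger values instead pull
                        neighbouring samples together.
        """
        n = z1.size(0)
        z = F.normalize(torch.cat([z1, z2]), dim=1)  # (2n, d)
        eye = torch.eye(2 * n, dtype=torch.bool, device=z.device)
        sim = (z @ z.t() / temperature).masked_fill(eye, float("-inf"))
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

        # Instance-discrimination term: each view's positive is its twin view.
        idx = torch.arange(2 * n, device=z.device)
        pos = torch.cat([idx[n:], idx[:n]])
        inst = log_prob[idx, pos]

        # Neighbour-alignment term: average log-probability of all samples
        # sharing a confounder, so they are attracted rather than repelled.
        m = neighbor_mask.repeat(2, 2) & ~eye
        neigh = log_prob.masked_fill(~m, 0.0).sum(1) / m.sum(1).clamp(min=1)

        return -((1 - alpha) * inst + alpha * neigh).mean()

With alpha = 0 this reduces to standard instance discrimination, where neighbours (for example, samples from the same patient or hospital) act as negatives; increasing alpha aligns representations within a neighbourhood, tuning how strongly the identified confounder is encoded.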

Involved Group Members: Hugo Yeche, Rita Kuznetsova, Alizée Pace, Manuel Burger, Gunnar Rätsch

References
[1] Fortuin, V., Hüser, M., Locatello, F. & Strathmann, H. SOM-VAE: Interpretable discrete representation learning on time series. arXiv preprint (2018).
[2] Lyu, X., Hüser, M., Hyland, S. L. & Zerveas, G. Improving clinical predictions through unsupervised time series representation learning. arXiv preprint (2018).
[3] Manduchi, L., Hüser, M., Vogt, J., Rätsch, G. & Fortuin, V. DPSOM: Deep probabilistic clustering with self-organizing maps. arXiv preprint (2019).
[4] Manduchi, L., Hüser, M., Rätsch, G. & Fortuin, V. Variational pSOM: Deep probabilistic clustering with self-organizing maps. (2019).
[5] Fortuin, V., Baranchuk, D., Rätsch, G. & Mandt, S. GP-VAE: Deep probabilistic time series imputation. arXiv preprint arXiv:1907.04155 (2019).
[6] Rosnati, M. & Fortuin, V. MGP-AttTCN: An interpretable machine learning model for the prediction of sepsis. arXiv preprint (2019).
[7] Yèche, H. et al. Neighborhood contrastive learning applied to online patient monitoring. International Conference on Machine Learning (ICML), PMLR (2021).