252-5051-00L Advanced Topics in Machine Learning (Autumn 2017)

Abstract
In this seminar, recent papers from the pattern recognition and machine learning literature are presented and discussed. Topics include statistical models in computer vision, graphical models, and machine learning.

Objective
The seminar "Advanced Topics in Machine Learning" familiarizes students with recent developments in pattern recognition and machine learning. Students present and critically review original research articles. They will learn how to structure a scientific presentation in English that covers the key ideas of a scientific paper. An important goal of the seminar presentation is to summarize the essential ideas of the paper in sufficient depth while omitting details that are not essential for understanding the work. Presentation style will play an important role and should reach the level of professional scientific presentations.

Content
The seminar will cover a number of recent papers which have emerged as important contributions to the pattern recognition and machine learning literature. The topics vary from year to year but are centered on methodological issues in machine learning, such as new learning algorithms, ensemble methods, or new statistical models for machine learning applications. Frequently, papers are selected from computer vision or bioinformatics, two fields that rely more and more on machine learning methodology and statistical models.

Literature

  1. Deep Learning theory, October 3, 2017
    1. Shwartz-Ziv, Ravid, and Naftali Tishby. "Opening the Black Box of Deep Neural Networks via Information." arXiv preprint arXiv:1703.00810 (2017). PDF.

    2. Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, Jascha Sohl-Dickstein. On the Expressive Power of Deep Neural Networks. Proceedings of the 34th International Conference on Machine Learning, PMLR 70:2847-2854, 2017. PDF.

  2. "Best of both worlds" - combining deep learning and probabilistic models, October 10, 2017

    1. "Sequential Neural Models with Stochastic Layers", Marco Fraccaro, Søren Kaae Sønderby, Ulrich Paquet, Ole Winther - NIPS 2016. PDF.

    2. "Structured Inference Networks for Nonlinear State Space Models" - RG Krishnan, U Shalit, D Sontag - AAAI 2017. PDF.

  3. Dealing with heterogeneous temporal data, October 31, 2017

    1. Song, Yang, Ali Mamdouh Elkahky, and Xiaodong He. "Multi-rate deep learning for temporal recommendation." Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2016. PDF.

    2. Baytas, Inci M., et al. "Patient Subtyping via Time-Aware LSTM Networks." Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2017. PDF.

  4. Recent improvements in Recurrent Neural Networks, November 7, 2017

    1. Massimo Quadrana et al. "Personalizing Session-based Recommendations with Hierarchical Recurrent Neural Networks." RecSys '17, August 27–31, 2017, Como, Italy. PDF.

    2. Daniel Neil, Michael Pfeiffer, Shih-Chii Liu. "Phased LSTM: Accelerating Recurrent Network Training for Long or Event-based Sequences." NIPS 2016: 3882-3890. PDF.

  5. Counterfactual modeling, December 12, 2017

    1. Schulam P, Saria S. "What-If Reasoning with Counterfactual Gaussian Processes." arXiv preprint arXiv:1703.10651 (2017). NIPS 2017. PDF.

    2. Kusner MJ, Loftus JR, Russell C, Silva R. "Counterfactual Fairness." arXiv preprint arXiv:1703.06856 (2017). NIPS 2017. PDF.

  6. Auxiliary topics, December 19, 2017

    1. Mariet, Zelda, and Suvrit Sra. "Diversity networks." arXiv preprint arXiv:1511.05077 (2015). PDF.

    2. Kingma, Diederik P., and Max Welling. "Auto-Encoding Variational Bayes." arXiv preprint arXiv:1312.6114 (2013). PDF.