263-5056-00L Applications of Deep Learning on Graphs (Autumn 2023)

Abstract

Graphs are an incredibly versatile abstraction to represent arbitrary structures such as molecules, relational knowledge, or social and traffic networks. This course provides a practical overview of deep (representation) learning on graphs and their applications.

Objective

Many established deep learning methods require dense input data with a well-defined structure (e.g. an image or a sequence of word embeddings). However, many practical applications deal with sparsely connected and complex data structures, such as molecules, knowledge graphs, or social networks. Graph Neural Networks (GNNs), and representation learning on graphs more generally, have recently surged in popularity because they address the challenge of effectively learning representations over such structures. In this course, we aim to understand the fundamental principles of deep (representation) learning on graphs, its similarities to and differences from other concepts in deep learning, and the unique challenges it poses in practice. Finally, we provide an overview of recent applications of graph neural networks.

Content

Introduction to GNN concepts:

  1. Tasks on graphs (node-, edge-, graph-level objectives), structural priors (inductive biases) of graph data, and applications for graph learning.
  2. Graph Neural Networks: convolutional, attentional, message passing, etc.; relations to Transformers and DeepSets (a minimal code sketch follows at the end of this section).
  3. Expressivity of GNNs.
  4. Scalability of Graph Neural Networks: Subsampling, Clustering (Pooling).
  5. Augmentations and self-supervised learning on graphs.

Applications: Drug Discovery, Knowledge Graphs, Temporal GNNs, Geometric GNNs, Deep Generative Models for Graphs.
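
To make the message-passing idea in point 2 above concrete, here is a minimal, illustrative sketch of a two-layer graph convolutional network for node classification using PyTorch Geometric (the library used in the tutorial sessions). The class and variable names are our own illustration and not part of the course material.

import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

class TwoLayerGCN(torch.nn.Module):
    """Two stacked graph convolutions: aggregate neighbour features, then classify each node."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))   # one round of neighbourhood aggregation
        return self.conv2(x, edge_index)        # per-node class logits

# Toy graph: 3 nodes, 2 undirected edges (each edge listed in both directions).
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]], dtype=torch.long)
x = torch.randn(3, 8)                            # 8-dimensional node features
data = Data(x=x, edge_index=edge_index)

model = TwoLayerGCN(in_dim=8, hidden_dim=16, num_classes=2)
logits = model(data.x, data.edge_index)          # shape: [3, 2]

Swapping GCNConv for GATConv (attentional) or a custom MessagePassing subclass yields the other GNN flavours listed above.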

Location

The lecture takes place in person at ETH in room CAB G 51, on Wednesdays from 16:15 to 18:00.

This is an applied course with project discussions, paper presentations, and tutorials. We encourage all students to attend in person.

Course Overview

Date | Part 1 | Part 2 | Course Material / Literature
20.09.2023 | [L] Introduction / Motivation | [L] Course Organisation | Lecture 1 - Part 1; Lecture 1 - Part 2
27.09.2023 | [L] Features and Node Embeddings | [T] PyTorch Geometric | Lecture 2 - Part 1; Lecture 2 - Tutorial (.zip)
04.10.2023 | [L] Intro to GNNs | [L] Intro to GNNs | -
11.10.2023 | [L] Training GNNs | [T] Project 1: Introduction | -
18.10.2023 | [GL: Leslie O'Bray] Graph Transformer | [P] Applications of GNNs | -
25.10.2023 | [GL: Kenza Amara] Explainability | [P] Advanced GNNs | -
01.11.2023 | [T] Oversmoothing, Scaling, Expressiveness | [P] Limits of GNNs; Project 1 Deadline | -
08.11.2023 | [L] Augmentations | [P] Adversarial Attacks, QA | -
15.11.2023 | [GL: Mrinmaya Sachan] Knowledge Graphs | [T] Project 2: Introduction | -
22.11.2023 | [L] Temporal GNNs | [P] Clinical & Genomics Applications | -
29.11.2023 | [L] Generative Modelling on Graphs | [P] Developing GNNs | -
06.12.2023 | [L] Drug Discovery | [P] Knowledge Graphs; Project 2 Deadline | -
13.12.2023 | [P] Generative NNs | [P] Geometric / 3D GNNs | -
20.12.2023 | [L] Exam Questions | [P] Project 2: Presentations | -

[L]: Lecture, [GL]: Guest Lecture, [T]: Tutorial, [P]: Paper/Project Presentations

Material

The slides for the lectures will be posted as the course progresses.

Additional reading:

Materials for review of prerequisites:

Assessment

  • ECTS: 4
  • Deliverables:
    • Paper presentation (in groups of 2)
    • Project report (in groups of 3)
    • Written exam (individual, session examination)
  • Grade: 0.4 * (project and paper) + 0.6 * exam (see the worked example below)
  • Exam: Written (session examination, 120 min)
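
For illustration only (the grades below are hypothetical): a combined project-and-paper grade of 5.0 and an exam grade of 4.5 would give a final grade of 0.4 * 5.0 + 0.6 * 4.5 = 2.0 + 2.7 = 4.7.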

Note that you must receive a passing grade for both the paper presentation and the project to be admitted to the exam. If you do not, please de-register; otherwise we will have to record you as a no-show.

The written exam will take place in February and will cover topics discussed in the lectures and the project. The exam duration is 120 minutes. You may bring one A4 page (single-sided) of notes. The notes may be handwritten or typed (minimum font size: Arial 10 pt).

Administrative details: