The code below aims to enhance the accessibility and reproducibility of my work. If you use it, please cite the associated papers.

**Attention is not all you need.** Code reproducing our findings.

code, bibtex, preprint, blogpost

**Structural message passing.** SMP is a message-passing graph neural network that is more expressive than standard message-passing architectures. The respective paper appeared at NeurIPS 2020.

code, bibtex, paper, video, blogpost

**Erdős goes neural.** The code for the respective paper, which appeared at NeurIPS 2020.

code, bibtex, paper, video, blogpost

**Graph coarsening.** Obtain coarse graphs that are spectrally similar to a target graph and reproduce the results from “*Graph reduction with spectral and cut guarantees*,” JMLR 2019.

code, bibtex, paper, blogpost

**Attention & convolution.** Code reproducing results from “*On the relationship between self-attention and convolution*,” ICLR 2020.

code, bibtex, paper, blogpost

**Gretel.** Solve the path extrapolation problem and reproduce results from “*Path extrapolation with graph neural networks*,” IJCAI 2019.

code, bibtex, paper, blogpost

**Joint Fourier Transform.** The routines for Fourier analysis of graph signals are part of the GSPBOX. The code below reproduces results from “*A time-vertex signal processing framework*,” TSP 2018.

code, bibtex, preprint
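As a quick illustration of the joint time-vertex idea, here is a minimal NumPy sketch (not the GSPBOX implementation): the joint Fourier transform applies a graph Fourier transform (Laplacian eigenbasis) along the vertex axis and a DFT along the time axis. The graph, signal sizes, and function names below are illustrative assumptions.

```python
import numpy as np

def joint_fourier_transform(L, X):
    """Joint time-vertex Fourier transform of a signal X (nodes x time):
    graph Fourier transform along the vertex axis, DFT along time."""
    lam, U = np.linalg.eigh(L)            # graph frequencies and eigenbasis
    return np.fft.fft(U.T @ X, axis=1), U

def inverse_jft(U, X_hat):
    """Invert the joint transform: inverse DFT in time, then inverse GFT."""
    return np.real(U @ np.fft.ifft(X_hat, axis=1))

# Small demo: a 4-node path graph observed over 8 time steps.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A            # combinatorial Laplacian
X = np.random.default_rng(0).standard_normal((4, 8))
X_hat, U = joint_fourier_transform(L, X)
X_rec = inverse_jft(U, X_hat)             # round trip recovers X
```

Both transforms are unitary (up to the DFT scaling handled by `ifft`), so the round trip is exact up to floating-point error.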

**Time-vertex stationarity.** Model stochastic graph signals that vary in time and reproduce the results from “*Stationary time-vertex signal processing*,” JASP 2019.

code, bibtex, paper

**ARMA graph filters.** Use these filters to process graph signals while taking into account long-range interactions between nodes, as in “*Autoregressive Moving Average Graph Filtering*,” TSP 2017.

code, bibtex, preprint, blogpost
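To make the long-range behavior concrete, here is a minimal sketch (not the paper's code) of a first-order ARMA graph filter realized as a fixed-point recursion on a shifted Laplacian; the graph, `psi`, `phi`, and the iteration count are illustrative choices.

```python
import numpy as np

def arma1_filter(L, x, psi, phi, n_iter=200):
    """ARMA_1 graph filter via the recursion y <- psi * (M @ y) + phi * x,
    with M = (lmax / 2) * I - L a shifted Laplacian.  When |psi| * ||M|| < 1
    the iteration converges to y = phi * (I - psi * M)^{-1} @ x, a rational
    (autoregressive moving average) response in the graph frequencies that
    mixes information across the whole graph, not just local neighborhoods."""
    lmax = np.linalg.eigvalsh(L).max()
    M = (lmax / 2) * np.eye(len(L)) - L
    y = np.zeros_like(x, dtype=float)
    for _ in range(n_iter):
        y = psi * (M @ y) + phi * x
    return y

# Impulse on a 4-node path graph, filtered with a stable (psi, phi) pair.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
x = np.array([1.0, 0.0, 0.0, 0.0])
y = arma1_filter(L, x, psi=0.3, phi=1.0)
```

Each iteration only exchanges messages with neighbors, yet the fixed point equals a dense rational filter, which is how the recursion captures long-range node interactions with local operations.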

**Independent implementations and applications**: ARMA convolution (PyTorch Geometric, Spektral; see also this); SPINNER graph partitioning on GIRAPH (Okapi).

*All code is distributed freely.*