My favourite papers of 2021

With Christmas now over in the United Kingdom, selection boxes contain only Bounty bars and television networks are running nothing but repeats. Inspired by this lull, I’m going to outline the papers and posts that I’ve most enjoyed reading in the past year.

Challenges and Opportunities in High-dimensional Variational Inference

A framework for diagnosing and resolving issues in learned variational approximations. As someone who enjoyed Yes, but did it work? Evaluating variational inference, I found this a lovely follow-on paper.

A Geometrical Understanding of Matrices

An intuitive and beautifully illustrated blog post that presents matrices as geometric operators. Understanding a matrix as a geometric operator was not a new concept to me, but I very much enjoyed the exposition given in this post.
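
To make the idea concrete, here is a minimal NumPy sketch of the geometric reading via the SVD: every real matrix acts as a rotation/reflection, then an axis-aligned scaling, then another rotation/reflection. The example is my own and not taken from the post.

```python
import numpy as np

# Any real matrix factors, via the SVD, into
# rotate/reflect (Vt) -> scale (s) -> rotate/reflect (U).
A = np.array([[2.0, 1.0],
              [0.0, 1.0]])
U, s, Vt = np.linalg.svd(A)

x = np.array([1.0, 1.0])
direct = A @ x                           # apply A in one go
geometric = U @ (np.diag(s) @ (Vt @ x))  # or as three geometric operations
assert np.allclose(direct, geometric)
```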

Learning GPLVM with arbitrary kernels using the unscented transformation

A clearly written and elegant solution to the problem of constructing Bayesian GPLVM models with kernels more exotic than the usual RBF or polynomial choices.
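
The central device, approximating the intractable kernel expectations in the Bayesian GPLVM bound with the unscented transform, is simple enough to sketch. Below is a generic version of the transform applied to an RBF kernel expectation; the function and the test kernel are my own illustration, not the paper’s code.

```python
import numpy as np

def unscented_expectation(f, mu, Sigma, alpha=1.0, kappa=0.0):
    """Approximate E[f(x)] for x ~ N(mu, Sigma) via the unscented transform."""
    n = mu.shape[0]
    lam = alpha ** 2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * Sigma)  # matrix square root
    # 2n + 1 sigma points: the mean plus symmetric offsets along L's columns.
    points = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
    weights = [lam / (n + lam)] + [1.0 / (2 * (n + lam))] * (2 * n)
    return sum(w * f(p) for w, p in zip(weights, points))

# The kind of expectation a Bayesian GPLVM bound needs: E[k(x, z)] under a
# Gaussian q(x). Any kernel can be swapped in, not just the RBF.
rbf = lambda x, z: np.exp(-0.5 * np.sum((x - z) ** 2))
mu, Sigma, z = np.zeros(2), 0.1 * np.eye(2), np.ones(2)
print(unscented_expectation(lambda x: rbf(x, z), mu, Sigma))
```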

Networks beyond pairwise interactions: structure and dynamics

A comprehensive review that presents hypergraphs from the basics through to a wide range of real-world examples. Hypergraphs were a new topic to me this year and, without this paper, getting up to speed with them would have been much harder.
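
For anyone else meeting the idea for the first time, the distinction the review builds on fits in a few lines of Python; this is a toy illustration of mine, not code from the paper.

```python
# A graph edge joins exactly two vertices...
graph_edges = [("alice", "bob"), ("bob", "carol")]

# ...while a hyperedge may join any number, capturing genuinely group-level
# interactions (e.g. co-authorship of a single paper).
hyperedges = [
    {"alice", "bob", "carol"},  # one three-way interaction, not three pairs
    {"carol", "dave"},          # the pairwise case is recovered as a special case
]
```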

Metrics for Probabilistic Geometries

A wonderful paper by my former supervisor, Alessandra, showing that the latent space of a GPLVM is in fact a Riemannian manifold. Not only is the writing in this paper fantastic, but the results are also exciting and have been the spark for many of my recent whiteboarding sessions.

The Ridgelet Prior: A Covariance Function Approach to Prior Specification for Bayesian Neural Networks

Gaussian priors are often used in Bayesian neural networks; however, they are seldom optimal. I enjoyed this paper’s introduction of the Ridgelet prior: a principled and well-presented alternative approach to prior specification in BNNs, with a connection to Gaussian processes.

SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives

Using a control-variate approach, SAGA offers an alternative to SGD with faster rates of convergence. Optimisation is not an area of machine learning in which I am particularly well versed, so this paper was a real eye-opener.
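
The update itself is compact: keep a table of the most recent gradient computed for each data point, and use the table’s average as a control variate. Here is a rough sketch in my own notation, with a toy least-squares objective standing in; the paper also covers proximal and composite terms, which this omits.

```python
import numpy as np

def saga(grad_i, x0, n, lr, steps, rng=None):
    """Minimise (1/n) * sum_i f_i(x), where grad_i(x, i) is the gradient of f_i."""
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    table = np.stack([grad_i(x, i) for i in range(n)])  # last gradient seen per point
    avg = table.mean(axis=0)                            # running mean of the table
    for _ in range(steps):
        j = rng.integers(n)
        g = grad_i(x, j)
        x -= lr * (g - table[j] + avg)  # control-variate step: unbiased, low variance
        avg += (g - table[j]) / n       # keep the mean in sync before overwriting
        table[j] = g
    return x

# Toy finite-sum least squares: f_i(x) = 0.5 * (a_i . x - b_i)^2.
rng = np.random.default_rng(1)
A, b = rng.normal(size=(50, 3)), rng.normal(size=50)
grad = lambda x, i: (A[i] @ x - b[i]) * A[i]
x_saga = saga(grad, np.zeros(3), n=50, lr=0.02, steps=5000)
print(np.allclose(x_saga, np.linalg.lstsq(A, b, rcond=None)[0], atol=1e-2))
```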

Scientific Visualization: Python + Matplotlib

Matplotlib broken down to its primitive concepts and then built back up into the powerful plotting tool it is. Despite years of using Matplotlib, I still found this book informative; it patched up many of the holes and misunderstandings in my mental model of the library.
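
The book’s starting point, as I read it, is Matplotlib’s object hierarchy: Figures own Axes, and Axes own Artists. A small sketch of that explicit style (my own example, not one from the book):

```python
import matplotlib.pyplot as plt

# The primitives: a Figure owns Axes, and Axes own Artists (lines, labels,
# ticks). After creation, everything goes through the objects themselves
# rather than the pyplot state machine.
fig, ax = plt.subplots(figsize=(4, 3))
(line,) = ax.plot([0, 1, 2], [0, 1, 4])  # ax.plot returns a list of Line2D artists
ax.set_xlabel("x")
ax.set_ylabel("y")
line.set_linestyle("--")                 # artists stay mutable after creation
fig.savefig("example.png", dpi=150)
```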

The SPDE approach for Gaussian and non-Gaussian fields: 10 years and still running

Writing a Gaussian process as an SPDE has its origins in Whittle’s work in the 1950s. This paper is an informative survey of how the approach has developed over the years to enable algorithms such as INLA and GPs for non-Euclidean data.
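
For context, Whittle’s link in its standard modern form (my transcription, not a formula lifted from the survey): the stationary solution of the fractional SPDE below, on R^d with Gaussian white noise, is a Gaussian field with Matérn covariance.

```latex
% Whittle's link: the stationary solution u(s) of this SPDE is a Gaussian
% field with Matern covariance; alpha sets the smoothness (nu = alpha - d/2)
% and kappa the inverse length-scale. W denotes Gaussian white noise.
\[
  (\kappa^2 - \Delta)^{\alpha/2} \, u(s) = \mathcal{W}(s)
\]
```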