With Christmas now over in the United Kingdom, selection boxes contain only Bounty bars and television networks are running nothing but repeats. Inspired by this, I’m going to outline the papers and posts that I’ve enjoyed reading in the past year.
A framework for diagnosing and resolving any issues, should they exist, in learned variational approximations. As someone who enjoyed Yes, but did it work? Evaluating variational inference, this was a lovely follow-on paper.
An intuitive and beautifully illustrated blog post that presents matrices as geometric operators. Understanding a matrix as a geometric operator was not a new concept to me, but I very much enjoyed the exposition given in this post.
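As a small illustration of this viewpoint (my own example, not one from the post), a 2×2 rotation matrix acts on every vector in the plane by turning it through a fixed angle:

```python
import numpy as np

# A rotation matrix is a matrix read as a geometric operator:
# it turns every vector by theta while preserving lengths.
theta = np.pi / 2  # rotate by 90 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 0.0])  # unit vector along the x-axis
w = R @ v                 # rotated (approximately) onto the y-axis

# Length preservation shows up algebraically as determinant 1.
print(w)
print(np.linalg.det(R))
```

Thinking of the matrix this way, rather than as a grid of numbers, is exactly the shift in perspective the post illustrates.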
A clearly written and elegant solution to the problem of constructing Bayesian GPLVM models with more exotic kernels than just an RBF or polynomial kernel.
A comprehensive review paper that presents the topic of hypergraphs, from the basics through to a wide range of real-world examples. Hypergraphs were a new topic to me this year and, without this paper, the task of getting up to speed with them would have been much harder.
A wonderful paper by my former supervisor, Alessandra, showing that the latent space of a GPLVM is in fact a Riemannian manifold. Not only is the writing in this paper fantastic, but the results are also exciting and have been the ignition for many of my recent whiteboarding sessions.
Gaussian priors are often used in Bayesian neural networks. However, they are seldom optimal. I enjoyed the introduction of the Ridgelet prior in this paper as it was a principled and well-presented alternative approach to prior specification in BNNs with a connection to GPs.
Using a control-variate-based approach, SAGA presents an alternative to SGD that offers faster rates of convergence. Optimisation is not an area of machine learning that I am particularly well-versed in, so this paper was a real eye-opener for me.
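The core idea can be sketched in a few lines. This is a minimal, illustrative implementation of the SAGA update on a toy least-squares problem; the problem data, step size, and iteration count are my own choices, not from the paper:

```python
import numpy as np

# Toy finite-sum problem: min_x (1/n) * sum_i (a_i^T x - b_i)^2.
rng = np.random.default_rng(0)
n, d = 50, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true

def grad_i(x, i):
    # Gradient of the i-th component f_i(x) = (a_i^T x - b_i)^2.
    return 2.0 * (A[i] @ x - b[i]) * A[i]

x = np.zeros(d)
# SAGA keeps a table of the most recent gradient seen for each i.
table = np.array([grad_i(x, i) for i in range(n)])
table_mean = table.mean(axis=0)
step = 0.005

for _ in range(10_000):
    i = rng.integers(n)
    g_new = grad_i(x, i)
    # SAGA direction: fresh gradient, minus the stale stored one,
    # plus the table average. The stored terms act as a control
    # variate, keeping the estimate unbiased with shrinking variance.
    x = x - step * (g_new - table[i] + table_mean)
    # Refresh the table entry and its running mean in O(d).
    table_mean += (g_new - table[i]) / n
    table[i] = g_new
```

Because the variance of the gradient estimate vanishes as the table converges, SAGA can use a constant step size where plain SGD would need a decaying one.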
Matplotlib broken down to its primitive concepts and then built back out again as the powerful plotting tool it is. Despite using Matplotlib for years now, this book was still informative and patched up many of the holes and misunderstandings in my knowledge of the library.
Writing a Gaussian process as an SPDE has origins with Whittle in the 1950s. This paper is an informative survey of how the approach has developed over the years to enable algorithms such as INLA and GPs for non-Euclidean data.
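For reference, the standard statement of the Whittle link (as popularised by Lindgren, Rue, and Lindström, not quoted from the survey itself) is that a Matérn Gaussian process on $\mathbb{R}^d$ arises as the stationary solution of a fractional SPDE:

```latex
(\kappa^2 - \Delta)^{\alpha/2} f(s) = \mathcal{W}(s),
\qquad \alpha = \nu + \tfrac{d}{2},
```

where $\mathcal{W}$ is Gaussian white noise, $\kappa$ controls the length-scale, and $\nu$ the smoothness of the Matérn kernel. Discretising this equation, rather than working with the kernel directly, is what yields the sparse precision matrices behind INLA and the extensions to non-Euclidean domains.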