Gaussian Processes🔗
This class continues our exploration of the Bayesian approach to Machine Learning. Starting from the question of estimating $P(y|x)$ and the hypothesis that a set of observations $y$ is distributed as a Gaussian vector around its mean, we shall see how to derive an explicit form for the distribution of $y|x$. This explicit form, called a Gaussian Process, provides the likelihood that each possible function fits our data. In turn, this yields an elegant and efficient way to estimate not only the most likely output $y$ for a given $x$, but also a confidence interval around this prediction.
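As a preview of what the class covers, the two quantities mentioned above (the most likely output and its confidence interval) can be computed in closed form from the training data and a covariance function. The sketch below, which assumes a zero-mean GP with a squared-exponential kernel and hypothetical parameter choices (length scale, noise level) not taken from the class material, shows the standard posterior computation:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    sq_dists = (A[:, None] - B[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dists / length_scale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-2):
    """Posterior mean and variance of a zero-mean GP at the test inputs."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)                      # numerically stable solve
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha                           # most likely outputs
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)     # predictive variance
    return mean, var

# Toy data: noisy observations of sin(x)
X = np.linspace(0, 5, 8)
y = np.sin(X) + 0.05 * np.random.default_rng(0).normal(size=8)
mean, var = gp_posterior(X, y, np.array([2.5]))
# 95% confidence interval around the prediction
lo, hi = mean - 1.96 * np.sqrt(var), mean + 1.96 * np.sqrt(var)
```

The posterior mean is the point prediction and the posterior variance widens the interval in regions far from the training inputs, which is exactly the uncertainty quantification the class derives.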
Pre-class refresher activities and solution
Summary card
References🔗
Gaussian Processes for Machine Learning.
C. E. Rasmussen and C. K. I. Williams. MIT Press, 2006.
Available for download at http://www.gaussianprocess.org/gpml.