Bayesian Updating of Gaussian Process
2026-1-1
An introduction to the Bayesian updating of a simple Gaussian process.
Here we revisit Bayesian updating for a simple Gaussian process. Recall Bayes' law: given that event $A$ has occurred, the probability of event $B$ occurring is

$$P(B \mid A) = \frac{P(A \mid B)\,P(B)}{P(A)}.$$
For well-defined continuous random variables, the rule is similar:

$$f(\theta \mid s) = \frac{f(s \mid \theta)\,f(\theta)}{f(s)} \propto f(s \mid \theta)\,f(\theta),$$

where $f(\cdot)$ denotes a density, $\theta$ is the unknown parameter, and $s$ is the observation.
Let's start with a static case. Suppose the unobserved true state is a normal variable

$$\theta \sim N\!\left(\mu_0, \frac{1}{\rho_0}\right),$$

and we receive a single noisy signal

$$s = \theta + \varepsilon,$$

where $\varepsilon \sim N\!\left(0, \frac{1}{\rho_\varepsilon}\right)$ is independent of $\theta$, and $\rho_0$ and $\rho_\varepsilon$ denote the prior and signal precisions (inverse variances).
Note that

$$f(\theta) \propto \exp\!\left(-\frac{\rho_0}{2}(\theta - \mu_0)^2\right), \qquad f(s \mid \theta) \propto \exp\!\left(-\frac{\rho_\varepsilon}{2}(s - \theta)^2\right).$$
As a result,

$$f(\theta \mid s) \propto f(s \mid \theta)\,f(\theta) \propto \exp\!\left(-\frac{\rho_\varepsilon}{2}(s - \theta)^2 - \frac{\rho_0}{2}(\theta - \mu_0)^2\right).$$
By rearrangement (completing the square in $\theta$),

$$f(\theta \mid s) \propto \exp\!\left(-\frac{\rho_1}{2}(\theta - \mu_1)^2\right)\exp\!\left(-\frac{\rho_0\rho_\varepsilon}{2\rho_1}(s - \mu_0)^2\right),$$

where $\rho_1 = \rho_0 + \rho_\varepsilon$ and $\mu_1 = \dfrac{\rho_0\mu_0 + \rho_\varepsilon s}{\rho_0 + \rho_\varepsilon}$.
As we know, the second factor does not involve $\theta$, so it is absorbed into the normalizing constant. We can therefore read off the posterior distribution directly:

$$\theta \mid s \sim N\!\left(\mu_1, \frac{1}{\rho_1}\right).$$

Precisions add up, and the posterior mean is a precision-weighted average of the prior mean and the signal.
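To make the update concrete, here is a minimal numerical sketch of the formula above (the function name and the numbers are purely illustrative):

```python
import numpy as np

def gaussian_update(mu0, rho0, s, rho_eps):
    """One-shot conjugate update: prior N(mu0, 1/rho0), signal s = theta + eps."""
    rho1 = rho0 + rho_eps                      # posterior precision: precisions add
    mu1 = (rho0 * mu0 + rho_eps * s) / rho1    # precision-weighted average of prior mean and signal
    return mu1, rho1

# Example: standard normal prior, one fairly precise signal s = 2.
mu1, rho1 = gaussian_update(mu0=0.0, rho0=1.0, s=2.0, rho_eps=4.0)
print(mu1, 1.0 / rho1)   # posterior mean 1.6, posterior variance 0.2
```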
What if the signals are dynamic? Assume a new signal arrives every period $t = 1, 2, \dots, T$:

$$s_t = \theta + \varepsilon_t, \qquad \varepsilon_t \overset{i.i.d.}{\sim} N\!\left(0, \frac{1}{\rho_\varepsilon}\right),$$

with the same prior $\theta \sim N(\mu_0, 1/\rho_0)$ as before.
As before, we first write down the likelihood of the signals observed up to time $T$:

$$f(s_1, \dots, s_T \mid \theta) \propto \exp\!\left(-\frac{\rho_\varepsilon}{2}\sum_{t=1}^{T}(s_t - \theta)^2\right).$$
By Bayes' law:

$$f(\theta \mid s_1, \dots, s_T) \propto \exp\!\left(-\frac{\rho_0}{2}(\theta - \mu_0)^2 - \frac{\rho_\varepsilon}{2}\sum_{t=1}^{T}(s_t - \theta)^2\right) \propto \exp\!\left(-\frac{\rho_T}{2}(\theta - \mu_T)^2\right),$$

where

$$\rho_T = \rho_0 + T\rho_\varepsilon, \qquad \mu_T = \frac{\rho_0\mu_0 + \rho_\varepsilon \sum_{t=1}^{T} s_t}{\rho_0 + T\rho_\varepsilon}.$$
Obviously, the posterior mean $\mu_T$ is a precision-weighted average of the prior mean and the observed signals, and the posterior precision $\rho_T$ grows linearly in the number of observations. As $T \to \infty$, the weight on the prior vanishes, $\mu_T$ converges to the sample mean of the signals (and hence to the true $\theta$ by the law of large numbers), and the posterior variance $1/\rho_T$ shrinks to zero: the agent eventually learns the true state.
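A short simulation illustrates this convergence; the true state, precisions, and random seed below are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 1.5          # true (unobserved) state -- illustrative value
rho_eps = 0.5        # signal precision (noise variance = 2)
mu, rho = 0.0, 1.0   # prior N(0, 1)

for t in range(1, 201):
    s_t = theta + rng.normal(scale=np.sqrt(1.0 / rho_eps))  # noisy signal s_t = theta + eps_t
    mu = (rho * mu + rho_eps * s_t) / (rho + rho_eps)        # recursive posterior mean
    rho = rho + rho_eps                                      # precision grows by rho_eps each period
    if t in (1, 10, 50, 200):
        print(f"t={t:3d}  mean={mu: .3f}  variance={1 / rho:.4f}")

# The posterior mean drifts toward theta = 1.5 and the posterior
# variance shrinks like 1 / (rho0 + t * rho_eps).
```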
Sometimes the precision of the prior is also unknown. In that case, we must impose stronger assumptions to guarantee closed-form solutions; a common assumption is a Gamma distribution for the true precision, which is conjugate to the normal likelihood. In more complicated cases, where we have no idea what the distribution of the true state is, we need to update our beliefs nonparametrically; kernel density estimators are useful in those setups.
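As a sketch of the conjugate case with unknown precision, the standard Normal-Gamma update takes the following form (the hyperparameter names and values here are illustrative, not taken from the derivation above):

```python
import numpy as np

def normal_gamma_update(mu0, kappa0, alpha0, beta0, x):
    """Conjugate Normal-Gamma update with unknown precision rho:
    rho ~ Gamma(alpha0, beta0),  theta | rho ~ N(mu0, 1 / (kappa0 * rho))."""
    x = np.asarray(x, dtype=float)
    n, xbar = x.size, x.mean()
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n           # precision-weighted posterior mean
    alpha_n = alpha0 + n / 2.0                           # shape grows with half the sample size
    beta_n = (beta0 + 0.5 * np.sum((x - xbar) ** 2)      # rate absorbs sample dispersion
              + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * kappa_n))
    return mu_n, kappa_n, alpha_n, beta_n

# Illustrative signals and hyperparameters, made up for the example.
print(normal_gamma_update(0.0, 1.0, 2.0, 2.0, [1.2, 0.8, 1.5, 1.1]))
```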
Further Reading
A very nice literature review of Bayesian learning in macro: Bayesian Learning