Bayesian Updating of Gaussian Process

2026-1-1

An introduction to Bayesian updating of a Gaussian process.

Here we revisit Bayesian updating for a simple Gaussian process. Recall Bayes' Law: given that event $A$ has occurred, the probability of event $B$ occurring is

$$P(B \mid A) = \frac{P(A \mid B)\, P(B)}{P(A)}.$$

For well-defined continuous random variables, the rule is similar:

$$f_{X \mid Y}(x \mid y) = \frac{f_{Y \mid X}(y \mid x)\, f_X(x)}{f_Y(y)}.$$
Let's start with a static case. Suppose there is a normal random variable whose mean $\theta$ we do not know. The agents' prior belief is $\theta \sim \mathcal{N}(\mu_0, \tau_0^{-1})$. $\tau_0$ is called the precision, since a large precision implies a small variance and an accurate estimate. Agents infer the true mean based on their observation of a noisy signal:

$$s = \theta + \varepsilon,$$
where $\varepsilon \sim \mathcal{N}(0, \tau_\varepsilon^{-1})$ is noise independent of the belief bias $\theta - \mu_0$. Given the observation of $s$, how will agents update their belief? Let's go back to Bayes' Law:

$$f(\theta \mid s) = \frac{f(s \mid \theta)\, f(\theta)}{f(s)}.$$
Note that $f(s)$ is a constant with respect to $\theta$. Therefore, we can focus only on the terms that contain information about $\theta$. The two Gaussian kernels are

$$f(s \mid \theta) \propto \exp\!\left(-\frac{\tau_\varepsilon}{2}(s - \theta)^2\right), \qquad f(\theta) \propto \exp\!\left(-\frac{\tau_0}{2}(\theta - \mu_0)^2\right).$$
As a result,

$$f(\theta \mid s) \propto \exp\!\left(-\frac{\tau_\varepsilon}{2}(s - \theta)^2 - \frac{\tau_0}{2}(\theta - \mu_0)^2\right).$$
By rearrangement (completing the square in $\theta$),

$$f(\theta \mid s) \propto \exp\!\left(-\frac{\tau_0 + \tau_\varepsilon}{2}\left(\theta - \frac{\tau_0 \mu_0 + \tau_\varepsilon s}{\tau_0 + \tau_\varepsilon}\right)^{\!2}\right) \exp\!\left(-\frac{\tau_0 \tau_\varepsilon}{2(\tau_0 + \tau_\varepsilon)}(s - \mu_0)^2\right).$$
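The rearrangement is easy to verify symbolically. Here is a minimal sketch using SymPy (the symbol names are mine, mirroring the notation above):

```python
import sympy as sp

# Exponent of the product of the two Gaussian kernels (likelihood x prior)
theta, s, mu0 = sp.symbols('theta s mu_0')
tau0, taue = sp.symbols('tau_0 tau_eps', positive=True)
lhs = -taue / 2 * (s - theta) ** 2 - tau0 / 2 * (theta - mu0) ** 2

# Completed-square form: a kernel in theta plus a theta-free remainder
rhs = (-(tau0 + taue) / 2
       * (theta - (tau0 * mu0 + taue * s) / (tau0 + taue)) ** 2
       - tau0 * taue / (2 * (tau0 + taue)) * (s - mu0) ** 2)

print(sp.simplify(lhs - rhs))  # 0: the two exponents are identical
```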
The second factor does not involve $\theta$, so it is absorbed into the normalizing constant. We can directly tell that the posterior follows

$$\theta \mid s \sim \mathcal{N}\!\left(\frac{\tau_0 \mu_0 + \tau_\varepsilon s}{\tau_0 + \tau_\varepsilon},\ \frac{1}{\tau_0 + \tau_\varepsilon}\right).$$

In other words, the posterior precision is the sum of the prior and signal precisions, and the posterior mean is a precision-weighted average of $\mu_0$ and $s$.
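To make the formula concrete, here is a minimal numerical sketch in Python/NumPy (the names `mu0`, `tau0`, `tau_eps` mirror the symbols above; the parameter values are arbitrary) that checks the closed-form posterior against a brute-force grid evaluation of Bayes' Law:

```python
import numpy as np

def normal_update(mu0, tau0, s, tau_eps):
    """Posterior of theta given one signal s: Gaussian prior, Gaussian noise."""
    tau_post = tau0 + tau_eps                        # precisions add
    mu_post = (tau0 * mu0 + tau_eps * s) / tau_post  # precision-weighted mean
    return mu_post, tau_post

# Brute-force Bayes on a grid: prior kernel times likelihood kernel, normalized
mu0, tau0, tau_eps, s = 0.0, 1.0, 4.0, 2.5
theta, d = np.linspace(-10.0, 10.0, 200001, retstep=True)
post = np.exp(-0.5 * tau0 * (theta - mu0) ** 2
              - 0.5 * tau_eps * (s - theta) ** 2)
post /= post.sum() * d                               # normalize numerically

mu_post, tau_post = normal_update(mu0, tau0, s, tau_eps)
print(mu_post, (theta * post).sum() * d)                          # both ~2.0
print(1.0 / tau_post, ((theta - mu_post) ** 2 * post).sum() * d)  # both ~0.2
```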
What if the signals are dynamic? Assume $s_t = \theta + \varepsilon_t$, where the agents' prior belief is $\theta \sim \mathcal{N}(\mu_0, \tau_0^{-1})$. Here the precisions are assumed to be known. The signals are i.i.d. realizations, with $\varepsilon_t \sim \mathcal{N}(0, \tau_\varepsilon^{-1})$. Conditional on the information set $\mathcal{I}_t = \{s_1, \dots, s_t\}$, we can prove that the posterior is still normal.

As before, we first write down the likelihood of the signals up to time $t$:

$$f(s_1, \dots, s_t \mid \theta) \propto \exp\!\left(-\frac{\tau_\varepsilon}{2} \sum_{i=1}^{t} (s_i - \theta)^2\right).$$
By Bayes' Law,

$$f(\theta \mid \mathcal{I}_t) \propto f(s_1, \dots, s_t \mid \theta)\, f(\theta) \propto \exp\!\left(-\frac{\tau_0 + t\tau_\varepsilon}{2}\,(\theta - \mu_t)^2\right), \qquad \theta \mid \mathcal{I}_t \sim \mathcal{N}\!\left(\mu_t,\ \frac{1}{\tau_0 + t\tau_\varepsilon}\right).$$
Obviously, the posterior mean is

$$\mu_t = \frac{\tau_0 \mu_0 + \tau_\varepsilon \sum_{i=1}^{t} s_i}{\tau_0 + t\tau_\varepsilon} = \frac{\tau_0}{\tau_0 + t\tau_\varepsilon}\,\mu_0 + \frac{t\tau_\varepsilon}{\tau_0 + t\tau_\varepsilon}\,\bar{s}_t, \qquad \bar{s}_t = \frac{1}{t}\sum_{i=1}^{t} s_i.$$
As $t \to \infty$, by the LLN, $\bar{s}_t \to \theta$ and hence $\mu_t \to \theta$, while the posterior precision $\tau_0 + t\tau_\varepsilon$ grows without bound: agents eventually learn the true mean.
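The convergence is easy to see in simulation. Below is a sketch under assumed values (the true mean $\theta = 1.5$, the precisions, and the seed are arbitrary choices of mine) that applies the one-step conjugate update recursively, signal by signal:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true, tau0, tau_eps = 1.5, 1.0, 2.0
mu, tau = 0.0, tau0  # prior belief: theta ~ N(0, 1/tau0)

for t in range(1, 10_001):
    s = theta_true + rng.normal(scale=tau_eps ** -0.5)  # noisy signal s_t
    mu = (tau * mu + tau_eps * s) / (tau + tau_eps)     # update posterior mean
    tau += tau_eps                                      # tau_t = tau0 + t*tau_eps
    if t in (1, 10, 100, 1_000, 10_000):
        print(f"t={t:>6}  mu_t={mu:+.4f}  var_t={1 / tau:.2e}")
# mu_t drifts toward 1.5 while the posterior variance shrinks like 1/t.
```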

Sometimes the precision is also unknown a priori. In that case, we must impose stricter assumptions to guarantee closed-form solutions; a common assumption places a Gamma distribution on the true precision, which yields the conjugate Normal-Gamma family. In more complicated cases, where we have no idea what the distribution of the true state is, we need to update our beliefs nonparametrically, and kernel density estimators are useful in those setups.
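As an illustration of the Gamma-precision case, here is a hedged sketch of the standard Normal-Gamma conjugate update, assuming the prior $\tau \sim \mathrm{Gamma}(a_0, b_0)$ (rate parameterization) and $\theta \mid \tau \sim \mathcal{N}(\mu_0, (\kappa_0 \tau)^{-1})$; the hyperparameter names are mine:

```python
import numpy as np

def normal_gamma_update(mu0, kappa0, a0, b0, x):
    """Posterior hyperparameters after observing the sample x."""
    x = np.asarray(x, dtype=float)
    n, xbar = x.size, x.mean()
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n        # precision-weighted mean
    a_n = a0 + n / 2.0                                # shape grows with data
    b_n = (b0 + 0.5 * ((x - xbar) ** 2).sum()         # within-sample scatter
           + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * kappa_n))
    return mu_n, kappa_n, a_n, b_n

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=0.5, size=500)       # true precision = 4
mu_n, kappa_n, a_n, b_n = normal_gamma_update(0.0, 1.0, 1.0, 1.0, data)
print(mu_n, a_n / b_n)  # posterior mean ~2.0, posterior E[tau] = a_n/b_n ~4
```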

Further Reading

A very nice literature review of Bayesian learning in macro: Bayesian Learning