Gaussian process: for any given set of sample points $\mathbf{X}=[\mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_n]$, the process randomly assigns values $\mathbf{F}=[f(\mathbf{x}_1),f(\mathbf{x}_2),\dots,f(\mathbf{x}_n)]$, and $\mathbf{F}$ follows a multivariate Gaussian distribution.
Suppose the actual observations of $\mathbf{F}$ are $\mathbf{Y}=[y_1,y_2,\dots,y_n]$, where the observation noise is Gaussian with mean $0$ and variance $\sigma^2$.
Ultimate problem: given a batch of new data points $\mathbf{X_*}$, predict the new observations $\mathbf{Y_*}$.
Underlying problem: obtain the posterior predictive distribution $P(\mathbf{F_*}\mid\mathbf{X_*},\mathbf{X},\mathbf{Y})$.
Given this posterior, we can draw random samples from it to obtain new observations; this is a stochastic process.
The values assigned at the new data points $\mathbf{X_*}$ are $\mathbf{F_*}=[f(x_{*1}),f(x_{*2}),\dots,f(x_{*m})]$. By the definition of a Gaussian process:
$$
\begin{bmatrix} \mathbf{F} \\ \mathbf{F_*} \end{bmatrix} \Bigg|\, \begin{bmatrix} \mathbf{X} \\ \mathbf{X_*} \end{bmatrix} \sim N\!\left( \begin{bmatrix} \boldsymbol{\mu}(\mathbf{X}) \\ \boldsymbol{\mu}(\mathbf{X_*}) \end{bmatrix}, \begin{bmatrix} \mathbf{K} & \mathbf{K_*} \\ \mathbf{K_*^T} & \mathbf{K_{**}} \end{bmatrix} \right)
$$
where
$$
\begin{aligned}
\mathbf{K} &= \mathrm{kernel}(\mathbf{X},\mathbf{X}) \\
\mathbf{K_*} &= \mathrm{kernel}(\mathbf{X},\mathbf{X_*}) \\
\mathbf{K_{**}} &= \mathrm{kernel}(\mathbf{X_*},\mathbf{X_*})
\end{aligned}
$$
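As a concrete illustration (not part of the original derivation), here is a minimal NumPy sketch of these three kernel matrices using a squared-exponential (RBF) kernel; the kernel choice, its hyperparameters, and the toy inputs are assumptions made only for illustration. It also draws one prior sample of $\mathbf{F}$, which is exactly the multivariate Gaussian from the definition above.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) kernel; an illustrative choice of kernel(., .)."""
    # Pairwise squared Euclidean distances between the rows of A and the rows of B.
    sqdist = (np.sum(A**2, axis=1)[:, None]
              + np.sum(B**2, axis=1)[None, :]
              - 2.0 * A @ B.T)
    return variance * np.exp(-0.5 * sqdist / length_scale**2)

# Toy inputs: n training points X and m test points X_* (1-D for simplicity).
X      = np.linspace(-3.0, 3.0, 10).reshape(-1, 1)
X_star = np.linspace(-3.0, 3.0, 50).reshape(-1, 1)

K      = rbf_kernel(X, X)            # K    = kernel(X, X)
K_star = rbf_kernel(X, X_star)       # K_*  = kernel(X, X_*)
K_ss   = rbf_kernel(X_star, X_star)  # K_** = kernel(X_*, X_*)

# By the GP definition, F = [f(x_1), ..., f(x_n)] is multivariate Gaussian,
# so a prior sample can be drawn directly (zero mean assumed here).
jitter  = 1e-8 * np.eye(len(X))      # small diagonal term for numerical stability
F_prior = np.random.multivariate_normal(np.zeros(len(X)), K + jitter)
```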
Moreover,
$$
y_n = f(x_n)+\epsilon,\qquad \epsilon \sim N(0,\sigma^2)
$$
and therefore
$$
\begin{bmatrix} \mathbf{Y} \\ \mathbf{F_*} \end{bmatrix} \Bigg|\, \begin{bmatrix} \mathbf{X} \\ \mathbf{X_*} \end{bmatrix} \sim N\!\left( \begin{bmatrix} \boldsymbol{\mu}(\mathbf{X}) \\ \boldsymbol{\mu}(\mathbf{X_*}) \end{bmatrix}, \begin{bmatrix} \mathbf{K}+\sigma^2\mathbf{I} & \mathbf{K_*} \\ \mathbf{K_*^T} & \mathbf{K_{**}} \end{bmatrix} \right)
$$
By the properties of the multivariate Gaussian distribution, $\mathbf{F_*}\mid\mathbf{Y},\mathbf{X},\mathbf{X_*}$ is Gaussian, $N(\boldsymbol{\mu}_*,\boldsymbol{\Sigma}_*)$. The derivation of $\boldsymbol{\mu}_*$ and $\boldsymbol{\Sigma}_*$ is as follows.
We first state a general result; the derivation below is adapted from the "whiteboard derivation" (白板推导) notes.
Let $x=(x_1, x_2,\cdots,x_p)^T=(x_{a,m\times 1}, x_{b,n\times 1})^T$, $\mu=(\mu_{a,m\times 1}, \mu_{b,n\times 1})$, $\Sigma=\begin{pmatrix}\Sigma_{aa}&\Sigma_{ab}\\\Sigma_{ba}&\Sigma_{bb}\end{pmatrix}$, with $x\sim\mathcal{N}(\mu,\Sigma)$.
We want $p(x_b\mid x_a)$. Define
$$
\begin{aligned}
x_{b\cdot a}&=x_b-\Sigma_{ba}\Sigma_{aa}^{-1}x_a\\
\mu_{b\cdot a}&=\mu_b-\Sigma_{ba}\Sigma_{aa}^{-1}\mu_a\\
\Sigma_{bb\cdot a}&=\Sigma_{bb}-\Sigma_{ba}\Sigma_{aa}^{-1}\Sigma_{ab}
\end{aligned}
$$
Then
$$
x_{b\cdot a}=\begin{pmatrix}-\Sigma_{ba}\Sigma_{aa}^{-1}&\mathbb{I}_{n\times n}\end{pmatrix}\begin{pmatrix}x_a\\x_b\end{pmatrix}
$$
Hence
$$
\begin{aligned}
\mathbb{E}[x_{b\cdot a}] & = \begin{pmatrix}-\Sigma_{ba}\Sigma_{aa}^{-1}&\mathbb{I}_{n\times n}\end{pmatrix}\begin{pmatrix}\mu_a\\\mu_b\end{pmatrix} = \mu_{b\cdot a}\\
\mathrm{Var}[x_{b\cdot a}] & = \begin{pmatrix}-\Sigma_{ba}\Sigma_{aa}^{-1}&\mathbb{I}_{n\times n}\end{pmatrix}\begin{pmatrix}\Sigma_{aa}&\Sigma_{ab}\\\Sigma_{ba}&\Sigma_{bb}\end{pmatrix}\begin{pmatrix}-\Sigma_{aa}^{-1}\Sigma_{ba}^T\\\mathbb{I}_{n\times n}\end{pmatrix} = \Sigma_{bb\cdot a}
\end{aligned}
$$
It follows that
$$
\begin{aligned}
x_b\mid x_a &= x_{b\cdot a}+\Sigma_{ba}\Sigma_{aa}^{-1}x_a \\
\mathbb{E}[x_b\mid x_a]&=\mu_{b\cdot a}+\Sigma_{ba}\Sigma_{aa}^{-1}x_a \\
\mathrm{Var}[x_b\mid x_a]&=\Sigma_{bb\cdot a}
\end{aligned}
$$
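These identities are easy to sanity-check numerically. A minimal sketch, assuming an arbitrary randomly generated positive-definite $\Sigma$ and block sizes chosen only for illustration:

```python
import numpy as np

rng  = np.random.default_rng(0)
m, n = 3, 2                          # sizes of the x_a and x_b blocks (arbitrary)

# Random symmetric positive-definite covariance, split into blocks.
A     = rng.standard_normal((m + n, m + n))
Sigma = A @ A.T + (m + n) * np.eye(m + n)
Saa, Sab = Sigma[:m, :m], Sigma[:m, m:]
Sba, Sbb = Sigma[m:, :m], Sigma[m:, m:]

# The linear map defining x_{b.a} = (-Sba Saa^{-1}  I) x, as in the derivation above.
T = np.hstack([-Sba @ np.linalg.inv(Saa), np.eye(n)])

# Var[x_{b.a}] computed two ways: via the transform T Sigma T^T,
# and via the Schur complement Sbb - Sba Saa^{-1} Sab. They agree.
var_by_transform = T @ Sigma @ T.T
schur_complement = Sbb - Sba @ np.linalg.inv(Saa) @ Sab
assert np.allclose(var_by_transform, schur_complement)
```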
Here the proof that $x_{b\cdot a}$ is independent of $x_a$ is given in a figure (taken from the errata to shuhuai008's whiteboard-derivation videos on Bilibili).
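Since that figure is not reproduced here, a one-line version of the argument: $x_{b\cdot a}$ and $x_a$ are jointly Gaussian, and their covariance vanishes, which implies independence:

$$
\operatorname{Cov}(x_{b\cdot a},\,x_a)
=\operatorname{Cov}(x_b,x_a)-\Sigma_{ba}\Sigma_{aa}^{-1}\operatorname{Cov}(x_a,x_a)
=\Sigma_{ba}-\Sigma_{ba}\Sigma_{aa}^{-1}\Sigma_{aa}=0
$$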
Applying the result above, we substitute the following mapping into the formulas:
$$
\begin{aligned}
x_a & = \mathbf{Y} \\
x_b & = \mathbf{F_*} \\
\mu_a & = 0 \\
\mu_b & = 0 \\
\Sigma_{aa} & = \mathbf{K}+\sigma^2\mathbf{I} \\
\Sigma_{ab} & = \mathbf{K_*} \\
\Sigma_{ba} & = \mathbf{K_*^T} \\
\Sigma_{bb} & = \mathbf{K_{**}}
\end{aligned}
$$
(A zero mean function is assumed, so $\boldsymbol{\mu}(\mathbf{X})=\boldsymbol{\mu}(\mathbf{X_*})=0$; the covariance of $\mathbf{Y}$ is $\mathbf{K}+\sigma^2\mathbf{I}$, matching the joint distribution above.)
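For completeness, a brief sketch of the substitution, using the zero-mean assumption $\mu_a=\mu_b=0$ from the mapping above:

$$
\begin{aligned}
\mathbb{E}[\mathbf{F_*}\mid\mathbf{Y}] &= \mu_{b\cdot a}+\Sigma_{ba}\Sigma_{aa}^{-1}x_a
= 0 + \mathbf{K_*^T}\left(\mathbf{K}+\sigma^2\mathbf{I}\right)^{-1}\mathbf{Y},\\
\operatorname{Var}[\mathbf{F_*}\mid\mathbf{Y}] &= \Sigma_{bb}-\Sigma_{ba}\Sigma_{aa}^{-1}\Sigma_{ab}
= \mathbf{K_{**}} - \mathbf{K_*^T}\left(\mathbf{K}+\sigma^2\mathbf{I}\right)^{-1}\mathbf{K_*}.
\end{aligned}
$$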
Carrying out the substitution (the remaining algebra is routine), we finally obtain
$$
\begin{aligned}
\boldsymbol{\mu}_{*} &=\mathbf{K}_{*}^{T}\left(\mathbf{K}+\sigma^2\mathbf{I}\right)^{-1} \mathbf{Y} \\
\boldsymbol{\Sigma}_{*} &=\mathbf{K}_{**}-\mathbf{K}_{*}^{T}\left(\mathbf{K}+\sigma^2\mathbf{I}\right)^{-1} \mathbf{K}_{*}
\end{aligned}
$$
Therefore
$$
P(\mathbf{F_*}\mid\mathbf{X_*},\mathbf{X},\mathbf{Y}) = N(\mathbf{F_*}\mid\boldsymbol{\mu}_*,\boldsymbol{\Sigma}_*)
$$
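Finally, a minimal NumPy sketch that puts the whole pipeline together, computes $\boldsymbol{\mu}_*$ and $\boldsymbol{\Sigma}_*$, and samples $\mathbf{F_*}$ from the posterior. The RBF kernel, its hyperparameters, and the toy data are assumptions made only for illustration; linear solves are used instead of an explicit matrix inverse for numerical stability.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) kernel; an illustrative choice of kernel(., .)."""
    sqdist = (np.sum(A**2, axis=1)[:, None]
              + np.sum(B**2, axis=1)[None, :]
              - 2.0 * A @ B.T)
    return variance * np.exp(-0.5 * sqdist / length_scale**2)

def gp_posterior(X, Y, X_star, noise_var=0.1, **kernel_args):
    """Return mu_* and Sigma_* of P(F_* | X_*, X, Y) for a zero-mean GP."""
    K      = rbf_kernel(X, X, **kernel_args) + noise_var * np.eye(len(X))  # K + sigma^2 I
    K_star = rbf_kernel(X, X_star, **kernel_args)                          # K_*
    K_ss   = rbf_kernel(X_star, X_star, **kernel_args)                     # K_**

    # Solve (K + sigma^2 I) alpha = Y and (K + sigma^2 I) V = K_* instead of
    # forming the inverse explicitly.
    alpha = np.linalg.solve(K, Y)
    V     = np.linalg.solve(K, K_star)

    mu_star    = K_star.T @ alpha     # mu_*    = K_*^T (K + sigma^2 I)^{-1} Y
    Sigma_star = K_ss - K_star.T @ V  # Sigma_* = K_** - K_*^T (K + sigma^2 I)^{-1} K_*
    return mu_star, Sigma_star

# Toy 1-D example: noisy observations of sin(x).
rng = np.random.default_rng(0)
X = np.linspace(-3.0, 3.0, 8).reshape(-1, 1)
Y = np.sin(X).ravel() + 0.1 * rng.standard_normal(len(X))
X_star = np.linspace(-4.0, 4.0, 100).reshape(-1, 1)

mu_star, Sigma_star = gp_posterior(X, Y, X_star, noise_var=0.01)

# Sampling from the posterior N(mu_*, Sigma_*) yields plausible functions at X_*.
jitter = 1e-8 * np.eye(len(X_star))
F_star_samples = rng.multivariate_normal(mu_star, Sigma_star + jitter, size=3)
```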