(a)
Best subset selection has the smallest training RSS, because it searches over every model of each size, whereas the models found by the other two methods are path-dependent: they depend on which predictors were selected first on the way to the k-th model.
(b)
Best subset selection may well have the smallest test RSS because it considers more models than the other methods. However, the other methods might get lucky and pick a model that happens to fit the test data better.
(c)
i. True.
ii. True.
iii. False.
iv. False.
v. False.
(a) Lasso
iii. Less flexible than least squares; it gives improved prediction accuracy when the increase in bias is smaller than the decrease in variance.
(b) Ridge regression
iii. Less flexible than least squares; it gives improved prediction accuracy when the increase in bias is smaller than the decrease in variance.
(c) Non-linear methods
ii. More flexible than least squares; they give improved prediction accuracy when the increase in variance is smaller than the decrease in bias.
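As an illustration of these trade-offs, here is a minimal simulation sketch (assuming the glmnet package; the data and all names are artificial) comparing the test MSE of least squares and the lasso when the true coefficients are sparse, so that the lasso's extra bias is outweighed by its variance reduction:

library(glmnet)
set.seed(42)
n = 100; p = 50
x = matrix(rnorm(2 * n * p), 2 * n, p)  # half for training, half for testing
beta = c(rep(1, 5), rep(0, p - 5))      # sparse true coefficients
y = x %*% beta + rnorm(2 * n)
train = 1:n; test = (n + 1):(2 * n)
ols.fit = lm(y[train] ~ x[train, ])
ols.pred = cbind(1, x[test, ]) %*% coef(ols.fit)
lasso.fit = cv.glmnet(x[train, ], y[train], alpha = 1)
lasso.pred = predict(lasso.fit, x[test, ], s = "lambda.min")
mean((y[test] - ols.pred)^2)    # OLS test MSE
mean((y[test] - lasso.pred)^2)  # lasso test MSE, typically lower here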
(a)
iv. Steadily decrease. As s increases from 0, all β's grow from 0 toward their least squares estimates. Training RSS is largest at s = 0 and steadily decreases to the ordinary least squares RSS.
(b)
ii. Decrease initially, then start increasing in a U shape. When s = 0, all β = 0; the model is extremely simple and has a high test RSS. As s increases, the β's become non-zero and the model starts fitting the data well, so test RSS decreases; eventually the model begins to overfit and test RSS rises again.
(c)
iii. Steadily increase. When s = 0 the model effectively predicts a constant and has almost no variance. As s increases, more β's become non-zero and their values depend heavily on the training data, so variance increases.
(d)
iv. Steadily decrease. When s = 0 the model effectively predicts a constant, so the predictions are far from the actual values and bias is high. As s increases, more β's become non-zero and the model fits the data better and better, so bias decreases.
(e)
v. Remain constant. By definition, the irreducible error is independent of the model, and hence of the choice of s.
(a)
iii. Steadily increase. As λ increases from 0, all β's shrink from their least squares estimates toward 0. Training error is smallest at the least squares fit (λ = 0) and steadily increases as the β's shrink to 0.
(b)
ii. Decrease initially, then start increasing in a U shape. When λ = 0, all β's take their least squares values; the model fits the training data very hard, so it tends to overfit and the test RSS is high. As λ increases, the β's shrink toward zero and some of the overfitting is removed, so test RSS initially decreases. Eventually, as the β's approach 0, the model becomes too simple and test RSS increases.
(c)
iv. Steadily decrease. When λ = 0, all β's take their least squares values; the estimates depend heavily on the training data and variance is high. As λ increases, the β's shrink and the model becomes simpler. In the limit as λ approaches infinity, all β's shrink to zero, the model predicts a constant, and there is no variance.
(d)
iii. Steadily increase. When λ = 0, all β's take their least squares values and bias is minimal. As λ increases, the β's shrink toward 0 and the model fits the data less accurately, so bias increases. In the limit as λ approaches infinity, the model predicts a constant and bias is maximal.
(e)
v. Remain constant. By definition, the irreducible error is independent of the model, and hence of the choice of λ.
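A minimal simulation sketch (again assuming glmnet; artificial data) tracing ridge training and test MSE as λ varies illustrates answers (a) and (b): training MSE rises steadily with λ, while test MSE is roughly U-shaped.

library(glmnet)
set.seed(42)
n = 50; p = 30
x = matrix(rnorm(2 * n * p), 2 * n, p)
beta = rnorm(p, sd = 0.5)
y = x %*% beta + rnorm(2 * n, sd = 2)
train = 1:n; test = (n + 1):(2 * n)
lambdas = 10^seq(2, -3, length = 50)
fit = glmnet(x[train, ], y[train], alpha = 0, lambda = lambdas)
train.mse = colMeans((predict(fit, x[train, ]) - y[train])^2)
test.mse = colMeans((predict(fit, x[test, ]) - y[test])^2)
matplot(log10(lambdas), cbind(train.mse, test.mse), type = "l", lty = 1,
    col = c("black", "red"), xlab = "log10(lambda)", ylab = "MSE")
legend("topleft", c("train", "test"), col = c("black", "red"), lty = 1)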
(a)
The general form of the ridge regression optimization problem is to minimize:

$$\sum_{i=1}^{n}\left(y_i-\hat{\beta}_0-\sum_{j=1}^{p}\hat{\beta}_j x_{ij}\right)^2+\lambda\sum_{j=1}^{p}\hat{\beta}_j^2$$
With $\hat{\beta}_0=0$ and $n=p=2$, this becomes:
$$\left(y_1-\hat{\beta}_1 x_{11}-\hat{\beta}_2 x_{12}\right)^2+\left(y_2-\hat{\beta}_1 x_{21}-\hat{\beta}_2 x_{22}\right)^2+\lambda\left(\hat{\beta}_1^2+\hat{\beta}_2^2\right)$$
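As a quick numerical illustration of the general ridge objective above (a minimal sketch with simulated data and no intercept; all names here are illustrative), the minimizer has the closed form $(X^TX+\lambda I)^{-1}X^Ty$, and its estimates are shrunk relative to least squares:

set.seed(1)
n = 100; p = 5; lambda = 2
X = matrix(rnorm(n * p), n, p)
y = X %*% rnorm(p) + rnorm(n)
beta.ridge = solve(t(X) %*% X + lambda * diag(p), t(X) %*% y)  # ridge solution
beta.ols = solve(t(X) %*% X, t(X) %*% y)                       # least squares
cbind(beta.ols, beta.ridge)  # ridge estimates are pulled toward zero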
(b)
Assume $x_{11}=x_{12}=x_1$ and $x_{21}=x_{22}=x_2$. Taking the derivatives of the expression above with respect to $\hat{\beta}_1$ and $\hat{\beta}_2$ and setting them to zero, we find:
$$\hat{\beta}_1=\frac{x_1 y_1+x_2 y_2-\hat{\beta}_2\left(x_1^2+x_2^2\right)}{\lambda+x_1^2+x_2^2}$$

$$\hat{\beta}_2=\frac{x_1 y_1+x_2 y_2-\hat{\beta}_1\left(x_1^2+x_2^2\right)}{\lambda+x_1^2+x_2^2}$$
The symmetry of these two expressions implies $\hat{\beta}_1=\hat{\beta}_2$: subtracting one from the other gives $\lambda(\hat{\beta}_1-\hat{\beta}_2)=0$, so for $\lambda>0$ we must have $\hat{\beta}_1=\hat{\beta}_2$.
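A minimal numerical check (arbitrary illustrative data values satisfying $x_{11}=x_{12}=x_1$ and $x_{21}=x_{22}=x_2$) that minimizes the n = p = 2 ridge objective directly with optim confirms the symmetry:

x1 = 1.3; x2 = -0.7; y1 = 2.1; y2 = 0.4; lambda = 1
ridge.obj = function(b) {
    (y1 - b[1] * x1 - b[2] * x1)^2 + (y2 - b[1] * x2 - b[2] * x2)^2 +
        lambda * (b[1]^2 + b[2]^2)
}
opt = optim(c(0, 0), ridge.obj)
opt$par  # the two components agree (up to numerical tolerance)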
(c)
Analogously to ridge regression, the lasso minimizes:

$$\left(y_1-\hat{\beta}_1 x_{11}-\hat{\beta}_2 x_{12}\right)^2+\left(y_2-\hat{\beta}_1 x_{21}-\hat{\beta}_2 x_{22}\right)^2+\lambda\left(\left|\hat{\beta}_1\right|+\left|\hat{\beta}_2\right|\right)$$
(d)
This is a geometric interpretation of the solution to the problem in (c). We use the equivalent constrained form of the lasso, $\left|\hat{\beta}_1\right|+\left|\hat{\beta}_2\right|\le s$. In the $(\hat{\beta}_1,\hat{\beta}_2)$ plane, this constraint region is a diamond centered at the origin $(0,0)$.
Consider the squared-loss part of the objective:

$$\left(y_1-\hat{\beta}_1 x_{11}-\hat{\beta}_2 x_{12}\right)^2+\left(y_2-\hat{\beta}_1 x_{21}-\hat{\beta}_2 x_{22}\right)^2$$
Using $x_{11}=x_{12}$, $x_{21}=x_{22}$, $x_{11}+x_{21}=0$, $x_{12}+x_{22}=0$ and $y_1+y_2=0$, this simplifies to:
Minimize: $$2\left(y_1-\left(\hat{\beta}_1+\hat{\beta}_2\right)x_{11}\right)^2$$
This optimization problem has a simple solution: any pair with $\hat{\beta}_1+\hat{\beta}_2=\frac{y_1}{x_{11}}$. This is a line parallel to the edge of the lasso diamond $\hat{\beta}_1+\hat{\beta}_2=s$.
The solutions to the original lasso problem are therefore the points where the contours of $2\left(y_1-\left(\hat{\beta}_1+\hat{\beta}_2\right)x_{11}\right)^2$ touch the lasso diamond edge $\hat{\beta}_1+\hat{\beta}_2=s$.
Finally, because those contours are the lines $\hat{\beta}_1+\hat{\beta}_2=\frac{y_1}{x_{11}}$, parallel to the diamond edge, they do not touch the diamond at a single point: the entire edge $\hat{\beta}_1+\hat{\beta}_2=s$ consists of potential solutions to the lasso optimization problem. The same reasoning applies to the opposite edge $\hat{\beta}_1+\hat{\beta}_2=-s$.
Thus the lasso problem does not have a unique solution. The general form of the solution set is given by two line segments:

$$\hat{\beta}_1+\hat{\beta}_2=s,\quad \hat{\beta}_1\geq 0,\ \hat{\beta}_2\geq 0$$

$$\hat{\beta}_1+\hat{\beta}_2=-s,\quad \hat{\beta}_1\leq 0,\ \hat{\beta}_2\leq 0$$
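A minimal numerical check (illustrative data chosen to satisfy the conditions above): evaluating the lasso objective at several points of the segment $\hat{\beta}_1+\hat{\beta}_2=0.6$ with $\hat{\beta}_1,\hat{\beta}_2\ge 0$ shows that the objective is flat along it, so no single minimizer exists:

x11 = 1; x12 = 1; x21 = -1; x22 = -1; y1 = 1; y2 = -1  # satisfy the constraints
lambda = 0.5
lasso.obj = function(b1, b2) {
    (y1 - b1 * x11 - b2 * x12)^2 + (y2 - b1 * x21 - b2 * x22)^2 +
        lambda * (abs(b1) + abs(b2))
}
b1 = seq(0, 0.6, 0.1)
mapply(lasso.obj, b1, 0.6 - b1)  # identical objective values along the segment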
(a)
For $p=1$, the ridge objective is $(y-\beta)^2+\lambda\beta^2$. We plot this function for $y=2$ and $\lambda=2$.
y = 2
lambda = 2
betas = seq(-10, 10, 0.1)
func = (y - betas)^2 + lambda * betas^2
plot(betas, func, pch = 20, xlab = "beta", ylab = "Ridge optimization")
est.beta = y/(1 + lambda)
est.func = (y - est.beta)^2 + lambda * est.beta^2
points(est.beta, est.func, col = "red", pch = 4, lwd = 5, cex = est.beta)
The red cross shows that the function is indeed minimized at $\beta=y/(1+\lambda)$.
(b)
For $p=1$, the lasso objective is $(y-\beta)^2+\lambda|\beta|$. We plot this function for $y=2$ and $\lambda=2$.
y = 2
lambda = 2
betas = seq(-3, 3, 0.01)
func = (y - betas)^2 + lambda * abs(betas)
plot(betas, func, pch = 20, xlab = "beta", ylab = "Lasso optimization")
est.beta = y - lambda/2
est.func = (y - est.beta)^2 + lambda * abs(est.beta)
points(est.beta, est.func, col = "red", pch = 4, lwd = 5, cex = est.beta)
The red cross shows that the function is indeed minimized at $\beta=y-\lambda/2$ (which applies here since $y>\lambda/2$).
(a)
The likelihood of the data is:

$$\begin{aligned} f(Y \mid X, \beta) &= p\left(y_1 \mid x_1, \beta\right) \times \cdots \times p\left(y_n \mid x_n, \beta\right) \\ &= \prod_{i=1}^{n} \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{\left[Y_i-\left(\beta_0+\sum_{j=1}^{p}\beta_j X_{ij}\right)\right]^2}{2\sigma^2}\right) \\ &= \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^{n} \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}\left[Y_i-\left(\beta_0+\sum_{j=1}^{p}\beta_j X_{ij}\right)\right]^2\right) \end{aligned}$$
(b)
Posterior distribution of β:

The prior is double-exponential (Laplace) with mean 0 and common scale parameter $b$, i.e. $p(\beta)=\frac{1}{2b}\exp(-|\beta|/b)$. The posterior satisfies:

$$f(\beta \mid X, Y) \propto f(Y \mid X, \beta)\, p(\beta \mid X) = f(Y \mid X, \beta)\, p(\beta)$$
Substituting the likelihood from (a) and the prior density:

$$\begin{aligned} f(Y \mid X, \beta)\, p(\beta) &= \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^{n} \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}\left[Y_i-\left(\beta_0+\sum_{j=1}^{p}\beta_j X_{ij}\right)\right]^2\right)\left(\frac{1}{2b}\exp\left(-\frac{|\beta|}{b}\right)\right) \\ &= \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^{n}\left(\frac{1}{2b}\right) \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}\left[Y_i-\left(\beta_0+\sum_{j=1}^{p}\beta_j X_{ij}\right)\right]^2-\frac{|\beta|}{b}\right) \end{aligned}$$

where $|\beta|=\sum_{j=1}^{p}\left|\beta_j\right|$.
(c)
Showing that the lasso estimate of β is the mode of this posterior is the same as showing that the most likely value of β is given by the lasso solution with a particular λ. We do this by simplifying the likelihood-times-prior above and showing that it reduces to the canonical lasso objective, equation 6.7 in the book. First simplify by taking the logarithm of both sides:
$$\begin{aligned} \log\left[f(Y \mid X, \beta)\, p(\beta)\right] &= \log\left[\left(\frac{1}{\sigma\sqrt{2\pi}}\right)^{n}\left(\frac{1}{2b}\right) \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}\left[Y_i-\left(\beta_0+\sum_{j=1}^{p}\beta_j X_{ij}\right)\right]^2-\frac{|\beta|}{b}\right)\right] \\ &= \log\left[\left(\frac{1}{\sigma\sqrt{2\pi}}\right)^{n}\left(\frac{1}{2b}\right)\right]-\left(\frac{1}{2\sigma^2}\sum_{i=1}^{n}\left[Y_i-\left(\beta_0+\sum_{j=1}^{p}\beta_j X_{ij}\right)\right]^2+\frac{|\beta|}{b}\right) \end{aligned}$$
We want to maximize the posterior:

$$\arg\max_{\beta} f(\beta \mid X, Y)=\arg\max_{\beta}\left\{\log\left[\left(\frac{1}{\sigma\sqrt{2\pi}}\right)^{n}\left(\frac{1}{2b}\right)\right]-\left(\frac{1}{2\sigma^2}\sum_{i=1}^{n}\left[Y_i-\left(\beta_0+\sum_{j=1}^{p}\beta_j X_{ij}\right)\right]^2+\frac{|\beta|}{b}\right)\right\}$$
The first term does not depend on β, so maximizing the whole expression is equivalent to minimizing the second term with respect to β. This gives:
$$\begin{aligned} &= \arg\min_{\beta}\, \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left[Y_i-\left(\beta_0+\sum_{j=1}^{p}\beta_j X_{ij}\right)\right]^2+\frac{|\beta|}{b} \\ &= \arg\min_{\beta}\, \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left[Y_i-\left(\beta_0+\sum_{j=1}^{p}\beta_j X_{ij}\right)\right]^2+\frac{1}{b}\sum_{j=1}^{p}\left|\beta_j\right| \\ &= \arg\min_{\beta}\, \frac{1}{2\sigma^2}\left(\sum_{i=1}^{n}\left[Y_i-\left(\beta_0+\sum_{j=1}^{p}\beta_j X_{ij}\right)\right]^2+\frac{2\sigma^2}{b}\sum_{j=1}^{p}\left|\beta_j\right|\right) \end{aligned}$$
Letting $\lambda=2\sigma^2/b$:
$$\begin{aligned} &= \arg\min_{\beta}\, \sum_{i=1}^{n}\left[Y_i-\left(\beta_0+\sum_{j=1}^{p}\beta_j X_{ij}\right)\right]^2+\lambda\sum_{j=1}^{p}\left|\beta_j\right| \\ &= \arg\min_{\beta}\, \mathrm{RSS}+\lambda\sum_{j=1}^{p}\left|\beta_j\right| \end{aligned}$$
Thus, when the prior on β is Laplace with mean 0 and common scale parameter $b$, the mode of the posterior is given by the lasso solution with $\lambda=2\sigma^2/b$.
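A minimal numerical check of this result for the simplified case p = 1 with no intercept (illustrative data; neg.log.post and soft.threshold are helper functions defined here only for the demonstration): the minimizer of the negative log-posterior matches the closed-form soft-thresholding (lasso) solution.

set.seed(7)
n = 50; sigma = 1; b = 0.5
x = rnorm(n)
y = 0.3 * x + rnorm(n, sd = sigma)
# negative log-posterior, dropping terms that do not involve beta
neg.log.post = function(beta) sum((y - beta * x)^2) / (2 * sigma^2) + abs(beta) / b
numeric.mode = optimize(neg.log.post, interval = c(-5, 5))$minimum
# closed form: soft-threshold z = sum(x * y) at sigma^2 / b, then rescale
soft.threshold = function(z, t) sign(z) * max(abs(z) - t, 0)
closed.form = soft.threshold(sum(x * y), sigma^2 / b) / sum(x^2)
c(numeric.mode, closed.form)  # the two agree (up to numerical tolerance)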
(d)
With a normal prior with mean 0 and variance $c$, the posterior again satisfies:

$$f(\beta \mid X, Y) \propto f(Y \mid X, \beta)\, p(\beta \mid X)=f(Y \mid X, \beta)\, p(\beta)$$
The prior density becomes:

$$p(\beta)=\prod_{j=1}^{p} p\left(\beta_j\right)=\prod_{j=1}^{p}\frac{1}{\sqrt{2\pi c}}\exp\left(-\frac{\beta_j^2}{2c}\right)=\left(\frac{1}{\sqrt{2\pi c}}\right)^{p}\exp\left(-\frac{1}{2c}\sum_{j=1}^{p}\beta_j^2\right)$$
Substituting the likelihood from (a) and the prior density:

$$\begin{aligned} f(Y \mid X, \beta)\, p(\beta) &= \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^{n}\exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}\left[Y_i-\left(\beta_0+\sum_{j=1}^{p}\beta_j X_{ij}\right)\right]^2\right)\left(\frac{1}{\sqrt{2\pi c}}\right)^{p}\exp\left(-\frac{1}{2c}\sum_{j=1}^{p}\beta_j^2\right) \\ &= \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^{n}\left(\frac{1}{\sqrt{2\pi c}}\right)^{p}\exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}\left[Y_i-\left(\beta_0+\sum_{j=1}^{p}\beta_j X_{ij}\right)\right]^2-\frac{1}{2c}\sum_{j=1}^{p}\beta_j^2\right) \end{aligned}$$
(e)
As in part (c), showing that the ridge regression estimate of β is the mode (and mean) of this posterior is the same as showing that the most likely value of β is given by the ridge solution with a particular λ. We again simplify the likelihood-times-prior and show that it reduces to the canonical ridge regression objective, equation 6.5 in the book. First simplify by taking the logarithm of both sides:
$$\begin{aligned} \log\left[f(Y \mid X, \beta)\, p(\beta)\right] &= \log\left[\left(\frac{1}{\sigma\sqrt{2\pi}}\right)^{n}\left(\frac{1}{\sqrt{2\pi c}}\right)^{p}\exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}\left[Y_i-\left(\beta_0+\sum_{j=1}^{p}\beta_j X_{ij}\right)\right]^2-\frac{1}{2c}\sum_{j=1}^{p}\beta_j^2\right)\right] \\ &= \log\left[\left(\frac{1}{\sigma\sqrt{2\pi}}\right)^{n}\left(\frac{1}{\sqrt{2\pi c}}\right)^{p}\right]-\left(\frac{1}{2\sigma^2}\sum_{i=1}^{n}\left[Y_i-\left(\beta_0+\sum_{j=1}^{p}\beta_j X_{ij}\right)\right]^2+\frac{1}{2c}\sum_{j=1}^{p}\beta_j^2\right) \end{aligned}$$
We want to maximize the posterior:

$$\arg\max_{\beta} f(\beta \mid X, Y)=\arg\max_{\beta}\left\{\log\left[\left(\frac{1}{\sigma\sqrt{2\pi}}\right)^{n}\left(\frac{1}{\sqrt{2\pi c}}\right)^{p}\right]-\left(\frac{1}{2\sigma^2}\sum_{i=1}^{n}\left[Y_i-\left(\beta_0+\sum_{j=1}^{p}\beta_j X_{ij}\right)\right]^2+\frac{1}{2c}\sum_{j=1}^{p}\beta_j^2\right)\right\}$$
The first term does not depend on β, so maximizing the whole expression is equivalent to minimizing the second term with respect to β. This gives:
$$\begin{aligned} &= \arg\min_{\beta}\left(\frac{1}{2\sigma^2}\sum_{i=1}^{n}\left[Y_i-\left(\beta_0+\sum_{j=1}^{p}\beta_j X_{ij}\right)\right]^2+\frac{1}{2c}\sum_{j=1}^{p}\beta_j^2\right) \\ &= \arg\min_{\beta}\, \frac{1}{2\sigma^2}\left(\sum_{i=1}^{n}\left[Y_i-\left(\beta_0+\sum_{j=1}^{p}\beta_j X_{ij}\right)\right]^2+\frac{\sigma^2}{c}\sum_{j=1}^{p}\beta_j^2\right) \end{aligned}$$
Letting $\lambda=\sigma^2/c$, we finally obtain:
$$\begin{aligned} &= \arg\min_{\beta}\, \frac{1}{2\sigma^2}\left(\sum_{i=1}^{n}\left[Y_i-\left(\beta_0+\sum_{j=1}^{p}\beta_j X_{ij}\right)\right]^2+\lambda\sum_{j=1}^{p}\beta_j^2\right) \\ &= \arg\min_{\beta}\, \mathrm{RSS}+\lambda\sum_{j=1}^{p}\beta_j^2 \end{aligned}$$
Thus, when the prior on β is normal with mean 0 and variance $c$, the mode of the posterior is given by the ridge regression solution with $\lambda=\sigma^2/c$. Since the posterior is Gaussian, the mode is also the posterior mean.
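Similarly, a minimal numerical check for p = 1 with no intercept (illustrative data; neg.log.post is a helper defined here only for the demonstration): the minimizer of the negative log-posterior matches the closed-form ridge solution $\hat{\beta}=\sum x_i y_i/(\sum x_i^2+\sigma^2/c)$.

set.seed(7)
n = 50; sigma = 1; c.var = 0.5
x = rnorm(n)
y = 0.3 * x + rnorm(n, sd = sigma)
# negative log-posterior, dropping terms that do not involve beta
neg.log.post = function(beta) sum((y - beta * x)^2) / (2 * sigma^2) + beta^2 / (2 * c.var)
numeric.mode = optimize(neg.log.post, interval = c(-5, 5))$minimum
closed.form = sum(x * y) / (sum(x^2) + sigma^2 / c.var)
c(numeric.mode, closed.form)  # the two agree (up to numerical tolerance)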
(a)
Generate 100 observations of X and ϵ.

set.seed(1)
X = rnorm(100)
eps = rnorm(100)
(b)
We choose β0 = 3, β1 = 2, β2 = −3 and β3 = 0.3.

beta0 = 3
beta1 = 2
beta2 = -3
beta3 = 0.3
Y = beta0 + beta1 * X + beta2 * X^2 + beta3 * X^3 + eps
(c)
Use regsubsets to select the best model among polynomials of X up to degree 10.

library(leaps)
data.full = data.frame(y = Y, x = X)
mod.full = regsubsets(y ~ poly(x, 10, raw = T), data = data.full, nvmax = 10)
mod.summary = summary(mod.full)
# Find the model size for best cp, BIC and adjr2
which.min(mod.summary$cp)
## [1] 3
which.min(mod.summary$bic)
## [1] 3
which.max(mod.summary$adjr2)
## [1] 3
# Plot cp, BIC and adjr2
plot(mod.summary$cp, xlab = "Subset Size", ylab = "Cp", pch = 20, type = "l")
points(3, mod.summary$cp[3], pch = 4, col = "red", lwd = 7)
plot(mod.summary$bic, xlab = "Subset Size", ylab = "BIC", pch = 20, type = "l")
points(3, mod.summary$bic[3], pch = 4, col = "red", lwd = 7)
plot(mod.summary$adjr2, xlab = "Subset Size", ylab = "Adjusted R2", pch = 20,
type = "l")
points(3, mod.summary$adjr2[3], pch = 4, col = "red", lwd = 7)
We find that the Cp, BIC and adjusted R2 criteria select the 3-, 3- and 3-variable models, respectively.
coefficients(mod.full, id = 3)
## (Intercept) poly(x, 10, raw = T)1 poly(x, 10, raw = T)2
##     3.07627               2.35624              -3.16515
## poly(x, 10, raw = T)7
##     0.01047
All three statistics pick X7 rather than X3. The remaining coefficients are quite close to the true β's.
(d)
Next we fit forward and backward stepwise selection models to the data.
mod.fwd = regsubsets(y ~ poly(x, 10, raw = T), data = data.full, nvmax = 10,
method = "forward")
mod.bwd = regsubsets(y ~ poly(x, 10, raw = T), data = data.full, nvmax = 10,
method = "backward")
fwd.summary = summary(mod.fwd)
bwd.summary = summary(mod.bwd)
which.min(fwd.summary$cp)
## [1] 3
which.min(bwd.summary$cp)
## [1] 3
which.min(fwd.summary$bic)
## [1] 3
which.min(bwd.summary$bic)
## [1] 3
which.max(fwd.summary$adjr2)
## [1] 3
which.max(bwd.summary$adjr2)
## [1] 4
# Plot the statistics
par(mfrow = c(3, 2))
plot(fwd.summary$cp, xlab = "Subset Size", ylab = "Forward Cp", pch = 20, type = "l")
points(3, fwd.summary$cp[3], pch = 4, col = "red", lwd = 7)
plot(bwd.summary$cp, xlab = "Subset Size", ylab = "Backward Cp", pch = 20, type = "l")
points(3, bwd.summary$cp[3], pch = 4, col = "red", lwd = 7)
plot(fwd.summary$bic, xlab = "Subset Size", ylab = "Forward BIC", pch = 20,
type = "l")
points(3, fwd.summary$bic[3], pch = 4, col = "red", lwd = 7)
plot(bwd.summary$bic, xlab = "Subset Size", ylab = "Backward BIC", pch = 20,
type = "l")
points(3, bwd.summary$bic[3], pch = 4, col = "red", lwd = 7)
plot(fwd.summary$adjr2, xlab = "Subset Size", ylab = "Forward Adjusted R2",
pch = 20, type = "l")
points(3, fwd.summary$adjr2[3], pch = 4, col = "red", lwd = 7)
plot(bwd.summary$adjr2, xlab = "Subset Size", ylab = "Backward Adjusted R2",
pch = 20, type = "l")
points(4, bwd.summary$adjr2[4], pch = 4, col = "red", lwd = 7)
We see that all statistics pick 3-variable models, except backward stepwise selection with adjusted R2, which picks 4. Here are the coefficients.
coefficients(mod.fwd, id = 3)
## (Intercept) poly(x, 10, raw = T)1 poly(x, 10, raw = T)2
##     3.07627               2.35624              -3.16515
## poly(x, 10, raw = T)7
##     0.01047
coefficients(mod.bwd, id = 3)
## (Intercept) poly(x, 10, raw = T)1 poly(x, 10, raw = T)2
##     3.07888               2.41982              -3.17724
## poly(x, 10, raw = T)9
##     0.00187
coefficients(mod.fwd, id = 4)
## (Intercept) poly(x, 10, raw = T)1 poly(x, 10, raw = T)2
##    3.112359              2.369859             -3.275727
## poly(x, 10, raw = T)4 poly(x, 10, raw = T)7
##              0.027674              0.009997
Here forward stepwise picks X7 rather than X3. Backward stepwise with 3 variables picks X9, while backward stepwise with 4 variables picks X4 and X7. All other coefficients are close to the true β's.
(e)
Now train the lasso on the data.
library(glmnet)
xmat = model.matrix(y ~ poly(x, 10, raw = T), data = data.full)[, -1]
mod.lasso = cv.glmnet(xmat, Y, alpha = 1)
best.lambda = mod.lasso$lambda.min
best.lambda
## [1] 0.03991
plot(mod.lasso)
# Next fit the model on entire data using best lambda
best.model = glmnet(xmat, Y, alpha = 1)
predict(best.model, s = best.lambda, type = "coefficients")
## 11 x 1 sparse Matrix of class "dgCMatrix"
##                                 1
## (Intercept)             3.0398151
## poly(x, 10, raw = T)1   2.2303371
## poly(x, 10, raw = T)2  -3.1033193
## poly(x, 10, raw = T)3   .
## poly(x, 10, raw = T)4   .
## poly(x, 10, raw = T)5   0.0498411
## poly(x, 10, raw = T)6   .
## poly(x, 10, raw = T)7   0.0008068
## poly(x, 10, raw = T)8   .
## poly(x, 10, raw = T)9   .
## poly(x, 10, raw = T)10  .
The lasso also picks X5 rather than X3. It additionally picks X7, with a negligible coefficient.
(f)
Create a new Y with a different coefficient, β7 = 7.
beta7 = 7
Y = beta0 + beta7 * X^7 + eps
# Predict using regsubsets
data.full = data.frame(y = Y, x = X)
mod.full = regsubsets(y ~ poly(x, 10, raw = T), data = data.full, nvmax = 10)
mod.summary = summary(mod.full)
# Find the model size for best cp, BIC and adjr2
which.min(mod.summary$cp)
## [1] 2
which.min(mod.summary$bic)
## [1] 1
which.max(mod.summary$adjr2)
## [1] 4
coefficients(mod.full, id = 1)
## (Intercept) poly(x, 10, raw = T)7
##       2.959                 7.001
coefficients(mod.full, id = 2)
## (Intercept) poly(x, 10, raw = T)2 poly(x, 10, raw = T)7
##      3.0705               -0.1417                7.0016
coefficients(mod.full, id = 4)
## (Intercept) poly(x, 10, raw = T)1 poly(x, 10, raw = T)2
##      3.0763                0.2914               -0.1618
## poly(x, 10, raw = T)3 poly(x, 10, raw = T)7
##               -0.2527                7.0091
# We see that BIC picks the most accurate 1-variable model with matching
# coefficients, while the other criteria pick additional variables.
xmat = model.matrix(y ~ poly(x, 10, raw = T), data = data.full)[, -1]
mod.lasso = cv.glmnet(xmat, Y, alpha = 1)
best.lambda = mod.lasso$lambda.min
best.lambda
## [1] 12.37
best.model = glmnet(xmat, Y, alpha = 1)
predict(best.model, s = best.lambda, type = "coefficients")
## 11 x 1 sparse Matrix of class "dgCMatrix"
##                            1
## (Intercept)            3.820
## poly(x, 10, raw = T)1  .
## poly(x, 10, raw = T)2  .
## poly(x, 10, raw = T)3  .
## poly(x, 10, raw = T)4  .
## poly(x, 10, raw = T)5  .
## poly(x, 10, raw = T)6  .
## poly(x, 10, raw = T)7  6.797
## poly(x, 10, raw = T)8  .
## poly(x, 10, raw = T)9  .
## poly(x, 10, raw = T)10 .
The lasso again picks the best 1-variable model, but its intercept estimate is rather poor (3.8 vs. 3).
(a)
Load and split the College data.
library(ISLR)
set.seed(11)
sum(is.na(College))
## [1] 0
train.size = dim(College)[1] / 2
train = sample(1:dim(College)[1], train.size)
test = -train
College.train = College[train, ]
College.test = College[test, ]
(b)
The number of applications received is the Apps variable.
lm.fit = lm(Apps ~ ., data = College.train)
lm.pred = predict(lm.fit, College.test)
mean((College.test[, "Apps"] - lm.pred)^2)
## [1] 1538442
The test error (MSE) of the linear model is 1538442.
(c)
Pick λ by cross-validation on College.train and report the test error on College.test.
library(glmnet)
train.mat = model.matrix(Apps ~ ., data = College.train)
test.mat = model.matrix(Apps ~ ., data = College.test)
grid = 10^seq(4, -2, length = 100)
mod.ridge = cv.glmnet(train.mat, College.train[, "Apps"], alpha = 0, lambda = grid,
    thresh = 1e-12)
lambda.best = mod.ridge$lambda.min
lambda.best
## [1] 18.74
ridge.pred = predict(mod.ridge, newx = test.mat, s = lambda.best)
mean((College.test[, "Apps"] - ridge.pred)^2)
## [1] 1608859
The test error is slightly higher than for OLS: 1608859.
(d)
mod.lasso = cv.glmnet(train.mat, College.train[, "Apps"], alpha = 1, lambda = grid,
    thresh = 1e-12)
lambda.best = mod.lasso$lambda.min
lambda.best
## [1] 21.54
lasso.pred = predict(mod.lasso, newx = test.mat, s = lambda.best)
mean((College.test[, "Apps"] - lasso.pred)^2)
## [1] 1635280
Again, the test error is slightly higher than for OLS: 1635280.
The coefficients are:
mod.lasso = glmnet(model.matrix(Apps ~ ., data = College), College[, "Apps"], alpha = 1)
predict(mod.lasso, s = lambda.best, type = "coefficients")
## 19 x 1 sparse Matrix of class "dgCMatrix"
##                       1
## (Intercept) -6.038e+02
## (Intercept)  .
## PrivateYes  -4.235e+02
## Accept       1.455e+00
## Enroll      -2.004e-01
## Top10perc    3.368e+01
## Top25perc   -2.403e+00
## F.Undergrad  .
## P.Undergrad  2.086e-02
## Outstate    -5.782e-02
## Room.Board   1.246e-01
## Books        .
## Personal     1.833e-05
## PhD         -5.601e+00
## Terminal    -3.314e+00
## S.F.Ratio    4.479e+00
## perc.alumni -9.797e-01
## Expend       6.968e-02
## Grad.Rate    5.160e+00
(e)
Fit PCR, choosing the number of components by cross-validation.
library(pls)
pcr.fit = pcr(Apps ~ ., data = College.train, scale = T, validation = "CV")
validationplot(pcr.fit, val.type = "MSEP")
pcr.pred = predict(pcr.fit, College.test, ncomp = 10)
mean((College.test[, "Apps"] - data.frame(pcr.pred))^2)
## [1] 3014496
The test error for PCR is about 3014496.
(f)
pls.fit = plsr(Apps ~ ., data = College.train, scale = T, validation = "CV")
validationplot(pls.fit, val.type = "MSEP")
pls.pred = predict(pls.fit, College.test, ncomp = 10)
mean((College.test[, "Apps"] - data.frame(pls.pred))^2)
## [1] 1508987
The test error for PLS is about 1508987.
(g)
The results for OLS, lasso and ridge are comparable. The lasso shrinks the F.Undergrad and Books coefficients to zero and shrinks the coefficients of the other variables. Below is the test R2 for all models.
test.avg = mean(College.test[, "Apps"])
lm.test.r2 = 1 - mean((College.test[, "Apps"] - lm.pred)^2) /mean((College.test[, "Apps"] - test.avg)^2)
ridge.test.r2 = 1 - mean((College.test[, "Apps"] - ridge.pred)^2) /mean((College.test[, "Apps"] - test.avg)^2)
lasso.test.r2 = 1 - mean((College.test[, "Apps"] - lasso.pred)^2) /mean((College.test[, "Apps"] - test.avg)^2)
pcr.test.r2 = 1 - mean((College.test[, "Apps"] - data.frame(pcr.pred))^2) /mean((College.test[, "Apps"] - test.avg)^2)
pls.test.r2 = 1 - mean((College.test[, "Apps"] - data.frame(pls.pred))^2) /mean((College.test[, "Apps"] - test.avg)^2)
barplot(c(lm.test.r2, ridge.test.r2, lasso.test.r2, pcr.test.r2, pls.test.r2), col="red", names.arg=c("OLS", "Ridge", "Lasso", "PCR", "PLS"), main="Test R-squared")
The plot shows that the test R2 is around 0.9 for all models except PCR, with PLS slightly ahead of the others. PCR has a smaller test R2, below 0.8. All models except PCR predict the number of college applications with high accuracy.
(a)
set.seed(1)
p = 20
n = 1000
x = matrix(rnorm(n * p), n, p)
B = rnorm(p)
B[3] = 0
B[4] = 0
B[9] = 0
B[19] = 0
B[10] = 0
# note: eps has length p (20), not n; R recycles it when adding to the
# 1000 x 1 result of x %*% B, so this runs, but rnorm(n) would be the
# conventional choice
eps = rnorm(p)
y = x %*% B + eps
(b)
train = sample(seq(1000), 100, replace = FALSE)
y.train = y[train, ]
y.test = y[-train, ]
x.train = x[train, ]
x.test = x[-train, ]
(c)
library(leaps)
regfit.full = regsubsets(y ~ ., data = data.frame(x = x.train, y = y.train),
nvmax = p)
val.errors = rep(NA, p)
x_cols = colnames(x, do.NULL = FALSE, prefix = "x.")
for (i in 1:p) {
coefi = coef(regfit.full, id = i)
pred = as.matrix(x.train[, x_cols %in% names(coefi)]) %*% coefi[names(coefi) %in%
x_cols]
val.errors[i] = mean((y.train - pred)^2)
}
plot(val.errors, ylab = "Training MSE", pch = 19, type = "b")
(d)
val.errors = rep(NA, p)
for (i in 1:p) {
coefi = coef(regfit.full, id = i)
pred = as.matrix(x.test[, x_cols %in% names(coefi)]) %*% coefi[names(coefi) %in%
x_cols]
val.errors[i] = mean((y.test - pred)^2)
}
plot(val.errors, ylab = "Test MSE", pch = 19, type = "b")
(e)
which.min(val.errors)
## [1] 16
The 16-parameter model has the smallest test MSE.
(f)
coef(regfit.full, id = 16)
## (Intercept)         x.1         x.2         x.5         x.6         x.7
##     0.09613     0.28257     0.19386     0.99995    -0.28598    -1.50482
##         x.8        x.11        x.12        x.13        x.14        x.15
##     0.77817     0.90816     0.48478    -0.19747    -0.71979    -0.74282
##        x.16        x.17        x.18        x.19        x.20
##    -0.33901     0.12235     1.74270    -0.12435    -1.03003
The selected model zeroes out all of the coefficients whose true values are zero, except x.19, which it retains with a small estimate.
(g)
val.errors = rep(NA, p)
a = rep(NA, p)
b = rep(NA, p)
for (i in 1:p) {
coefi = coef(regfit.full, id = i)
a[i] = length(coefi) - 1
    # coefficient estimation error; note that sum(...)^2 squares the sum of the
    # omitted true coefficients, whereas sum(B[...]^2) would be the usual
    # squared-error contribution
    b[i] = sqrt(sum((B[x_cols %in% names(coefi)] - coefi[names(coefi) %in% x_cols])^2) +
        sum(B[!(x_cols %in% names(coefi))])^2)
}
plot(x = a, y = b, xlab = "number of coefficients", ylab = "error between estimated and true coefficients")
which.min(b)
## [1] 16
The model with 9 coefficients (10 including the intercept) minimizes the error between the estimated and true coefficients, whereas the test MSE is minimized by the 16-parameter model. A better fit to the true coefficients, as measured here, does not necessarily mean a lower test MSE.
(a)
set.seed(1)
library(MASS)
library(leaps)
library(glmnet)
Best subset selection
# Helper: predict method for regsubsets objects (not provided by leaps)
predict.regsubsets = function(object, newdata, id, ...) {
form = as.formula(object$call[[2]])
mat = model.matrix(form, newdata)
coefi = coef(object, id = id)
mat[, names(coefi)] %*% coefi
}
k = 10
p = ncol(Boston) - 1
folds = sample(rep(1:k, length = nrow(Boston)))
cv.errors = matrix(NA, k, p)
for (i in 1:k) {
best.fit = regsubsets(crim ~ ., data = Boston[folds != i, ], nvmax = p)
for (j in 1:p) {
pred = predict(best.fit, Boston[folds == i, ], id = j)
cv.errors[i, j] = mean((Boston$crim[folds == i] - pred)^2)
}
}
rmse.cv = sqrt(apply(cv.errors, 2, mean))
plot(rmse.cv, pch = 19, type = "b")
which.min(rmse.cv)
## [1] 9
rmse.cv[which.min(rmse.cv)]
## [1] 6.593
LASSO
x = model.matrix(crim ~ . - 1, data = Boston)
y = Boston$crim
cv.lasso = cv.glmnet(x, y, type.measure = "mse")
plot(cv.lasso)
coef(cv.lasso)
## 14 x 1 sparse Matrix of class "dgCMatrix"
##                  1
## (Intercept) 1.0894
## zn          .
## indus       .
## chas        .
## nox         .
## rm          .
## age         .
## dis         .
## rad         0.2643
## tax         .
## ptratio     .
## black       .
## lstat       .
## medv        .
sqrt(cv.lasso$cvm[cv.lasso$lambda == cv.lasso$lambda.1se])
## [1] 7.405
Ridge regression
x = model.matrix(crim ~ . - 1, data = Boston)
y = Boston$crim
cv.ridge = cv.glmnet(x, y, type.measure = "mse", alpha = 0)
plot(cv.ridge)
coef(cv.ridge)
## 14 x 1 sparse Matrix of class "dgCMatrix"
##                      1
## (Intercept)  1.017517
## zn          -0.002806
## indus        0.034406
## chas        -0.225251
## nox          2.249887
## rm          -0.162546
## age          0.007343
## dis         -0.114929
## rad          0.059814
## tax          0.002659
## ptratio      0.086423
## black       -0.003342
## lstat        0.044495
## medv        -0.029125
sqrt(cv.ridge$cvm[cv.ridge$lambda == cv.ridge$lambda.1se])
## [1] 7.457
PCR
library(pls)
pcr.fit = pcr(crim ~ ., data = Boston, scale = TRUE, validation = "CV")
summary(pcr.fit)
## Data: X dimension: 506 13
##       Y dimension: 506 1
## Fit method: svdpc
## Number of components considered: 13
##
## VALIDATION: RMSEP
## Cross-validated using 10 random segments.
##        (Intercept)  1 comps  2 comps  3 comps  4 comps  5 comps  6 comps
## CV            8.61    7.170    7.163    6.733    6.728    6.740    6.765
## adjCV         8.61    7.169    7.162    6.730    6.723    6.737    6.760
##        7 comps  8 comps  9 comps  10 comps  11 comps  12 comps  13 comps
## CV       6.760    6.634    6.652     6.642     6.652     6.607     6.546
## adjCV    6.754    6.628    6.644     6.635     6.643     6.598     6.536
##
## TRAINING: % variance explained
##       1 comps  2 comps  3 comps  4 comps  5 comps  6 comps  7 comps
## X       47.70    60.36    69.67    76.45    82.99    88.00    91.14
## crim    30.69    30.87    39.27    39.61    39.61    39.86    40.14
##       8 comps  9 comps  10 comps  11 comps  12 comps  13 comps
## X       93.45    95.40     97.04     98.46     99.52     100.0
## crim    42.47    42.55     42.78     43.04     44.13      45.4
The 13-component PCR fit has the lowest CV/adjCV RMSEP.
(b)
I would propose the 9-parameter best subset model: its cross-validated RMSE is the best, close behind PCR's, and it is a simpler model than the 13-component PCR fit.
(c)
No. The chosen model involves only 9 of the 13 features, since that subset size gave the lowest cross-validated error.