
BatchNorm: Principle and PyTorch Implementation


The BatchNorm Algorithm


In short, BatchNorm computes the mean and variance of the input features per channel and standardizes them (zero mean, unit variance). Standardization alone would reduce the network's expressive power, so after standardizing, BN applies a learnable scale and shift: the parameters $\gamma$ and $\beta$, also one per channel.

Why BatchNorm works is not fully understood: it may reduce Internal Covariate Shift, or it may make the optimization landscape smoother.

Advantages
  • Improves training stability: allows larger learning rates, relaxes the requirements on parameter initialization, and makes it possible to build deeper and wider networks;
  • Speeds up convergence.
Disadvantages
  • Adds computation and memory overhead and slows down inference;
  • Introduces a discrepancy between training and inference;
  • Breaks the independence between minibatches;
  • Performs poorly with small batch sizes.

During training, BatchNorm uses only the current batch's mean and variance; at test/inference time, it uses the mean and variance accumulated via an exponential moving average (EMA).
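As a sanity check, the per-channel computation in training mode can be sketched directly on tensors and compared against nn.BatchNorm2d (variable names below are illustrative):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(8, 3, 4, 4)  # (N, C, H, W)

# Per-channel statistics, computed over the (N, H, W) dimensions.
mean = x.mean(dim=(0, 2, 3), keepdim=True)
var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)  # biased variance is used for normalization
eps = 1e-5
manual = (x - mean) / torch.sqrt(var + eps)  # gamma=1, beta=0, as after default init

bn = nn.BatchNorm2d(3)  # weight (gamma) initialized to 1, bias (beta) to 0
bn.train()
out = bn(x)

print(torch.allclose(manual, out, atol=1e-5))  # the manual sketch matches the module
```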

PyTorch Code

Take nn.BatchNorm2d as an example. Its inheritance chain is Module → _NormBase → _BatchNorm → BatchNorm2d, where Module is the parent class of every network-building module in PyTorch.

_NormBase

_NormBase mainly registers and initializes the parameters:

import torch
from torch import Tensor
from torch.nn import Module, Parameter, init
from typing import Optional


class _NormBase(Module):
    """Common base of _InstanceNorm and _BatchNorm"""
    def __init__(
        self,
        num_features: int,  # number of feature channels
        eps: float = 1e-5,  # added to the denominator to avoid division by zero
        momentum: float = 0.1,  # see the momentum section below
        affine: bool = True,  # whether to scale and shift after standardization, i.e. learn gamma and beta
        track_running_stats: bool = True,  # whether to track running mean/variance for normalization
        device=None,
        dtype=None
    ) -> None:
        factory_kwargs = {'device': device, 'dtype': dtype}
        super(_NormBase, self).__init__()
        self.num_features = num_features
        self.eps = eps
        self.momentum = momentum
        self.affine = affine
        self.track_running_stats = track_running_stats
        if self.affine:
            self.weight = Parameter(torch.empty(num_features, **factory_kwargs))  # register gamma; initialized to 1 below
            self.bias = Parameter(torch.empty(num_features, **factory_kwargs))  # register beta; initialized to 0 below
        else:
            self.register_parameter("weight", None)
            self.register_parameter("bias", None)
        if self.track_running_stats:
            self.register_buffer('running_mean', torch.zeros(num_features, **factory_kwargs))  # register the running mean; initialized to 0 below
            self.register_buffer('running_var', torch.ones(num_features, **factory_kwargs))  # register the running variance; initialized to 1 below
            self.running_mean: Optional[Tensor]
            self.running_var: Optional[Tensor]
            self.register_buffer('num_batches_tracked',
                                 torch.tensor(0, dtype=torch.long,
                                              **{k: v for k, v in factory_kwargs.items() if k != 'dtype'}))
            self.num_batches_tracked: Optional[Tensor]
        else:
            self.register_buffer("running_mean", None)
            self.register_buffer("running_var", None)
            self.register_buffer("num_batches_tracked", None)
        self.reset_parameters()

    def reset_running_stats(self) -> None:
        if self.track_running_stats:
            # running_mean/running_var/num_batches... are registered at runtime depending
            # on whether self.track_running_stats is on
            self.running_mean.zero_()  # type: ignore[union-attr]
            self.running_var.fill_(1)  # type: ignore[union-attr]
            self.num_batches_tracked.zero_()  # type: ignore[union-attr,operator]

    # Parameter initialization: gamma to 1, beta to 0.
    def reset_parameters(self) -> None:
        self.reset_running_stats()
        if self.affine:
            init.ones_(self.weight)
            init.zeros_(self.bias)

    def _check_input_dim(self, input):
        raise NotImplementedError
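The registration and initialization above can be observed on a freshly constructed nn.BatchNorm2d (a quick illustrative check):

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(num_features=3)
print(bn.weight)               # gamma: ones, shape (3,)
print(bn.bias)                 # beta: zeros
print(bn.running_mean)         # zeros
print(bn.running_var)          # ones
print(bn.num_batches_tracked)  # tensor(0)

# With affine=False and track_running_stats=False, the attributes
# are registered as None instead:
bn2 = nn.BatchNorm2d(3, affine=False, track_running_stats=False)
print(bn2.weight, bn2.running_mean)  # None None
```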
_BatchNorm

It calls nn.functional.batch_norm to do the per-channel computation:

import torch.nn.functional as F


class _BatchNorm(_NormBase):
    def __init__(
        self,
        num_features,
        eps=1e-5,
        momentum=0.1,  # see the momentum section below
        affine=True,
        track_running_stats=True,
        device=None,
        dtype=None
    ):
        factory_kwargs = {'device': device, 'dtype': dtype}
        super(_BatchNorm, self).__init__(
            num_features, eps, momentum, affine, track_running_stats, **factory_kwargs
        )

    def forward(self, input: Tensor) -> Tensor:
        self._check_input_dim(input)

        # exponential_average_factor is set to self.momentum
        # (when it is available) only so that it gets updated
        # in the ONNX graph when this node is exported to ONNX.
        if self.momentum is None:
            exponential_average_factor = 0.0
        else:
            exponential_average_factor = self.momentum

        if self.training and self.track_running_stats:
            # TODO: if statement only here to tell the jit to skip emitting this when it is None
            if self.num_batches_tracked is not None:  # type: ignore[has-type]
                self.num_batches_tracked = self.num_batches_tracked + 1  # type: ignore[has-type]
                if self.momentum is None:  # use cumulative moving average
                    exponential_average_factor = 1.0 / float(self.num_batches_tracked)
                else:  # use exponential moving average
                    exponential_average_factor = self.momentum

        r"""
        Decide whether the mini-batch stats should be used for normalization rather than the buffers.
        Mini-batch stats are used in training mode, and in eval mode when buffers are None.
        """
        if self.training:
            bn_training = True
        else:
            bn_training = (self.running_mean is None) and (self.running_var is None)

        r"""
        Buffers are only updated if they are to be tracked and we are in training mode. Thus they only need to be
        passed when the update should occur (i.e. in training mode when they are tracked), or when buffer stats are
        used for normalization (i.e. in eval mode when buffers are not None).
        """
        return F.batch_norm(
            input,
            # If buffers are not to be tracked, ensure that they won't be updated
            self.running_mean
            if not self.training or self.track_running_stats
            else None,
            self.running_var if not self.training or self.track_running_stats else None,
            self.weight,
            self.bias,
            bn_training,
            exponential_average_factor,
            self.eps,
        )
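The effect of bn_training can be seen by running the same module in both modes. On a freshly initialized module, eval mode normalizes with the initial buffers (mean 0, variance 1), so it is nearly the identity (illustrative sketch):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(4, 3, 8, 8) * 5 + 2   # deliberately shifted and scaled input

bn = nn.BatchNorm2d(3)
bn.eval()                             # bn_training=False -> normalize with the buffers
y_eval = bn(x)                        # running_mean=0, running_var=1 -> almost identity

bn.train()                            # bn_training=True -> normalize with batch statistics
y_train = bn(x)
print(y_train.mean())                 # near zero: the batch stats were used
```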
BatchNorm2d

It specializes the input-dimension check:

class BatchNorm2d(_BatchNorm):
    def _check_input_dim(self, input):
        if input.dim() != 4:
            raise ValueError("expected 4D input (got {}D input)".format(input.dim()))
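The override in action (a small illustrative check):

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(3)
try:
    bn(torch.randn(3, 8, 8))  # 3D tensor: no batch dimension
    raised = False
except ValueError as e:
    raised = True
    print(e)                  # expected 4D input (got 3D input)
```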
About the momentum parameter

According to the PyTorch docstring, momentum takes part in the computation of running_mean and running_var. When it is set to None, a simple cumulative moving average is used instead. The default value is 0.1.

In _BatchNorm, it is assigned to

exponential_average_factor = self.momentum

When momentum is not None, this is an exponential moving average (Exponential Moving Average, EMA). The update formula is:
$$\bar{x}_t = \beta \mu_t + (1-\beta)\,\bar{x}_{t-1}$$
where $\mu_t$ is the mean or variance of the current batch, and $\beta$ is exponential_average_factor. Expanding:
$$\begin{aligned} \bar{x}_t &= \beta\mu_t + (1-\beta)\bigl(\beta\mu_{t-1} + (1-\beta)(\beta\mu_{t-2} + (1-\beta)\bar{x}_{t-3})\bigr) \\ &= \beta\mu_t + (1-\beta)\beta\mu_{t-1} + (1-\beta)^2\beta\mu_{t-2} + \dots + (1-\beta)^t\beta\mu_0 \end{aligned}$$
The expansion shows that more recent batches carry more weight, with the weights decaying exponentially; the running estimate is roughly the average of the most recent $\frac{1}{\beta}$ batches.
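The update can be verified numerically: starting from running_mean = 0 with momentum $\beta = 0.1$, one training step should leave running_mean at $0.1$ times the batch mean (illustrative sketch):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(16, 3, 4, 4) + 3.0     # batch mean near 3 per channel

bn = nn.BatchNorm2d(3, momentum=0.1)
bn.train()
bn(x)                                  # one training step updates the buffers

batch_mean = x.mean(dim=(0, 2, 3))
expected = 0.1 * batch_mean            # (1 - 0.1) * 0 + 0.1 * batch_mean
print(torch.allclose(bn.running_mean, expected, atol=1e-6))
```

Note that while normalization uses the biased batch variance, running_var is updated with the unbiased estimate.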

Originally published at https://www.mshxw.com/it/580761.html