Weighted BCE_loss is used for class-imbalanced binary semantic segmentation: foreground pixels are multiplied by a relatively large weight and background pixels by a relatively small one.
I know of two ways to implement it.
Suppose the network prediction is inputs and the label is targets.
Method 1:
First compute the plain, unreduced BCE loss (BCE targets must be binary 0/1 values):

```python
import torch
import torch.nn as nn

inputs = torch.rand(1, 1, 3, 3)              # predicted probabilities in [0, 1]
targets = torch.randint(0, 2, (1, 1, 3, 3))  # binary labels: 0 = background, 1 = foreground
loss = nn.BCELoss(reduction='none')(inputs, targets.float())
print(loss)
```
Out[40]:
tensor([[[[0.9652, 0.1823, 2.8128],
[1.8717, 0.3409, 1.6916],
[5.1682, 0.5676, 0.1342]]]])
Then apply the weights, 0.3 for background pixels and 0.7 for foreground pixels:

```python
weight = torch.zeros_like(targets).float()
weight.fill_(0.3)            # background weight
weight[targets > 0] = 0.7    # foreground weight
loss = torch.mean(weight * loss)
print(loss)
```
Out[68]: tensor(0.8863)
Method 2:
Pass the per-pixel weight map directly to BCELoss:

```python
inputs = torch.rand(1, 1, 3, 3)
targets = torch.randint(0, 2, (1, 1, 3, 3))
weight = torch.zeros_like(targets).float()
weight.fill_(0.3)            # background weight
weight[targets > 0] = 0.7    # foreground weight
loss = nn.BCELoss(weight=weight)(inputs, targets.float())
```
Out[71]: tensor(0.8863)
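The two methods compute the same value, which can be checked numerically. A minimal sketch (the seed is arbitrary, chosen only so both methods see the same random data):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # arbitrary seed for reproducibility
inputs = torch.rand(1, 1, 3, 3)              # predicted probabilities
targets = torch.randint(0, 2, (1, 1, 3, 3))  # binary labels 0/1
weight = torch.where(targets > 0, torch.tensor(0.7), torch.tensor(0.3))

# Method 1: unreduced BCE, then weight and average manually
raw = nn.BCELoss(reduction='none')(inputs, targets.float())
method1 = torch.mean(weight * raw)

# Method 2: let BCELoss apply the per-pixel weight itself
method2 = nn.BCELoss(weight=weight)(inputs, targets.float())

print(method1.item(), method2.item())
```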
Weighted cross-entropy loss:
Weighted CE_loss differs slightly from BCE_loss:
1. CE labels are of long type, while BCE labels are of float type.
2. When reduction is 'mean', both average the per-pixel losses, but BCE divides by the number of pixels, whereas CE divides by the sum of the per-pixel weights.
```python
inp = torch.rand(1, 2, 3, 3)   # binary task, so expand the prediction to two channels
inp[0, 0] = 1 - inputs[0, 0]   # channel 0: background score
inp[0, 1] = inputs[0, 0]       # channel 1: foreground score
loss = nn.CrossEntropyLoss(weight=torch.tensor([0.3, 0.7]))(inp, targets.squeeze(1))
print(loss)
```
Out[81]: tensor(0.8659)
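The difference in the 'mean' normalization (point 2 above) can be verified directly: BCE divides the weighted sum by the number of pixels, while CE divides it by the sum of the per-pixel weights. A minimal sketch (variable names `class_w` and `pixel_w` are my own):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
inputs = torch.rand(1, 1, 3, 3)
targets = torch.randint(0, 2, (1, 1, 3, 3))
class_w = torch.tensor([0.3, 0.7])        # [background, foreground]
pixel_w = class_w[targets.squeeze(1)]     # per-pixel weight, shape (1, 3, 3)

# BCE with reduction='mean': sum(w * l) / N
bce_raw = nn.BCELoss(reduction='none')(inputs, targets.float())
bce_mean = nn.BCELoss(weight=pixel_w.unsqueeze(1))(inputs, targets.float())
manual_bce = (pixel_w.unsqueeze(1) * bce_raw).sum() / bce_raw.numel()

# CE with reduction='mean': sum(w * l) / sum(w)
inp = torch.cat([1 - inputs, inputs], dim=1)  # two channels for the two classes
ce_raw = nn.CrossEntropyLoss(weight=class_w, reduction='none')(inp, targets.squeeze(1))
ce_mean = nn.CrossEntropyLoss(weight=class_w)(inp, targets.squeeze(1))
manual_ce = ce_raw.sum() / pixel_w.sum()      # ce_raw already includes the weights

print(bce_mean.item(), manual_bce.item())
print(ce_mean.item(), manual_ce.item())
```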
Reference: https://blog.csdn.net/pain_gain0/article/details/106643607 (a well-written post).



