Although "idle in youth, sorry in old age," it's still never too late to mend the fold (the original title puns together the idioms 亡羊补牢 "mend the fold after losing a sheep" and 精卫填海 "Jingwei fills the sea").

A few functions in torch.nn.init
Uniform distribution

nn.init.uniform_(tensor, a=0, b=1)

Fills the tensor with values drawn from the uniform distribution U(a, b).

```python
t1 = torch.zeros(2, 2)
print(t1)
t2 = nn.init.uniform_(t1)
print(t2)
>>> tensor([[0., 0.],
>>>         [0., 0.]])
>>> tensor([[0.0174, 0.9981],
>>>         [0.7287, 0.9384]])
```
Normal distribution

nn.init.normal_(tensor, mean=0., std=1.)

Fills the tensor with values drawn from a normal distribution with the given mean (mean) and standard deviation (std).

```python
t1 = torch.zeros(2, 2)
print(t1)
t2 = nn.init.normal_(t1)
print(t2)
>>> tensor([[0., 0.],
>>>         [0., 0.]])
>>> tensor([[-0.0445, -0.9591],
>>>         [ 0.9115,  0.7865]])
```
Truncated normal distribution (see the linked docs for details)

nn.init.trunc_normal_(tensor, mean=0.0, std=1.0, a=-2.0, b=2.0)

```python
t1 = torch.zeros(2, 2)
print(t1)
t2 = nn.init.trunc_normal_(t1, 0, 2, -1, 1)
print(t2)
>>> tensor([[0., 0.],
>>>         [0., 0.]])
>>> tensor([[ 0.2673,  0.2764],
>>>         [ 0.4828, -0.0619]])
```
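A quick way to convince yourself of the truncation (a small sketch of my own, not from the original post): draw many samples with the same arguments and check that every one lands inside [a, b].

```python
import torch
import torch.nn as nn

# Draw many samples from N(0, 2^2) truncated to [-1, 1]
t = torch.empty(10000)
nn.init.trunc_normal_(t, mean=0.0, std=2.0, a=-1.0, b=1.0)

# Every sample falls inside the truncation interval [a, b]
print(((t >= -1.0) & (t <= 1.0)).all())
```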
Constant

nn.init.constant_(tensor, val)

Initializes the tensor to a constant value.

```python
t1 = torch.zeros(2, 2)
print(t1)
t2 = nn.init.constant_(t1, 233)
print(t2)
>>> tensor([[0., 0.],
>>>         [0., 0.]])
>>> tensor([[233., 233.],
>>>         [233., 233.]])
```
Ones / zeros

nn.init.ones_(tensor)
nn.init.zeros_(tensor)

Initializes the tensor to the constant 1 / 0 (float).

Too simple to bother with an example.
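For the record anyway, a two-line sketch (my addition):

```python
import torch
import torch.nn as nn

t = torch.empty(2, 2)
print(nn.init.ones_(t))   # every entry becomes 1.
print(nn.init.zeros_(t))  # every entry becomes 0.
```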
Identity matrix

nn.init.eye_(tensor)

Fills a 2-D tensor with the identity matrix (for a non-square tensor, as much of the identity as fits, as the example shows).

```python
t1 = torch.zeros(2, 3)
print(t1)
t2 = nn.init.eye_(t1)
print(t2)
>>> tensor([[0., 0., 0.],
>>>         [0., 0., 0.]])
>>> tensor([[1., 0., 0.],
>>>         [0., 1., 0.]])
```
Dirac delta function

nn.init.dirac_(tensor, groups=1)

Fills the tensor with the Dirac delta function (see the linked docs for details); only 3-, 4-, and 5-dimensional tensors are supported. A convolution layer initialized this way preserves the identity of as many input channels as possible.

```python
t1 = torch.zeros(2, 3, 4)
print(t1)
t2 = nn.init.dirac_(t1)
print(t2)
>>> tensor([[[0., 0., 0., 0.],
>>>          [0., 0., 0., 0.],
>>>          [0., 0., 0., 0.]],
>>>
>>>         [[0., 0., 0., 0.],
>>>          [0., 0., 0., 0.],
>>>          [0., 0., 0., 0.]]])
>>> tensor([[[0., 0., 1., 0.],
>>>          [0., 0., 0., 0.],
>>>          [0., 0., 0., 0.]],
>>>
>>>         [[0., 0., 0., 0.],
>>>          [0., 0., 1., 0.],
>>>          [0., 0., 0., 0.]]])
```
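The practical point of this initializer is identity-preserving convolutions: a conv layer whose weight is Dirac-initialized passes its input through unchanged. A small sketch of my own (assuming equal in/out channels and padding that preserves spatial size):

```python
import torch
import torch.nn as nn

# A 3x3 conv whose weight is Dirac-initialized acts as an identity map:
# weight[i, i, 1, 1] = 1 at the kernel center, everything else 0.
conv = nn.Conv2d(4, 4, kernel_size=3, padding=1, bias=False)
nn.init.dirac_(conv.weight)

x = torch.randn(1, 4, 8, 8)
y = conv(x)
print(torch.allclose(x, y))  # True: output equals input
```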
----------To be continued----------


