
Deep Learning with torch: From Scratch to Linear Regression



I. Environment Setup
Python 3.7 + pip + PyCharm + torch + numpy + matplotlib
(1) Install Python: download Python 3.7 from the official site https://www.python.org/, then add it to the PATH: Computer -> Properties -> Advanced system settings -> Environment Variables -> System variables (Path) -> New, and add the entry D:\Software\Python\Python37;
(2) Add pip to the PATH: add the entry D:\Software\Python\Python37\Scripts; then open a cmd window and upgrade pip: python -m pip install --upgrade pip;
(3) Install PyCharm: download the Professional edition from https://www.jetbrains.com/pycharm/ (it requires a license);
(4) Install torch: on the official site https://pytorch.org/, select your OS (Windows 10), package manager (pip), language (Python), and compute platform (CPU), copy the generated install command, and run it in a cmd window.
(5) Install numpy and matplotlib: in PyCharm's Terminal, run pip install numpy and pip install matplotlib; then go to File -> Settings -> Project Interpreter -> click the '+' on the right -> search for numpy and matplotlib and add them.
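Once everything is installed, a quick sanity check (run in PyCharm or any Python interpreter) confirms that the three packages import and that torch can do basic tensor arithmetic:

```python
import torch
import numpy as np
import matplotlib

# print the installed versions
print('torch:', torch.__version__)
print('numpy:', np.__version__)
print('matplotlib:', matplotlib.__version__)

# a trivial tensor operation to confirm torch works
t = torch.tensor([1.0, 2.0]) + torch.tensor([3.0, 4.0])
print(t)  # tensor([4., 6.])
```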

II. Linear Model
Linear model: y_pred = x * w
Training loss (per sample): loss = (y_pred - y)^2
Mean square error: cost = (1/N) Σ (y_pred - y)^2

import numpy as np  # numerical computing library
import matplotlib.pyplot as plt  # plotting library

# the input dataset: x values and their target y values
x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

# forward (prediction) function
def forward(x):
    return x * w

# loss for a single sample
def loss(x, y):
    y_pred = forward(x)
    return (y_pred - y) ** 2

# lists to record each weight and its mean squared error
w_list = []
mse_list = []

# sweep w from 0.0 to 4.0 (inclusive) in steps of 0.1
for w in np.arange(0.0, 4.1, 0.1):
    print("w=", w)
    l_sum = 0
    # iterate over the (x, y) pairs
    for x_val, y_val in zip(x_data, y_data):
        y_pred_val = forward(x_val)
        loss_val = loss(x_val, y_val)
        l_sum += loss_val
        print('\t', x_val, y_val, y_pred_val, loss_val)
    print('MSE=', l_sum / 3)
    w_list.append(w)  # record the weight
    mse_list.append(l_sum / 3)  # record the mean squared error
# plot the recorded weights against their MSE
plt.plot(w_list, mse_list)
# x-axis is w, y-axis is the loss
plt.xlabel('w')
plt.ylabel('Loss')
plt.show()

Output: a plot of MSE versus w; the curve reaches its minimum at w = 2.

III. Gradient Descent
Gradient: ∂cost/∂w = (1/N) Σ 2x(x*w - y)
Update: w = w - α * ∂cost/∂w = w - α * (1/N) Σ 2x(x*w - y)

import matplotlib.pyplot as plt

x_data=[1.0, 2.0, 3.0]
y_data=[2.0, 4.0, 6.0]

# initialize the weight w
w = 1.0


#forward function
def forward(x):
    return x*w

#cost function
def cost(xs, ys):
    cost = 0
    for x, y in zip(xs,ys):
        y_pred = forward(x)
        cost += (y_pred - y)**2
    return cost / len(xs)

#gradient function
def gradient(xs,ys):
    grad = 0
    for x, y in zip(xs,ys):
        grad += 2*x*(x*w - y)
    return grad / len(xs)

# epoch_list records the epoch index; cost_list records the cost at each epoch
epoch_list = []
cost_list = []

# prediction for x = 4 before training
print('predict (before training)', 4, forward(4))
for epoch in range(100):
    cost_val = cost(x_data, y_data)
    grad_val = gradient(x_data, y_data)
    w = w - 0.01 * grad_val  # 0.01 is the learning rate
    print('epoch:', epoch, 'w=', w, 'loss=', cost_val)
    epoch_list.append(epoch)
    cost_list.append(cost_val)
    
# prediction for x = 4 after training
print('predict (after training)', 4, forward(4))
plt.plot(epoch_list,cost_list)
plt.ylabel('cost')
plt.xlabel('epoch')
plt.show()

Output: the cost decreases over the epochs as w converges toward 2.0.

IV. Backpropagation

import torch

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

w = torch.tensor([1.0])  # initialize w to 1.0
w.requires_grad = True  # we need gradients with respect to w

#forward function
def forward(x):
    return x * w  # w是一个Tensor

#loss function
def loss(x, y):
    y_pred = forward(x)
    return (y_pred - y) ** 2


print("predict (before training)", 4, forward(4).item())

for epoch in range(100):

    for x, y in zip(x_data, y_data):
        l = loss(x, y)  # l is a tensor; the forward pass builds the computational graph and computes the loss
        l.backward()  # backward pass: autograd computes the gradient
        print('\tgrad:', x, y, w.grad.item())  # .item() extracts the gradient as a Python scalar
        w.data = w.data - 0.01 * w.grad.data  # update via .data so the update itself is not tracked; note grad is also a tensor
        w.grad.data.zero_()  # after the update, remember to zero the gradient

    print('progress:', epoch, l.item())  # read the loss with l.item(); using l directly would keep the tensor and its graph alive

print("predict (after training)", 4, forward(4).item())  # prediction for x = 4 after training
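Updating through w.data works, but a common alternative is to wrap the update in torch.no_grad(), which keeps the in-place change out of the computational graph without touching .data. A minimal sketch of the same per-sample training loop with that pattern:

```python
import torch

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

w = torch.tensor([1.0], requires_grad=True)

for epoch in range(100):
    for x, y in zip(x_data, y_data):
        l = (x * w - y) ** 2  # per-sample loss builds the graph
        l.backward()          # autograd fills w.grad
        with torch.no_grad():  # updates here are not recorded in the graph
            w -= 0.01 * w.grad
            w.grad.zero_()     # zero the gradient for the next sample

print('w =', w.item())  # converges toward 2.0
```

Both versions are equivalent here; no_grad() is simply the more idiomatic way to express "this operation is not part of training" in current PyTorch.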

V. Linear Regression

import torch

# prepare dataset
# x and y are 3x1 matrices: 3 samples, each with 1 feature
x_data = torch.tensor([[1.0], [2.0], [3.0]])
y_data = torch.tensor([[2.0], [4.0], [6.0]])

# design model using class
"""
Our model class should inherit from nn.Module, the base class for all
neural network modules. The member methods __init__() and forward()
have to be implemented. nn.Linear contains two member tensors, weight
and bias, and implements the magic method __call__(), so an instance
of the class can be called like a function; calling it invokes forward().
"""


class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        # (1, 1) are the feature dimensions of the input x and the output y;
        # in this dataset both are 1-dimensional.
        # The layer's learnable parameters are w and b, accessible as
        # linear.weight and linear.bias.
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred


model = LinearModel()

# construct loss and optimizer
# criterion = torch.nn.MSELoss(size_average=False)  # size_average is deprecated; use reduction
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # model.parameters() collects all learnable parameters

# training cycle: forward, backward, update
for epoch in range(100):
    y_pred = model(x_data)  # forward: predict
    loss = criterion(y_pred, y_data)  # forward: loss
    print(epoch, loss.item())

    optimizer.zero_grad()  # gradients accumulate across .backward() calls, so zero them before each backward pass
    loss.backward()  # backward: autograd computes the gradients
    optimizer.step()  # update the parameters w and b

print('w = ', model.linear.weight.item())
print('b = ', model.linear.bias.item())

x_test = torch.tensor([[4.0]])
y_test = model(x_test)
print('y_pred = ', y_test.data)
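A note on the reduction='sum' choice above: with 'sum' the criterion returns the total squared error over the batch, while the default 'mean' divides by the number of elements. The gradients scale accordingly, so a learning rate tuned for one setting may need rescaling for the other. A quick check of the numerical difference (the tensors here are illustrative):

```python
import torch

criterion_sum = torch.nn.MSELoss(reduction='sum')
criterion_mean = torch.nn.MSELoss(reduction='mean')

y_pred = torch.tensor([[2.0], [4.0], [6.0]])
y_true = torch.tensor([[3.0], [4.0], [6.0]])  # one sample off by 1

print(criterion_sum(y_pred, y_true).item())   # 1.0: total squared error
print(criterion_mean(y_pred, y_true).item())  # ~0.3333: divided by N = 3
```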