
A quick numpy implementation of univariate linear regression (fitting a cubic function)


import numpy as np
import matplotlib.pyplot as plt

learning_rate = 1e-1    # tuned by hand over several runs
epochs = 1000
# input_features = 1, input_size = 1000
# output_features = 1, output_size = 1000

w = 1.0                 # a single scalar weight and bias suffice for one feature
b = 1.0
x = np.random.randn(1000)
y = x**3                # target: a cubic function of x
print(x.shape,y.shape) 
plt.scatter(x,y)
plt.show()
(1000,) (1000,)

$$loss=\frac{1}{n}\sum_{i=1}^{n}(wx_i+b-y_i)^2,\qquad \frac{\partial\, loss}{\partial w}=\frac{2}{n}\sum_{i=1}^{n}x_i(wx_i+b-y_i),\qquad \frac{\partial\, loss}{\partial b}=\frac{2}{n}\sum_{i=1}^{n}(wx_i+b-y_i)$$
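The analytic gradient above can be sanity-checked numerically. A minimal sketch (not part of the original post) comparing the formula for ∂loss/∂w against a central finite difference:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100)
y = x**3
w, b = 1.0, 1.0
n = len(x)

def mse(w_, b_):
    # the squared loss from the derivation above
    return np.sum((w_ * x + b_ - y)**2) / n

# analytic gradient, as derived: (2/n) * sum(x_i * (w*x_i + b - y_i))
grad_w = np.sum((w * x + b - y) * 2 * x) / n

# central finite difference in w
eps = 1e-6
num_w = (mse(w + eps, b) - mse(w - eps, b)) / (2 * eps)

print(abs(grad_w - num_w) < 1e-4)  # the two agree to high precision
```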

def getloss(pred, label):
    """
    pred: prediction array of shape (n,)
    label: label array of shape (n,)
    """
    # mean-squared-error (MSE) loss, matching the squared loss in the derivation above
    n = len(pred)
    loss = np.sum((pred - label)**2) / n
    return loss
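As a quick check of the loss computation, here is a standalone MSE example on toy data (the helper is restated so the snippet runs on its own, matching the squared loss in the formulas above):

```python
import numpy as np

def getloss(pred, label):
    # mean-squared error over n samples
    n = len(pred)
    return np.sum((pred - label)**2) / n

# one residual of 1 over three samples: MSE = 1/3
print(getloss(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 4.0])))  # → 0.3333333333333333
```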

def gradient_descent(init_weight, init_bias, x_train, y_train, epochs, lr):
    w = init_weight
    b = init_bias
    n = len(x_train)
    loss_list = []
    for epoch in range(epochs):
        # forward pass
        pred = w * x_train + b
        loss = getloss(pred, y_train)
        loss_list.append(loss)
        # backward pass: gradients from the derivation above
        grad_w = np.sum((pred - y_train) * (2 * x_train) / n)
        grad_b = np.sum((pred - y_train) * (2 / n))
        w = w - lr * grad_w
        b = b - lr * grad_b
        if (epoch + 1) % 50 == 0:
            print("Epoch {}/{}:".format(epoch + 1, epochs))
            print("Loss:{}".format(loss))
    return w, b, loss_list

w, b, loss = gradient_descent(w, b, x, y, epochs, learning_rate)
loss=np.array(loss)
Epochs=np.array(range(1,epochs+1))
plot_x=np.linspace(-3,3,1000)
prediction=w*plot_x+b
print(plot_x.shape,prediction.shape)
plt.plot(plot_x,prediction,c='r')
plt.scatter(x,y)
plt.xlabel('X')
plt.ylabel('Y')
plt.show()
plt.plot(Epochs,loss)
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.show()
Epoch 50/1000:
Loss:0.017706093923100227
Epoch 100/1000:
Loss:0.016060874992968197
Epoch 150/1000:
Loss:0.014568561314171443
Epoch 200/1000:
Loss:0.013214938732651933
Epoch 250/1000:
Loss:0.011987114760282535
Epoch 300/1000:
Loss:0.010873395653728185
Epoch 350/1000:
Loss:0.009863174928349911
Epoch 400/1000:
Loss:0.008946832243113071
Epoch 450/1000:
Loss:0.008115641691495604
Epoch 500/1000:
Loss:0.0073616886232058506
Epoch 550/1000:
Loss:0.0066777942029764914
Epoch 600/1000:
Loss:0.0060574469865672075
Epoch 650/1000:
Loss:0.005494740861105241
Epoch 700/1000:
Loss:0.004984318757650579
Epoch 750/1000:
Loss:0.004521321598973595
Epoch 800/1000:
Loss:0.0041013419955055926
Epoch 850/1000:
Loss:0.003720382247745059
Epoch 900/1000:
Loss:0.0033748162545040243
Epoch 950/1000:
Loss:0.0030613549636569408
Epoch 1000/1000:
Loss:0.002777015035862922
(1000,) (1000,)
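A single straight line can only capture the overall trend of y = x³ (for standard-normal inputs the fitted slope tends toward E[x⁴]/E[x²] = 3). To actually fit the cubic, one option beyond the scope of the post above is least squares on polynomial features; a sketch using np.polyfit:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
y = x**3

coeffs = np.polyfit(x, y, deg=3)   # least-squares fit of a degree-3 polynomial
pred = np.polyval(coeffs, x)

print(coeffs)                      # recovers approximately [1, 0, 0, 0]: y = x^3
print(np.mean((pred - y)**2))      # MSE is effectively zero
```

Since the data are generated exactly by a cubic, the least-squares solution recovers the true coefficients up to floating-point error, unlike the straight-line fit above.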

Original article: https://www.mshxw.com/it/725797.html