
A PyTorch Example of Handling Multi-Dimensional Feature Input



Index
  • Preface
  • PyTorch Code Implementation
  • References


Preface

In this article, we use a classification case based on diabetes patient data to build a neural network with PyTorch.¹ (This is the fifth article in the special series on the mathematical principles of deep learning.)


PyTorch Code Implementation

Detailed explanations are given in the code comments:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# @Time    : 2021/11/30 19:50
# @Author  : William Baker
# @FileName: lesson7_multi_dim.py
# @Software: PyCharm
# @Blog    : https://blog.csdn.net/weixin_43051346
import os
os.environ['KMP_DUPLICATE_LIB_OK'] = 'TRUE'    # work around duplicate-OpenMP-runtime errors on some setups
import numpy as np
import torch
import matplotlib.pyplot as plt

xy = np.loadtxt('./dataset/diabetes.csv.gz', delimiter=',', dtype=np.float32)
x_data = torch.from_numpy(xy[:, :-1])     # every column except the last one: the 8 feature columns
y_data = torch.from_numpy(xy[:, [-1]])    # only the last column, kept as an (N, 1) matrix rather than a vector

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        # three linear layers shrink the feature space step by step: 8 -> 6 -> 4 -> 1
        self.linear1 = torch.nn.Linear(8, 6)
        self.linear2 = torch.nn.Linear(6, 4)
        self.linear3 = torch.nn.Linear(4, 1)
        self.sigmoid = torch.nn.Sigmoid()    # sigmoid has no parameters, so one shared instance is enough

    def forward(self, x):
        # apply sigmoid after each linear layer; the final one maps the output into (0, 1)
        x = self.sigmoid(self.linear1(x))
        x = self.sigmoid(self.linear2(x))
        x = self.sigmoid(self.linear3(x))
        return x

model = Model()

criterion = torch.nn.BCELoss(reduction='mean')           # binary cross-entropy; expects predictions in (0, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # plain SGD over all model parameters

epoch_list = []
loss_list = []
for epoch in range(100):
    epoch_list.append(epoch)

    # Forward pass (the whole dataset at once, i.e. full-batch gradient descent)
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    print('Epoch:{}, Loss:{}'.format(epoch, loss.item()))
    loss_list.append(loss.item())

    # Backward pass
    optimizer.zero_grad()
    loss.backward()

    # Update parameters
    optimizer.step()

plt.plot(epoch_list, loss_list)
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()
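
As a quick sanity check (not part of the original code), you can also report the training-set accuracy after the loop. A minimal sketch, assuming the model, x_data, and y_data defined above and a 0.5 decision threshold:

with torch.no_grad():                          # no gradients needed for evaluation
    y_prob = model(x_data)                     # predicted probabilities in (0, 1)
    y_label = (y_prob >= 0.5).float()          # hard 0/1 predictions
    accuracy = (y_label == y_data).float().mean().item()
print('Training accuracy: {:.4f}'.format(accuracy))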

The output of the training loop is:

Epoch:0, Loss:0.6519893407821655
Epoch:1, Loss:0.6513597965240479
Epoch:2, Loss:0.6507869362831116
Epoch:3, Loss:0.6502658128738403
Epoch:4, Loss:0.6497914791107178
Epoch:5, Loss:0.6493598818778992
Epoch:6, Loss:0.6489669680595398
Epoch:7, Loss:0.6486091613769531
Epoch:8, Loss:0.6482835412025452
Epoch:9, Loss:0.647986888885498
Epoch:10, Loss:0.6477167010307312
Epoch:11, Loss:0.6474705934524536
Epoch:12, Loss:0.6472463607788086
Epoch:13, Loss:0.647041916847229
Epoch:14, Loss:0.6468556523323059
Epoch:15, Loss:0.646685779094696
Epoch:16, Loss:0.6465309262275696
Epoch:17, Loss:0.6463896632194519
Epoch:18, Loss:0.6462607979774475
Epoch:19, Loss:0.6461433172225952
Epoch:20, Loss:0.6460360884666443
Epoch:21, Loss:0.6459380388259888
Epoch:22, Loss:0.6458486914634705
Epoch:23, Loss:0.6457669734954834
Epoch:24, Loss:0.6456923484802246
Epoch:25, Loss:0.6456241607666016
Epoch:26, Loss:0.6455617547035217
Epoch:27, Loss:0.6455047726631165
Epoch:28, Loss:0.6454525589942932
Epoch:29, Loss:0.6454048156738281
Epoch:30, Loss:0.645361065864563
Epoch:31, Loss:0.6453210115432739
Epoch:32, Loss:0.6452842950820923
Epoch:33, Loss:0.6452506184577942
Epoch:34, Loss:0.6452196836471558
Epoch:35, Loss:0.6451913118362427
Epoch:36, Loss:0.645165205001831
Epoch:37, Loss:0.6451411843299866
Epoch:38, Loss:0.6451191902160645
Epoch:39, Loss:0.6450988054275513
Epoch:40, Loss:0.6450800895690918
Epoch:41, Loss:0.6450628042221069
Epoch:42, Loss:0.6450467705726624
Epoch:43, Loss:0.6450319886207581
Epoch:44, Loss:0.6450183391571045
Epoch:45, Loss:0.6450056433677673
Epoch:46, Loss:0.6449939012527466
Epoch:47, Loss:0.6449829339981079
Epoch:48, Loss:0.6449727416038513
Epoch:49, Loss:0.6449632048606873
Epoch:50, Loss:0.6449543833732605
Epoch:51, Loss:0.6449460387229919
Epoch:52, Loss:0.6449382901191711
Epoch:53, Loss:0.6449309587478638
Epoch:54, Loss:0.6449241042137146
Epoch:55, Loss:0.6449176073074341
Epoch:56, Loss:0.6449114680290222
Epoch:57, Loss:0.6449057459831238
Epoch:58, Loss:0.6449002623558044
Epoch:59, Loss:0.644895076751709
Epoch:60, Loss:0.6448901295661926
Epoch:61, Loss:0.6448853611946106
Epoch:62, Loss:0.6448808312416077
Epoch:63, Loss:0.6448764801025391
Epoch:64, Loss:0.6448723077774048
Epoch:65, Loss:0.6448683738708496
Epoch:66, Loss:0.6448644995689392
Epoch:67, Loss:0.6448607444763184
Epoch:68, Loss:0.6448571681976318
Epoch:69, Loss:0.6448536515235901
Epoch:70, Loss:0.6448502540588379
Epoch:71, Loss:0.6448469758033752
Epoch:72, Loss:0.6448436975479126
Epoch:73, Loss:0.6448405385017395
Epoch:74, Loss:0.6448374390602112
Epoch:75, Loss:0.6448343992233276
Epoch:76, Loss:0.6448314785957336
Epoch:77, Loss:0.6448284983634949
Epoch:78, Loss:0.6448256373405457
Epoch:79, Loss:0.6448228359222412
Epoch:80, Loss:0.6448200345039368
Epoch:81, Loss:0.6448173522949219
Epoch:82, Loss:0.6448145508766174
Epoch:83, Loss:0.6448119282722473
Epoch:84, Loss:0.6448091864585876
Epoch:85, Loss:0.6448065638542175
Epoch:86, Loss:0.6448039412498474
Epoch:87, Loss:0.6448012590408325
Epoch:88, Loss:0.644798755645752
Epoch:89, Loss:0.6447961926460266
Epoch:90, Loss:0.6447936296463013
Epoch:91, Loss:0.6447911262512207
Epoch:92, Loss:0.6447885632514954
Epoch:93, Loss:0.6447860598564148
Epoch:94, Loss:0.644783616065979
Epoch:95, Loss:0.6447811126708984
Epoch:96, Loss:0.6447786092758179
Epoch:97, Loss:0.6447760462760925
Epoch:98, Loss:0.6447736024856567
Epoch:99, Loss:0.644771158695221
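
Note how the loss plateaus around 0.645 after roughly 40 epochs. With sigmoid activations on every layer, gradients saturate easily, and full-batch SGD at lr=0.1 makes slow progress. One variation worth trying (my own suggestion, not part of the code above) is to use ReLU in the hidden layers and keep sigmoid only on the output layer, which BCELoss needs for probabilities in (0, 1); a minimal sketch:

class ReLUModel(torch.nn.Module):
    def __init__(self):
        super(ReLUModel, self).__init__()
        self.linear1 = torch.nn.Linear(8, 6)
        self.linear2 = torch.nn.Linear(6, 4)
        self.linear3 = torch.nn.Linear(4, 1)
        self.activate = torch.nn.ReLU()
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        x = self.activate(self.linear1(x))
        x = self.activate(self.linear2(x))
        x = self.sigmoid(self.linear3(x))    # output stays in (0, 1) for BCELoss
        return x

The loss, optimizer, and training loop can stay exactly the same.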


That about wraps up this article; if you spot any mistakes, corrections are welcome. If this article has helped you, that makes me happy too. How far a person can go depends on who they travel with.


References
  1. 《PyTorch深度学习实践》(PyTorch Deep Learning Practice) complete series - 07. Processing Multi-Dimensional Feature Input ↩︎
