
[PyTorch] Deploying a trained model to production: loading it directly with OpenCV's dnn module (C++ and the ONNX protocol)



(1) The model is trained on CIFAR-10 (10 classes).
Training with the source code above produces the parameter file saveTextOnlyParams.pth.
For background on onnx and onnxruntime, see:
[PyTorch] Deploying a trained model to production: using onnx and onnxruntime
The approach: export the trained PyTorch model via ONNX, then load it directly with OpenCV's dnn module.
Below is the source code that exports the trained PyTorch model through ONNX. To make the model easier to call, the original preprocessing and postprocessing steps have been moved inside the model itself:

import torch
from torch import nn
import onnx

class myCustomerNetWork(nn.Module):
    def __init__(self):
        super().__init__()
        # Three conv layers (3 -> 64 -> 128 -> 256 channels) followed by global average pooling:
        self.features=nn.Sequential(nn.Conv2d(3, 64, (3, 3)),nn.ReLU(),nn.Conv2d(64,128,(3,3)),
                                    nn.ReLU(),nn.Conv2d(128,256,(3,3)),nn.ReLU(),nn.AdaptiveAvgPool2d(1))

        self.classfired=nn.Sequential(nn.Flatten(),nn.Linear(256,80),nn.Dropout(),nn.Linear(80,10))

    def rgb2NetInput(self,input):
        '''Convert a raw (height, width, 3) image into the torch.Size([BATCH, 3, 32, 32]) layout the network expects'''
        inputX = torch.FloatTensor(input).cuda()  # note: .cuda() here ties inference to a GPU
        inputX = inputX.permute(2, 0, 1).contiguous()  # HWC -> CHW
        inputX = inputX.unsqueeze(0)                   # prepend the batch dimension
        return inputX
    def forward(self,x):
        x=self.rgb2NetInput(x)
        return self.classfired(self.features(x))
# After rgb2NetInput, the convolutional stack sees input of shape torch.Size([N, 3, 32, 32])

myNet=myCustomerNetWork()
pthfile = r'D:\flask_pytorch\saveTextOnlyParams.pth'
# With strict=False, parameters that match the state dict are loaded;
# any that are missing keep their default initialization.
myNet.load_state_dict(torch.load(pthfile),strict=False)
if torch.cuda.is_available():
    myNet=myNet.cuda()
myNet.eval()


if __name__ == '__main__':
    dummy_input = torch.rand(32, 32, 3)
    torch.onnx.export(myNet, dummy_input, r'./model_static.onnx',
                      input_names=['in'], output_names=['out'],verbose=True)
    # Verify that the exported model is well-formed:
    onnx_model = onnx.load(r"./model_static.onnx")
    try:
        onnx.checker.check_model(onnx_model)
    except Exception:
        print("Model incorrect")
    else:
        print("Model correct")
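The preprocessing that forward() folds into the exported graph is just an HWC→NCHW rearrangement. The same transform can be sketched standalone in plain NumPy (the function name rgb_to_net_input is illustrative, not from the original code):

```python
import numpy as np

def rgb_to_net_input(img):
    """Mirror the model's built-in preprocessing: a (H, W, 3) image
    becomes a (1, 3, H, W) float batch via transpose + batch dim."""
    x = img.astype(np.float32)
    x = np.transpose(x, (2, 0, 1))  # HWC -> CHW, same as permute(2, 0, 1)
    return np.expand_dims(x, 0)     # prepend batch dim -> NCHW

img = np.zeros((32, 32, 3), dtype=np.uint8)
print(rgb_to_net_input(img).shape)  # (1, 3, 32, 32)
```

This is exactly what the Cast/Transpose/Unsqueeze nodes at the top of the exported graph below perform.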

Test output:
The first lines list the input and the per-layer parameters; the remaining lines describe the graph structure.

graph(%in : Float(32, 32, 3, strides=[96, 3, 1], requires_grad=0, device=cpu),
      %features.0.weight : Float(64, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=1, device=cuda:0),
      %features.0.bias : Float(64, strides=[1], requires_grad=1, device=cuda:0),
      %features.2.weight : Float(128, 64, 3, 3, strides=[576, 9, 3, 1], requires_grad=1, device=cuda:0),
      %features.2.bias : Float(128, strides=[1], requires_grad=1, device=cuda:0),
      %features.4.weight : Float(256, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=1, device=cuda:0),
      %features.4.bias : Float(256, strides=[1], requires_grad=1, device=cuda:0),
      %classfired.1.weight : Float(80, 256, strides=[256, 1], requires_grad=1, device=cuda:0),
      %classfired.1.bias : Float(80, strides=[1], requires_grad=1, device=cuda:0),
      %classfired.3.weight : Float(10, 80, strides=[80, 1], requires_grad=1, device=cuda:0),
      %classfired.3.bias : Float(10, strides=[1], requires_grad=1, device=cuda:0)):
  %11 : Float(32, 32, 3, strides=[96, 3, 1], requires_grad=0, device=cuda:0) = onnx::Cast[to=1](%in) # D:\flask_pytorch\model.py:32:0
  %12 : Float(3, 32, 32, strides=[1024, 32, 1], requires_grad=0, device=cuda:0) = onnx::Transpose[perm=[2, 0, 1]](%11) # D:\flask_pytorch\model.py:33:0
  %input : Float(1, 3, 32, 32, strides=[3072, 1024, 32, 1], requires_grad=0, device=cuda:0) = onnx::Unsqueeze[axes=[0]](%12) # D:\flask_pytorch\model.py:34:0
  %input.3 : Float(1, 64, 30, 30, strides=[57600, 900, 30, 1], requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[0, 0, 0, 0], strides=[1, 1]](%input, %features.0.weight, %features.0.bias) # D:\anaconda\envs\mypytorch\lib\site-packages\torch\nn\modules\conv.py:443:0
  %input.7 : Float(1, 64, 30, 30, strides=[57600, 900, 30, 1], requires_grad=1, device=cuda:0) = onnx::Relu(%input.3) # D:\anaconda\envs\mypytorch\lib\site-packages\torch\nn\functional.py:1442:0
  %input.11 : Float(1, 128, 28, 28, strides=[100352, 784, 28, 1], requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[0, 0, 0, 0], strides=[1, 1]](%input.7, %features.2.weight, %features.2.bias) # D:\anaconda\envs\mypytorch\lib\site-packages\torch\nn\modules\conv.py:443:0
  %input.15 : Float(1, 128, 28, 28, strides=[100352, 784, 28, 1], requires_grad=1, device=cuda:0) = onnx::Relu(%input.11) # D:\anaconda\envs\mypytorch\lib\site-packages\torch\nn\functional.py:1442:0
  %input.19 : Float(1, 256, 26, 26, strides=[173056, 676, 26, 1], requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[0, 0, 0, 0], strides=[1, 1]](%input.15, %features.4.weight, %features.4.bias) # D:\anaconda\envs\mypytorch\lib\site-packages\torch\nn\modules\conv.py:443:0
  %input.23 : Float(1, 256, 26, 26, strides=[173056, 676, 26, 1], requires_grad=1, device=cuda:0) = onnx::Relu(%input.19) # D:\anaconda\envs\mypytorch\lib\site-packages\torch\nn\functional.py:1442:0
  %20 : Float(1, 256, 1, 1, strides=[256, 1, 1, 1], requires_grad=1, device=cuda:0) = onnx::GlobalAveragePool(%input.23) # D:\anaconda\envs\mypytorch\lib\site-packages\torch\nn\functional.py:1241:0
  %21 : Float(1, 256, strides=[256, 1], requires_grad=1, device=cuda:0) = onnx::Flatten[axis=1](%20) # D:\anaconda\envs\mypytorch\lib\site-packages\torch\nn\modules\flatten.py:45:0
  %input.27 : Float(1, 80, strides=[80, 1], requires_grad=1, device=cuda:0) = onnx::Gemm[alpha=1., beta=1., transB=1](%21, %classfired.1.weight, %classfired.1.bias) # D:\anaconda\envs\mypytorch\lib\site-packages\torch\nn\modules\linear.py:103:0
  %out : Float(1, 10, strides=[10, 1], requires_grad=1, device=cuda:0) = onnx::Gemm[alpha=1., beta=1., transB=1](%input.27, %classfired.3.weight, %classfired.3.bias) # D:\anaconda\envs\mypytorch\lib\site-packages\torch\nn\modules\linear.py:103:0
  return (%out)

Model correct

Install a visualization tool to inspect the generated model_static.onnx file:
pip install netron

netron -b model_static.onnx


First, run a quick test through onnxruntime's Python API. Install it with:
pip install onnxruntime
After installation, the test code is as follows:

import torch
import onnxruntime

# The values in the input dict passed to onnxruntime must be numpy arrays
dummy_input = torch.rand(32, 32, 3).numpy()
session = onnxruntime.InferenceSession(r'./model_static.onnx')
inputs = {'in': dummy_input}
# The first argument of run() is the list of output tensor names;
# the second is the dict of input values.
output = session.run(['out'], inputs)
print(output)

The output is:

[array([[-0.7123281 ,  0.00981204, -0.05517036,  0.42574304, -0.04109055,
        -2.0644906 ,  3.5208652 , -3.2603176 ,  2.1770291 ,  1.1719888 ]],
      dtype=float32)]
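The result is a 1×10 vector of raw logits. Mapping it to a CIFAR-10 label takes a softmax and an argmax; a minimal sketch (the class-name list assumes torchvision's standard CIFAR-10 ordering, and logits_to_label is an illustrative helper, not part of the original code):

```python
import numpy as np

CIFAR10_CLASSES = ['airplane', 'automobile', 'bird', 'cat', 'deer',
                   'dog', 'frog', 'horse', 'ship', 'truck']

def logits_to_label(logits):
    """Numerically stable softmax over the 10 logits, then pick the top class."""
    z = np.asarray(logits, dtype=np.float64)
    p = np.exp(z - z.max())
    p /= p.sum()
    return CIFAR10_CLASSES[int(p.argmax())], float(p.max())

# The logits printed by the onnxruntime run above:
logits = [-0.7123281, 0.00981204, -0.05517036, 0.42574304, -0.04109055,
          -2.0644906, 3.5208652, -3.2603176, 2.1770291, 1.1719888]
print(logits_to_label(logits)[0])  # frog - index 6 has the largest logit
```

(The input was random noise, so the predicted class itself is meaningless; the point is only the shape of the postprocessing.)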

Loading the model directly with OpenCV's dnn module:

To be continued.
