- 1. ShuffleNetV2 network structure explained
- 2. Rebuilding the ShuffleNetV2 model (for easier training)
- (1) For the dataset and the full training code, see this article of mine
- 3. Training results
- 4. Image testing
1. ShuffleNetV2 network structure explained
https://blog.csdn.net/Keep_Trying_Go/article/details/124774129
2. Rebuilding the ShuffleNetV2 model (for easier training)
(1) For the dataset and the full training code, see this article of mine:
https://mydreamambitious.blog.csdn.net/article/details/123966676
Note: you only need to swap the network in that code for the ShuffleNetV2 structure below (if your network was written in an object-oriented style, some adjustments are needed when training), and then rename a few identifiers in the code to match.
The network structure to drop in is as follows:
```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Model

# Channel shuffle: interleave two channel groups, then split into two halves
def Shuffle_Channels(inputs):
    # Get the batch size, height, width and channel count
    batch, h, w, c = inputs.shape
    # Separate the channels into 2 groups: (batch, h, w, 2, c//2)
    x = tf.reshape(inputs, [-1, h, w, 2, c // 2])
    # Swap the group axis and the sub-channel axis to interleave the groups
    x = tf.transpose(x, [0, 1, 2, 4, 3])
    # Flatten back to (batch, h, w, c); the channels are now shuffled
    x = tf.reshape(x, [-1, h, w, c])
    # Split the shuffled channels into two equal halves
    return x[..., :c // 2], x[..., c // 2:]
```
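The reshape–transpose–reshape trick is easiest to see on a toy example. The sketch below reproduces the same axis manipulation with NumPy purely for illustration (the model itself uses TensorFlow ops):

```python
import numpy as np

# Toy tensor: 1 sample, 1x1 spatial, 6 channels labelled 0..5
x = np.arange(6).reshape(1, 1, 1, 6)
b, h, w, c = x.shape

# reshape -> transpose -> reshape, mirroring Shuffle_Channels
g = x.reshape(-1, h, w, 2, c // 2)   # (b, h, w, groups=2, c//2)
g = g.transpose(0, 1, 2, 4, 3)       # swap group / sub-channel axes
shuffled = g.reshape(-1, h, w, c)    # back to (b, h, w, c)

print(shuffled[0, 0, 0])             # [0 3 1 4 2 5]

# Split the interleaved channels into the two branch inputs
left, right = shuffled[..., :c // 2], shuffled[..., c // 2:]
print(left[0, 0, 0], right[0, 0, 0])  # [0 3 1] [4 2 5]
```

Channels from the two original groups (0–2 and 3–5) end up alternating, so information is exchanged between the branches on every unit.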
```python
# Basic (stride-1) ShuffleNetV2 unit: shuffle, process one half, concatenate
def ShuffleNetV2UnitC(inputs, inp, oup):
    # Shuffle the channels and split them into two branches
    x0, x1 = Shuffle_Channels(inputs)
    inp = inp // 2
    output_channel = oup - inp
    # Right branch: 1x1 conv -> 3x3 depthwise conv -> 1x1 conv
    x = layers.Conv2D(oup // 2, kernel_size=[1, 1], strides=[1, 1], padding='same')(x1)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    x = layers.DepthwiseConv2D(kernel_size=[3, 3], strides=[1, 1], padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.Conv2D(output_channel, kernel_size=[1, 1], strides=[1, 1], padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    # Left branch (x0) passes through unchanged; concatenate along channels
    x_out = tf.concat([x0, x], axis=3)
    return x_out
```
```python
# Downsampling (stride-2) ShuffleNetV2 unit: both branches process the input
def ShuffleNetV2UnitD(inputs, inp, oup):
    output_channel = oup - inp
    # Right branch: 1x1 conv -> stride-2 3x3 depthwise conv -> 1x1 conv
    x = layers.Conv2D(oup // 2, kernel_size=[1, 1], strides=[1, 1], padding='same')(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    x = layers.DepthwiseConv2D(kernel_size=[3, 3], strides=[2, 2], padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.Conv2D(output_channel, kernel_size=[1, 1], strides=[1, 1], padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    # Left branch: stride-2 3x3 depthwise conv -> 1x1 conv
    x0 = layers.DepthwiseConv2D(kernel_size=[3, 3], strides=[2, 2], padding='same')(inputs)
    x0 = layers.BatchNormalization()(x0)
    x0 = layers.Conv2D(inp, kernel_size=[1, 1], strides=[1, 1], padding='same')(x0)
    x0 = layers.BatchNormalization()(x0)
    x0 = layers.Activation('relu')(x0)
    # Concatenate the branches along channels: inp + (oup - inp) = oup
    x_out = tf.concat([x0, x], axis=3)
    return x_out
```
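Because every downsampling step uses stride 2 with 'same' padding, each one halves the spatial size (out = ceil(in / 2)). A quick back-of-the-envelope check, assuming a 224×224 input and the five stride-2 steps in this network (stem conv, max pool, and one UnitD per stage):

```python
import math

# 'same' padding with stride s gives out = ceil(in / s)
def out_size(n, stride=2):
    return math.ceil(n / stride)

h = 224
# stem conv (s=2), max pool (s=2), then one stride-2 UnitD per stage (x3)
for _ in range(5):
    h = out_size(h)
print(h)  # 7: the 7x7 feature map fed into global average pooling
```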
```python
def ShuffleNetV2(inputFilter, numlayers, StageFilter, input_shape=(224, 224, 3)):
    inputs = keras.Input(input_shape)
    # Stem: stride-2 3x3 conv followed by stride-2 max pooling
    x = layers.Conv2D(inputFilter, kernel_size=[3, 3], strides=[2, 2], padding='same')(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    x = layers.MaxPool2D(pool_size=[3, 3], strides=[2, 2], padding='same')(x)

    # One stage: a downsampling unit followed by numlayers basic units
    def StageX(inputs, numlayers, inputFilter, outFilter):
        x = ShuffleNetV2UnitD(inputs, inp=inputFilter, oup=outFilter)
        for i in range(numlayers):
            # After the downsampling unit the channel count is outFilter
            x = ShuffleNetV2UnitC(x, inp=outFilter, oup=outFilter)
        return x

    x = StageX(x, numlayers[0], inputFilter, StageFilter[0])
    x = StageX(x, numlayers[1], StageFilter[0], StageFilter[1])
    x = StageX(x, numlayers[2], StageFilter[1], StageFilter[2])
    # Head: 1x1 conv, global average pooling, dropout, 2-way softmax classifier
    x = layers.Conv2D(1024, kernel_size=[1, 1], strides=[1, 1], padding='same')(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.5)(x)
    x = layers.Dense(2)(x)
    x = layers.Activation('softmax')(x)
    model = Model(inputs=inputs, outputs=x)
    return model

model_shufflenetv2 = ShuffleNetV2(inputFilter=24, numlayers=[3, 7, 3], StageFilter=[116, 232, 464],
                                  input_shape=(224, 224, 3))
# model_shufflenetv2.summary()
```
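The channel progression implied by `inputFilter=24` and `StageFilter=[116, 232, 464]` can be sanity-checked without TensorFlow. The sketch below is a pure-Python mirror of the bookkeeping in the units above, not part of the model itself:

```python
# UnitD concatenates a left branch (inp channels) with a right branch
# (oup - inp channels); each UnitC splits in half and concatenates back.
def stage_channels(inp, oup, repeats):
    c = inp + (oup - inp)               # UnitD output: exactly oup
    for _ in range(repeats):
        c = c // 2 + (c - c // 2)       # UnitC: split + concat, unchanged
    return c

c = 24                                   # stem output (inputFilter)
for oup, n in zip([116, 232, 464], [3, 7, 3]):
    c = stage_channels(c, oup, n)
    print(c)                             # 116, then 232, then 464
```

Each stage ends with exactly the width listed in `StageFilter`, which is why the next stage can be fed that value as its `inputFilter`.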
3. Training results
Since the dataset is small (only about 1,000 images, covering just two classes: cat and dog), the final training results are not great and some hyperparameter tuning is still needed. Readers can collect more images and train on their own data as required.
4. Image testing
Simply replace the weight file loaded in the Flask app with the one you trained yourself (the Flask code is included in the files downloaded from GitHub).
https://github.com/KeepTryingTo/-.git