
Neural network LSTM input shape from a dataframe



Below is an example of setting up time-series data to train an LSTM. The model output is nonsense, since it is only set up to demonstrate how to build the model.

import pandas as pd
import numpy as np

# Get some time series data
df = pd.read_csv("https://raw.githubusercontent.com/plotly/datasets/master/timeseries.csv")
df.head()

The time series dataframe:

         Date      A       B       C      D      E      F      G
0  2008-03-18  24.68  164.93  114.73  26.27  19.21  28.87  63.44
1  2008-03-19  24.18  164.89  114.75  26.22  19.07  27.76  59.98
2  2008-03-20  23.99  164.63  115.04  25.78  19.01  27.04  59.61
3  2008-03-25  24.14  163.92  114.85  27.41  19.61  27.84  59.41
4  2008-03-26  24.44  163.45  114.84  26.86  19.53  28.02  60.09

You can build your inputs into vectors and then use the pandas .cumsum() function to build the sequence for the time series:

# Define which columns are inputs and which is the output
# (A-F in, G out -- consistent with the (11, 11, 6) input shape shown further down)
input_cols = ['A', 'B', 'C', 'D', 'E', 'F']
output_cols = ['G']

# Put your inputs into a single list
df['single_input_vector'] = df[input_cols].apply(tuple, axis=1).apply(list)
# Double-encapsulate list so that you can sum it in the next step and keep time steps as separate elements
df['single_input_vector'] = df.single_input_vector.apply(lambda x: [list(x)])
# Use .cumsum() to include previous row vectors in the current row list of vectors
df['cumulative_input_vectors'] = df.single_input_vector.cumsum()
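As a self-contained illustration (toy data, not the dataframe above) of why the double-encapsulation matters: .cumsum() on a Series of lists concatenates them, so each row's "running sum" is the growing sequence of time-step vectors.

```python
import pandas as pd

# Each element is a list containing one time-step vector
s = pd.Series([[[1, 2]], [[3, 4]], [[5, 6]]])

# cumsum() uses list concatenation, so row i holds time steps 0..i
print(s.cumsum().tolist())
# -> [[[1, 2]], [[1, 2], [3, 4]], [[1, 2], [3, 4], [5, 6]]]
```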

The output can be set up in a similar way, but it will be a single vector instead of a sequence:

# If your output is multi-dimensional, you need to capture those dimensions in one object
# If your output is a single dimension, this step may be unnecessary
df['output_vector'] = df[output_cols].apply(tuple, axis=1).apply(list)

The input sequences have to be the same length to run them through the model, so you need to pad them to the maximum length of the cumulative vectors:

# Pad your sequences so they are the same length
from keras.preprocessing.sequence import pad_sequences

max_sequence_length = df.cumulative_input_vectors.apply(len).max()
# Save it as a list
padded_sequences = pad_sequences(df.cumulative_input_vectors.tolist(), max_sequence_length).tolist()
df['padded_input_vectors'] = pd.Series(padded_sequences).apply(np.asarray)
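If keras isn't handy, here is a rough numpy-only sketch of what the padding does (the helper name is hypothetical; by default pad_sequences pre-pads, i.e. prepends zero vectors up to the maximum length):

```python
import numpy as np

def pad_front(sequences, max_len, width):
    """Prepend zero vectors so every sequence has max_len time steps."""
    padded = np.zeros((len(sequences), max_len, width))
    for i, seq in enumerate(sequences):
        # Place the real time steps at the end; the front stays zero
        padded[i, max_len - len(seq):] = seq
    return padded

seqs = [[[1.0, 2.0]], [[1.0, 2.0], [3.0, 4.0]]]
print(pad_front(seqs, 2, 2).shape)  # (2, 2, 2)
```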

The training data can be pulled from the dataframe and put into numpy arrays. Note that the input data that comes out of the dataframe does not make a 3D array; it makes an array of arrays, which is not the same thing.

You can use hstack and reshape to build a 3D input array.

# Extract your training data
X_train_init = np.asarray(df.padded_input_vectors)
# Use hstack and reshape to make the inputs a 3D array
X_train = np.hstack(X_train_init).reshape(len(df), max_sequence_length, len(input_cols))
y_train = np.hstack(np.asarray(df.output_vector)).reshape(len(df), len(output_cols))

To prove the point:

>>> print(X_train_init.shape)
(11,)
>>> print(X_train.shape)
(11, 11, 6)
>>> print(X_train == X_train_init)
False
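The same effect can be reproduced with a small self-contained example (toy shapes, not the dataframe above): a column of equal-shape arrays comes out as a 1D object array until it is flattened with hstack and reshaped.

```python
import numpy as np
import pandas as pd

# Two rows, each holding a (1, 2) array
toy = pd.Series([np.array([[1.0, 2.0]]), np.array([[3.0, 4.0]])])

arr = np.asarray(toy)
print(arr.shape)   # (2,) -- an array of arrays, not 3D

# hstack flattens the object array, reshape restores (samples, steps, features)
cube = np.hstack(arr).reshape(2, 1, 2)
print(cube.shape)  # (2, 1, 2) -- a true 3D array
```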

Once you have the training data, you can define the dimensions of your input layer and output layer.

# Get your input dimensions
# Input length is the length for one input sequence (i.e. the number of rows for your sample)
# Input dim is the number of dimensions in one input vector (i.e. number of input columns)
input_length = X_train.shape[1]
input_dim = X_train.shape[2]

# Output dimensions is the shape of a single output vector
# In this case it's just 1, but it could be more
output_dim = len(y_train[0])

Build the model:

from keras.models import Model, Sequential
from keras.layers import LSTM, Dense

# Build the model
model = Sequential()

# I arbitrarily picked the output dimensions as 4
# (input_dim/input_length are the old Keras 1 arguments;
#  newer Keras versions use input_shape=(input_length, input_dim) instead)
model.add(LSTM(4, input_dim=input_dim, input_length=input_length))
# The max output value is > 1 so relu is used as final activation.
model.add(Dense(output_dim, activation='relu'))

model.compile(loss='mean_squared_error',
              optimizer='sgd',
              metrics=['accuracy'])

Finally, you can train the model and save the training log as history:

# Set batch_size to 7 to show that it doesn't have to be a factor or multiple of your sample size
# (nb_epoch is the Keras 1 name; newer Keras versions use epochs=)
history = model.fit(X_train, y_train,
                    batch_size=7, nb_epoch=3,
                    verbose=1)

Output:

Epoch 1/3
11/11 [==============================] - 0s - loss: 3498.5756 - acc: 0.0000e+00
Epoch 2/3
11/11 [==============================] - 0s - loss: 3498.5755 - acc: 0.0000e+00
Epoch 3/3
11/11 [==============================] - 0s - loss: 3498.5757 - acc: 0.0000e+00

That's it. Use model.predict(X), where X is in the same format as X_train (other than the number of samples), to make predictions from the model.


