You can extract the important lines from the load_model and save_model functions.
To save the optimizer state, look at these lines in save_model:
# Save optimizer weights.
symbolic_weights = getattr(model.optimizer, 'weights')
if symbolic_weights:
    optimizer_weights_group = f.create_group('optimizer_weights')
    weight_values = K.batch_get_value(symbolic_weights)
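The excerpt stops just before the values are written out. As a rough sketch, writing them into that HDF5 group could look like the following (the dataset names are illustrative only; the real save_model derives them from the symbolic weight names):

# Illustrative only: store each optimizer weight value as its own dataset.
for i, val in enumerate(weight_values):
    optimizer_weights_group.create_dataset('optimizer_weight_%d' % i, data=val)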
To load the optimizer state, see load_model:
# Set optimizer weights.
if 'optimizer_weights' in f:
    # Build train function (to get weight updates).
    if isinstance(model, Sequential):
        model.model._make_train_function()
    else:
        model._make_train_function()
    # ...
    try:
        model.optimizer.set_weights(optimizer_weight_values)
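The # ... above elides how the saved values are read back before set_weights is called. A sketch of that read step, mirroring the illustrative layout assumed in the save sketch above:

# Illustrative only: read the optimizer weight values back out of the group.
optimizer_weights_group = f['optimizer_weights']
optimizer_weight_values = [
    optimizer_weights_group['optimizer_weight_%d' % i][()]
    for i in range(len(optimizer_weights_group))
]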
Combining the lines above, here is an example.

First, fit the model for 5 epochs.
import pickle
import numpy as np
from keras import backend as K
from keras.layers import Input, Dense
from keras.models import Model

X, y = np.random.rand(100, 50), np.random.randint(2, size=100)
x = Input((50,))
out = Dense(1, activation='sigmoid')(x)
model = Model(x, out)
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(X, y, epochs=5)

Epoch 1/5
100/100 [==============================] - 0s 4ms/step - loss: 0.7716
Epoch 2/5
100/100 [==============================] - 0s 64us/step - loss: 0.7678
Epoch 3/5
100/100 [==============================] - 0s 82us/step - loss: 0.7665
Epoch 4/5
100/100 [==============================] - 0s 56us/step - loss: 0.7647
Epoch 5/5
100/100 [==============================] - 0s 76us/step - loss: 0.7638

Now save the weights and the optimizer state.
model.save_weights('weights.h5')
symbolic_weights = getattr(model.optimizer, 'weights')
weight_values = K.batch_get_value(symbolic_weights)
with open('optimizer.pkl', 'wb') as f:
    pickle.dump(weight_values, f)

Rebuild the model in another python session and load the weights.
# New session: repeat the imports from above before rebuilding the model.
x = Input((50,))
out = Dense(1, activation='sigmoid')(x)
model = Model(x, out)
model.compile(optimizer='adam', loss='binary_crossentropy')
model.load_weights('weights.h5')
# Build the train function first so the optimizer weights exist before they are set.
model._make_train_function()
with open('optimizer.pkl', 'rb') as f:
    weight_values = pickle.load(f)
model.optimizer.set_weights(weight_values)

Continue training the model.
model.fit(X, y, epochs=5)
Epoch 1/5
100/100 [==============================] - 0s 674us/step - loss: 0.7629
Epoch 2/5
100/100 [==============================] - 0s 49us/step - loss: 0.7617
Epoch 3/5
100/100 [==============================] - 0s 49us/step - loss: 0.7611
Epoch 4/5
100/100 [==============================] - 0s 55us/step - loss: 0.7601
Epoch 5/5
100/100 [==============================] - 0s 49us/step - loss: 0.7594
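If you need this more than once, the pickle-based steps above can be wrapped into a pair of small helpers (the names are hypothetical; they assume the same standalone-Keras / TF 1.x API used above, where the optimizer exposes symbolic weights and the model has _make_train_function):

def save_optimizer_state(model, path):
    # Pull the current optimizer weight values out of the backend and pickle them.
    symbolic_weights = getattr(model.optimizer, 'weights')
    weight_values = K.batch_get_value(symbolic_weights)
    with open(path, 'wb') as f:
        pickle.dump(weight_values, f)

def load_optimizer_state(model, path):
    # The optimizer weights only exist after the train function is built.
    model._make_train_function()
    with open(path, 'rb') as f:
        weight_values = pickle.load(f)
    model.optimizer.set_weights(weight_values)

For example, call save_optimizer_state(model, 'optimizer.pkl') right after model.save_weights, and load_optimizer_state(model, 'optimizer.pkl') right after load_weights in the new session.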



