Take a look at what `yield` does in Python and at the concept of generators. You don't have to load all of the data up front. You should make your `batch_size` small enough that you don't run into memory errors. Your generator could look like this:
    def generator(fileobj, labels, batch_size, memory_one_pic=1024):
        # batch_size has no default, so it must come before memory_one_pic
        start = 0
        end = start + batch_size
        while True:
            # read exactly one batch worth of bytes from the file
            X_batch = fileobj.read(memory_one_pic * batch_size)
            y_batch = labels[start:end]
            start += batch_size
            end += batch_size
            if not X_batch:
                break
            if start >= amount_of_datasets:  # total number of samples
                start = 0
                end = batch_size
            yield (X_batch, y_batch)
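To see how `yield` hands out batches lazily, here is a minimal self-contained sketch of the same pattern. It uses an in-memory `io.BytesIO` and dummy labels instead of a real file, shrinks `memory_one_pic` to 4 bytes, and drops the wraparound on `amount_of_datasets` so it runs as-is; these are assumptions for the demo, not part of the answer's setup:

```python
import io

def generator(fileobj, labels, batch_size, memory_one_pic=4):
    # Same structure as above: read one batch worth of bytes per step,
    # pair it with the matching slice of labels, and yield both.
    start = 0
    end = start + batch_size
    while True:
        X_batch = fileobj.read(memory_one_pic * batch_size)
        y_batch = labels[start:end]
        start += batch_size
        end += batch_size
        if not X_batch:
            break
        yield (X_batch, y_batch)

# Dummy data: 6 "pictures" of 4 bytes each, one label per picture.
data = io.BytesIO(bytes(range(24)))
labels = [0, 1, 0, 1, 0, 1]

batches = list(generator(data, labels, batch_size=2))
print(len(batches))   # 3 batches of 2 pictures each
print(batches[0][1])  # labels of the first batch: [0, 1]
```

Nothing is read from `data` until the generator is iterated, which is exactly why this approach keeps memory usage bounded by one batch.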
Then, once your architecture is ready:

    train_generator = generator(open('traindata.csv', 'rb'), labels, batch_size)
    train_steps = amount_of_datasets // batch_size + 1

    model.fit_generator(generator=train_generator,
                        steps_per_epoch=train_steps,
                        epochs=epochs)

You should also read about `batch_normalization`, which essentially helps the network learn faster and more accurately.
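For intuition, this is the core of what a batch normalization layer computes at training time: each feature is rescaled to zero mean and unit variance over the batch. A plain-Python sketch, ignoring the learnable gamma/beta parameters and the running statistics a real framework layer (e.g. Keras's `BatchNormalization`) maintains:

```python
import math

def batch_norm(xs, eps=1e-5):
    # Normalize a batch of scalar activations to zero mean, unit variance.
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    # eps guards against division by zero when the batch has no spread
    return [(x - mean) / math.sqrt(var + eps) for x in xs]

out = batch_norm([1.0, 2.0, 3.0, 4.0])
# out now has mean ~0 and variance ~1
```

Keeping activations in this well-behaved range is what lets the later layers train with larger learning rates, which is where the speed and accuracy gains come from.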



