You can fetch the value of cross_entropy by adding it to the list of arguments passed to sess.run(...). For example, your for-loop could be rewritten as follows:
    # Define the loss once, before the loop, so that new graph nodes
    # are not added on every iteration.
    cross_entropy = -tf.reduce_sum(y_ * tf.log(y))

    for i in range(100):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        _, loss_val = sess.run([train_step, cross_entropy],
                               feed_dict={x: batch_xs, y_: batch_ys})
        print 'loss = %s' % loss_val

The same approach can be used to print the current value of a variable. Say that, in addition to the value of cross_entropy, you also wanted to print the value of a tf.Variable called W. You could do the following:
    cross_entropy = -tf.reduce_sum(y_ * tf.log(y))

    for i in range(100):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        _, loss_val, W_val = sess.run([train_step, cross_entropy, W],
                                      feed_dict={x: batch_xs, y_: batch_ys})
        print 'loss = %s' % loss_val
        print 'W = %s' % W_val
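As a sanity check on the loss formula used above, -tf.reduce_sum(y_ * tf.log(y)) is just the cross-entropy between a one-hot label and a softmax output. A minimal plain-Python sketch, using hypothetical values for the label and prediction:

```python
import math

y_true = [0.0, 0.0, 1.0]   # hypothetical one-hot label
y_pred = [0.1, 0.2, 0.7]   # hypothetical softmax output

# cross-entropy: -sum(y_ * log(y)); only the true class contributes here
loss = -sum(t * math.log(p) for t, p in zip(y_true, y_pred))
print('loss = %s' % loss)  # -log(0.7), about 0.3567
```

When the prediction assigns higher probability to the true class, the loss shrinks toward zero, which is what you should see loss_val do over the training loop.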


