Take cross-entropy as an example:
Keras loss APIs are typically used by creating an instance, e.g. tf.losses.SparseCategoricalCrossentropy(from_logits=True); when no constructor arguments are needed, the function form tf.losses.sparse_categorical_crossentropy can be used directly. Either way, the loss is then invoked as loss(y_true, y_pred).
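A hedged sketch of the two call styles (the label and probability values below are made up, and the function form here keeps the default from_logits=False):

```python
import tensorflow as tf

y_true = [1, 2]                                # integer class labels
y_pred = [[0.05, 0.95, 0.0], [0.1, 0.8, 0.1]]  # probabilities (not logits)

# Class form: configure once, then call the instance like a function.
loss_obj = tf.keras.losses.SparseCategoricalCrossentropy()
scalar_loss = loss_obj(y_true, y_pred)         # reduced to a single scalar

# Function form: no configuration step; returns one loss per sample.
per_sample = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)

print(float(scalar_loss), per_sample.shape)
```

Note the difference: the class form applies a reduction and yields a scalar, while the function form returns per-sample values. In either form, from_logits=True must be set if the model outputs raw logits.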
A subclassed loss function must override two methods, __init__() and call():
- __init__(self): accepts any parameters to pass during the call of your loss function (e.g. a regularization factor).
- call(self, y_true, y_pred): uses the targets (y_true) and the model predictions (y_pred) to compute the model's loss.
class CustomMSE(keras.losses.Loss):
    def __init__(self, regularization_factor=0.1, name="custom_mse"):
        super().__init__(name=name)
        self.regularization_factor = regularization_factor

    def call(self, y_true, y_pred):
        mse = tf.math.reduce_mean(tf.square(y_true - y_pred))
        reg = tf.math.reduce_mean(tf.square(0.5 - y_pred))
        return mse + reg * self.regularization_factor
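A quick sanity check of CustomMSE on hand-picked values (the class definition is repeated so the snippet runs standalone; the inputs are made up):

```python
import tensorflow as tf
from tensorflow import keras

class CustomMSE(keras.losses.Loss):
    def __init__(self, regularization_factor=0.1, name="custom_mse"):
        super().__init__(name=name)
        self.regularization_factor = regularization_factor

    def call(self, y_true, y_pred):
        mse = tf.math.reduce_mean(tf.square(y_true - y_pred))
        reg = tf.math.reduce_mean(tf.square(0.5 - y_pred))
        return mse + reg * self.regularization_factor

loss_fn = CustomMSE()
y_true = tf.constant([1.0, 2.0])
y_pred = tf.constant([1.5, 2.5])
# mse = mean([0.25, 0.25]) = 0.25
# reg = mean([(0.5-1.5)^2, (0.5-2.5)^2]) = mean([1, 4]) = 2.5
# total = 0.25 + 0.1 * 2.5 = 0.5
print(float(loss_fn(y_true, y_pred)))  # → 0.5
```

An instance can then be passed to compile, e.g. model.compile(optimizer="adam", loss=CustomMSE()).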
Metrics
A custom metric, subclassed from keras.metrics.Metric, must implement the following four methods:
- __init__(self), in which you will create state variables for your metric.
- update_state(self, y_true, y_pred, sample_weight=None), which uses the targets y_true and the model predictions y_pred to update the state variables.
- result(self), which uses the state variables to compute the final results.
- reset_states(self), which reinitializes the state of the metric.
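As a sketch of the four methods, here is a hypothetical BinaryTruePositives metric (the name and counting logic are illustrative, not a built-in):

```python
import tensorflow as tf
from tensorflow import keras

class BinaryTruePositives(keras.metrics.Metric):
    def __init__(self, name="binary_true_positives", **kwargs):
        super().__init__(name=name, **kwargs)
        # State variable: running count of true positives.
        self.true_positives = self.add_weight(name="tp", initializer="zeros")

    def update_state(self, y_true, y_pred, sample_weight=None):
        y_true = tf.cast(y_true, tf.bool)
        y_pred = tf.cast(y_pred, tf.bool)
        values = tf.cast(tf.logical_and(y_true, y_pred), self.dtype)
        if sample_weight is not None:
            values *= tf.cast(sample_weight, self.dtype)
        self.true_positives.assign_add(tf.reduce_sum(values))

    def result(self):
        return self.true_positives

    def reset_states(self):  # newer Keras versions name this reset_state
        self.true_positives.assign(0.0)

m = BinaryTruePositives()
m.update_state([0, 1, 1, 1], [1, 0, 1, 1])
print(float(m.result()))  # → 2.0 (two positions where both are 1)
```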
add_loss and add_metric
You can call self.add_loss(loss_value) from inside the call() method of a custom layer. Losses added this way are added to the "main" loss during training (the one passed to compile()). For example:
class ActivityRegularizationLayer(layers.Layer):
    def call(self, inputs):
        self.add_loss(tf.reduce_sum(inputs) * 0.1)
        return inputs  # Pass-through layer.
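Losses collected this way can be inspected on the layer after a forward pass; a minimal standalone sketch (repeating the layer definition, with a made-up input):

```python
import tensorflow as tf
from tensorflow.keras import layers

class ActivityRegularizationLayer(layers.Layer):
    def call(self, inputs):
        self.add_loss(tf.reduce_sum(inputs) * 0.1)
        return inputs  # Pass-through layer.

layer = ActivityRegularizationLayer()
out = layer(tf.ones((2, 2)))
# sum of a 2x2 tensor of ones is 4, times 0.1 gives roughly 0.4
print([float(l) for l in layer.losses])
```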
class MetricLoggingLayer(layers.Layer):
    def call(self, inputs):
        # The `aggregation` argument defines how to aggregate the
        # per-batch values over each epoch: here we simply average them.
        self.add_metric(
            keras.backend.std(inputs), name="std_of_activation", aggregation="mean"
        )
        return inputs  # Pass-through layer.
In the Functional API, you can also call model.add_loss(loss_tensor) or model.add_metric(metric_tensor, name, aggregation) on the model itself.



