
Make A Custom Loss Function In Keras In Detail

I'm trying to make a custom loss function in Keras. The dimension of the output is 80 and the batch size is 5000, so I built the loss function below. But this d

Solution 1:

Loss functions should avoid any operations that don't come from the Keras backend. The values are tensors, and you must keep them as tensors.

And you don't need to reshape things unless you actually want them to behave in a specific way.

If you have shapes (5000, 80) and (5000, 1), you can perform operations between them directly, without needing K.repeat_elements() (the equivalent of numpy.repeat): broadcasting stretches the (5000, 1) tensor across the other dimension for you.
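As a quick sanity check of that broadcasting claim (a NumPy sketch; Keras backend tensors follow the same broadcasting rules, and the shapes are the ones from the question):

```python
import numpy as np

# 5000 samples, 80 outputs each, as in the question.
pred = np.random.rand(5000, 80)

# Per-sample L2 norm; keepdims=True keeps the shape (5000, 1).
norms = np.sqrt(np.square(pred).sum(axis=1, keepdims=True))

# Broadcasting stretches the (5000, 1) norms across the 80 columns,
# so no repeat/tile is needed.
normalized = pred / norms
print(normalized.shape)  # (5000, 80)
```

After the division, every row of `normalized` has unit L2 norm, which is exactly the per-sample normalization the loss below performs.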

So, supposing that 5000 is the batch size (number of samples) and 80 is the only actual dimension belonging to a sample:

from keras import backend as K

def normalize_loss(yTrue, yPred):
    # keepdims=True keeps the shape like (5000, 1)
    # this is not summing the entire batch, but only a single sample, is that correct?
    nb_divide = K.sqrt(K.sum(K.square(yPred), axis=1, keepdims=True))

    predicted = yPred / nb_divide
    return K.sum(K.square(yTrue - predicted))
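If you want to verify the arithmetic without a Keras session, the same computation can be written with NumPy (a sketch; the function name and test values are mine):

```python
import numpy as np

def normalize_loss_np(y_true, y_pred):
    # Per-sample L2 norm, kept as (batch, 1) so it broadcasts.
    nb_divide = np.sqrt(np.square(y_pred).sum(axis=1, keepdims=True))
    predicted = y_pred / nb_divide
    return np.square(y_true - predicted).sum()

y_true = np.eye(4)        # 4 unit-norm targets
y_pred = 3.0 * np.eye(4)  # same directions, wrong scale
print(normalize_loss_np(y_true, y_pred))  # 0.0
```

The loss is zero here because normalizing each predicted row removes the factor of 3, which is the behavior the loss is designed to have: it only penalizes direction, not magnitude.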

Some observations:

  • (I'm not a loss function expert here) You're normalizing only the predicted values, but not the true ones. Wouldn't that create big differences between the two and result in a misleading loss? (Again, I'm not the expert here)

  • Usually people use K.mean() at the end of a loss function, but I see you used K.sum(). This is not a problem and doesn't prevent training from working. But with K.mean() you can visualize the same loss on data of different sizes and compare the values size-independently, since the result no longer scales with the batch.
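The point about sum versus mean can be illustrated with a small NumPy sketch (the function names and shapes are mine):

```python
import numpy as np

def loss_sum(y_true, y_pred):
    return np.square(y_true - y_pred).sum()

def loss_mean(y_true, y_pred):
    return np.square(y_true - y_pred).mean()

small = (np.zeros((10, 80)), np.ones((10, 80)))    # batch of 10
large = (np.zeros((100, 80)), np.ones((100, 80)))  # batch of 100

# The summed loss grows with batch size; the mean stays comparable.
print(loss_sum(*small), loss_sum(*large))    # 800.0 8000.0
print(loss_mean(*small), loss_mean(*large))  # 1.0 1.0
```

With K.sum() the reported value depends on how many samples went into the batch, so two runs with different batch sizes aren't directly comparable; K.mean() removes that dependence.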
