Loss functions
The loss functions written below are provided by default by Dannjs; see how to add more.
These functions are represented below with yhat being the Dannjs model's predictions and y being the target values. The value n represents the length of the model's output array.
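As a rough sketch of how these quantities combine (plain JavaScript, not Dannjs's internal code), most of the losses below average a per-output term over the n outputs:

// Sketch only, not Dannjs source: average a per-output term over
// the n outputs, with yhat the predictions and y the target values.
function meanLoss(yhat, y, perOutput) {
  let sum = 0;
  const n = yhat.length; // n: length of the model's output array
  for (let i = 0; i < n; i++) {
    sum += perOutput(yhat[i], y[i]);
  }
  return sum / n;
}

// Example: mean absolute error as one such per-output term.
const mae = (yhat, y) => meanLoss(yhat, y, (p, t) => Math.abs(p - t));
console.log(mae([0.25, 0.75], [0, 1])); // prints 0.25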
Binary Cross Entropy Loss. This function is common in machine learning, especially for classification tasks.
Definition:
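Assuming Dannjs follows the standard binary cross entropy formulation, in the notation above this reads:

loss = -\frac{1}{n}\sum_{i=1}^{n}\left[y_i\log(\hat{y}_i) + (1 - y_i)\log(1 - \hat{y}_i)\right]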
Mean Squared Error. This is one of the most commonly used loss functions in deep learning. This function determines a loss value by averaging the square of the difference between the predicted and desired outputs. It is also the default loss function for a Dannjs model.
Definition:
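In the notation above, the usual mean squared error formula is:

loss = \frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2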
Mean Cubed Error. This is an experimental function. Cubing a number can output a negative value, which explains the absolute value |x| in the definition.
Definition:
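Assuming the absolute value noted above wraps the cubed difference, this would be:

loss = \frac{1}{n}\sum_{i=1}^{n}\left|\left(\hat{y}_i - y_i\right)^3\right|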
Root Mean Squared Error. This function takes the square root of an mse output.
Definition:
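Following the mse formula above:

loss = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2}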
Mean Absolute Error. This function determines the loss value by averaging the absolute difference between the predicted and desired outputs.
Definition:
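In the notation above, the standard mean absolute error is:

loss = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_i - y_i\right|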
Mean Bias Error. This function determines a loss value by averaging the raw difference between the predicted and desired outputs. The output of this function can be negative, which makes it less preferable than the others.
Definition:
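Taking the raw difference as predicted minus desired (the sign convention here is an assumption), this is:

loss = \frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)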
Log Cosh Loss. This function determines a loss value by averaging the logarithm of the hyperbolic cosine of the difference between the predicted and desired outputs.
Definition:
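Assuming the standard log cosh formulation:

loss = \frac{1}{n}\sum_{i=1}^{n}\log\left(\cosh\left(\hat{y}_i - y_i\right)\right)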
Mean Absolute Exponential Loss. This loss function is similar to mae, but it offers a faster descent when x is approximately in [-30.085, 30.085].
Definition:
Here are the graphed loss functions. The value x is the difference between y and yhat.
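As a usage sketch, one of the losses above can be selected by name for a model; the require path, setLossFunction, and the other calls here follow common Dannjs examples but should be treated as assumptions to check against the current API:

const Dann = require('dannjs').dann; // assumption: the dannjs npm package export

// Sketch: a small model using one of the losses listed above.
const nn = new Dann(4, 2);
nn.addHiddenLayer(8, 'sigmoid');
nn.makeWeights();
nn.setLossFunction('mae'); // 'mse' is the default
nn.backpropagate([0, 1, 0, 1], [1, 0]);
console.log(nn.loss); // loss value computed with the selected function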