| Package | Description |
|---|---|
| org.nd4j.autodiff.loss | |
| org.nd4j.autodiff.samediff.ops | |
| org.nd4j.linalg.api.ops.impl.loss | |
| org.nd4j.linalg.api.ops.impl.loss.bp | |
| org.nd4j.linalg.factory.ops | |
| Modifier and Type | Method and Description |
|---|---|
| static LossReduce | LossReduce.valueOf(String name)<br>Returns the enum constant of this type with the specified name. |
| static LossReduce[] | LossReduce.values()<br>Returns an array containing the constants of this enum type, in the order they are declared. |
| Modifier and Type | Method and Description |
|---|---|
| SDVariable | SDLoss.absoluteDifference(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)<br>Absolute difference loss: sum_i abs( label[i] - predictions[i] ) |
| SDVariable | SDLoss.absoluteDifference(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)<br>Absolute difference loss: sum_i abs( label[i] - predictions[i] ) |
| SDVariable | SDLoss.cosineDistance(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, int dimension)<br>Cosine distance loss: 1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i], which is equivalent to cosine distance when both the predictions and labels are normalized. Note: this loss function assumes that both the predictions and labels are normalized to have unit l2 norm. If this is not the case, normalize them first by dividing by norm2(String, SDVariable, boolean, int...) along the cosine distance dimension (with keepDims=true). |
| SDVariable | SDLoss.cosineDistance(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, int dimension)<br>Cosine distance loss: 1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i], which is equivalent to cosine distance when both the predictions and labels are normalized. Note: this loss function assumes that both the predictions and labels are normalized to have unit l2 norm. If this is not the case, normalize them first by dividing by norm2(String, SDVariable, boolean, int...) along the cosine distance dimension (with keepDims=true). |
| SDVariable | SDLoss.hingeLoss(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)<br>Hinge loss: a loss function used for training classifiers. Implements L = max(0, 1 - t * predictions), where t is the label values after internally converting the user-specified {0,1} labels to {-1,1}. |
| SDVariable | SDLoss.hingeLoss(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)<br>Hinge loss: a loss function used for training classifiers. Implements L = max(0, 1 - t * predictions), where t is the label values after internally converting the user-specified {0,1} labels to {-1,1}. |
| SDVariable | SDLoss.huberLoss(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, double delta)<br>Huber loss function, used for robust regression. |
| SDVariable | SDLoss.huberLoss(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, double delta)<br>Huber loss function, used for robust regression. |
| SDVariable | SDLoss.logLoss(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, double epsilon)<br>Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification. |
| SDVariable | SDLoss.logLoss(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, double epsilon)<br>Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification. |
| SDVariable | SDLoss.logPoisson(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, boolean full)<br>Log Poisson loss: a loss function used for training classifiers. Implements L = exp(c) - z * c, where c is log(predictions) and z is labels. |
| SDVariable | SDLoss.logPoisson(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, boolean full)<br>Log Poisson loss: a loss function used for training classifiers. Implements L = exp(c) - z * c, where c is log(predictions) and z is labels. |
| SDVariable | SDLoss.meanPairwiseSquaredError(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)<br>Mean pairwise squared error. MPWSE loss calculates the difference between pairs of consecutive elements in the predictions and labels arrays. For example, if predictions = [p0, p1, p2] and labels = [l0, l1, l2], then MPWSE is: [((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3 |
| SDVariable | SDLoss.meanPairwiseSquaredError(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)<br>Mean pairwise squared error. MPWSE loss calculates the difference between pairs of consecutive elements in the predictions and labels arrays. For example, if predictions = [p0, p1, p2] and labels = [l0, l1, l2], then MPWSE is: [((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3 |
| SDVariable | SDLoss.meanSquaredError(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)<br>Mean squared error loss function. |
| SDVariable | SDLoss.meanSquaredError(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)<br>Mean squared error loss function. |
| SDVariable | SDLoss.sigmoidCrossEntropy(SDVariable label, SDVariable predictionLogits, SDVariable weights, LossReduce lossReduce, double labelSmoothing)<br>Sigmoid cross entropy: applies the sigmoid activation function to the input logits (the "pre-sigmoid predictions") and implements the binary cross entropy loss function. |
| SDVariable | SDLoss.sigmoidCrossEntropy(String name, SDVariable label, SDVariable predictionLogits, SDVariable weights, LossReduce lossReduce, double labelSmoothing)<br>Sigmoid cross entropy: applies the sigmoid activation function to the input logits (the "pre-sigmoid predictions") and implements the binary cross entropy loss function. |
| SDVariable | SDLoss.softmaxCrossEntropy(SDVariable oneHotLabels, SDVariable logitPredictions, SDVariable weights, LossReduce lossReduce, double labelSmoothing)<br>Applies the softmax activation function to the input, then implements multi-class cross entropy: -sum_classes label[c] * log(p[c]), where p = softmax(logits). If LossReduce#NONE is used, the returned shape is [numExamples] for [numExamples, numClasses] predictions/labels; otherwise, the output is a scalar. |
| SDVariable | SDLoss.softmaxCrossEntropy(String name, SDVariable oneHotLabels, SDVariable logitPredictions, SDVariable weights, LossReduce lossReduce, double labelSmoothing)<br>Applies the softmax activation function to the input, then implements multi-class cross entropy: -sum_classes label[c] * log(p[c]), where p = softmax(logits). If LossReduce#NONE is used, the returned shape is [numExamples] for [numExamples, numClasses] predictions/labels; otherwise, the output is a scalar. |
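The formulas quoted in the table above can be checked with a small plain-Java sketch. The class below illustrates the documented math on `double[]` arrays only; it is not the SameDiff API. The class and method names are my own, the mean reduction in `hinge` stands in for a LossReduce mode, and weights are omitted throughout.

```java
// Illustrative implementations of loss formulas from the table above,
// on plain double[] arrays rather than SDVariable/INDArray.
public class LossFormulas {

    // Absolute difference loss: sum_i abs( label[i] - predictions[i] )
    public static double absoluteDifference(double[] label, double[] pred) {
        double s = 0;
        for (int i = 0; i < label.length; i++) {
            s += Math.abs(label[i] - pred[i]);
        }
        return s;
    }

    // Hinge loss: L = max(0, 1 - t * predictions), where t = 2*label - 1
    // maps the user-specified {0,1} labels to {-1,1}, as the table notes.
    public static double hinge(double[] label01, double[] pred) {
        double s = 0;
        for (int i = 0; i < pred.length; i++) {
            double t = 2 * label01[i] - 1;
            s += Math.max(0, 1 - t * pred[i]);
        }
        return s / pred.length; // mean reduction, standing in for LossReduce
    }

    // Mean pairwise squared error, generalizing the 3-element example:
    // [((p0-p1)-(l0-l1))^2 + ((p0-p2)-(l0-l2))^2 + ((p1-p2)-(l1-l2))^2] / 3
    public static double mpwse(double[] label, double[] pred) {
        double s = 0;
        int pairs = 0;
        for (int i = 0; i < pred.length; i++) {
            for (int j = i + 1; j < pred.length; j++) {
                double d = (pred[i] - pred[j]) - (label[i] - label[j]);
                s += d * d;
                pairs++;
            }
        }
        return s / pairs;
    }
}
```

For predictions [1, 2, 4] against labels [1, 2, 3], the three pairwise differences are 0, -1, and -1, so `mpwse` returns (0 + 1 + 1) / 3 = 2/3, matching the documented formula.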
| Modifier and Type | Field and Description |
|---|---|
| protected LossReduce | BaseLoss.lossReduce |
| Constructor and Description |
|---|
| AbsoluteDifferenceLoss(INDArray labels, INDArray predictions, INDArray weights, LossReduce lossReduce) |
| AbsoluteDifferenceLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels) |
| AbsoluteDifferenceLoss(SameDiff sameDiff, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce) |
| BaseLoss(@NonNull LossReduce lossReduce, @NonNull INDArray predictions, INDArray weights, @NonNull INDArray labels) |
| BaseLoss(@NonNull SameDiff sameDiff, @NonNull LossReduce lossReduce, @NonNull SDVariable predictions, SDVariable weights, @NonNull SDVariable labels) |
| CosineDistanceLoss(INDArray labels, INDArray predictions, INDArray weights, LossReduce lossReduce, int dimension) |
| CosineDistanceLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels, int dimension) |
| CosineDistanceLoss(SameDiff sameDiff, SDVariable labels, SDVariable predictions, SDVariable weights, LossReduce lossReduce, int dimension) |
| HingeLoss(INDArray labels, INDArray predictions, INDArray weights, LossReduce lossReduce) |
| HingeLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels) |
| HingeLoss(SameDiff sameDiff, SDVariable labels, SDVariable predictions, SDVariable weights, LossReduce lossReduce) |
| HuberLoss(INDArray labels, INDArray predictions, INDArray weights, LossReduce lossReduce, double delta) |
| HuberLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels, double delta) |
| HuberLoss(SameDiff sameDiff, SDVariable labels, SDVariable predictions, SDVariable weights, LossReduce lossReduce, double delta) |
| LogLoss(INDArray labels, INDArray predictions, INDArray weights, LossReduce lossReduce, double epsilon) |
| LogLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels, double epsilon) |
| LogLoss(SameDiff sameDiff, SDVariable labels, SDVariable predictions, SDVariable weights, LossReduce lossReduce, double epsilon) |
| LogPoissonLoss(INDArray labels, INDArray predictions, INDArray weights, LossReduce lossReduce, boolean full) |
| LogPoissonLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels) |
| LogPoissonLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels, boolean full) |
| LogPoissonLoss(SameDiff sameDiff, SDVariable labels, SDVariable predictions, SDVariable weights, LossReduce lossReduce, boolean full) |
| MeanPairwiseSquaredErrorLoss(INDArray labels, INDArray predictions, INDArray weights, LossReduce lossReduce) |
| MeanPairwiseSquaredErrorLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels) |
| MeanPairwiseSquaredErrorLoss(SameDiff sameDiff, SDVariable labels, SDVariable predictions, SDVariable weights, LossReduce lossReduce) |
| MeanSquaredErrorLoss(INDArray labels, INDArray predictions, INDArray weights, LossReduce lossReduce) |
| MeanSquaredErrorLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels) |
| MeanSquaredErrorLoss(SameDiff sameDiff, SDVariable labels, SDVariable predictions, SDVariable weights, LossReduce lossReduce) |
| SigmoidCrossEntropyLoss(INDArray labels, INDArray predictions, INDArray weights, LossReduce lossReduce, double labelSmoothing) |
| SigmoidCrossEntropyLoss(SameDiff sameDiff, LossReduce reductionMode, SDVariable logits, SDVariable weights, SDVariable labels) |
| SigmoidCrossEntropyLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable logits, SDVariable weights, SDVariable labels, double labelSmoothing) |
| SigmoidCrossEntropyLoss(SameDiff sameDiff, SDVariable labels, SDVariable logits, SDVariable weights, LossReduce lossReduce, double labelSmoothing) |
| SoftmaxCrossEntropyLoss(INDArray labels, INDArray predictions, INDArray weights, LossReduce lossReduce, double labelSmoothing) |
| SoftmaxCrossEntropyLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable logits, SDVariable weights, SDVariable labels) |
| SoftmaxCrossEntropyLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable logits, SDVariable weights, SDVariable labels, double labelSmoothing) |
| SoftmaxCrossEntropyLoss(SameDiff sameDiff, SDVariable labels, SDVariable logits, SDVariable weights, LossReduce lossReduce, double labelSmoothing) |
| Modifier and Type | Field and Description |
|---|---|
| protected LossReduce | BaseLossBp.lossReduce |
| Constructor and Description |
|---|
| AbsoluteDifferenceLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels) |
| BaseLossBp(@NonNull SameDiff sameDiff, @NonNull LossReduce lossReduce, @NonNull SDVariable predictions, @NonNull SDVariable weights, @NonNull SDVariable labels) |
| CosineDistanceLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels, int dimension) |
| HingeLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels) |
| HuberLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels, double delta) |
| LogLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels, double epsilon) |
| LogPoissonLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels) |
| LogPoissonLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels, boolean full) |
| MeanPairwiseSquaredErrorLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels) |
| MeanSquaredErrorLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels) |
| SigmoidCrossEntropyLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable logits, SDVariable weights, SDVariable labels, double labelSmoothing) |
| SoftmaxCrossEntropyLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable logits, SDVariable weights, SDVariable labels, double labelSmoothing) |
| Modifier and Type | Method and Description |
|---|---|
| INDArray | NDLoss.absoluteDifference(INDArray label, INDArray predictions, INDArray weights, LossReduce lossReduce)<br>Absolute difference loss: sum_i abs( label[i] - predictions[i] ) |
| INDArray | NDLoss.cosineDistance(INDArray label, INDArray predictions, INDArray weights, LossReduce lossReduce, int dimension)<br>Cosine distance loss: 1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i], which is equivalent to cosine distance when both the predictions and labels are normalized. Note: this loss function assumes that both the predictions and labels are normalized to have unit l2 norm. If this is not the case, normalize them first by dividing by norm2(String, SDVariable, boolean, int...) along the cosine distance dimension (with keepDims=true). |
| INDArray | NDLoss.hingeLoss(INDArray label, INDArray predictions, INDArray weights, LossReduce lossReduce)<br>Hinge loss: a loss function used for training classifiers. Implements L = max(0, 1 - t * predictions), where t is the label values after internally converting the user-specified {0,1} labels to {-1,1}. |
| INDArray | NDLoss.huberLoss(INDArray label, INDArray predictions, INDArray weights, LossReduce lossReduce, double delta)<br>Huber loss function, used for robust regression. |
| INDArray | NDLoss.logLoss(INDArray label, INDArray predictions, INDArray weights, LossReduce lossReduce, double epsilon)<br>Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification. |
| INDArray | NDLoss.logPoisson(INDArray label, INDArray predictions, INDArray weights, LossReduce lossReduce, boolean full)<br>Log Poisson loss: a loss function used for training classifiers. Implements L = exp(c) - z * c, where c is log(predictions) and z is labels. |
| INDArray | NDLoss.meanPairwiseSquaredError(INDArray label, INDArray predictions, INDArray weights, LossReduce lossReduce)<br>Mean pairwise squared error. MPWSE loss calculates the difference between pairs of consecutive elements in the predictions and labels arrays. For example, if predictions = [p0, p1, p2] and labels = [l0, l1, l2], then MPWSE is: [((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3 |
| INDArray | NDLoss.meanSquaredError(INDArray label, INDArray predictions, INDArray weights, LossReduce lossReduce)<br>Mean squared error loss function. |
| INDArray | NDLoss.sigmoidCrossEntropy(INDArray label, INDArray predictionLogits, INDArray weights, LossReduce lossReduce, double labelSmoothing)<br>Sigmoid cross entropy: applies the sigmoid activation function to the input logits (the "pre-sigmoid predictions") and implements the binary cross entropy loss function. |
| INDArray | NDLoss.softmaxCrossEntropy(INDArray oneHotLabels, INDArray logitPredictions, INDArray weights, LossReduce lossReduce, double labelSmoothing)<br>Applies the softmax activation function to the input, then implements multi-class cross entropy: -sum_classes label[c] * log(p[c]), where p = softmax(logits). If LossReduce#NONE is used, the returned shape is [numExamples] for [numExamples, numClasses] predictions/labels; otherwise, the output is a scalar. |
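The cross-entropy formula quoted in the softmaxCrossEntropy rows, -sum_c label[c] * log(p[c]) with p = softmax(logits), can likewise be sketched in plain Java. This shows only the documented math for a single example; labelSmoothing, weights, and LossReduce handling are omitted, and the class and method names are illustrative rather than part of the ND4J API.

```java
// Illustration of the documented multi-class cross entropy for one example.
public class SoftmaxCrossEntropyFormula {

    // p = softmax(logits), with the usual max-subtraction for numerical stability.
    public static double[] softmax(double[] logits) {
        double max = Double.NEGATIVE_INFINITY;
        for (double x : logits) {
            max = Math.max(max, x);
        }
        double sum = 0;
        double[] p = new double[logits.length];
        for (int i = 0; i < logits.length; i++) {
            p[i] = Math.exp(logits[i] - max);
            sum += p[i];
        }
        for (int i = 0; i < p.length; i++) {
            p[i] /= sum;
        }
        return p;
    }

    // -sum_classes label[c] * log(p[c]), where p = softmax(logits)
    public static double crossEntropy(double[] oneHotLabel, double[] logits) {
        double[] p = softmax(logits);
        double loss = 0;
        for (int c = 0; c < p.length; c++) {
            loss -= oneHotLabel[c] * Math.log(p[c]);
        }
        return loss;
    }
}
```

With two equal logits and a one-hot label, softmax yields p = [0.5, 0.5] and the loss is -log(0.5) = log(2), which is a convenient sanity check.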
Copyright © 2021. All rights reserved.