| Modifier and Type | Method and Description |
|---|---|
INDArray[] |
BasicGraphExecutioner.executeGraph(int id,
SDVariable... variables)
This method executes the given graph
|
INDArray[] |
GraphExecutioner.executeGraph(int id,
SDVariable... variables)
This method executes the given graph
|
| Modifier and Type | Method and Description |
|---|---|
SDVariable |
DifferentialFunction.arg()
Return the first argument
|
SDVariable |
DifferentialFunction.arg(int num)
Return the specified argument for this function
|
SDVariable[] |
DifferentialFunction.args()
Return the arguments for a given function
|
SDVariable |
DifferentialFunction.larg()
The left argument for this function
|
SDVariable |
DifferentialFunction.outputVariable() |
SDVariable[] |
DifferentialFunction.outputVariables()
Return the output variables for this differential function.
|
abstract SDVariable[] |
DifferentialFunction.outputVariables(String baseName)
Return the output variables for this differential function.
|
SDVariable |
DifferentialFunction.rarg()
The right argument for this function.
|
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
DifferentialFunction.diff(List<SDVariable> i_v1)
Perform automatic differentiation
wrt the input variables
|
abstract List<SDVariable> |
DifferentialFunction.doDiff(List<SDVariable> f1)
The actual implementation for automatic differentiation.
|
List<SDVariable> |
DifferentialFunction.outputs() |
| Modifier and Type | Method and Description |
|---|---|
void |
DifferentialFunction.replaceArg(int i,
SDVariable newArg)
Replace the argument at the specified index
|
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
DifferentialFunction.diff(List<SDVariable> i_v1)
Perform automatic differentiation
wrt the input variables
|
abstract List<SDVariable> |
DifferentialFunction.doDiff(List<SDVariable> f1)
The actual implementation for automatic differentiation.
|
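The diff/doDiff pair above implements reverse-mode automatic differentiation: each function receives the gradient of the loss with respect to its output(s) and returns the gradients with respect to its inputs. As a standalone plain-Java sketch of that contract (a hypothetical multiply op, not the actual DifferentialFunction class), doDiff applies the product rule:

```java
import java.util.Arrays;
import java.util.List;

public class DoDiffSketch {
    // Hypothetical stand-in for a multiply op: out = x * y (scalars for brevity).
    // doDiff receives dL/dout and returns [dL/dx, dL/dy] via the product rule.
    static List<Double> doDiffMul(double x, double y, double gradOut) {
        double dx = gradOut * y; // d(x*y)/dx = y
        double dy = gradOut * x; // d(x*y)/dy = x
        return Arrays.asList(dx, dy);
    }

    public static void main(String[] args) {
        // For out = x * y with x = 3, y = 4 and upstream gradient 1:
        List<Double> grads = doDiffMul(3.0, 4.0, 1.0);
        System.out.println(grads); // [4.0, 3.0]
    }
}
```

The real implementations work on SDVariable lists rather than scalars, but the shape is the same: gradients flow in with respect to outputs, and come back with respect to arguments.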
| Constructor and Description |
|---|
DifferentialFunction(SameDiff sameDiff,
boolean inPlace,
SDVariable[] args)
Add the various arguments for
this function
|
DifferentialFunction(SameDiff sameDiff,
SDVariable[] args) |
| Modifier and Type | Method and Description |
|---|---|
ListenerVariables.Builder |
ListenerVariables.Builder.evaluationVariables(SDVariable... variables)
Add required variables for evaluation
|
ListenerVariables.Builder |
ListenerVariables.Builder.inferenceVariables(SDVariable... variables)
Add required variables for inference
|
ListenerVariables.Builder |
ListenerVariables.Builder.requireVariables(@NonNull Operation op,
SDVariable... variables)
Add required variables for the specified op
|
ListenerEvaluations.Builder |
ListenerEvaluations.Builder.trainEvaluation(@NonNull SDVariable variable,
int labelIndex,
IEvaluation... evaluations)
Add requested training evaluations for a param/variable
|
ListenerVariables.Builder |
ListenerVariables.Builder.trainingVariables(SDVariable... variables)
Add required variables for training
|
ListenerEvaluations.Builder |
ListenerEvaluations.Builder.validationEvaluation(@NonNull SDVariable variable,
int labelIndex,
IEvaluation... evaluations)
Add requested validation evaluations for a param/variable
|
ListenerVariables.Builder |
ListenerVariables.Builder.validationVariables(SDVariable... variables)
Add required variables for validation
|
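The builder methods above all follow the same varargs-accumulate pattern: each call appends the given variables to the set required for that stage (training, inference, validation, or a specific op) and returns the builder so calls can be chained. A minimal standalone sketch of that pattern, using a hypothetical class rather than the actual ListenerVariables.Builder:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical mirror of the varargs-accumulate builder shape used above.
class RequiredVariables {
    final List<String> training = new ArrayList<>();
    final List<String> validation = new ArrayList<>();

    static class Builder {
        private final RequiredVariables vars = new RequiredVariables();

        Builder trainingVariables(String... names) {
            for (String n : names) vars.training.add(n);
            return this; // return this so calls can be chained
        }

        Builder validationVariables(String... names) {
            for (String n : names) vars.validation.add(n);
            return this;
        }

        RequiredVariables build() { return vars; }
    }
}

public class BuilderSketch {
    public static void main(String[] args) {
        RequiredVariables rv = new RequiredVariables.Builder()
                .trainingVariables("loss", "accuracy")
                .validationVariables("loss")
                .build();
        System.out.println(rv.training.size() + " " + rv.validation.size()); // prints "2 1"
    }
}
```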
| Modifier and Type | Method and Description |
|---|---|
<T extends IEvaluation> |
EvaluationRecord.evaluation(SDVariable param)
Get the evaluation for a given param/variable
|
<T extends IEvaluation<T>> |
EvaluationRecord.evaluation(SDVariable param,
Class<T> evalClass)
Get the evaluation of a given type, for a given param/variable
|
IEvaluation |
EvaluationRecord.evaluation(SDVariable param,
int index)
Get the evaluation for param at the specified index
|
List<IEvaluation> |
EvaluationRecord.evaluations(SDVariable param)
Get evaluations for a given param/variable
|
double |
EvaluationRecord.getValue(SDVariable param,
IMetric metric)
Get the metric's value for the evaluation of the metric's type, for a given param/variable
|
double |
EvaluationRecord.getValue(SDVariable param,
int index,
IMetric metric)
Get the metric's value for the evaluation for a given param/variable at the given index
|
double |
LossCurve.lastMeanDelta(SDVariable loss)
Return the loss delta between the last epoch and the one before it, for a given variable.
|
float |
LossCurve.lastMeanLoss(@NonNull SDVariable loss)
Return the mean loss value for a given variable on the last epoch.
|
float[] |
LossCurve.meanLoss(@NonNull SDVariable loss)
Return all mean loss values for a given variable
|
float |
LossCurve.meanLoss(@NonNull SDVariable loss,
int epoch)
Return the mean loss value for a given variable on a given epoch.
|
List<IEvaluation> |
History.trainingEval(SDVariable param)
Get the results of a training evaluation on a given parameter
Only works if there is only one evaluation for param.
|
List<Double> |
History.trainingEval(SDVariable param,
IMetric metric)
Get the results of a training evaluation on a given parameter for a given metric
Only works if there is only one evaluation with the given metric for param
|
List<IEvaluation> |
History.trainingEval(SDVariable param,
int index)
Get the results of a training evaluation on a given parameter at a given index
Note that it returns all recorded evaluations.
|
List<Double> |
History.trainingEval(SDVariable param,
int index,
IMetric metric)
Get the results of a training evaluation on a given parameter at a given index, for a given metric
Note that it returns all recorded evaluations.
|
List<IEvaluation> |
History.validationEval(SDVariable param)
Get the results of a validation evaluation on a given parameter
Only works if there is only one evaluation for param.
|
List<Double> |
History.validationEval(SDVariable param,
IMetric metric)
Get the results of a validation evaluation on a given parameter for a given metric
Only works if there is only one evaluation with the given metric for param
|
List<IEvaluation> |
History.validationEval(SDVariable param,
int index)
Get the results of a validation evaluation on a given parameter at a given index
Note that it returns all recorded evaluations.
|
List<Double> |
History.validationEval(SDVariable param,
int index,
IMetric metric)
Get the results of a validation evaluation on a given parameter at a given index, for a given metric
Note that it returns all recorded evaluations.
|
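LossCurve.meanLoss and lastMeanDelta above describe a per-epoch series of mean loss values for a variable; the delta is simply the difference between the last two epochs. A standalone illustration of that arithmetic, assuming a plain float[] of per-epoch means (oldest first) and that the delta is the last-epoch mean minus the previous-epoch mean, rather than the actual LossCurve class:

```java
public class LossDeltaSketch {
    // meanLossPerEpoch: mean loss per epoch for one variable, oldest first.
    // lastMeanDelta = loss(last epoch) - loss(epoch before it).
    static double lastMeanDelta(float[] meanLossPerEpoch) {
        int n = meanLossPerEpoch.length;
        if (n < 2) throw new IllegalArgumentException("need at least 2 epochs");
        return meanLossPerEpoch[n - 1] - meanLossPerEpoch[n - 2];
    }

    public static void main(String[] args) {
        float[] losses = {0.9f, 0.6f, 0.45f};
        System.out.println(lastMeanDelta(losses)); // negative delta: loss decreased
    }
}
```

A negative delta indicates the loss is still falling between epochs, which is the typical signal these listener-history methods are queried for.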
| Modifier and Type | Method and Description |
|---|---|
<X extends SDVariable> |
SameDiff.setupFunction(X function)
Attempts to insert the
DifferentialFunction reference into this SameDiff instance. |
| Modifier and Type | Method and Description |
|---|---|
SDVariable |
SDVariable.add(double scalar)
|
SDVariable |
SDVariable.add(SDVariable other)
|
SDVariable |
SDVariable.add(String varName,
double scalar)
Scalar addition:
out = this + scalar. Output variable has the same shape as the input variable |
SDVariable |
SDVariable.add(String name,
SDVariable x)
Addition operation: elementwise
this + x. If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SameDiff.addVariable(SDVariable variable)
Add the specified variable to this SameDiff instance
|
SDVariable |
SDVariable.argmax(int... dimensions)
|
SDVariable |
SDVariable.argmax(String name,
boolean keepDims,
int... dimensions)
Argmax array reduction operation, optionally along specified dimensions.
Output values are the index of the maximum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDVariable.argmax(String name,
int... dimensions)
|
SDVariable |
SDVariable.argmin(int... dimensions)
|
SDVariable |
SDVariable.argmin(String name,
boolean keepDims,
int... dimensions)
Argmin array reduction operation, optionally along specified dimensions.
Output values are the index of the minimum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDVariable.argmin(String name,
int... dimensions)
|
SDVariable |
SDVariable.assign(Number value)
Return a variable with equal shape to the input, but all elements set to the specified value
|
SDVariable |
SDVariable.castTo(@NonNull DataType dataType) |
SDVariable |
SDVariable.castTo(String name,
@NonNull DataType dataType) |
SDVariable |
SDVariable.clone(SameDiff sd) |
SDVariable |
SameDiff.constant(double value)
Create a new double scalar constant (rank 0) with the specified value.
Constants are not modified by training/backprop. |
SDVariable |
SameDiff.constant(float value)
Create a new float scalar constant (rank 0) with the specified value
Constants are not modified by training/backprop. |
SDVariable |
SameDiff.constant(@NonNull INDArray constant)
Create an SDVariable with a fixed/constant value, with a generated name
Constants are not modified by training/backprop. |
SDVariable |
SameDiff.constant(int value)
Create a new integer scalar constant (rank 0) with the specified value
|
SDVariable |
SameDiff.constant(long value)
Create a new long scalar constant (rank 0) with the specified value
|
SDVariable |
SameDiff.constant(String name,
DataType dataType,
Number value)
Create a new scalar constant (rank 0) with the specified value and datatype
|
SDVariable |
SameDiff.constant(String name,
double value)
Create a new double scalar constant (rank 0) with the specified value
|
SDVariable |
SameDiff.constant(String name,
float value)
Create a new float scalar constant (rank 0) with the specified value
|
SDVariable |
SameDiff.constant(String name,
@NonNull INDArray constant)
Create an SDVariable with a fixed/constant value
Constants are not modified by training/backprop. |
SDVariable |
SameDiff.constant(String name,
int value)
Create a new integer scalar constant (rank 0) with the specified value
|
SDVariable |
SameDiff.constant(String name,
long value)
Create a new long scalar constant (rank 0) with the specified value
|
SDVariable |
SDVariable.convertToConstant()
Convert this variable to a constant.
|
SDVariable |
SameDiff.convertToConstant(@NonNull SDVariable variable)
Convert the specified variable to a constant.
|
SDVariable |
SDVariable.convertToVariable()
Convert this variable to a VARIABLE type SDVariable.
This can only be done for constants and placeholders, not ARRAY type variables (which are usually network activations). |
SDVariable |
SameDiff.convertToVariable(@NonNull SDVariable constant)
Convert the specified variable to a VARIABLE type SDVariable.
This can only be done for constants and placeholders, not ARRAY type variables (which are usually network activations). |
SDVariable |
SameDiffNoArgSingleLambda.define(SameDiff sameDiff) |
SDVariable[] |
SameDiffFunctionDefinition.define(SameDiff sameDiff,
Map<String,INDArray> inputs,
SDVariable[] variableInputs) |
SDVariable[] |
SameDiffLambda.define(SameDiff sameDiff,
SDVariable[] inputs) |
SDVariable |
SameDiffSingleLambda.define(SameDiff sameDiff,
SDVariable[] inputs) |
SDVariable |
SDVariable.div(double scalar)
|
SDVariable |
SDVariable.div(SDVariable x)
|
SDVariable |
SDVariable.div(String varName,
double scalar)
Scalar division:
out = this / scalar. Output variable has the same shape as the input variable |
SDVariable |
SDVariable.div(String name,
SDVariable x)
Division operation: elementwise
this / x. If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SDVariable.dot(SDVariable other,
int... dimensions)
|
SDVariable |
SDVariable.dot(String name,
SDVariable other,
int... dimensions)
Matrix dot product: out = dot(this, other, dimensions)
|
SDVariable |
SDVariable.dup()
Create a new SDVariable, the contents of which is copied from this current variable
|
SDVariable |
SDVariable.eq(double value)
|
SDVariable |
SDVariable.eq(SDVariable other)
|
SDVariable |
SDVariable.eq(String name,
double value)
Equals operation: elementwise
this == value. Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDVariable.eq(String name,
SDVariable other)
Equal to operation: elementwise
this == y. If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SameDiffConditional.eval(SameDiff context,
SameDiffFunctionDefinition body,
SDVariable[] inputVars) |
SDVariable |
SDVariable.fdiv(String name,
SDVariable x)
Floor division operation: elementwise
this // x. If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable[] |
SameDiff.generateOutputVariableForOp(DifferentialFunction function)
Generate the variables based on the given input op
and return the output variable names.
|
SDVariable[] |
SameDiff.generateOutputVariableForOp(DifferentialFunction function,
String baseName,
boolean isImport)
Generate the variables based on the given input op and return the output variable names.
|
SDVariable |
SDVariable.get(SDIndex... indices)
Get a variable with content equal to a specified sub-array of this variable.
Can be used (for example) to get rows, columns, sub-matrices, etc. |
SDVariable |
SameDiff.getGradForVariable(String varName)
Get the gradient for the variable with the specified name.
The gradient variable is the variable that represents the derivative of the loss function with respect to the output of this variable. |
SDVariable |
SDVariable.getGradient()
The gradient variable is the variable that represents the derivative of the loss function with respect
to the output of this variable.
|
SDVariable[] |
SameDiff.getInputVariablesForOp(DifferentialFunction function)
Get the input variable(s) for the specified differential function
|
SDVariable[] |
SameDiff.getOutputVariablesForOp(DifferentialFunction function)
Get the output variable(s) for the specified differential function
|
SDVariable |
SameDiff.getVariable(String name)
Get the variable with the specified name
|
SDVariable |
SameDiff.grad(String varName)
Get the gradient for the variable with the specified variable name.
|
SDVariable |
SDVariable.gradient()
Alias for the gradient variable - same as
getGradient(). |
SDVariable |
SDVariable.gt(double value)
|
SDVariable |
SDVariable.gt(SDVariable other)
|
SDVariable |
SDVariable.gt(String name,
double value)
Greater than operation: elementwise
this > value. Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDVariable.gt(String name,
SDVariable other)
Greater than operation: elementwise
this > y. If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDVariable.gte(double value)
|
SDVariable |
SDVariable.gte(SDVariable other)
|
SDVariable |
SDVariable.gte(String name,
double value)
Greater than or equals operation: elementwise
this >= value. Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDVariable.gte(String name,
SDVariable other)
Greater than or equal to operation: elementwise
this >= y. If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SameDiff.ifCond(@NonNull SameDiffNoArgSingleLambda cond,
@NonNull SameDiffNoArgSingleLambda trueBody,
@NonNull SameDiffNoArgSingleLambda falseBody)
|
SDVariable |
SameDiff.ifCond(String ifName,
@NonNull SameDiffNoArgSingleLambda cond,
@NonNull SameDiffNoArgSingleLambda trueBody,
@NonNull SameDiffNoArgSingleLambda falseBody)
|
SDVariable |
SameDiff.ifCond(String outputName,
String ifName,
@NonNull SameDiffNoArgSingleLambda cond,
@NonNull SameDiffNoArgSingleLambda trueBody,
@NonNull SameDiffNoArgSingleLambda falseBody)
Constructs an If statement using the TensorFlow-style control flow operations (Switch and Merge).
If the result of cond is true, returns the result of trueBody; otherwise returns the result of falseBody.
Note that the cond and body lambdas are only called once, to construct the graph.
|
SDVariable |
ArgumentInterceptor.intercept(SDVariable argument) |
SDVariable |
SameDiff.invokeFunctionOn(String functionName,
SameDiff with) |
SDVariable |
SameDiff.invokeGraphOn(SameDiff sameDiff) |
SDVariable |
SDVariable.lt(double value)
|
SDVariable |
SDVariable.lt(SDVariable other)
|
SDVariable |
SDVariable.lt(String name,
double value)
Less than operation: elementwise
this < value. Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDVariable.lt(String name,
SDVariable other)
Less than operation: elementwise
this < y. If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDVariable.lte(double value)
|
SDVariable |
SDVariable.lte(SDVariable other)
|
SDVariable |
SDVariable.lte(String name,
double value)
Less than or equals operation: elementwise
this <= value. Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDVariable.lte(String name,
SDVariable other)
Less than or equal to operation: elementwise
this <= y. If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDVariable.max(boolean keepDims,
int... dimensions)
|
SDVariable |
SDVariable.max(int... dimensions)
|
SDVariable |
SDVariable.max(String name,
boolean keepDims,
int... dimensions)
Maximum array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDVariable.max(String name,
int... dimensions)
|
SDVariable |
SDVariable.mean(boolean keepDims,
int... dimensions)
|
SDVariable |
SDVariable.mean(int... dimensions)
|
SDVariable |
SDVariable.mean(String name,
boolean keepDims,
int... dimensions)
Mean (average) array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDVariable.mean(String name,
int... dimensions)
|
SDVariable |
SDVariable.min(boolean keepDims,
int... dimensions)
|
SDVariable |
SDVariable.min(int... dimensions)
|
SDVariable |
SDVariable.min(String name,
boolean keepDims,
int... dimensions)
Minimum array reduction operation, optionally along specified dimensions.
|
SDVariable |
SDVariable.min(String name,
int... dimensions)
|
SDVariable |
SDVariable.minus(double other)
For Kotlin operator interop
|
SDVariable |
SDVariable.minus(SDVariable other)
For Kotlin operator interop
|
SDVariable |
SDVariable.mmul(SDVariable other)
|
SDVariable |
SDVariable.mmul(String name,
SDVariable other)
Matrix multiplication: out = mmul(this,other)
|
SDVariable |
SDVariable.mmul(String name,
SDVariable other,
@NonNull MMulTranspose mMulTranspose)
Matrix multiplication: out = mmul(this,other)
|
SDVariable |
SDVariable.mod(String name,
SDVariable x)
Modulo operation: elementwise
this % x. If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SDVariable.mul(double scalar)
|
SDVariable |
SDVariable.mul(SDVariable x)
|
SDVariable |
SDVariable.mul(String varName,
double scalar)
Scalar multiplication:
out = this * scalar. Output variable has the same shape as the input variable |
SDVariable |
SDVariable.mul(String name,
SDVariable x)
Multiplication operation: elementwise
this * x. If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SDVariable.neg()
Negate op - returns a new variable with the values of the current variable negated
|
SDVariable |
SDVariable.neg(String name)
Negate op - returns a new variable with the values of the current variable negated
|
SDVariable |
SDVariable.neq(double value)
See
neq(SDVariable) |
SDVariable |
SDVariable.neq(SDVariable other)
|
SDVariable |
SDVariable.neq(String name,
double value)
Not equals operation: elementwise
this != value. Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDVariable.neq(String name,
SDVariable other)
Not equal to operation: elementwise
this != y. If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDVariable.norm1(boolean keepDims,
int... dimensions)
|
SDVariable |
SDVariable.norm1(int... dimensions)
|
SDVariable |
SDVariable.norm1(String name,
boolean keepDims,
int... dimensions)
Norm1 (L1 norm) reduction operation: The output contains the L1 norm for each tensor/subset along the specified dimensions:
out = sum_i abs(x[i]). Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDVariable.norm1(String name,
int... dimensions)
|
SDVariable |
SDVariable.norm2(boolean keepDims,
int... dimensions)
|
SDVariable |
SDVariable.norm2(int... dimensions)
|
SDVariable |
SDVariable.norm2(String name,
boolean keepDims,
int... dimensions)
Norm2 (L2 norm) reduction operation: The output contains the L2 norm for each tensor/subset along the specified dimensions:
out = sqrt(sum_i x[i]^2). Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDVariable.norm2(String name,
int... dimensions)
|
SDVariable |
SDVariable.normmax(boolean keepDims,
int... dimensions)
|
SDVariable |
SDVariable.normmax(int... dimensions)
|
SDVariable |
SDVariable.normmax(String name,
boolean keepDims,
int... dimensions)
Max norm (infinity norm) reduction operation: The output contains the max norm for each tensor/subset along the
specified dimensions:
out = max(abs(x[i])). Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDVariable.normmax(String name,
int... dimensions)
|
SDVariable |
SameDiff.one(String name,
DataType dataType,
int... shape)
Create a new variable with the specified shape, with all values initialized to 1.0.
|
SDVariable |
SameDiff.one(String name,
DataType dataType,
long... shape)
Create a new variable with the specified shape, with all values initialized to 1.0.
|
SDVariable |
SameDiff.one(String name,
int... shape)
|
SDVariable |
SameDiff.one(String name,
long... shape)
|
SDVariable |
SDVariable.permute(int... dimensions)
Permute the dimensions of the current variable according to the specified permutation indices.
Example: if the current variable has shape [a,b,c] and dimensions = [2,0,1] the output has shape [c,a,b] |
SDVariable |
SDVariable.permute(SDVariable dimensions) |
SDVariable |
SameDiff.placeHolder(@NonNull String name,
DataType dataType,
long... shape)
Create a placeholder variable.
|
SDVariable |
SDVariable.plus(double other)
For Kotlin operator interop
|
SDVariable |
SDVariable.plus(SDVariable other)
For Kotlin operator interop
|
SDVariable |
SDVariable.pow(double scalar)
|
SDVariable |
SDVariable.pow(String varName,
double scalar)
Scalar power operation:
out = this ^ scalar. Output variable has the same shape as the input variable |
SDVariable |
SDVariable.prod(boolean keepDims,
int... dimensions)
|
SDVariable |
SDVariable.prod(int... dimensions)
|
SDVariable |
SDVariable.prod(String name,
boolean keepDims,
int... dimensions)
Product array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDVariable.prod(String name,
int... dimensions)
|
SDVariable |
SDVariable.rank()
Get the rank of this variable as a dynamic SDVariable
|
SDVariable |
SDVariable.rdiv(double scalar)
|
SDVariable |
SDVariable.rdiv(SDVariable sameDiffVariable)
|
SDVariable |
SDVariable.rdiv(String varName,
double scalar)
Scalar reverse division:
out = scalar / this. Output variable has the same shape as the input variable |
SDVariable |
SDVariable.rdiv(String name,
SDVariable x)
Reverse division operation: elementwise
x / this. If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SDVariable.rename(String newName)
Rename this variable to a new name.
|
SDVariable |
SDVariable.reshape(int... newShape)
Reshape the current variable to the specified shape.
|
SDVariable |
SDVariable.reshape(long... newShape)
Reshape the current variable to the specified shape.
|
SDVariable |
SDVariable.reshape(SDVariable newShape)
Reshape the current variable to the specified (dynamic) shape.
|
SDVariable |
SDVariable.rsub(double scalar)
|
SDVariable |
SDVariable.rsub(SDVariable x)
|
SDVariable |
SDVariable.rsub(String varName,
double scalar)
Scalar reverse subtraction:
out = scalar - this. Output variable has the same shape as the input variable |
SDVariable |
SDVariable.rsub(String name,
SDVariable x)
Reverse subtraction operation: elementwise
x - this. If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SameDiff.scalar(String name,
DataType dataType,
Number value)
Create a new scalar (rank 0) SDVariable with the specified value and datatype
|
SDVariable |
SameDiff.scalar(String name,
double value)
Create a new double scalar (rank 0) SDVariable with the specified value
|
SDVariable |
SameDiff.scalar(String name,
float value)
Create a new float scalar (rank 0) SDVariable with the specified value
|
SDVariable |
SameDiff.scalar(String name,
int value)
Create a new integer scalar (rank 0) SDVariable with the specified value
|
SDVariable |
SameDiff.scalar(String name,
long value)
Create a new long scalar (rank 0) SDVariable with the specified value
|
SDVariable |
SDVariable.setArray(INDArray array)
Associate the specified array with this variable
|
SDVariable |
SDVariable.shape()
Get the shape of the array as a dynamic SDVariable
|
SDVariable |
SDVariable.squaredDifference(SDVariable x)
|
SDVariable |
SDVariable.squaredDifference(String name,
SDVariable x)
Squared difference operation:
(this - x)^2 |
SDVariable |
SDVariable.std(boolean biasCorrected,
int... dimensions)
|
SDVariable |
SDVariable.std(String name,
boolean biasCorrected,
boolean keepDims,
int... dimensions)
Standard deviation array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDVariable.std(String name,
boolean biasCorrected,
int... dimensions)
|
SDVariable |
SDVariable.sub(double scalar)
|
SDVariable |
SDVariable.sub(SDVariable x)
|
SDVariable |
SDVariable.sub(String varName,
double scalar)
Scalar subtraction:
out = this - scalar. Output variable has the same shape as the input variable |
SDVariable |
SDVariable.sub(String name,
SDVariable x)
Subtraction operation: elementwise
this - x. If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SDVariable.sum(boolean keepDims,
int... dimensions)
|
SDVariable |
SDVariable.sum(int... dimensions)
|
SDVariable |
SDVariable.sum(String name,
boolean keepDims,
int... dimensions)
Sum array reduction operation, optionally along specified dimensions.
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDVariable.sum(String name,
int... dimensions)
|
SDVariable |
SDVariable.times(double other)
For Kotlin operator interop
|
SDVariable |
SDVariable.times(SDVariable other)
For Kotlin operator interop
|
SDVariable |
SameDiff.updateVariableNameAndReference(SameDiffOp opToRename,
SDVariable varToUpdate,
String newVarName)
Updates the variable name property on the passed in variable, the reference in samediff, and returns the variable.
|
SDVariable |
SameDiff.updateVariableNameAndReference(SDVariable varToUpdate,
String newVarName)
Updates the variable name property on the passed in variable, the reference in samediff, and returns the variable.
|
SDVariable[] |
SameDiff.updateVariableNamesAndReferences(SDVariable[] variablesToUpdate,
String[] newVariableNames)
Updates the variable name property on the passed-in variables, their references in samediff, and returns the variables.
|
SDVariable |
SameDiff.var(DataType dataType,
int... shape)
Creates a
SDVariable with the specified shape and a generated name. Any array will be generated with all zeros for the values. This method creates a VARIABLE type SDVariable - i.e., it must be floating point, and is a trainable parameter. |
SDVariable |
SameDiff.var(DataType dataType,
long... shape)
Creates a
SDVariable with the specified shape and a generated name. Any array will be generated with all zeros for the values. This method creates a VARIABLE type SDVariable - i.e., it must be floating point, and is a trainable parameter. |
SDVariable |
SameDiff.var(INDArray arr)
Create an
SDVariable with a generated name, and associate the specified array with it. This is a VARIABLE type SDVariable - i.e., it must be floating point, and is a trainable parameter. |
SDVariable |
SameDiff.var(@NonNull SDVariable v)
Initialize a
SDVariable reference tying this variable to this samediff instance. |
SDVariable |
SameDiff.var(String name,
DataType dataType,
int... shape)
Creates a
SDVariable with the given shape and name. Any array will be generated with all zeros for the values |
SDVariable |
SameDiff.var(String name,
DataType dataType,
long... shape)
Creates a
SDVariable with the given shape and name. Any array will be generated with all zeros for the values. This is a VARIABLE type SDVariable - i.e., it must be floating point, and is a trainable parameter. |
SDVariable |
SameDiff.var(String name,
@NonNull INDArray arr)
Create an
SDVariable with the specified name, and associate the specified array with it. This is a VARIABLE type SDVariable - i.e., it must be floating point, and is a trainable parameter. |
SDVariable |
SameDiff.var(String name,
int... shape)
Creates a
SDVariable with the given shape and name. Any array will be generated with all zeros for the values. |
SDVariable |
SameDiff.var(String name,
long... shape)
Creates a
SDVariable with the given shape and name. Any array will be generated with all zeros for the values. |
SDVariable |
SameDiff.var(String name,
LongShapeDescriptor shapeDesc)
Creates a
SDVariable with the given shape and name. Any array will be generated with all zeros for the values. This is a VARIABLE type SDVariable - i.e., must be floating point, and is a trainable parameter. |
SDVariable |
SameDiff.var(@NonNull String name,
@NonNull LongShapeDescriptor shape,
WeightInitScheme weightInitScheme)
Creates a
SDVariable with the given shape and name. The underlying array will be initialized using the specified weight initialization scheme. This is a VARIABLE type SDVariable - i.e., must be floating point, and is a trainable parameter. |
SDVariable |
SameDiff.var(@NonNull String name,
@NonNull VariableType variableType,
WeightInitScheme weightInitScheme,
DataType dataType,
long... shape)
Variable initialization with a specified
WeightInitScheme
This method creates VARIABLE type SDVariable - i.e., must be floating point, and is a trainable parameter. |
SDVariable |
SameDiff.var(@NonNull String name,
@NonNull WeightInitScheme weightInitScheme,
DataType dataType,
long... shape)
Variable initialization with a specified
WeightInitScheme
This method creates VARIABLE type SDVariable - i.e., must be floating point, and is a trainable parameter. |
SDVariable |
SameDiff.var(@NonNull String name,
@NonNull WeightInitScheme weightInitScheme,
long... shape)
Variable initialization with a specified
WeightInitScheme. |
SDVariable |
SameDiff.var(WeightInitScheme weightInitScheme,
DataType dataType,
long... shape)
Creates a
SDVariable with the specified shape and a generated name. |
SDVariable[] |
SameDiff.whileLoop(@NonNull SDVariable[] loopVars,
@NonNull SameDiffSingleLambda cond,
@NonNull SameDiffLambda body)
|
SDVariable[] |
SameDiff.whileLoop(String[] outputNames,
String loopName,
@NonNull SDVariable[] loopVars,
@NonNull SameDiffSingleLambda cond,
@NonNull SameDiffLambda body)
Constructs a while loop using the TensorFlow-style control flow operations (Switch, Merge, Enter, Exit, and NextIteration).
Repeatedly executes body on the loop variables and updates them with the results, until cond evaluates to false.
Note that the cond and body lambdas are called only once, to construct the graph.
|
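The whileLoop contract described above (repeatedly apply the body to the loop variables until the condition is false) can be sketched in plain Java. This is an illustration of the runtime semantics only, not the SameDiff graph-construction API; in SameDiff itself the lambdas are traced once to build the graph, and the `whileLoop` helper here is a hypothetical stand-in.

```java
import java.util.Arrays;
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

// Plain-Java sketch of the whileLoop contract: repeatedly apply body to the
// loop variables while cond holds, then return the final variable values.
public class WhileLoopSketch {
    static double[] whileLoop(double[] loopVars,
                              Predicate<double[]> cond,
                              UnaryOperator<double[]> body) {
        double[] vars = loopVars;
        while (cond.test(vars)) {
            vars = body.apply(vars);   // loop variables are replaced by the body's outputs
        }
        return vars;
    }

    public static void main(String[] args) {
        // Count a single loop variable up to 5.
        double[] out = whileLoop(new double[]{0},
                v -> v[0] < 5,
                v -> new double[]{v[0] + 1});
        System.out.println(Arrays.toString(out)); // [5.0]
    }
}
```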
SDVariable[] |
SameDiff.whileLoop(String loopName,
@NonNull SDVariable[] loopVars,
@NonNull SameDiffSingleLambda cond,
@NonNull SameDiffLambda body)
|
SDVariable |
SameDiff.zero(String name,
DataType dataType,
int... shape)
Create a new variable with the specified shape, with all values initialized to 0.
|
SDVariable |
SameDiff.zero(String name,
DataType dataType,
long... shape)
Create a new variable with the specified shape, with all values initialized to 0.
|
SDVariable |
SameDiff.zero(String name,
int... shape)
|
SDVariable |
SameDiff.zero(String name,
long... shape)
|
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
SameDiff.getVariablesInScope(NameScope scope)
Gets all variables in a given name scope.
|
List<SDVariable> |
SameDiff.getVariablesInScope(String scope)
|
Map<String,SDVariable> |
SameDiff.variableMap()
Return a copy of the internal variable map
|
List<SDVariable> |
SameDiff.variables()
The list of all variables in the graph
|
| Modifier and Type | Method and Description |
|---|---|
SDVariable |
SDVariable.add(SDVariable other)
|
SDVariable |
SDVariable.add(String name,
SDVariable x)
Addition operation: elementwise
this + x. If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
void |
SameDiff.addArgsFor(SDVariable[] variables,
DifferentialFunction function)
Adds incoming arguments for the specified differential function to the graph
|
void |
SDVariable.addControlDependency(SDVariable controlDependency)
Add a control dependency for this variable on the specified variable.
Control dependencies can be used to enforce the execution order. |
void |
SameDiff.addLossVariable(@NonNull SDVariable variable)
|
void |
SameDiff.addOutgoingFor(SDVariable[] variables,
DifferentialFunction function)
Adds outgoing arguments to the graph for the specified DifferentialFunction
Also checks for input arguments and updates the graph adding an appropriate edge when the full graph is declared.
|
SDVariable |
SameDiff.addVariable(SDVariable variable)
Add the specified variable to this SameDiff instance
|
void |
SameDiff.assignArray(@NonNull INDArray arr,
@NonNull SDVariable variable)
Update the constant or variable type SDVariable with the values from the specified
array.
|
void |
SameDiff.associateArrayWithVariable(INDArray arr,
SDVariable variable)
Associate the array with the given variable.
|
SDVariable |
SameDiff.convertToConstant(@NonNull SDVariable variable)
Convert the specified variable to a constant.
|
SDVariable |
SameDiff.convertToVariable(@NonNull SDVariable constant)
Convert the specified variable to a VARIABLE type SDVariable.
This can only be done for constants and placeholders, not ARRAY type variables (which are usually network activations). |
SDVariable[] |
SameDiffFunctionDefinition.define(SameDiff sameDiff,
Map<String,INDArray> inputs,
SDVariable[] variableInputs) |
SDVariable[] |
SameDiffLambda.define(SameDiff sameDiff,
SDVariable[] inputs) |
SDVariable |
SameDiffSingleLambda.define(SameDiff sameDiff,
SDVariable[] inputs) |
SameDiff |
SameDiff.defineFunction(String function,
SameDiffFunctionDefinition functionDefinition,
SDVariable[] variables) |
SDVariable |
SDVariable.div(SDVariable x)
|
SDVariable |
SDVariable.div(String name,
SDVariable x)
Division operation: elementwise
this / x. If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SDVariable.dot(SDVariable other,
int... dimensions)
|
SDVariable |
SDVariable.dot(String name,
SDVariable other,
int... dimensions)
Matrix dot product: out = dot(this,other, dimensions)
|
SDVariable |
SDVariable.eq(SDVariable other)
|
SDVariable |
SDVariable.eq(String name,
SDVariable other)
Equal to operation: elementwise
this == y. If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SameDiffConditional.eval(SameDiff context,
SameDiffFunctionDefinition body,
SDVariable[] inputVars) |
SDVariable |
SDVariable.fdiv(String name,
SDVariable x)
Floor division operation: elementwise
this // x. If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SDVariable.gt(SDVariable other)
|
SDVariable |
SDVariable.gt(String name,
SDVariable other)
Greater than operation: elementwise
this > y. If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDVariable.gte(SDVariable other)
|
SDVariable |
SDVariable.gte(String name,
SDVariable other)
Greater than or equal to operation: elementwise
this >= y. If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
ArgumentInterceptor.intercept(SDVariable argument) |
SDVariable |
SDVariable.lt(SDVariable other)
|
SDVariable |
SDVariable.lt(String name,
SDVariable other)
Less than operation: elementwise
this < y. If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDVariable.lte(SDVariable other)
|
SDVariable |
SDVariable.lte(String name,
SDVariable other)
Less than or equal to operation: elementwise
this <= y. If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDVariable.minus(SDVariable other)
For Kotlin operator interop
|
SDVariable |
SDVariable.mmul(SDVariable other)
|
SDVariable |
SDVariable.mmul(String name,
SDVariable other)
Matrix multiplication: out = mmul(this,other)
|
SDVariable |
SDVariable.mmul(String name,
SDVariable other,
@NonNull MMulTranspose mMulTranspose)
Matrix multiplication: out = mmul(this,other)
|
SDVariable |
SDVariable.mod(String name,
SDVariable x)
Modulo operation: elementwise
this % x. If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
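For the fdiv and mod rows above, Java's built-in `Math.floorDiv` and `Math.floorMod` illustrate the scalar floor-division and modulo semantics. This is a sketch of the elementwise behaviour for a single integer pair, not the ND4J implementation.

```java
public class FloorOps {
    public static void main(String[] args) {
        // Floor division rounds toward negative infinity,
        // unlike Java's truncating integer division operator.
        System.out.println(Math.floorDiv(7, 2));   // 3
        System.out.println(Math.floorDiv(-7, 2));  // -4 (truncating -7 / 2 gives -3)

        // Floor modulo: the result takes the sign of the divisor.
        System.out.println(Math.floorMod(7, 2));   // 1
        System.out.println(Math.floorMod(-7, 2));  // 1 (Java's -7 % 2 gives -1)
    }
}
```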
SDVariable |
SDVariable.mul(SDVariable x)
|
SDVariable |
SDVariable.mul(String name,
SDVariable x)
Multiplication operation: elementwise
this * x. If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SDVariable.neq(SDVariable other)
|
SDVariable |
SDVariable.neq(String name,
SDVariable other)
Not equal to operation: elementwise
this != y. If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDVariable.permute(SDVariable dimensions) |
SDVariable |
SDVariable.plus(SDVariable other)
For Kotlin operator interop
|
SDVariable |
SDVariable.rdiv(SDVariable sameDiffVariable)
|
SDVariable |
SDVariable.rdiv(String name,
SDVariable x)
Reverse division operation: elementwise
x / this. If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
void |
SameDiff.replaceArgFor(int i,
@NonNull SDVariable newArg,
@NonNull DifferentialFunction function)
Replaces the argument at index i with newArg for the specified function.
Does not apply (or remove) any registered ArgumentInterceptor.
|
SDVariable |
SDVariable.reshape(SDVariable newShape)
Reshape the current variable to the specified (dynamic) shape.
|
SDVariable |
SDVariable.rsub(SDVariable x)
|
SDVariable |
SDVariable.rsub(String name,
SDVariable x)
Reverse subtraction operation: elementwise
x - this. If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
void |
SameDiff.setGradientForVariableName(String variableName,
SDVariable variable)
Assign a SDVariable to represent the gradient of the SDVariable with the specified name
|
void |
SameDiff.setLossVariables(SDVariable... lossVariables)
|
SDVariable |
SDVariable.squaredDifference(SDVariable x)
|
SDVariable |
SDVariable.squaredDifference(String name,
SDVariable x)
Squared difference operation:
(this - x)^2 |
SDVariable |
SDVariable.sub(SDVariable x)
|
SDVariable |
SDVariable.sub(String name,
SDVariable x)
Subtraction operation: elementwise
this - x. If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SDVariable.times(SDVariable other)
For Kotlin operator interop
|
TrainingConfig.Builder |
TrainingConfig.Builder.trainEvaluation(@NonNull SDVariable variable,
int labelIndex,
IEvaluation... evaluations)
Add requested History training evaluations for a param/variable.
|
SDVariable |
SameDiff.updateVariableNameAndReference(SameDiffOp opToRename,
SDVariable varToUpdate,
String newVarName)
Updates the variable name property on the passed in variable, the reference in samediff, and returns the variable.
|
SDVariable |
SameDiff.updateVariableNameAndReference(SDVariable varToUpdate,
String newVarName)
Updates the variable name property on the passed in variable, the reference in samediff, and returns the variable.
|
SDVariable[] |
SameDiff.updateVariableNamesAndReferences(SDVariable[] variablesToUpdate,
String[] newVariableNames)
Updates the variable name property on the passed-in variables and their references in samediff, and returns the variables.
|
TrainingConfig.Builder |
TrainingConfig.Builder.validationEvaluation(@NonNull SDVariable variable,
int labelIndex,
IEvaluation... evaluations)
Add requested History validation evaluations for a param/variable.
|
SDVariable |
SameDiff.var(@NonNull SDVariable v)
Initialize a
SDVariable reference tying this variable to this samediff instance. |
SDVariable[] |
SameDiff.whileLoop(@NonNull SDVariable[] loopVars,
@NonNull SameDiffSingleLambda cond,
@NonNull SameDiffLambda body)
|
SDVariable[] |
SameDiff.whileLoop(String[] outputNames,
String loopName,
@NonNull SDVariable[] loopVars,
@NonNull SameDiffSingleLambda cond,
@NonNull SameDiffLambda body)
Constructs a while loop using the TensorFlow-style control flow operations (Switch, Merge, Enter, Exit, and NextIteration).
Repeatedly executes body on the loop variables and updates them with the results, until cond evaluates to false.
Note that the cond and body lambdas are called only once, to construct the graph.
|
SDVariable[] |
SameDiff.whileLoop(String loopName,
@NonNull SDVariable[] loopVars,
@NonNull SameDiffSingleLambda cond,
@NonNull SameDiffLambda body)
|
| Modifier and Type | Method and Description |
|---|---|
void |
SameDiff.convertToConstants(List<SDVariable> variables)
Convert all of the specified variables to constants.
|
| Modifier and Type | Method and Description |
|---|---|
EvaluationConfig |
EvaluationConfig.evaluate(@NonNull SDVariable variable,
IEvaluation... evaluations)
|
EvaluationConfig |
EvaluationConfig.evaluate(@NonNull SDVariable variable,
int labelIndex,
IEvaluation... evaluations)
|
BatchOutputConfig |
BatchOutputConfig.input(@NonNull SDVariable variable,
@NonNull INDArray placeholder)
|
EvaluationConfig |
EvaluationConfig.labelIndex(@NonNull SDVariable variable,
int labelIndex)
|
BatchOutputConfig |
BatchOutputConfig.output(SDVariable... outputs)
Add required outputs
|
OutputConfig |
OutputConfig.output(SDVariable... outputs)
Add required outputs
|
| Modifier and Type | Method and Description |
|---|---|
SDVariable |
DefaultSameDiffConditional.eval(SameDiff context,
SameDiffFunctionDefinition body,
SDVariable[] inputVars) |
| Modifier and Type | Method and Description |
|---|---|
SDVariable |
DefaultSameDiffConditional.eval(SameDiff context,
SameDiffFunctionDefinition body,
SDVariable[] inputVars) |
| Modifier and Type | Field and Description |
|---|---|
protected SDVariable |
Variable.gradient |
protected SDVariable |
Variable.variable |
| Modifier and Type | Method and Description |
|---|---|
protected INDArray |
InferenceSession.getArray(SDVariable sdv,
Collection<AbstractSession.VarId> opInputs,
Collection<AbstractSession.VarId> allIterInputs) |
| Modifier and Type | Method and Description |
|---|---|
SDVariable |
SDMath.abs(SDVariable x)
Elementwise absolute value operation: out = abs(x)
|
SDVariable |
SDMath.abs(String name,
SDVariable x)
Elementwise absolute value operation: out = abs(x)
|
SDVariable |
SDLoss.absoluteDifference(SDVariable label,
SDVariable predictions,
SDVariable weights)
Absolute difference loss:
sum_i abs( label[i] - predictions[i] ) |
SDVariable |
SDLoss.absoluteDifference(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Absolute difference loss:
sum_i abs( label[i] - predictions[i] ) |
SDVariable |
SDLoss.absoluteDifference(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights)
Absolute difference loss:
sum_i abs( label[i] - predictions[i] ) |
SDVariable |
SDLoss.absoluteDifference(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Absolute difference loss:
sum_i abs( label[i] - predictions[i] ) |
SDVariable |
SDMath.acos(SDVariable x)
Elementwise acos (arccosine, inverse cosine) operation: out = arccos(x)
|
SDVariable |
SDMath.acos(String name,
SDVariable x)
Elementwise acos (arccosine, inverse cosine) operation: out = arccos(x)
|
SDVariable |
SDMath.acosh(SDVariable x)
Elementwise acosh (inverse hyperbolic cosine) function: out = acosh(x)
|
SDVariable |
SDMath.acosh(String name,
SDVariable x)
Elementwise acosh (inverse hyperbolic cosine) function: out = acosh(x)
|
SDVariable |
SDMath.add(SDVariable x,
double value)
Scalar add operation, out = in + scalar
|
SDVariable |
SDMath.add(SDVariable x,
SDVariable y)
Pairwise addition operation, out = x + y
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
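The broadcasting rule noted above (e.g. shapes [1,10] and [5,10] producing [5,10]) can be sketched in plain Java for small shapes [1,3] and [2,3]. This is an illustration of the NumPy-style rule only, not the ND4J code path.

```java
import java.util.Arrays;

// Broadcast-add a [1][n] row vector across every row of an [m][n] matrix:
// the dimension of size 1 is conceptually repeated along the larger dimension.
public class BroadcastAdd {
    static double[][] add(double[][] x, double[][] row) {
        double[][] out = new double[x.length][x[0].length];
        for (int i = 0; i < x.length; i++)
            for (int j = 0; j < x[0].length; j++)
                out[i][j] = x[i][j] + row[0][j]; // row index 0 reused for every i
        return out;
    }

    public static void main(String[] args) {
        double[][] x = {{1, 2, 3}, {4, 5, 6}};   // shape [2,3]
        double[][] row = {{10, 20, 30}};         // shape [1,3]
        System.out.println(Arrays.deepToString(add(x, row)));
        // [[11.0, 22.0, 33.0], [14.0, 25.0, 36.0]]
    }
}
```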
SDVariable |
SDMath.add(String name,
SDVariable x,
double value)
Scalar add operation, out = in + scalar
|
SDVariable |
SDMath.add(String name,
SDVariable x,
SDVariable y)
Pairwise addition operation, out = x + y
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDImage.adjustContrast(SDVariable in,
double factor)
Adjusts contrast of RGB or grayscale images.
|
SDVariable |
SDImage.adjustContrast(String name,
SDVariable in,
double factor)
Adjusts contrast of RGB or grayscale images.
|
SDVariable |
SDImage.adjustHue(SDVariable in,
double delta)
Adjust hue of RGB image
|
SDVariable |
SDImage.adjustHue(String name,
SDVariable in,
double delta)
Adjust hue of RGB image
|
SDVariable |
SDImage.adjustSaturation(SDVariable in,
double factor)
Adjust saturation of RGB images
|
SDVariable |
SDImage.adjustSaturation(String name,
SDVariable in,
double factor)
Adjust saturation of RGB images
|
SDVariable |
SDBaseOps.all(SDVariable x,
int... dimensions)
Boolean and array reduction operation, optionally along specified dimensions
|
SDVariable |
SDBaseOps.all(String name,
SDVariable x,
int... dimensions)
Boolean and array reduction operation, optionally along specified dimensions
|
SDVariable |
SDMath.amax(SDVariable in,
int... dimensions)
Absolute max array reduction operation, optionally along specified dimensions: out = max(abs(x))
|
SDVariable |
SDMath.amax(String name,
SDVariable in,
int... dimensions)
Absolute max array reduction operation, optionally along specified dimensions: out = max(abs(x))
|
SDVariable |
SDMath.amean(SDVariable in,
int... dimensions)
Absolute mean array reduction operation, optionally along specified dimensions: out = mean(abs(x))
|
SDVariable |
SDMath.amean(String name,
SDVariable in,
int... dimensions)
Absolute mean array reduction operation, optionally along specified dimensions: out = mean(abs(x))
|
SDVariable |
SDMath.amin(SDVariable in,
int... dimensions)
Absolute min array reduction operation, optionally along specified dimensions: out = min(abs(x))
|
SDVariable |
SDMath.amin(String name,
SDVariable in,
int... dimensions)
Absolute min array reduction operation, optionally along specified dimensions: out = min(abs(x))
|
SDVariable |
SDBitwise.and(SDVariable x,
SDVariable y)
Bitwise AND operation.
|
SDVariable |
SDMath.and(SDVariable x,
SDVariable y)
Boolean AND operation: elementwise (x != 0) && (y != 0)
If x and y arrays have equal shape, the output shape is the same as the inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBitwise.and(String name,
SDVariable x,
SDVariable y)
Bitwise AND operation.
|
SDVariable |
SDMath.and(String name,
SDVariable x,
SDVariable y)
Boolean AND operation: elementwise (x != 0) && (y != 0)
If x and y arrays have equal shape, the output shape is the same as the inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBaseOps.any(SDVariable x,
int... dimensions)
Boolean or array reduction operation, optionally along specified dimensions
|
SDVariable |
SDBaseOps.any(String name,
SDVariable x,
int... dimensions)
Boolean or array reduction operation, optionally along specified dimensions
|
SDVariable |
SDBaseOps.argmax(SDVariable in,
boolean keepDims,
int... dimensions)
Argmax array reduction operation, optionally along specified dimensions.
Output values are the index of the maximum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
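The argmax reduction described above can be sketched in plain Java for a 2D input reduced along dimension 1: for each row, the output is the column index of that row's maximum value. This is a sketch of the reduction's semantics, not the ND4J implementation (keepDims handling is omitted).

```java
// Plain-Java argmax along dimension 1 of a [rows][cols] matrix.
public class ArgmaxSketch {
    static int[] argmaxDim1(double[][] in) {
        int[] out = new int[in.length];
        for (int i = 0; i < in.length; i++) {
            int best = 0;
            for (int j = 1; j < in[i].length; j++)
                if (in[i][j] > in[i][best]) best = j;  // keep first index on ties
            out[i] = best;
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] in = {{0.1, 0.9, 0.3}, {2.0, 1.0, 3.0}};
        System.out.println(java.util.Arrays.toString(argmaxDim1(in))); // [1, 2]
    }
}
```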
SDVariable |
SDBaseOps.argmax(SDVariable in,
int... dimensions)
Argmax array reduction operation, optionally along specified dimensions.
Output values are the index of the maximum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.argmax(String name,
SDVariable in,
boolean keepDims,
int... dimensions)
Argmax array reduction operation, optionally along specified dimensions.
Output values are the index of the maximum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.argmax(String name,
SDVariable in,
int... dimensions)
Argmax array reduction operation, optionally along specified dimensions.
Output values are the index of the maximum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.argmin(SDVariable in,
boolean keepDims,
int... dimensions)
Argmin array reduction operation, optionally along specified dimensions.
Output values are the index of the minimum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.argmin(SDVariable in,
int... dimensions)
Argmin array reduction operation, optionally along specified dimensions.
Output values are the index of the minimum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.argmin(String name,
SDVariable in,
boolean keepDims,
int... dimensions)
Argmin array reduction operation, optionally along specified dimensions.
Output values are the index of the minimum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.argmin(String name,
SDVariable in,
int... dimensions)
Argmin array reduction operation, optionally along specified dimensions.
Output values are the index of the minimum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.asin(SDVariable x)
Elementwise asin (arcsin, inverse sine) operation: out = arcsin(x)
|
SDVariable |
SDMath.asin(String name,
SDVariable x)
Elementwise asin (arcsin, inverse sine) operation: out = arcsin(x)
|
SDVariable |
SDMath.asinh(SDVariable x)
Elementwise asinh (inverse hyperbolic sine) function: out = asinh(x)
|
SDVariable |
SDMath.asinh(String name,
SDVariable x)
Elementwise asinh (inverse hyperbolic sine) function: out = asinh(x)
|
SDVariable |
SDMath.asum(SDVariable in,
int... dimensions)
Absolute sum array reduction operation, optionally along specified dimensions: out = sum(abs(x))
|
SDVariable |
SDMath.asum(String name,
SDVariable in,
int... dimensions)
Absolute sum array reduction operation, optionally along specified dimensions: out = sum(abs(x))
|
SDVariable |
SDMath.atan(SDVariable x)
Elementwise atan (arctangent, inverse tangent) operation: out = arctangent(x)
|
SDVariable |
SDMath.atan(String name,
SDVariable x)
Elementwise atan (arctangent, inverse tangent) operation: out = arctangent(x)
|
SDVariable |
SDMath.atan2(SDVariable y,
SDVariable x)
Elementwise atan (arctangent, inverse tangent) operation: out = atan2(x,y).
Similar to atan(y/x), but the signs of x and y are used to determine the quadrant of the result |
SDVariable |
SDMath.atan2(String name,
SDVariable y,
SDVariable x)
Elementwise atan (arctangent, inverse tangent) operation: out = atan2(x,y).
Similar to atan(y/x), but the signs of x and y are used to determine the quadrant of the result |
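Java's built-in `Math.atan2` shows why the signs matter: atan(y/x) alone cannot distinguish opposite quadrants, since (1,1) and (-1,-1) give the same ratio. This is a plain-Java illustration of the scalar behaviour, not the ND4J op.

```java
public class Atan2Quadrants {
    public static void main(String[] args) {
        // atan of the ratio collapses opposite quadrants onto the same angle.
        System.out.println(Math.atan(1.0 / 1.0));    // ~0.785 (pi/4)
        System.out.println(Math.atan(-1.0 / -1.0));  // ~0.785 (same value!)

        // atan2 inspects the signs of both arguments to pick the quadrant.
        System.out.println(Math.atan2(1, 1));    // ~0.785  (pi/4)
        System.out.println(Math.atan2(-1, -1));  // ~-2.356 (-3*pi/4)
    }
}
```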
SDVariable |
SDMath.atanh(SDVariable x)
Elementwise atanh (inverse hyperbolic tangent) function: out = atanh(x)
|
SDVariable |
SDMath.atanh(String name,
SDVariable x)
Elementwise atanh (inverse hyperbolic tangent) function: out = atanh(x)
|
SDVariable |
SDCNN.avgPooling2d(SDVariable input,
Pooling2DConfig Pooling2DConfig)
2D Convolution layer operation - average pooling 2d
|
SDVariable |
SDCNN.avgPooling2d(String name,
SDVariable input,
Pooling2DConfig Pooling2DConfig)
2D Convolution layer operation - average pooling 2d
|
SDVariable |
SDCNN.avgPooling3d(SDVariable input,
Pooling3DConfig Pooling3DConfig)
3D convolution layer operation - average pooling 3d
|
SDVariable |
SDCNN.avgPooling3d(String name,
SDVariable input,
Pooling3DConfig Pooling3DConfig)
3D convolution layer operation - average pooling 3d
|
SDVariable[] |
SDBaseOps.batchMmul(SDVariable[] inputsA,
SDVariable... inputsB)
Matrix multiply a batch of matrices.
|
SDVariable[] |
SDBaseOps.batchMmul(SDVariable[] inputsA,
SDVariable[] inputsB,
boolean transposeA,
boolean transposeB)
Matrix multiply a batch of matrices.
|
SDVariable[] |
SDBaseOps.batchMmul(String[] names,
SDVariable[] inputsA,
SDVariable... inputsB)
Matrix multiply a batch of matrices.
|
SDVariable[] |
SDBaseOps.batchMmul(String[] names,
SDVariable[] inputsA,
SDVariable[] inputsB,
boolean transposeA,
boolean transposeB)
Matrix multiply a batch of matrices.
|
SDVariable |
SDNN.batchNorm(SDVariable input,
SDVariable mean,
SDVariable variance,
SDVariable gamma,
SDVariable beta,
double epsilon,
int... axis)
|
SDVariable |
SDNN.batchNorm(String name,
SDVariable input,
SDVariable mean,
SDVariable variance,
SDVariable gamma,
SDVariable beta,
double epsilon,
int... axis)
|
SDVariable |
SDCNN.batchToSpace(SDVariable x,
int[] blocks,
int[] croppingTop,
int... croppingBottom)
Convolution 2d layer batch to space operation on 4d input.
Reduces the input batch dimension by rearranging data into larger spatial dimensions |
SDVariable |
SDCNN.batchToSpace(String name,
SDVariable x,
int[] blocks,
int[] croppingTop,
int... croppingBottom)
Convolution 2d layer batch to space operation on 4d input.
Reduces input batch dimension by rearranging data into a larger spatial dimensions |
SDVariable |
SDRandom.bernoulli(double p,
DataType datatype,
long... shape)
Generate a new random INDArray, where values are randomly sampled according to a Bernoulli distribution,
with the specified probability. |
SDVariable |
SDRandom.bernoulli(String name,
double p,
DataType datatype,
long... shape)
Generate a new random INDArray, where values are randomly sampled according to a Bernoulli distribution,
with the specified probability. |
SDVariable |
SDNN.biasAdd(SDVariable input,
SDVariable bias,
boolean nchw)
Bias addition operation: a special case of addition, typically used with CNN 4D activations and a 1D bias vector
|
SDVariable |
SDNN.biasAdd(String name,
SDVariable input,
SDVariable bias,
boolean nchw)
Bias addition operation: a special case of addition, typically used with CNN 4D activations and a 1D bias vector
|
SDVariable |
SDRandom.binomial(int nTrials,
double p,
DataType datatype,
long... shape)
Generate a new random INDArray, where values are randomly sampled according to a Binomial distribution,
with the specified number of trials and probability. |
SDVariable |
SDRandom.binomial(String name,
int nTrials,
double p,
DataType datatype,
long... shape)
Generate a new random INDArray, where values are randomly sampled according to a Binomial distribution,
with the specified number of trials and probability. |
SDVariable |
SDBitwise.bitRotl(SDVariable x,
SDVariable shift)
Roll integer bits to the left, i.e.
|
SDVariable |
SDBitwise.bitRotl(String name,
SDVariable x,
SDVariable shift)
Roll integer bits to the left, i.e.
|
SDVariable |
SDBitwise.bitRotr(SDVariable x,
SDVariable shift)
Roll integer bits to the right, i.e.
|
SDVariable |
SDBitwise.bitRotr(String name,
SDVariable x,
SDVariable shift)
Roll integer bits to the right, i.e.
|
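The bit-rotate ops above correspond to what Java's built-in `Integer.rotateLeft` and `Integer.rotateRight` do for a single int: unlike a shift, the bits that fall off one end wrap around to the other. A plain-Java sketch of the scalar semantics (not the ND4J op):

```java
public class BitRotate {
    public static void main(String[] args) {
        int x = 0x80000001; // binary: 1000...0001 (high bit and low bit set)

        // Rotate left by 1: the high bit wraps around to become bit 0.
        System.out.println(Integer.toBinaryString(Integer.rotateLeft(x, 1)));  // 11

        // Rotate right by 1: the low bit wraps around to become bit 31.
        System.out.println(Integer.toBinaryString(Integer.rotateRight(x, 1)));
        // 11000000000000000000000000000000
    }
}
```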
SDVariable |
SDBitwise.bitsHammingDistance(SDVariable x,
SDVariable y)
Bitwise Hamming distance reduction over all elements of both input arrays.
For example, if x=01100000 and y=10100000 then the bitwise Hamming distance is 2 (due to differences at positions 0 and 1) Inputs must satisfy the following constraints: Must be same types: isSameType(x, y) |
SDVariable |
SDBitwise.bitsHammingDistance(String name,
SDVariable x,
SDVariable y)
Bitwise Hamming distance reduction over all elements of both input arrays.
For example, if x=01100000 and y=10100000 then the bitwise Hamming distance is 2 (due to differences at positions 0 and 1) Inputs must satisfy the following constraints: Must be same types: isSameType(x, y) |
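The bitwise Hamming distance between two integers is simply the population count of their XOR. A plain-Java sketch of the per-element computation (not the ND4J reduction over whole arrays):

```java
public class HammingDemo {
    // Count of bit positions at which x and y differ.
    static int hamming(int x, int y) {
        return Integer.bitCount(x ^ y);
    }

    public static void main(String[] args) {
        // 01100000 vs 10100000: the two most significant shown bits differ.
        System.out.println(hamming(0b01100000, 0b10100000)); // 2
    }
}
```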
SDVariable |
SDBitwise.bitShift(SDVariable x,
SDVariable shift)
Shift integer bits to the left, i.e.
|
SDVariable |
SDMath.bitShift(SDVariable x,
SDVariable shift)
Bit shift operation
|
SDVariable |
SDBitwise.bitShift(String name,
SDVariable x,
SDVariable shift)
Shift integer bits to the left, i.e.
|
SDVariable |
SDMath.bitShift(String name,
SDVariable x,
SDVariable shift)
Bit shift operation
|
SDVariable |
SDBitwise.bitShiftRight(SDVariable x,
SDVariable shift)
Shift integer bits to the right, i.e.
|
SDVariable |
SDMath.bitShiftRight(SDVariable x,
SDVariable shift)
Right bit shift operation
|
SDVariable |
SDBitwise.bitShiftRight(String name,
SDVariable x,
SDVariable shift)
Shift integer bits to the right, i.e.
|
SDVariable |
SDMath.bitShiftRight(String name,
SDVariable x,
SDVariable shift)
Right bit shift operation
|
SDVariable |
SDMath.bitShiftRotl(SDVariable x,
SDVariable shift)
Cyclic bit shift operation
|
SDVariable |
SDMath.bitShiftRotl(String name,
SDVariable x,
SDVariable shift)
Cyclic bit shift operation
|
SDVariable |
SDMath.bitShiftRotr(SDVariable x,
SDVariable shift)
Cyclic right shift operation
|
SDVariable |
SDMath.bitShiftRotr(String name,
SDVariable x,
SDVariable shift)
Cyclic right shift operation
|
SDVariable |
SDBaseOps.castTo(SDVariable arg,
DataType datatype)
Cast the array to a new datatype - for example, Integer -> Float
|
SDVariable |
SDBaseOps.castTo(String name,
SDVariable arg,
DataType datatype)
Cast the array to a new datatype - for example, Integer -> Float
|
SDVariable |
SDMath.ceil(SDVariable x)
Element-wise ceiling function: out = ceil(x).
Rounds each value up to the nearest integer value (if not already an integer) |
SDVariable |
SDMath.ceil(String name,
SDVariable x)
Element-wise ceiling function: out = ceil(x).
Rounds each value up to the nearest integer value (if not already an integer) |
SDVariable |
SDLinalg.cholesky(SDVariable input)
Computes the Cholesky decomposition of one or more square matrices.
|
SDVariable |
SDLinalg.cholesky(String name,
SDVariable input)
Computes the Cholesky decomposition of one or more square matrices.
|
SDVariable |
SDMath.clipByAvgNorm(SDVariable x,
double clipValue,
int... dimensions)
Clips tensor values to a maximum average L2-norm.
|
SDVariable |
SDMath.clipByAvgNorm(String name,
SDVariable x,
double clipValue,
int... dimensions)
Clips tensor values to a maximum average L2-norm.
|
SDVariable |
SDMath.clipByNorm(SDVariable x,
double clipValue,
int... dimensions)
Clipping by L2 norm, optionally along dimension(s)
if l2Norm(x,dimension) < clipValue, then input is returned unmodified Otherwise, out[i] = in[i] * clipValue / l2Norm(in, dimensions) where each value is clipped according to the corresponding l2Norm along the specified dimensions |
SDVariable |
SDMath.clipByNorm(String name,
SDVariable x,
double clipValue,
int... dimensions)
Clipping by L2 norm, optionally along dimension(s)
if l2Norm(x,dimension) < clipValue, then input is returned unmodified Otherwise, out[i] = in[i] * clipValue / l2Norm(in, dimensions) where each value is clipped according to the corresponding l2Norm along the specified dimensions |
SDVariable |
SDMath.clipByValue(SDVariable x,
double clipValueMin,
double clipValueMax)
Element-wise clipping function:
out[i] = in[i] if in[i] >= clipValueMin and in[i] <= clipValueMax out[i] = clipValueMin if in[i] < clipValueMin out[i] = clipValueMax if in[i] > clipValueMax |
SDVariable |
SDMath.clipByValue(String name,
SDVariable x,
double clipValueMin,
double clipValueMax)
Element-wise clipping function:
out[i] = in[i] if in[i] >= clipValueMin and in[i] <= clipValueMax out[i] = clipValueMin if in[i] < clipValueMin out[i] = clipValueMax if in[i] > clipValueMax |
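The element-wise clipping rule above can be sketched in plain Python (illustrative only, not the SameDiff API):

```python
def clip_by_value(values, clip_min, clip_max):
    """out[i] = min(max(in[i], clip_min), clip_max)."""
    return [min(max(v, clip_min), clip_max) for v in values]

print(clip_by_value([-2.0, 0.5, 3.0], 0.0, 1.0))  # [0.0, 0.5, 1.0]
```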
SDVariable |
SDCNN.col2Im(SDVariable in,
Conv2DConfig Conv2DConfig)
col2im operation for use in 2D convolution operations.
|
SDVariable |
SDCNN.col2Im(String name,
SDVariable in,
Conv2DConfig Conv2DConfig)
col2im operation for use in 2D convolution operations.
|
SDVariable |
SDBaseOps.concat(int dimension,
SDVariable... inputs)
Concatenate a set of inputs along the specified dimension.
Note that inputs must have identical rank and identical dimensions, other than the dimension to stack on. For example, if 2 inputs have shape [a, x, c] and [a, y, c] and dimension = 1, then the output has shape [a, x+y, c] Inputs must satisfy the following constraints: Input arrays must all be the same datatype: isSameType(inputs) |
SDVariable |
SDBaseOps.concat(String name,
int dimension,
SDVariable... inputs)
Concatenate a set of inputs along the specified dimension.
Note that inputs must have identical rank and identical dimensions, other than the dimension to stack on. For example, if 2 inputs have shape [a, x, c] and [a, y, c] and dimension = 1, then the output has shape [a, x+y, c] Inputs must satisfy the following constraints: Input arrays must all be the same datatype: isSameType(inputs) |
SDVariable |
SDMath.confusionMatrix(SDVariable labels,
SDVariable pred,
DataType dataType)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values. |
SDVariable |
SDMath.confusionMatrix(SDVariable labels,
SDVariable pred,
int numClasses)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values. For example, if labels = [0, 1, 1], predicted = [0, 2, 1], and numClasses=4 then output is: [1, 0, 0, 0] [0, 1, 1, 0] [0, 0, 0, 0] [0, 0, 0, 0] |
SDVariable |
SDMath.confusionMatrix(SDVariable labels,
SDVariable pred,
SDVariable weights)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values. |
SDVariable |
SDMath.confusionMatrix(SDVariable labels,
SDVariable pred,
SDVariable weights,
int numClasses)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values. For example, if labels = [0, 1, 1], predicted = [0, 2, 1], numClasses = 4, and weights = [1, 2, 3] [1, 0, 0, 0] [0, 3, 2, 0] [0, 0, 0, 0] [0, 0, 0, 0] |
SDVariable |
SDMath.confusionMatrix(String name,
SDVariable labels,
SDVariable pred,
DataType dataType)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values. |
SDVariable |
SDMath.confusionMatrix(String name,
SDVariable labels,
SDVariable pred,
int numClasses)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values. For example, if labels = [0, 1, 1], predicted = [0, 2, 1], and numClasses=4 then output is: [1, 0, 0, 0] [0, 1, 1, 0] [0, 0, 0, 0] [0, 0, 0, 0] |
SDVariable |
SDMath.confusionMatrix(String name,
SDVariable labels,
SDVariable pred,
SDVariable weights)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values. |
SDVariable |
SDMath.confusionMatrix(String name,
SDVariable labels,
SDVariable pred,
SDVariable weights,
int numClasses)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values. For example, if labels = [0, 1, 1], predicted = [0, 2, 1], numClasses = 4, and weights = [1, 2, 3] [1, 0, 0, 0] [0, 3, 2, 0] [0, 0, 0, 0] [0, 0, 0, 0] |
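The worked example above (labels = [0, 1, 1], predicted = [0, 2, 1], numClasses = 4, weights = [1, 2, 3]) can be reproduced with a plain-Python sketch of the accumulation rule; the function name here is illustrative, not the SameDiff API:

```python
def confusion_matrix(labels, pred, num_classes, weights=None):
    """m[label][prediction] accumulates 1 (or the per-example weight)."""
    m = [[0] * num_classes for _ in range(num_classes)]
    for i, (l, p) in enumerate(zip(labels, pred)):
        m[l][p] += weights[i] if weights is not None else 1
    return m

print(confusion_matrix([0, 1, 1], [0, 2, 1], 4, weights=[1, 2, 3]))
# [[1, 0, 0, 0], [0, 3, 2, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
```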
SDVariable |
SDCNN.conv1d(SDVariable input,
SDVariable weights,
Conv1DConfig Conv1DConfig)
Conv1d operation.
|
SDVariable |
SDCNN.conv1d(SDVariable input,
SDVariable weights,
SDVariable bias,
Conv1DConfig Conv1DConfig)
Conv1d operation.
|
SDVariable |
SDCNN.conv1d(String name,
SDVariable input,
SDVariable weights,
Conv1DConfig Conv1DConfig)
Conv1d operation.
|
SDVariable |
SDCNN.conv1d(String name,
SDVariable input,
SDVariable weights,
SDVariable bias,
Conv1DConfig Conv1DConfig)
Conv1d operation.
|
SDVariable |
SDCNN.conv2d(SDVariable layerInput,
SDVariable weights,
Conv2DConfig Conv2DConfig)
2D Convolution operation with optional bias
|
SDVariable |
SDCNN.conv2d(SDVariable layerInput,
SDVariable weights,
SDVariable bias,
Conv2DConfig Conv2DConfig)
2D Convolution operation with optional bias
|
SDVariable |
SDCNN.conv2d(String name,
SDVariable layerInput,
SDVariable weights,
Conv2DConfig Conv2DConfig)
2D Convolution operation with optional bias
|
SDVariable |
SDCNN.conv2d(String name,
SDVariable layerInput,
SDVariable weights,
SDVariable bias,
Conv2DConfig Conv2DConfig)
2D Convolution operation with optional bias
|
SDVariable |
SDCNN.conv3d(SDVariable input,
SDVariable weights,
Conv3DConfig Conv3DConfig)
Convolution 3D operation with optional bias
|
SDVariable |
SDCNN.conv3d(SDVariable input,
SDVariable weights,
SDVariable bias,
Conv3DConfig Conv3DConfig)
Convolution 3D operation with optional bias
|
SDVariable |
SDCNN.conv3d(String name,
SDVariable input,
SDVariable weights,
Conv3DConfig Conv3DConfig)
Convolution 3D operation with optional bias
|
SDVariable |
SDCNN.conv3d(String name,
SDVariable input,
SDVariable weights,
SDVariable bias,
Conv3DConfig Conv3DConfig)
Convolution 3D operation with optional bias
|
SDVariable |
SDMath.cos(SDVariable x)
Elementwise cosine operation: out = cos(x)
|
SDVariable |
SDMath.cos(String name,
SDVariable x)
Elementwise cosine operation: out = cos(x)
|
SDVariable |
SDMath.cosh(SDVariable x)
Elementwise cosh (hyperbolic cosine) operation: out = cosh(x)
|
SDVariable |
SDMath.cosh(String name,
SDVariable x)
Elementwise cosh (hyperbolic cosine) operation: out = cosh(x)
|
SDVariable |
SDMath.cosineDistance(SDVariable x,
SDVariable y,
int... dimensions)
Cosine distance reduction operation.
|
SDVariable |
SDLoss.cosineDistance(SDVariable label,
SDVariable predictions,
SDVariable weights,
int dimension)
Cosine distance loss:
1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i], which is equivalent to cosine distance when both the predictions and labels are normalized. Note: This loss function assumes that both the predictions and labels are normalized to have unit l2 norm. If this is not the case, you should normalize them first by dividing by norm2(String, SDVariable, boolean, int...) along the cosine distance dimension (with keepDims=true). |
SDVariable |
SDLoss.cosineDistance(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
int dimension)
Cosine distance loss:
1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i], which is equivalent to cosine distance when both the predictions and labels are normalized. Note: This loss function assumes that both the predictions and labels are normalized to have unit l2 norm. If this is not the case, you should normalize them first by dividing by norm2(String, SDVariable, boolean, int...) along the cosine distance dimension (with keepDims=true). |
SDVariable |
SDMath.cosineDistance(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Cosine distance reduction operation.
|
SDVariable |
SDLoss.cosineDistance(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
int dimension)
Cosine distance loss:
1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i], which is equivalent to cosine distance when both the predictions and labels are normalized. Note: This loss function assumes that both the predictions and labels are normalized to have unit l2 norm. If this is not the case, you should normalize them first by dividing by norm2(String, SDVariable, boolean, int...) along the cosine distance dimension (with keepDims=true). |
SDVariable |
SDLoss.cosineDistance(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
int dimension)
Cosine distance loss:
1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i], which is equivalent to cosine distance when both the predictions and labels are normalized. Note: This loss function assumes that both the predictions and labels are normalized to have unit l2 norm. If this is not the case, you should normalize them first by dividing by norm2(String, SDVariable, boolean, int...) along the cosine distance dimension (with keepDims=true). |
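The loss above reduces to 1 - cosineSimilarity when both inputs are unit-norm; a plain-Python sketch for a single pair of vectors (illustrative only, not the SameDiff API):

```python
import math

def cosine_distance(x, y):
    """1 - (x . y) / (|x| * |y|); equals 1 - sum_i x[i]*y[i] for unit-norm inputs."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return 1.0 - dot / (nx * ny)

print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0 (orthogonal vectors)
```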
SDVariable |
SDMath.cosineSimilarity(SDVariable x,
SDVariable y,
int... dimensions)
Cosine similarity pairwise reduction operation.
|
SDVariable |
SDMath.cosineSimilarity(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Cosine similarity pairwise reduction operation.
|
SDVariable |
SDMath.countNonZero(SDVariable in,
int... dimensions)
Count non-zero array reduction operation, optionally along specified dimensions: out = count(x != 0)
|
SDVariable |
SDMath.countNonZero(String name,
SDVariable in,
int... dimensions)
Count non-zero array reduction operation, optionally along specified dimensions: out = count(x != 0)
|
SDVariable |
SDMath.countZero(SDVariable in,
int... dimensions)
Count zero array reduction operation, optionally along specified dimensions: out = count(x == 0)
|
SDVariable |
SDMath.countZero(String name,
SDVariable in,
int... dimensions)
Count zero array reduction operation, optionally along specified dimensions: out = count(x == 0)
|
SDVariable |
SDNN.cReLU(SDVariable x)
Concatenates a ReLU which selects only the positive part of the activation with a ReLU which selects only the negative part of the activation.
|
SDVariable |
SDNN.cReLU(String name,
SDVariable x)
Concatenates a ReLU which selects only the positive part of the activation with a ReLU which selects only the negative part of the activation.
|
SDVariable |
SDImage.cropAndResize(SDVariable image,
SDVariable cropBoxes,
SDVariable boxIndices,
SDVariable cropOutSize)
Given an input image and some crop boxes, extract out the image subsets and resize them to the specified size.
|
SDVariable |
SDImage.cropAndResize(SDVariable image,
SDVariable cropBoxes,
SDVariable boxIndices,
SDVariable cropOutSize,
double extrapolationValue)
Given an input image and some crop boxes, extract out the image subsets and resize them to the specified size.
|
SDVariable |
SDImage.cropAndResize(String name,
SDVariable image,
SDVariable cropBoxes,
SDVariable boxIndices,
SDVariable cropOutSize)
Given an input image and some crop boxes, extract out the image subsets and resize them to the specified size.
|
SDVariable |
SDImage.cropAndResize(String name,
SDVariable image,
SDVariable cropBoxes,
SDVariable boxIndices,
SDVariable cropOutSize,
double extrapolationValue)
Given an input image and some crop boxes, extract out the image subsets and resize them to the specified size.
|
SDVariable |
SDLinalg.cross(SDVariable a,
SDVariable b)
Computes pairwise cross product.
|
SDVariable |
SDMath.cross(SDVariable a,
SDVariable b)
Returns the pair-wise cross product of equal size arrays a and b: |a x b| = |a| * |b| * sin(theta).
Can take rank 1 or above inputs (of equal shapes), but note that the last dimension must have dimension 3 |
SDVariable |
SDLinalg.cross(String name,
SDVariable a,
SDVariable b)
Computes pairwise cross product.
|
SDVariable |
SDMath.cross(String name,
SDVariable a,
SDVariable b)
Returns the pair-wise cross product of equal size arrays a and b: |a x b| = |a| * |b| * sin(theta).
Can take rank 1 or above inputs (of equal shapes), but note that the last dimension must have dimension 3 |
SDVariable |
SDLoss.ctcLoss(SDVariable targetLabels,
SDVariable logitInput,
SDVariable targetLabelLengths,
SDVariable logitInputLengths)
CTC Loss: Connectionist Temporal Classification Loss.
|
SDVariable |
SDLoss.ctcLoss(String name,
SDVariable targetLabels,
SDVariable logitInput,
SDVariable targetLabelLengths,
SDVariable logitInputLengths)
CTC Loss: Connectionist Temporal Classification Loss.
|
SDVariable |
SDMath.cube(SDVariable x)
Element-wise cube function: out = x^3
|
SDVariable |
SDMath.cube(String name,
SDVariable x)
Element-wise cube function: out = x^3
|
SDVariable |
SDBaseOps.cumprod(SDVariable in,
boolean exclusive,
boolean reverse,
int... axis)
Cumulative product operation.
For input: [ a, b, c], output is: exclusive=false, reverse=false: [a, a*b, a*b*c] exclusive=true, reverse=false: [0, a, a*b] exclusive=false, reverse=true: [a*b*c, b*c, c] exclusive=true, reverse=true: [b*c, c, 0] |
SDVariable |
SDBaseOps.cumprod(SDVariable in,
int... axis)
Cumulative product operation.
For input: [ a, b, c], output is: exclusive=false, reverse=false: [a, a*b, a*b*c] exclusive=true, reverse=false: [0, a, a*b] exclusive=false, reverse=true: [a*b*c, b*c, c] exclusive=true, reverse=true: [b*c, c, 0] |
SDVariable |
SDBaseOps.cumprod(String name,
SDVariable in,
boolean exclusive,
boolean reverse,
int... axis)
Cumulative product operation.
For input: [ a, b, c], output is: exclusive=false, reverse=false: [a, a*b, a*b*c] exclusive=true, reverse=false, [0, a, a*b] exclusive=false, reverse=true: [a*b*c, b*c, c] exclusive=true, reverse=true: [b*c, c, 0] |
SDVariable |
SDBaseOps.cumprod(String name,
SDVariable in,
int... axis)
Cumulative product operation.
For input: [ a, b, c], output is: exclusive=false, reverse=false: [a, a*b, a*b*c] exclusive=true, reverse=false: [0, a, a*b] exclusive=false, reverse=true: [a*b*c, b*c, c] exclusive=true, reverse=true: [b*c, c, 0] |
SDVariable |
SDBaseOps.cumsum(SDVariable in,
boolean exclusive,
boolean reverse,
int... axis)
Cumulative sum operation.
For input: [ a, b, c], output is: exclusive=false, reverse=false: [a, a+b, a+b+c] exclusive=true, reverse=false: [0, a, a+b] exclusive=false, reverse=true: [a+b+c, b+c, c] exclusive=true, reverse=true: [b+c, c, 0] |
SDVariable |
SDBaseOps.cumsum(SDVariable in,
int... axis)
Cumulative sum operation.
For input: [ a, b, c], output is: exclusive=false, reverse=false: [a, a+b, a+b+c] exclusive=true, reverse=false: [0, a, a+b] exclusive=false, reverse=true: [a+b+c, b+c, c] exclusive=true, reverse=true: [b+c, c, 0] |
SDVariable |
SDBaseOps.cumsum(String name,
SDVariable in,
boolean exclusive,
boolean reverse,
int... axis)
Cumulative sum operation.
For input: [ a, b, c], output is: exclusive=false, reverse=false: [a, a+b, a+b+c] exclusive=true, reverse=false: [0, a, a+b] exclusive=false, reverse=true: [a+b+c, b+c, c] exclusive=true, reverse=true: [b+c, c, 0] |
SDVariable |
SDBaseOps.cumsum(String name,
SDVariable in,
int... axis)
Cumulative sum operation.
For input: [ a, b, c], output is: exclusive=false, reverse=false: [a, a+b, a+b+c] exclusive=true, reverse=false: [0, a, a+b] exclusive=false, reverse=true: [a+b+c, b+c, c] exclusive=true, reverse=true: [b+c, c, 0] |
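The exclusive/reverse combinations listed above can be sketched in plain Python (semantics only, not the SameDiff API):

```python
def cumsum(xs, exclusive=False, reverse=False):
    """Cumulative sum; exclusive shifts the running total by one step,
    reverse accumulates from the end of the array."""
    seq = list(reversed(xs)) if reverse else list(xs)
    out, running = [], 0
    for v in seq:
        if exclusive:
            out.append(running)
            running += v
        else:
            running += v
            out.append(running)
    return list(reversed(out)) if reverse else out

print(cumsum([1, 2, 3]))                                # [1, 3, 6]
print(cumsum([1, 2, 3], exclusive=True, reverse=True))  # [5, 3, 0]
```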
SDVariable |
SDCNN.deconv2d(SDVariable layerInput,
SDVariable weights,
DeConv2DConfig DeConv2DConfig)
2D deconvolution operation with optional bias
|
SDVariable |
SDCNN.deconv2d(SDVariable layerInput,
SDVariable weights,
SDVariable bias,
DeConv2DConfig DeConv2DConfig)
2D deconvolution operation with optional bias
|
SDVariable |
SDCNN.deconv2d(String name,
SDVariable layerInput,
SDVariable weights,
DeConv2DConfig DeConv2DConfig)
2D deconvolution operation with optional bias
|
SDVariable |
SDCNN.deconv2d(String name,
SDVariable layerInput,
SDVariable weights,
SDVariable bias,
DeConv2DConfig DeConv2DConfig)
2D deconvolution operation with optional bias
|
SDVariable |
SDCNN.deconv3d(SDVariable input,
SDVariable weights,
DeConv3DConfig DeConv3DConfig)
3D CNN deconvolution operation with optional bias
|
SDVariable |
SDCNN.deconv3d(SDVariable input,
SDVariable weights,
SDVariable bias,
DeConv3DConfig DeConv3DConfig)
3D CNN deconvolution operation with optional bias
|
SDVariable |
SDCNN.deconv3d(String name,
SDVariable input,
SDVariable weights,
DeConv3DConfig DeConv3DConfig)
3D CNN deconvolution operation with optional bias
|
SDVariable |
SDCNN.deconv3d(String name,
SDVariable input,
SDVariable weights,
SDVariable bias,
DeConv3DConfig DeConv3DConfig)
3D CNN deconvolution operation with optional bias
|
SDVariable |
SDCNN.depthToSpace(SDVariable x,
int blockSize,
DataFormat dataFormat)
Convolution 2d layer depth to space operation on 4d input.
Reduces the input channels dimension by rearranging data into larger spatial dimensions Example: if input has shape [mb, 8, 2, 2] and block size is 2, then output size is [mb, 8/(2*2), 2*2, 2*2] = [mb, 2, 4, 4] |
SDVariable |
SDCNN.depthToSpace(String name,
SDVariable x,
int blockSize,
DataFormat dataFormat)
Convolution 2d layer depth to space operation on 4d input.
Reduces the input channels dimension by rearranging data into larger spatial dimensions Example: if input has shape [mb, 8, 2, 2] and block size is 2, then output size is [mb, 8/(2*2), 2*2, 2*2] = [mb, 2, 4, 4] |
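The shape arithmetic in the example above (channels divided by blockSize squared, spatial dimensions multiplied by blockSize) is easy to check; a sketch, assuming the NCHW layout used in the example:

```python
def depth_to_space_shape(shape, block_size):
    """[mb, c, h, w] -> [mb, c / block^2, h * block, w * block]."""
    mb, c, h, w = shape
    assert c % (block_size * block_size) == 0, "channels must divide block_size^2"
    return [mb, c // (block_size * block_size), h * block_size, w * block_size]

print(depth_to_space_shape([1, 8, 2, 2], 2))  # [1, 2, 4, 4]
```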
SDVariable |
SDCNN.depthWiseConv2d(SDVariable layerInput,
SDVariable depthWeights,
Conv2DConfig Conv2DConfig)
Depth-wise 2D convolution operation with optional bias
|
SDVariable |
SDCNN.depthWiseConv2d(SDVariable layerInput,
SDVariable depthWeights,
SDVariable bias,
Conv2DConfig Conv2DConfig)
Depth-wise 2D convolution operation with optional bias
|
SDVariable |
SDCNN.depthWiseConv2d(String name,
SDVariable layerInput,
SDVariable depthWeights,
Conv2DConfig Conv2DConfig)
Depth-wise 2D convolution operation with optional bias
|
SDVariable |
SDCNN.depthWiseConv2d(String name,
SDVariable layerInput,
SDVariable depthWeights,
SDVariable bias,
Conv2DConfig Conv2DConfig)
Depth-wise 2D convolution operation with optional bias
|
SDVariable |
SDLinalg.diag_part(SDVariable input)
Calculates diagonal tensor.
|
SDVariable |
SDLinalg.diag_part(String name,
SDVariable input)
Calculates diagonal tensor.
|
SDVariable |
SDLinalg.diag(SDVariable input)
Calculates diagonal tensor.
|
SDVariable |
SDMath.diag(SDVariable x)
Returns an output variable with diagonal values equal to the specified values; off-diagonal values will be set to 0
For example, if input = [1,2,3], then output is given by: [ 1, 0, 0] [ 0, 2, 0] [ 0, 0, 3] Higher input ranks are also supported: if input has shape [a,...,R-1] then output[i,...,k,i,...,k] = input[i,...,k]. i.e., for input rank R, output has rank 2R |
SDVariable |
SDLinalg.diag(String name,
SDVariable input)
Calculates diagonal tensor.
|
SDVariable |
SDMath.diag(String name,
SDVariable x)
Returns an output variable with diagonal values equal to the specified values; off-diagonal values will be set to 0
For example, if input = [1,2,3], then output is given by: [ 1, 0, 0] [ 0, 2, 0] [ 0, 0, 3] Higher input ranks are also supported: if input has shape [a,...,R-1] then output[i,...,k,i,...,k] = input[i,...,k]. i.e., for input rank R, output has rank 2R |
SDVariable |
SDMath.diagPart(SDVariable x)
Extract the diagonal part from the input array.
If input is [ 1, 0, 0] [ 0, 2, 0] [ 0, 0, 3] then output is [1, 2, 3]. Supports higher dimensions: in general, out[i,...,k] = in[i,...,k,i,...,k] |
SDVariable |
SDMath.diagPart(String name,
SDVariable x)
Extract the diagonal part from the input array.
If input is [ 1, 0, 0] [ 0, 2, 0] [ 0, 0, 3] then output is [1, 2, 3]. Supports higher dimensions: in general, out[i,...,k] = in[i,...,k,i,...,k] |
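For the rank-1/rank-2 case shown above, diag and diagPart are inverses; a plain-Python sketch (illustrative only):

```python
def diag(xs):
    """Place xs on the main diagonal; off-diagonal entries are 0."""
    n = len(xs)
    return [[xs[i] if i == j else 0 for j in range(n)] for i in range(n)]

def diag_part(m):
    """Extract the main diagonal: out[i] = m[i][i]."""
    return [m[i][i] for i in range(len(m))]

print(diag([1, 2, 3]))             # [[1, 0, 0], [0, 2, 0], [0, 0, 3]]
print(diag_part(diag([1, 2, 3])))  # [1, 2, 3]
```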
SDVariable |
SDCNN.dilation2D(SDVariable df,
SDVariable weights,
int[] strides,
int[] rates,
boolean isSameMode)
2D morphological (grayscale) dilation operation
|
SDVariable |
SDCNN.dilation2D(String name,
SDVariable df,
SDVariable weights,
int[] strides,
int[] rates,
boolean isSameMode)
2D morphological (grayscale) dilation operation
|
SDVariable |
SDMath.div(SDVariable x,
double value)
Scalar division operation, out = in / scalar
|
SDVariable |
SDMath.div(SDVariable x,
SDVariable y)
Pairwise division operation, out = x / y
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.div(String name,
SDVariable x,
double value)
Scalar division operation, out = in / scalar
|
SDVariable |
SDMath.div(String name,
SDVariable x,
SDVariable y)
Pairwise division operation, out = x / y
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDBaseOps.dot(SDVariable x,
SDVariable y,
int... dimensions)
Pairwise dot product reduction along dimension
output = sum(i=0 ... |
SDVariable |
SDBaseOps.dot(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Pairwise dot product reduction along dimension
output = sum(i=0 ... |
SDVariable |
SDNN.dotProductAttention(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable mask,
boolean scaled)
This operation performs dot product attention on the given timeseries input with the given queries
out = sum(similarity(k_i, q) * v_i) similarity(k, q) = softmax(k * q) where k * q is the dot product of k and q Optionally with normalization step: similarity(k, q) = softmax(k * q / sqrt(size(q))) See also "Attention is all you need" (https://arxiv.org/abs/1706.03762, p. |
SDVariable |
SDNN.dotProductAttention(String name,
SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable mask,
boolean scaled)
This operation performs dot product attention on the given timeseries input with the given queries
out = sum(similarity(k_i, q) * v_i) similarity(k, q) = softmax(k * q) where k * q is the dot product of k and q Optionally with normalization step: similarity(k, q) = softmax(k * q / sqrt(size(q))) See also "Attention is all you need" (https://arxiv.org/abs/1706.03762, p. |
SDVariable |
SDNN.dropout(SDVariable input,
double inputRetainProbability)
Dropout operation
|
SDVariable |
SDNN.dropout(String name,
SDVariable input,
double inputRetainProbability)
Dropout operation
|
SDVariable[] |
SDBaseOps.dynamicPartition(SDVariable x,
SDVariable partitions,
int numPartitions)
Dynamically partition the input variable values into the specified number of partitions, using the indices.
Example: |
SDVariable[] |
SDBaseOps.dynamicPartition(String[] names,
SDVariable x,
SDVariable partitions,
int numPartitions)
Dynamically partition the input variable values into the specified number of partitions, using the indices.
Example: |
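The partitioning rule can be sketched over flat lists (illustrative only; the SameDiff op operates on SDVariable arrays):

```python
def dynamic_partition(x, partitions, num_partitions):
    """Route x[i] into output list number partitions[i]."""
    out = [[] for _ in range(num_partitions)]
    for value, part in zip(x, partitions):
        out[part].append(value)
    return out

print(dynamic_partition([10, 20, 30, 40], [0, 1, 0, 1], 2))  # [[10, 30], [20, 40]]
```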
SDVariable |
SDBaseOps.dynamicStitch(SDVariable[] indices,
SDVariable... x)
Dynamically merge the specified input arrays into a single array, using the specified indices
|
SDVariable |
SDBaseOps.dynamicStitch(String name,
SDVariable[] indices,
SDVariable... x)
Dynamically merge the specified input arrays into a single array, using the specified indices
|
SDVariable |
SDNN.elu(SDVariable x)
Element-wise exponential linear unit (ELU) function:
out = x if x > 0 out = a * (exp(x) - 1) if x <= 0 with constant a = 1.0 |
SDVariable |
SDNN.elu(String name,
SDVariable x)
Element-wise exponential linear unit (ELU) function:
out = x if x > 0 out = a * (exp(x) - 1) if x <= 0 with constant a = 1.0 |
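The piecewise definition above, sketched for a single scalar with the stated constant a = 1.0 (illustrative only):

```python
import math

def elu(x, a=1.0):
    """out = x if x > 0, else a * (exp(x) - 1)."""
    return x if x > 0 else a * (math.exp(x) - 1.0)

print(elu(2.0))    # 2.0 (identity on the positive side)
print(elu(-30.0))  # close to -1.0 (the negative-side saturation value)
```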
SDVariable |
SDMath.embeddingLookup(SDVariable x,
SDVariable indices,
PartitionMode PartitionMode)
Looks up ids in a list of embedding tensors.
|
SDVariable |
SDMath.embeddingLookup(String name,
SDVariable x,
SDVariable indices,
PartitionMode PartitionMode)
Looks up ids in a list of embedding tensors.
|
SDVariable |
SDMath.entropy(SDVariable in,
int... dimensions)
Entropy reduction: -sum(x * log(x))
|
SDVariable |
SDMath.entropy(String name,
SDVariable in,
int... dimensions)
Entropy reduction: -sum(x * log(x))
|
SDVariable |
SDBaseOps.eq(SDVariable x,
double y)
Equals operation: elementwise x == y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.eq(SDVariable x,
SDVariable y)
Equal to operation: elementwise x == y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.eq(String name,
SDVariable x,
double y)
Equals operation: elementwise x == y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.eq(String name,
SDVariable x,
SDVariable y)
Equal to operation: elementwise x == y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDMath.erf(SDVariable x)
Element-wise Gaussian error function - out = erf(in)
|
SDVariable |
SDMath.erf(String name,
SDVariable x)
Element-wise Gaussian error function - out = erf(in)
|
SDVariable |
SDMath.erfc(SDVariable x)
Element-wise complementary Gaussian error function - out = erfc(in) = 1 - erf(in)
|
SDVariable |
SDMath.erfc(String name,
SDVariable x)
Element-wise complementary Gaussian error function - out = erfc(in) = 1 - erf(in)
|
SDVariable |
SDMath.euclideanDistance(SDVariable x,
SDVariable y,
int... dimensions)
Euclidean distance (l2 norm, l2 distance) reduction operation.
|
SDVariable |
SDMath.euclideanDistance(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Euclidean distance (l2 norm, l2 distance) reduction operation.
|
SDVariable |
SDMath.exp(SDVariable x)
Elementwise exponent function: out = exp(x) = 2.71828...^x
|
SDVariable |
SDMath.exp(String name,
SDVariable x)
Elementwise exponent function: out = exp(x) = 2.71828...^x
|
SDVariable |
SDBaseOps.expandDims(SDVariable x,
int axis)
Reshape the input by adding a 1 at the specified location.
For example, if input has shape [a, b], then output shape is: axis = 0: [1, a, b] axis = 1: [a, 1, b] axis = 2: [a, b, 1] |
SDVariable |
SDBaseOps.expandDims(String name,
SDVariable x,
int axis)
Reshape the input by adding a 1 at the specified location.
For example, if input has shape [a, b], then output shape is: axis = 0: [1, a, b] axis = 1: [a, 1, b] axis = 2: [a, b, 1] |
SDVariable |
SDMath.expm1(SDVariable x)
Elementwise exponential minus one function: out = expm1(x) = exp(x) - 1.0
|
SDVariable |
SDMath.expm1(String name,
SDVariable x)
Elementwise exponential minus one function: out = expm1(x) = exp(x) - 1.0
|
SDVariable |
SDRandom.exponential(double lambda,
DataType datatype,
long... shape)
Generate a new random INDArray, where values are randomly sampled according to an exponential distribution:
P(x) = lambda * exp(-lambda * x) Inputs must satisfy the following constraints: Must be positive: lambda > 0 |
SDVariable |
SDRandom.exponential(String name,
double lambda,
DataType datatype,
long... shape)
Generate a new random INDArray, where values are randomly sampled according to an exponential distribution:
P(x) = lambda * exp(-lambda * x) Inputs must satisfy the following constraints: Must be positive: lambda > 0 |
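Sampling from P(x) = lambda * exp(-lambda * x) can be sketched via inverse-CDF sampling in plain Python (illustrative; the SameDiff op returns an INDArray of the requested shape):

```python
import math
import random

def sample_exponential(lam, n, seed=0):
    """Inverse-CDF sampling: x = -ln(1 - u) / lambda for u ~ Uniform(0, 1)."""
    assert lam > 0, "must be positive: lambda > 0"
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

samples = sample_exponential(2.0, 1000)
# all samples are non-negative; the sample mean approaches 1/lambda = 0.5
```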
SDVariable |
SDImage.extractImagePatches(SDVariable image,
int[] kSizes,
int[] strides,
int[] rates,
boolean sameMode)
Given an input image, extract out image patches (of size kSizes - h x w) and place them in the depth dimension.
|
SDVariable |
SDCNN.extractImagePatches(SDVariable input,
int kH,
int kW,
int sH,
int sW,
int rH,
int rW,
boolean sameMode)
Extract image patches
|
SDVariable |
SDImage.extractImagePatches(String name,
SDVariable image,
int[] kSizes,
int[] strides,
int[] rates,
boolean sameMode)
Given an input image, extract out image patches (of size kSizes - h x w) and place them in the depth dimension.
|
SDVariable |
SDCNN.extractImagePatches(String name,
SDVariable input,
int kH,
int kW,
int sH,
int sW,
int rH,
int rW,
boolean sameMode)
Extract image patches
|
SDVariable |
SDMath.eye(int rows)
Generate an identity matrix with the specified number of rows and columns.
|
SDVariable |
SDMath.eye(int rows,
int cols)
As per eye(String, int, int, DataType) but with the default datatype, Eye.DEFAULT_DTYPE
|
SDVariable |
SDMath.eye(int rows,
int cols,
DataType dataType,
int... dimensions)
Generate an identity matrix with the specified number of rows and columns
Example: |
SDVariable |
SDMath.eye(SDVariable rows)
As per eye(String, int) but with the number of rows specified as a scalar INDArray
|
SDVariable |
SDMath.eye(SDVariable rows,
SDVariable cols)
As per eye(int, int) but with the number of rows/columns specified as scalar INDArrays
|
SDVariable |
SDMath.eye(String name,
int rows)
Generate an identity matrix with the specified number of rows and columns.
|
SDVariable |
SDMath.eye(String name,
int rows,
int cols)
As per eye(String, int, int, DataType) but with the default datatype, Eye.DEFAULT_DTYPE
|
SDVariable |
SDMath.eye(String name,
int rows,
int cols,
DataType dataType,
int... dimensions)
Generate an identity matrix with the specified number of rows and columns
Example: |
SDVariable |
SDMath.eye(String name,
SDVariable rows)
As per eye(String, int) but with the number of rows specified as a scalar INDArray
|
SDVariable |
SDMath.eye(String name,
SDVariable rows,
SDVariable cols)
As per eye(int, int) but with the number of rows/columns specified as scalar INDArrays
|
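The eye(...) variants above all generate an identity matrix. As a plain-Java sketch of the basic rows x cols semantics (this is an illustrative class, not the ND4J API; the real methods return an SDVariable):

```java
public class EyeSketch {
    // Sketch of eye(rows, cols): a rows x cols matrix with ones on the main
    // diagonal and zeros elsewhere.
    public static double[][] eye(int rows, int cols) {
        double[][] out = new double[rows][cols];
        for (int i = 0; i < Math.min(rows, cols); i++) {
            out[i][i] = 1.0;
        }
        return out;
    }
}
```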
SDVariable |
SDBaseOps.fill(SDVariable shape,
DataType dataType,
double value)
Generate an output variable with the specified (dynamic) shape with all elements set to the specified value
|
SDVariable |
SDBaseOps.fill(String name,
SDVariable shape,
DataType dataType,
double value)
Generate an output variable with the specified (dynamic) shape with all elements set to the specified value
|
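The fill(...) semantics above can be sketched in plain Java, treating the dynamic shape as an int[] and flattening the output (an illustrative sketch, not the SameDiff API, which takes the shape as an SDVariable):

```java
public class FillSketch {
    // Sketch of fill(shape, value): the output has product(shape) elements,
    // every one set to the given value.
    public static double[] fill(int[] shape, double value) {
        int n = 1;
        for (int d : shape) n *= d;
        double[] out = new double[n];
        java.util.Arrays.fill(out, value);
        return out;
    }
}
```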
SDVariable |
SDMath.firstIndex(SDVariable in,
Condition condition,
boolean keepDims,
int... dimensions)
First index reduction operation.
Returns a variable that contains the index of the first element that matches the specified condition (for each slice along the specified dimensions) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.firstIndex(SDVariable in,
Condition condition,
int... dimensions)
First index reduction operation.
Returns a variable that contains the index of the first element that matches the specified condition (for each slice along the specified dimensions) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.firstIndex(String name,
SDVariable in,
Condition condition,
boolean keepDims,
int... dimensions)
First index reduction operation.
Returns a variable that contains the index of the first element that matches the specified condition (for each slice along the specified dimensions) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.firstIndex(String name,
SDVariable in,
Condition condition,
int... dimensions)
First index reduction operation.
Returns a variable that contains the index of the first element that matches the specified condition (for each slice along the specified dimensions) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
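For the 1-D case, the firstIndex reduction above amounts to a linear scan for the first matching element. A plain-Java sketch (illustrative only; the Condition type and the -1 "no match" convention here are assumptions of this sketch, not the ND4J API):

```java
import java.util.function.DoublePredicate;

public class FirstIndexSketch {
    // 1-D sketch of firstIndex: index of the first element satisfying the
    // condition, or -1 in this sketch when nothing matches.
    public static int firstIndex(double[] in, DoublePredicate condition) {
        for (int i = 0; i < in.length; i++) {
            if (condition.test(in[i])) return i;
        }
        return -1;
    }
}
```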
SDVariable |
SDMath.floor(SDVariable x)
Element-wise floor function: out = floor(x).
Rounds each value down to the nearest integer value (if not already an integer) |
SDVariable |
SDMath.floor(String name,
SDVariable x)
Element-wise floor function: out = floor(x).
Rounds each value down to the nearest integer value (if not already an integer) |
SDVariable |
SDMath.floorDiv(SDVariable x,
SDVariable y)
Pairwise floor division operation, out = floor(x / y)
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.floorDiv(String name,
SDVariable x,
SDVariable y)
Pairwise floor division operation, out = floor(x / y)
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.floorMod(SDVariable x,
double value)
Scalar floor modulus operation
|
SDVariable |
SDMath.floorMod(SDVariable x,
SDVariable y)
Pairwise Modulus division operation
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.floorMod(String name,
SDVariable x,
double value)
Scalar floor modulus operation
|
SDVariable |
SDMath.floorMod(String name,
SDVariable x,
SDVariable y)
Pairwise Modulus division operation
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
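Floor modulus for doubles, as used by the floorMod entries above, follows x - floor(x / y) * y, so the result takes the sign of the divisor. A plain-Java sketch of the per-element semantics (illustrative, not the SameDiff API):

```java
public class FloorModSketch {
    // Sketch of element-wise floorMod for doubles: the result has the same
    // sign as the divisor y, matching x - floor(x / y) * y.
    public static double floorMod(double x, double y) {
        return x - Math.floor(x / y) * y;
    }
}
```

Compare Java's built-in Math.floorMod for integer types, which has the same sign convention.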
SDVariable |
SDBaseOps.gather(SDVariable df,
int[] indices,
int axis)
Gather slices from the input variable where the indices are specified as fixed int[] values.
Output shape is same as input shape, except for axis dimension, which has size equal to indices.length. |
SDVariable |
SDBaseOps.gather(SDVariable df,
SDVariable indices,
int axis)
Gather slices from the input variable where the indices are specified as dynamic array values.
Output shape is same as input shape, except for axis dimension, which has size equal to indices.length. |
SDVariable |
SDBaseOps.gather(String name,
SDVariable df,
int[] indices,
int axis)
Gather slices from the input variable where the indices are specified as fixed int[] values.
Output shape is same as input shape, except for axis dimension, which has size equal to indices.length. |
SDVariable |
SDBaseOps.gather(String name,
SDVariable df,
SDVariable indices,
int axis)
Gather slices from the input variable where the indices are specified as dynamic array values.
Output shape is same as input shape, except for axis dimension, which has size equal to indices.length. |
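For a 2-D input gathered along axis 0, the gather semantics above reduce to selecting rows by index. A plain-Java sketch (illustrative only; the real op handles arbitrary rank and axis):

```java
public class GatherSketch {
    // Sketch of gather along axis 0: output row i is input row indices[i],
    // so the output has indices.length rows.
    public static double[][] gather(double[][] input, int[] indices) {
        double[][] out = new double[indices.length][];
        for (int i = 0; i < indices.length; i++) {
            out[i] = input[indices[i]].clone();
        }
        return out;
    }
}
```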
SDVariable |
SDBaseOps.gatherNd(SDVariable df,
SDVariable indices)
Gather slices from df with shape specified by indices.
|
SDVariable |
SDBaseOps.gatherNd(String name,
SDVariable df,
SDVariable indices)
Gather slices from df with shape specified by indices.
|
SDVariable |
SDNN.gelu(SDVariable x)
GELU activation function - Gaussian Error Linear Units
For more details, see Gaussian Error Linear Units (GELUs) - https://arxiv.org/abs/1606.08415 This method uses the sigmoid approximation |
SDVariable |
SDNN.gelu(String name,
SDVariable x)
GELU activation function - Gaussian Error Linear Units
For more details, see Gaussian Error Linear Units (GELUs) - https://arxiv.org/abs/1606.08415 This method uses the sigmoid approximation |
SDVariable |
SDRNN.gru(SDVariable x,
SDVariable hLast,
SDVariable Wx,
SDVariable Wh,
SDVariable biases)
The GRU operation.
|
SDVariable |
SDRNN.gru(String name,
SDVariable x,
SDVariable hLast,
SDVariable Wx,
SDVariable Wh,
SDVariable biases)
The GRU operation.
|
SDVariable[] |
SDRNN.gruCell(SDVariable x,
SDVariable hLast,
GRUWeights GRUWeights)
The GRU cell.
|
SDVariable[] |
SDRNN.gruCell(String[] names,
SDVariable x,
SDVariable hLast,
GRUWeights GRUWeights)
The GRU cell.
|
SDVariable |
SDBaseOps.gt(SDVariable x,
double y)
Greater than operation: elementwise x > y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.gt(SDVariable x,
SDVariable y)
Greater than operation: elementwise x > y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.gt(String name,
SDVariable x,
double y)
Greater than operation: elementwise x > y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.gt(String name,
SDVariable x,
SDVariable y)
Greater than operation: elementwise x > y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.gte(SDVariable x,
double y)
Greater than or equals operation: elementwise x >= y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.gte(SDVariable x,
SDVariable y)
Greater than or equal to operation: elementwise x >= y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.gte(String name,
SDVariable x,
double y)
Greater than or equals operation: elementwise x >= y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.gte(String name,
SDVariable x,
SDVariable y)
Greater than or equal to operation: elementwise x >= y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDMath.hammingDistance(SDVariable x,
SDVariable y,
int... dimensions)
Hamming distance reduction operation.
|
SDVariable |
SDMath.hammingDistance(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Hamming distance reduction operation.
|
SDVariable |
SDNN.hardSigmoid(SDVariable x)
Element-wise hard sigmoid function:
out[i] = 0 if in[i] <= -2.5; out[i] = 0.2*in[i] + 0.5 if -2.5 < in[i] < 2.5; out[i] = 1 if in[i] >= 2.5 |
SDVariable |
SDNN.hardSigmoid(String name,
SDVariable x)
Element-wise hard sigmoid function:
out[i] = 0 if in[i] <= -2.5; out[i] = 0.2*in[i] + 0.5 if -2.5 < in[i] < 2.5; out[i] = 1 if in[i] >= 2.5 |
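The piecewise definition above is easy to state directly in plain Java (an illustrative per-element sketch, not the SameDiff API):

```java
public class HardSigmoidSketch {
    // Piecewise-linear hard sigmoid: clamp to 0 below -2.5, to 1 above 2.5,
    // linear ramp 0.2*in + 0.5 in between.
    public static double hardSigmoid(double in) {
        if (in <= -2.5) return 0.0;
        if (in >= 2.5) return 1.0;
        return 0.2 * in + 0.5;
    }
}
```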
SDVariable |
SDNN.hardTanh(SDVariable x)
Element-wise hard tanh function:
out[i] = -1 if in[i] <= -1; out[i] = in[i] if -1 < in[i] < 1; out[i] = 1 if in[i] >= 1 |
SDVariable |
SDNN.hardTanh(String name,
SDVariable x)
Element-wise hard tanh function:
out[i] = -1 if in[i] <= -1; out[i] = in[i] if -1 < in[i] < 1; out[i] = 1 if in[i] >= 1 |
SDVariable |
SDNN.hardTanhDerivative(SDVariable x)
Derivative (dOut/dIn) of the element-wise hard Tanh function - hardTanh(INDArray)
|
SDVariable |
SDNN.hardTanhDerivative(String name,
SDVariable x)
Derivative (dOut/dIn) of the element-wise hard Tanh function - hardTanh(INDArray)
|
SDVariable |
SDLoss.hingeLoss(SDVariable label,
SDVariable predictions,
SDVariable weights)
Hinge loss: a loss function used for training classifiers.
Implements L = max(0, 1 - t * predictions) where t is the label values after internally converting to {-1,1} from the user-specified {0,1}. |
SDVariable |
SDLoss.hingeLoss(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Hinge loss: a loss function used for training classifiers.
Implements L = max(0, 1 - t * predictions) where t is the label values after internally converting to {-1,1} from the user-specified {0,1}. |
SDVariable |
SDLoss.hingeLoss(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights)
Hinge loss: a loss function used for training classifiers.
Implements L = max(0, 1 - t * predictions) where t is the label values after internally converting to {-1,1} from the user-specified {0,1}. |
SDVariable |
SDLoss.hingeLoss(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Hinge loss: a loss function used for training classifiers.
Implements L = max(0, 1 - t * predictions) where t is the label values after internally converting to {-1,1} from the user-specified {0,1}. |
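The label conversion and per-element hinge formula above can be sketched in plain Java (illustrative only; the real op also applies weights and the lossReduce reduction):

```java
public class HingeLossSketch {
    // Per-element hinge loss: the {0,1} label is mapped to t in {-1,1},
    // then L = max(0, 1 - t * prediction).
    public static double hinge(double label01, double prediction) {
        double t = label01 * 2.0 - 1.0; // {0,1} -> {-1,1}
        return Math.max(0.0, 1.0 - t * prediction);
    }
}
```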
SDVariable |
SDImage.hsvToRgb(SDVariable input)
Converting image from HSV to RGB format
|
SDVariable |
SDImage.hsvToRgb(String name,
SDVariable input)
Converting image from HSV to RGB format
|
SDVariable |
SDLoss.huberLoss(SDVariable label,
SDVariable predictions,
SDVariable weights,
double delta)
Huber loss function, used for robust regression.
|
SDVariable |
SDLoss.huberLoss(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double delta)
Huber loss function, used for robust regression.
|
SDVariable |
SDLoss.huberLoss(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
double delta)
Huber loss function, used for robust regression.
|
SDVariable |
SDLoss.huberLoss(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double delta)
Huber loss function, used for robust regression.
|
SDVariable |
SDMath.iamax(SDVariable in,
boolean keepDims,
int... dimensions)
Index of the max absolute value: argmax(abs(in))
see argmax(String, INDArray, boolean, int...) |
SDVariable |
SDMath.iamax(SDVariable in,
int... dimensions)
Index of the max absolute value: argmax(abs(in))
see argmax(String, INDArray, boolean, int...) |
SDVariable |
SDMath.iamax(String name,
SDVariable in,
boolean keepDims,
int... dimensions)
Index of the max absolute value: argmax(abs(in))
see argmax(String, INDArray, boolean, int...) |
SDVariable |
SDMath.iamax(String name,
SDVariable in,
int... dimensions)
Index of the max absolute value: argmax(abs(in))
see argmax(String, INDArray, boolean, int...) |
SDVariable |
SDMath.iamin(SDVariable in,
boolean keepDims,
int... dimensions)
Index of the min absolute value: argmin(abs(in))
see argmin(String, INDArray, boolean, int...) |
SDVariable |
SDMath.iamin(SDVariable in,
int... dimensions)
Index of the min absolute value: argmin(abs(in))
see argmin(String, INDArray, boolean, int...) |
SDVariable |
SDMath.iamin(String name,
SDVariable in,
boolean keepDims,
int... dimensions)
Index of the min absolute value: argmin(abs(in))
see argmin(String, INDArray, boolean, int...) |
SDVariable |
SDMath.iamin(String name,
SDVariable in,
int... dimensions)
Index of the min absolute value: argmin(abs(in))
see argmin(String, INDArray, boolean, int...) |
SDVariable |
SDBaseOps.identity(SDVariable input)
Elementwise identity operation: out = x
|
SDVariable |
SDBaseOps.identity(String name,
SDVariable input)
Elementwise identity operation: out = x
|
SDVariable |
SDCNN.im2Col(SDVariable in,
Conv2DConfig Conv2DConfig)
im2col operation for use in 2D convolution operations.
|
SDVariable |
SDCNN.im2Col(String name,
SDVariable in,
Conv2DConfig Conv2DConfig)
im2col operation for use in 2D convolution operations.
|
SDVariable |
SDImage.imageResize(SDVariable input,
SDVariable size,
boolean preserveAspectRatio,
boolean antialis,
ImageResizeMethod ImageResizeMethod)
Resize images to size using the specified method.
|
SDVariable |
SDImage.imageResize(SDVariable input,
SDVariable size,
ImageResizeMethod ImageResizeMethod)
Resize images to size using the specified method.
|
SDVariable |
SDImage.imageResize(String name,
SDVariable input,
SDVariable size,
boolean preserveAspectRatio,
boolean antialis,
ImageResizeMethod ImageResizeMethod)
Resize images to size using the specified method.
|
SDVariable |
SDImage.imageResize(String name,
SDVariable input,
SDVariable size,
ImageResizeMethod ImageResizeMethod)
Resize images to size using the specified method.
|
SDVariable |
SDBaseOps.invertPermutation(SDVariable input)
Compute the inverse permutation indices for a permutation operation
Example: if input is [2, 0, 1] then output is [1, 2, 0] The idea is that x.permute(input).permute(invertPermutation(input)) == x |
SDVariable |
SDBaseOps.invertPermutation(String name,
SDVariable input)
Compute the inverse permutation indices for a permutation operation
Example: if input is [2, 0, 1] then output is [1, 2, 0] The idea is that x.permute(input).permute(invertPermutation(input)) == x |
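The invertPermutation example above follows from a one-pass scatter: if perm maps position i to j, the inverse maps j back to i. A plain-Java sketch (illustrative, not the SameDiff API):

```java
public class InvertPermutationSketch {
    // If perm[i] = j then inv[j] = i, so permuting by perm and then by inv
    // restores the original order.
    public static int[] invert(int[] perm) {
        int[] inv = new int[perm.length];
        for (int i = 0; i < perm.length; i++) {
            inv[perm[i]] = i;
        }
        return inv;
    }
}
```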
SDVariable |
SDMath.isFinite(SDVariable x)
Is finite operation: elementwise isFinite(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isFinite(String name,
SDVariable x)
Is finite operation: elementwise isFinite(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isInfinite(SDVariable x)
Is infinite operation: elementwise isInfinite(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isInfinite(String name,
SDVariable x)
Is infinite operation: elementwise isInfinite(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isMax(SDVariable x)
Is maximum operation: elementwise x == max(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isMax(String name,
SDVariable x)
Is maximum operation: elementwise x == max(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isNaN(SDVariable x)
Is Not a Number operation: elementwise isNaN(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isNaN(String name,
SDVariable x)
Is Not a Number operation: elementwise isNaN(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isNonDecreasing(SDVariable x)
Is the array non-decreasing?
An array is non-decreasing if for every valid i, x[i] <= x[i+1]. |
SDVariable |
SDMath.isNonDecreasing(String name,
SDVariable x)
Is the array non-decreasing?
An array is non-decreasing if for every valid i, x[i] <= x[i+1]. |
SDVariable |
SDBaseOps.isNumericTensor(SDVariable x)
Is the given variable a numeric tensor? In the current version of ND4J/SameDiff, this always returns true/1
|
SDVariable |
SDBaseOps.isNumericTensor(String name,
SDVariable x)
Is the given variable a numeric tensor? In the current version of ND4J/SameDiff, this always returns true/1
|
SDVariable |
SDMath.isStrictlyIncreasing(SDVariable x)
Is the array strictly increasing?
An array is strictly increasing if for every valid i, x[i] < x[i+1]. |
SDVariable |
SDMath.isStrictlyIncreasing(String name,
SDVariable x)
Is the array strictly increasing?
An array is strictly increasing if for every valid i, x[i] < x[i+1]. |
SDVariable |
SDMath.jaccardDistance(SDVariable x,
SDVariable y,
int... dimensions)
Jaccard similarity reduction operation.
|
SDVariable |
SDMath.jaccardDistance(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Jaccard similarity reduction operation.
|
SDVariable |
SDLoss.l2Loss(SDVariable var)
L2 loss: 1/2 * sum(x^2)
|
SDVariable |
SDLoss.l2Loss(String name,
SDVariable var)
L2 loss: 1/2 * sum(x^2)
|
SDVariable |
SDMath.lastIndex(SDVariable in,
Condition condition,
boolean keepDims,
int... dimensions)
Last index reduction operation.
Returns a variable that contains the index of the last element that matches the specified condition (for each slice along the specified dimensions) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.lastIndex(SDVariable in,
Condition condition,
int... dimensions)
Last index reduction operation.
Returns a variable that contains the index of the last element that matches the specified condition (for each slice along the specified dimensions) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.lastIndex(String name,
SDVariable in,
Condition condition,
boolean keepDims,
int... dimensions)
Last index reduction operation.
Returns a variable that contains the index of the last element that matches the specified condition (for each slice along the specified dimensions) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.lastIndex(String name,
SDVariable in,
Condition condition,
int... dimensions)
Last index reduction operation.
Returns a variable that contains the index of the last element that matches the specified condition (for each slice along the specified dimensions) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDNN.layerNorm(SDVariable input,
SDVariable gain,
boolean channelsFirst,
int... dimensions)
Apply Layer Normalization
y = gain * standardize(x) + bias |
SDVariable |
SDNN.layerNorm(SDVariable input,
SDVariable gain,
SDVariable bias,
boolean channelsFirst,
int... dimensions)
Apply Layer Normalization
y = gain * standardize(x) + bias |
SDVariable |
SDNN.layerNorm(String name,
SDVariable input,
SDVariable gain,
boolean channelsFirst,
int... dimensions)
Apply Layer Normalization
y = gain * standardize(x) + bias |
SDVariable |
SDNN.layerNorm(String name,
SDVariable input,
SDVariable gain,
SDVariable bias,
boolean channelsFirst,
int... dimensions)
Apply Layer Normalization
y = gain * standardize(x) + bias |
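The formula y = gain * standardize(x) + bias can be sketched for a single 1-D slice in plain Java (illustrative only; the real op normalizes over the given dimensions with per-feature gain/bias arrays, and the eps guard here is an assumption of this sketch):

```java
public class LayerNormSketch {
    // Sketch of layer norm over one 1-D slice: standardize to zero mean and
    // unit variance, then apply y = gain * standardized + bias.
    public static double[] layerNorm(double[] x, double gain, double bias, double eps) {
        double mean = 0.0;
        for (double v : x) mean += v;
        mean /= x.length;
        double var = 0.0;
        for (double v : x) var += (v - mean) * (v - mean);
        var /= x.length;
        double std = Math.sqrt(var + eps); // eps guards against divide-by-zero
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            out[i] = gain * ((x[i] - mean) / std) + bias;
        }
        return out;
    }
}
```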
SDVariable |
SDNN.leakyRelu(SDVariable x,
double alpha)
Element-wise leaky ReLU function:
out = x if x >= 0.0; out = alpha * x if x < 0.0. The alpha value is most commonly set to 0.01 |
SDVariable |
SDNN.leakyRelu(String name,
SDVariable x,
double alpha)
Element-wise leaky ReLU function:
out = x if x >= 0.0; out = alpha * x if x < 0.0. The alpha value is most commonly set to 0.01 |
SDVariable |
SDNN.leakyReluDerivative(SDVariable x,
double alpha)
Leaky ReLU derivative: dOut/dIn given input.
|
SDVariable |
SDNN.leakyReluDerivative(String name,
SDVariable x,
double alpha)
Leaky ReLU derivative: dOut/dIn given input.
|
SDVariable |
SDBitwise.leftShift(SDVariable x,
SDVariable y)
Bitwise left shift operation.
|
SDVariable |
SDBitwise.leftShift(String name,
SDVariable x,
SDVariable y)
Bitwise left shift operation.
|
SDVariable |
SDBitwise.leftShiftCyclic(SDVariable x,
SDVariable y)
Bitwise left cyclical shift operation.
|
SDVariable |
SDBitwise.leftShiftCyclic(String name,
SDVariable x,
SDVariable y)
Bitwise left cyclical shift operation.
|
SDVariable |
SDNN.linear(SDVariable input,
SDVariable weights,
SDVariable bias)
Linear layer operation: out = mmul(in,w) + bias
Note that bias array is optional |
SDVariable |
SDNN.linear(String name,
SDVariable input,
SDVariable weights,
SDVariable bias)
Linear layer operation: out = mmul(in,w) + bias
Note that bias array is optional |
SDVariable |
SDBaseOps.linspace(DataType dataType,
double start,
double stop,
long number)
Create a new 1d array with values evenly spaced between values 'start' and 'stop'
For example, linspace(start=3.0, stop=4.0, number=3) will generate [3.0, 3.5, 4.0] |
SDVariable |
SDBaseOps.linspace(SDVariable start,
SDVariable stop,
SDVariable number,
DataType dataType)
Create a new 1d array with values evenly spaced between values 'start' and 'stop'
For example, linspace(start=3.0, stop=4.0, number=3) will generate [3.0, 3.5, 4.0] |
SDVariable |
SDBaseOps.linspace(String name,
DataType dataType,
double start,
double stop,
long number)
Create a new 1d array with values evenly spaced between values 'start' and 'stop'
For example, linspace(start=3.0, stop=4.0, number=3) will generate [3.0, 3.5, 4.0] |
SDVariable |
SDBaseOps.linspace(String name,
SDVariable start,
SDVariable stop,
SDVariable number,
DataType dataType)
Create a new 1d array with values evenly spaced between values 'start' and 'stop'
For example, linspace(start=3.0, stop=4.0, number=3) will generate [3.0, 3.5, 4.0] |
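The linspace example above (start=3.0, stop=4.0, number=3 giving [3.0, 3.5, 4.0]) follows from an inclusive, evenly spaced step. A plain-Java sketch (illustrative, not the SameDiff API):

```java
public class LinspaceSketch {
    // Sketch of linspace: number evenly spaced values from start to stop,
    // inclusive of both endpoints.
    public static double[] linspace(double start, double stop, int number) {
        double[] out = new double[number];
        if (number == 1) { out[0] = start; return out; }
        double step = (stop - start) / (number - 1);
        for (int i = 0; i < number; i++) {
            out[i] = start + i * step;
        }
        return out;
    }
}
```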
SDVariable[] |
SDMath.listDiff(SDVariable x,
SDVariable y)
Calculates the set difference between inputs X and Y (the values present in X but not in Y, along with their indices).
|
SDVariable[] |
SDMath.listDiff(String[] names,
SDVariable x,
SDVariable y)
Calculates the set difference between inputs X and Y (the values present in X but not in Y, along with their indices).
|
SDVariable |
SDCNN.localResponseNormalization(SDVariable input,
LocalResponseNormalizationConfig LocalResponseNormalizationConfig)
2D convolution layer operation - local response normalization
|
SDVariable |
SDCNN.localResponseNormalization(String name,
SDVariable input,
LocalResponseNormalizationConfig LocalResponseNormalizationConfig)
2D convolution layer operation - local response normalization
|
SDVariable |
SDMath.log(SDVariable x)
Element-wise logarithm function (base e - natural logarithm): out = log(x)
|
SDVariable |
SDMath.log(SDVariable x,
double base)
Element-wise logarithm function (with specified base): out = log_{base}(x)
|
SDVariable |
SDMath.log(String name,
SDVariable x)
Element-wise logarithm function (base e - natural logarithm): out = log(x)
|
SDVariable |
SDMath.log(String name,
SDVariable x,
double base)
Element-wise logarithm function (with specified base): out = log_{base}(x)
|
SDVariable |
SDMath.log1p(SDVariable x)
Elementwise natural logarithm function: out = log_e (1 + x)
|
SDVariable |
SDMath.log1p(String name,
SDVariable x)
Elementwise natural logarithm function: out = log_e (1 + x)
|
SDVariable |
SDLinalg.logdet(SDVariable input)
Calculates log of determinant.
|
SDVariable |
SDLinalg.logdet(String name,
SDVariable input)
Calculates log of determinant.
|
SDVariable |
SDMath.logEntropy(SDVariable in,
int... dimensions)
Log entropy reduction: log(-sum(x * log(x)))
|
SDVariable |
SDMath.logEntropy(String name,
SDVariable in,
int... dimensions)
Log entropy reduction: log(-sum(x * log(x)))
|
SDVariable |
SDLoss.logLoss(SDVariable label,
SDVariable predictions)
Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification.
|
SDVariable |
SDLoss.logLoss(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double epsilon)
Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification.
|
SDVariable |
SDLoss.logLoss(String name,
SDVariable label,
SDVariable predictions)
Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification.
|
SDVariable |
SDLoss.logLoss(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double epsilon)
Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification.
|
SDVariable |
SDRandom.logNormal(double mean,
double stddev,
DataType datatype,
long... shape)
Generate a new random INDArray, where values are randomly sampled according to a Log Normal distribution,
i.e., log(x) ~ N(mean, stdev) |
SDVariable |
SDRandom.logNormal(String name,
double mean,
double stddev,
DataType datatype,
long... shape)
Generate a new random INDArray, where values are randomly sampled according to a Log Normal distribution,
i.e., log(x) ~ N(mean, stdev) |
SDVariable |
SDLoss.logPoisson(SDVariable label,
SDVariable predictions,
SDVariable weights,
boolean full)
Log poisson loss: a loss function used for training classifiers.
Implements L = exp(c) - z * c where c is log(predictions) and z is labels. |
SDVariable |
SDLoss.logPoisson(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
boolean full)
Log poisson loss: a loss function used for training classifiers.
Implements L = exp(c) - z * c where c is log(predictions) and z is labels. |
SDVariable |
SDLoss.logPoisson(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
boolean full)
Log poisson loss: a loss function used for training classifiers.
Implements L = exp(c) - z * c where c is log(predictions) and z is labels. |
SDVariable |
SDLoss.logPoisson(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
boolean full)
Log poisson loss: a loss function used for training classifiers.
Implements L = exp(c) - z * c where c is log(predictions) and z is labels. |
SDVariable |
SDNN.logSigmoid(SDVariable x)
Element-wise sigmoid function: out[i] = log(sigmoid(in[i]))
|
SDVariable |
SDNN.logSigmoid(String name,
SDVariable x)
Element-wise sigmoid function: out[i] = log(sigmoid(in[i]))
|
SDVariable |
SDNN.logSoftmax(SDVariable x)
Log softmax activation
|
SDVariable |
SDNN.logSoftmax(SDVariable x,
int dimension)
Log softmax activation
|
SDVariable |
SDNN.logSoftmax(String name,
SDVariable x)
Log softmax activation
|
SDVariable |
SDNN.logSoftmax(String name,
SDVariable x,
int dimension)
Log softmax activation
|
SDVariable |
SDMath.logSumExp(SDVariable input,
int... dimensions)
Log-sum-exp reduction (optionally along dimension).
Computes log(sum(exp(x))) |
SDVariable |
SDMath.logSumExp(String name,
SDVariable input,
int... dimensions)
Log-sum-exp reduction (optionally along dimension).
Computes log(sum(exp(x))) |
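Log-sum-exp is usually computed in the numerically stable shifted form rather than literally as log(sum(exp(x))). A plain-Java sketch for the full-array reduction (illustrative; the real op also supports reduction along specific dimensions):

```java
public class LogSumExpSketch {
    // Numerically stable log(sum(exp(x))): subtract the max before
    // exponentiating so exp never overflows for large inputs, then add it back.
    public static double logSumExp(double[] x) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : x) max = Math.max(max, v);
        double sum = 0.0;
        for (double v : x) sum += Math.exp(v - max);
        return max + Math.log(sum);
    }
}
```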
SDVariable |
SDRNN.lstmblock(SDVariable x,
LSTMWeights LSTMWeights,
LSTMConfiguration LSTMConfiguration)
The LSTM block
|
SDVariable |
SDRNN.lstmblock(SDVariable maxTSLength,
SDVariable x,
SDVariable cLast,
SDVariable yLast,
LSTMWeights LSTMWeights,
LSTMConfiguration LSTMConfiguration)
The LSTM block
|
SDVariable |
SDRNN.lstmblock(String name,
SDVariable x,
LSTMWeights LSTMWeights,
LSTMConfiguration LSTMConfiguration)
The LSTM block
|
SDVariable |
SDRNN.lstmblock(String name,
SDVariable maxTSLength,
SDVariable x,
SDVariable cLast,
SDVariable yLast,
LSTMWeights LSTMWeights,
LSTMConfiguration LSTMConfiguration)
The LSTM block
|
SDVariable[] |
SDRNN.lstmCell(SDVariable x,
SDVariable cLast,
SDVariable yLast,
LSTMWeights LSTMWeights,
LSTMConfiguration LSTMConfiguration)
The LSTM cell.
|
SDVariable[] |
SDRNN.lstmCell(String[] names,
SDVariable x,
SDVariable cLast,
SDVariable yLast,
LSTMWeights LSTMWeights,
LSTMConfiguration LSTMConfiguration)
The LSTM cell.
|
SDVariable[] |
SDRNN.lstmLayer(SDVariable x,
LSTMLayerWeights LSTMLayerWeights,
LSTMLayerConfig LSTMLayerConfig)
Long Short-Term Memory layer - Hochreiter 1997.
Supports the following data formats: for unidirectional: TNS: shape [timeLength, numExamples, inOutSize]; NST: shape [numExamples, inOutSize, timeLength]; NTS: shape [numExamples, timeLength, inOutSize]; for bidirectional: T2NS: shape [timeLength, 2, numExamples, inOutSize] (for ONNX). Supports the following direction modes: FWD: forward; BWD: backward; BIDIR_SUM: bidirectional sum; BIDIR_CONCAT: bidirectional concat; BIDIR_EXTRA_DIM: bidirectional extra output dimension (in conjunction with dataFormat T2NS). Different gate configurations may be used: specify gate/cell/out alpha/beta and the activations for gate/cell/out from the activations enum ("RELU","SIGMOID","AFFINE","LEAKY_RELU","THRESHHOLD_RELU","SCALED_TAHN","HARD_SIGMOID","ELU","SOFTSIGN","SOFTPLUS"). This layer also supports MKLDNN (DNNL) and cuDNN acceleration |
SDVariable[] |
SDRNN.lstmLayer(SDVariable x,
SDVariable cLast,
SDVariable yLast,
SDVariable maxTSLength,
LSTMLayerWeights LSTMLayerWeights,
LSTMLayerConfig LSTMLayerConfig)
Long Short-Term Memory layer - Hochreiter 1997.
Supports the following data formats: for unidirectional: TNS: shape [timeLength, numExamples, inOutSize]; NST: shape [numExamples, inOutSize, timeLength]; NTS: shape [numExamples, timeLength, inOutSize]; for bidirectional: T2NS: shape [timeLength, 2, numExamples, inOutSize] (for ONNX). Supports the following direction modes: FWD: forward; BWD: backward; BIDIR_SUM: bidirectional sum; BIDIR_CONCAT: bidirectional concat; BIDIR_EXTRA_DIM: bidirectional extra output dimension (in conjunction with dataFormat T2NS). Different gate configurations may be used: specify gate/cell/out alpha/beta and the activations for gate/cell/out from the activations enum ("RELU","SIGMOID","AFFINE","LEAKY_RELU","THRESHHOLD_RELU","SCALED_TAHN","HARD_SIGMOID","ELU","SOFTSIGN","SOFTPLUS"). This layer also supports MKLDNN (DNNL) and cuDNN acceleration |
SDVariable[] |
SDRNN.lstmLayer(String[] names,
SDVariable x,
LSTMLayerWeights LSTMLayerWeights,
LSTMLayerConfig LSTMLayerConfig)
Long Short-Term Memory layer - Hochreiter 1997.
Supports the following data formats: for unidirectional: TNS: shape [timeLength, numExamples, inOutSize]; NST: shape [numExamples, inOutSize, timeLength]; NTS: shape [numExamples, timeLength, inOutSize]; for bidirectional: T2NS: shape [timeLength, 2, numExamples, inOutSize] (for ONNX). Supports the following direction modes: FWD: forward; BWD: backward; BIDIR_SUM: bidirectional sum; BIDIR_CONCAT: bidirectional concat; BIDIR_EXTRA_DIM: bidirectional extra output dimension (in conjunction with data format T2NS). You may use different gate configurations: specify the gate/cell/out alpha/beta values and the gate/cell/out activations from the activations enum ("RELU","SIGMOID","AFFINE","LEAKY_RELU","THRESHHOLD_RELU","SCALED_TAHN","HARD_SIGMOID","ELU","SOFTSIGN","SOFTPLUS"). This layer also supports MKL-DNN (DNNL) and cuDNN acceleration. |
SDVariable[] |
SDRNN.lstmLayer(String[] names,
SDVariable x,
SDVariable cLast,
SDVariable yLast,
SDVariable maxTSLength,
LSTMLayerWeights LSTMLayerWeights,
LSTMLayerConfig LSTMLayerConfig)
Long Short-Term Memory layer - Hochreiter 1997.
Supports the following data formats: for unidirectional: TNS: shape [timeLength, numExamples, inOutSize]; NST: shape [numExamples, inOutSize, timeLength]; NTS: shape [numExamples, timeLength, inOutSize]; for bidirectional: T2NS: shape [timeLength, 2, numExamples, inOutSize] (for ONNX). Supports the following direction modes: FWD: forward; BWD: backward; BIDIR_SUM: bidirectional sum; BIDIR_CONCAT: bidirectional concat; BIDIR_EXTRA_DIM: bidirectional extra output dimension (in conjunction with data format T2NS). You may use different gate configurations: specify the gate/cell/out alpha/beta values and the gate/cell/out activations from the activations enum ("RELU","SIGMOID","AFFINE","LEAKY_RELU","THRESHHOLD_RELU","SCALED_TAHN","HARD_SIGMOID","ELU","SOFTSIGN","SOFTPLUS"). This layer also supports MKL-DNN (DNNL) and cuDNN acceleration. |
SDVariable |
SDLinalg.lstsq(SDVariable matrix,
SDVariable rhs,
double l2_reguralizer)
Solver for linear least squares problems.
|
SDVariable |
SDLinalg.lstsq(SDVariable matrix,
SDVariable rhs,
double l2_reguralizer,
boolean fast)
Solver for linear least squares problems.
|
SDVariable |
SDLinalg.lstsq(String name,
SDVariable matrix,
SDVariable rhs,
double l2_reguralizer)
Solver for linear least squares problems.
|
SDVariable |
SDLinalg.lstsq(String name,
SDVariable matrix,
SDVariable rhs,
double l2_reguralizer,
boolean fast)
Solver for linear least squares problems.
|
SDVariable |
SDBaseOps.lt(SDVariable x,
double y)
Less than operation: elementwise x < y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.lt(SDVariable x,
SDVariable y)
Less than operation: elementwise x < y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.lt(String name,
SDVariable x,
double y)
Less than operation: elementwise x < y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.lt(String name,
SDVariable x,
SDVariable y)
Less than operation: elementwise x < y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
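The broadcasting rule cited in these rows is the NumPy one: shapes are aligned from the right, and each pair of dimensions must either match or contain a 1 (which is stretched). As an illustrative plain-Java sketch (not the ND4J implementation), the broadcast output shape can be computed like this:

```java
// Compute the broadcast output shape of two input shapes, following
// NumPy rules: align shapes from the right; each dimension pair must
// be equal, or one of them must be 1 (which is stretched to match).
public class BroadcastShape {
    public static int[] broadcast(int[] a, int[] b) {
        int rank = Math.max(a.length, b.length);
        int[] out = new int[rank];
        for (int i = 0; i < rank; i++) {
            // Read dimensions from the right; missing dimensions count as 1
            int da = i < a.length ? a[a.length - 1 - i] : 1;
            int db = i < b.length ? b[b.length - 1 - i] : 1;
            if (da != db && da != 1 && db != 1)
                throw new IllegalArgumentException("Shapes are not broadcastable");
            out[rank - 1 - i] = Math.max(da, db);
        }
        return out;
    }

    public static void main(String[] args) {
        // The example from the docs: [1,10] broadcast with [5,10] gives [5,10]
        System.out.println(java.util.Arrays.toString(
                broadcast(new int[]{1, 10}, new int[]{5, 10})));
    }
}
```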
SDVariable |
SDBaseOps.lte(SDVariable x,
double y)
Less than or equals operation: elementwise x <= y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.lte(SDVariable x,
SDVariable y)
Less than or equal to operation: elementwise x <= y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.lte(String name,
SDVariable x,
double y)
Less than or equals operation: elementwise x <= y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.lte(String name,
SDVariable x,
SDVariable y)
Less than or equal to operation: elementwise x <= y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDLinalg.lu(SDVariable input)
Computes LU decomposition.
|
SDVariable |
SDLinalg.lu(String name,
SDVariable input)
Computes LU decomposition.
|
SDVariable |
SDMath.manhattanDistance(SDVariable x,
SDVariable y,
int... dimensions)
Manhattan distance (l1 norm, l1 distance) reduction operation.
|
SDVariable |
SDMath.manhattanDistance(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Manhattan distance (l1 norm, l1 distance) reduction operation.
|
SDVariable |
SDBaseOps.matchCondition(SDVariable in,
Condition condition)
Returns a boolean mask of equal shape to the input, where the condition is satisfied - value 1 where satisfied, 0 otherwise
|
SDVariable |
SDBaseOps.matchCondition(String name,
SDVariable in,
Condition condition)
Returns a boolean mask of equal shape to the input, where the condition is satisfied - value 1 where satisfied, 0 otherwise
|
SDVariable |
SDBaseOps.matchConditionCount(SDVariable in,
Condition condition)
Returns a count of the number of elements that satisfy the condition
|
SDVariable |
SDBaseOps.matchConditionCount(SDVariable in,
Condition condition,
boolean keepDim,
int... dimensions)
Returns a count of the number of elements that satisfy the condition (for each slice along the specified dimensions)
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.matchConditionCount(SDVariable in,
Condition condition,
int... dimensions)
Returns a count of the number of elements that satisfy the condition (for each slice along the specified dimensions)
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.matchConditionCount(String name,
SDVariable in,
Condition condition)
Returns a count of the number of elements that satisfy the condition
|
SDVariable |
SDBaseOps.matchConditionCount(String name,
SDVariable in,
Condition condition,
boolean keepDim,
int... dimensions)
Returns a count of the number of elements that satisfy the condition (for each slice along the specified dimensions)
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.matchConditionCount(String name,
SDVariable in,
Condition condition,
int... dimensions)
Returns a count of the number of elements that satisfy the condition (for each slice along the specified dimensions)
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDLinalg.matmul(SDVariable a,
SDVariable b)
Performs matrix multiplication on input tensors.
|
SDVariable |
SDLinalg.matmul(String name,
SDVariable a,
SDVariable b)
Performs matrix multiplication on input tensors.
|
SDVariable[] |
SDLinalg.matrixBandPart(SDVariable input,
int minLower,
int maxUpper)
Copy a tensor setting outside a central band in each innermost matrix.
|
SDVariable[] |
SDLinalg.matrixBandPart(String[] names,
SDVariable input,
int minLower,
int maxUpper)
Copy a tensor setting outside a central band in each innermost matrix.
|
SDVariable |
SDMath.matrixDeterminant(SDVariable in)
Matrix determinant op.
|
SDVariable |
SDMath.matrixDeterminant(String name,
SDVariable in)
Matrix determinant op.
|
SDVariable |
SDMath.matrixInverse(SDVariable in)
Matrix inverse op.
|
SDVariable |
SDMath.matrixInverse(String name,
SDVariable in)
Matrix inverse op.
|
SDVariable |
SDBaseOps.max(SDVariable x,
boolean keepDims,
int... dimensions)
Max array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.max(SDVariable x,
int... dimensions)
Max array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.max(SDVariable first,
SDVariable second)
Element-wise maximum operation: out[i] = max(first[i], second[i])
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.max(SDVariable x,
SDVariable y)
Pairwise max operation, out = max(x, y)
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDBaseOps.max(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Max array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.max(String name,
SDVariable x,
int... dimensions)
Max array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.max(String name,
SDVariable first,
SDVariable second)
Element-wise maximum operation: out[i] = max(first[i], second[i])
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.max(String name,
SDVariable x,
SDVariable y)
Pairwise max operation, out = max(x, y)
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDCNN.maxPooling2d(SDVariable input,
Pooling2DConfig Pooling2DConfig)
2D Convolution layer operation - max pooling 2d
|
SDVariable |
SDCNN.maxPooling2d(String name,
SDVariable input,
Pooling2DConfig Pooling2DConfig)
2D Convolution layer operation - max pooling 2d
|
SDVariable |
SDCNN.maxPooling3d(SDVariable input,
Pooling3DConfig Pooling3DConfig)
3D convolution layer operation - max pooling 3d operation.
|
SDVariable |
SDCNN.maxPooling3d(String name,
SDVariable input,
Pooling3DConfig Pooling3DConfig)
3D convolution layer operation - max pooling 3d operation.
|
SDVariable[] |
SDCNN.maxPoolWithArgmax(SDVariable input,
Pooling2DConfig Pooling2DConfig)
2D Convolution layer operation - Max pooling on the input and outputs both max values and indices
|
SDVariable[] |
SDCNN.maxPoolWithArgmax(String[] names,
SDVariable input,
Pooling2DConfig Pooling2DConfig)
2D Convolution layer operation - Max pooling on the input and outputs both max values and indices
|
SDVariable |
SDBaseOps.mean(SDVariable x,
boolean keepDims,
int... dimensions)
Mean (average) array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.mean(SDVariable x,
int... dimensions)
Mean (average) array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.mean(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Mean (average) array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.mean(String name,
SDVariable x,
int... dimensions)
Mean (average) array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDLoss.meanPairwiseSquaredError(SDVariable label,
SDVariable predictions,
SDVariable weights)
Mean pairwise squared error.
MPWSE loss calculates the difference between pairs of consecutive elements in the predictions and labels arrays. For example, if predictions = [p0, p1, p2] and labels are [l0, l1, l2] then MPWSE is: [((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3 |
SDVariable |
SDLoss.meanPairwiseSquaredError(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Mean pairwise squared error.
MPWSE loss calculates the difference between pairs of consecutive elements in the predictions and labels arrays. For example, if predictions = [p0, p1, p2] and labels are [l0, l1, l2] then MPWSE is: [((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3 |
SDVariable |
SDLoss.meanPairwiseSquaredError(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights)
Mean pairwise squared error.
MPWSE loss calculates the difference between pairs of consecutive elements in the predictions and labels arrays. For example, if predictions = [p0, p1, p2] and labels are [l0, l1, l2] then MPWSE is: [((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3 |
SDVariable |
SDLoss.meanPairwiseSquaredError(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Mean pairwise squared error.
MPWSE loss calculates the difference between pairs of consecutive elements in the predictions and labels arrays. For example, if predictions = [p0, p1, p2] and labels are [l0, l1, l2] then MPWSE is: [((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3 |
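The MPWSE formula quoted in these rows can be computed directly. A plain-Java sketch over 1-D arrays (illustrative only, not the ND4J implementation; the weights argument and reduction modes are omitted):

```java
// Mean pairwise squared error: for every pair (i, j), square the
// difference between the prediction gap (p[i]-p[j]) and the label gap
// (l[i]-l[j]), then average over all pairs.
public class Mpwse {
    public static double mpwse(double[] p, double[] l) {
        double sum = 0;
        int pairs = 0;
        for (int i = 0; i < p.length; i++) {
            for (int j = i + 1; j < p.length; j++) {
                double d = (p[i] - p[j]) - (l[i] - l[j]); // difference of pairwise differences
                sum += d * d;
                pairs++;
            }
        }
        return sum / pairs;
    }

    public static void main(String[] args) {
        // p = [1,2,4], l = [1,2,3]: the three pair terms are 0, 1 and 1, so the mean is 2/3
        System.out.println(mpwse(new double[]{1, 2, 4}, new double[]{1, 2, 3}));
    }
}
```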
SDVariable |
SDLoss.meanSquaredError(SDVariable label,
SDVariable predictions,
SDVariable weights)
Mean squared error loss function.
|
SDVariable |
SDLoss.meanSquaredError(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Mean squared error loss function.
|
SDVariable |
SDLoss.meanSquaredError(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights)
Mean squared error loss function.
|
SDVariable |
SDLoss.meanSquaredError(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Mean squared error loss function.
|
SDVariable |
SDBaseOps.merge(SDVariable x,
SDVariable y)
The merge operation is a control operation that forwards either of its inputs to the output, as soon as
the first of them becomes available. |
SDVariable |
SDBaseOps.merge(String name,
SDVariable x,
SDVariable y)
The merge operation is a control operation that forwards either of its inputs to the output, as soon as
the first of them becomes available. |
SDVariable |
SDMath.mergeAdd(SDVariable... inputs)
Merge add function: merges an arbitrary number of equal shaped arrays using element-wise addition:
out = sum_i in[i] |
SDVariable |
SDMath.mergeAdd(String name,
SDVariable... inputs)
Merge add function: merges an arbitrary number of equal shaped arrays using element-wise addition:
out = sum_i in[i] |
SDVariable |
SDMath.mergeAvg(SDVariable... inputs)
Merge average function: merges an arbitrary number of equal shaped arrays using element-wise mean operation:
out = mean_i in[i] |
SDVariable |
SDMath.mergeAvg(String name,
SDVariable... inputs)
Merge average function: merges an arbitrary number of equal shaped arrays using element-wise mean operation:
out = mean_i in[i] |
SDVariable |
SDMath.mergeMax(SDVariable... inputs)
Merge max function: merges an arbitrary number of equal shaped arrays using element-wise maximum operation:
out = max_i in[i] |
SDVariable |
SDMath.mergeMax(String name,
SDVariable... inputs)
Merge max function: merges an arbitrary number of equal shaped arrays using element-wise maximum operation:
out = max_i in[i] |
SDVariable |
SDMath.mergeMaxIndex(SDVariable... x)
Return an array containing the indices of the maximum elements across the input tensors
|
SDVariable |
SDMath.mergeMaxIndex(SDVariable[] x,
DataType dataType)
Return an array containing the indices of the maximum elements across the input tensors
|
SDVariable |
SDMath.mergeMaxIndex(String name,
SDVariable... x)
Return an array containing the indices of the maximum elements across the input tensors
|
SDVariable |
SDMath.mergeMaxIndex(String name,
SDVariable[] x,
DataType dataType)
Return an array containing the indices of the maximum elements across the input tensors
|
SDVariable[] |
SDMath.meshgrid(SDVariable[] inputs,
boolean cartesian)
Broadcasts parameters for evaluation on an N-D grid.
|
SDVariable[] |
SDMath.meshgrid(String[] names,
SDVariable[] inputs,
boolean cartesian)
Broadcasts parameters for evaluation on an N-D grid.
|
SDVariable |
SDBaseOps.min(SDVariable x,
boolean keepDims,
int... dimensions)
Minimum array reduction operation, optionally along specified dimensions.
|
SDVariable |
SDBaseOps.min(SDVariable x,
int... dimensions)
Minimum array reduction operation, optionally along specified dimensions.
|
SDVariable |
SDBaseOps.min(SDVariable first,
SDVariable second)
Element-wise minimum operation: out[i] = min(first[i], second[i])
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.min(SDVariable x,
SDVariable y)
Pairwise min operation, out = min(x, y)
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDBaseOps.min(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Minimum array reduction operation, optionally along specified dimensions.
|
SDVariable |
SDBaseOps.min(String name,
SDVariable x,
int... dimensions)
Minimum array reduction operation, optionally along specified dimensions.
|
SDVariable |
SDBaseOps.min(String name,
SDVariable first,
SDVariable second)
Element-wise minimum operation: out[i] = min(first[i], second[i])
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.min(String name,
SDVariable x,
SDVariable y)
Pairwise min operation, out = min(x, y)
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDBaseOps.mmul(SDVariable x,
SDVariable y)
Matrix multiplication: out = mmul(x,y)
Supports specifying transpose argument to perform operation such as mmul(a^T, b), etc. |
SDVariable |
SDLinalg.mmul(SDVariable x,
SDVariable y)
Matrix multiplication: out = mmul(x,y)
Supports specifying transpose argument to perform operation such as mmul(a^T, b), etc. |
SDVariable |
SDBaseOps.mmul(SDVariable x,
SDVariable y,
boolean transposeX,
boolean transposeY,
boolean transposeZ)
Matrix multiplication: out = mmul(x,y)
Supports specifying transpose argument to perform operation such as mmul(a^T, b), etc. |
SDVariable |
SDLinalg.mmul(SDVariable x,
SDVariable y,
boolean transposeX,
boolean transposeY,
boolean transposeZ)
Matrix multiplication: out = mmul(x,y)
Supports specifying transpose argument to perform operation such as mmul(a^T, b), etc. |
SDVariable |
SDBaseOps.mmul(String name,
SDVariable x,
SDVariable y)
Matrix multiplication: out = mmul(x,y)
Supports specifying transpose argument to perform operation such as mmul(a^T, b), etc. |
SDVariable |
SDLinalg.mmul(String name,
SDVariable x,
SDVariable y)
Matrix multiplication: out = mmul(x,y)
Supports specifying transpose argument to perform operation such as mmul(a^T, b), etc. |
SDVariable |
SDBaseOps.mmul(String name,
SDVariable x,
SDVariable y,
boolean transposeX,
boolean transposeY,
boolean transposeZ)
Matrix multiplication: out = mmul(x,y)
Supports specifying transpose argument to perform operation such as mmul(a^T, b), etc. |
SDVariable |
SDLinalg.mmul(String name,
SDVariable x,
SDVariable y,
boolean transposeX,
boolean transposeY,
boolean transposeZ)
Matrix multiplication: out = mmul(x,y)
Supports specifying transpose argument to perform operation such as mmul(a^T, b), etc. |
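The transpose arguments in the five-argument overloads apply before and after the product: transposeX and transposeY transpose the inputs, transposeZ transposes the result. A plain-Java sketch of that semantics on small dense matrices (illustrative only; the real op runs on INDArray buffers):

```java
// mmul with transpose flags: out = op(t?(x) * t?(y)), where t? applies
// the transpose only when the corresponding flag is set.
public class Mmul {
    static double[][] t(double[][] m) { // matrix transpose
        double[][] r = new double[m[0].length][m.length];
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < m[0].length; j++)
                r[j][i] = m[i][j];
        return r;
    }

    public static double[][] mmul(double[][] x, double[][] y,
                                  boolean tx, boolean ty, boolean tz) {
        if (tx) x = t(x);
        if (ty) y = t(y);
        double[][] out = new double[x.length][y[0].length];
        for (int i = 0; i < x.length; i++)
            for (int k = 0; k < y.length; k++)
                for (int j = 0; j < y[0].length; j++)
                    out[i][j] += x[i][k] * y[k][j];
        return tz ? t(out) : out;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2}, {3, 4}};
        double[][] b = {{5, 6}, {7, 8}};
        // Plain product a*b is [[19,22],[43,50]]; a^T*b is [[26,30],[38,44]]
        System.out.println(java.util.Arrays.deepToString(mmul(a, b, false, false, false)));
        System.out.println(java.util.Arrays.deepToString(mmul(a, b, true, false, false)));
    }
}
```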
SDVariable |
SDMath.mod(SDVariable x,
SDVariable y)
Pairwise modulus (remainder) operation, out = x % y
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.mod(String name,
SDVariable x,
SDVariable y)
Pairwise modulus (remainder) operation, out = x % y
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable[] |
SDMath.moments(SDVariable input,
int... axes)
Calculate the mean and (population) variance for the input variable, for the specified axis
|
SDVariable[] |
SDMath.moments(String[] names,
SDVariable input,
int... axes)
Calculate the mean and (population) variance for the input variable, for the specified axis
|
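The "population" qualifier in the moments() description means the variance divides by N rather than N-1. A plain-Java sketch over a flat array (illustrative; the real op reduces along the given axes):

```java
// Mean and population variance of a flat array, as returned by moments().
public class Moments {
    public static double[] moments(double[] x) {
        double mean = 0;
        for (double v : x) mean += v;
        mean /= x.length;
        double var = 0;
        for (double v : x) var += (v - mean) * (v - mean);
        var /= x.length; // population variance: divide by N, not N-1
        return new double[]{mean, var};
    }

    public static void main(String[] args) {
        // [1,2,3,4] has mean 2.5 and population variance 1.25
        System.out.println(java.util.Arrays.toString(moments(new double[]{1, 2, 3, 4})));
    }
}
```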
SDVariable |
SDMath.mul(SDVariable x,
double value)
Scalar multiplication operation, out = in * scalar
|
SDVariable |
SDMath.mul(SDVariable x,
SDVariable y)
Pairwise multiplication operation, out = x * y
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.mul(String name,
SDVariable x,
double value)
Scalar multiplication operation, out = in * scalar
|
SDVariable |
SDMath.mul(String name,
SDVariable x,
SDVariable y)
Pairwise multiplication operation, out = x * y
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDNN.multiHeadDotProductAttention(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable Wq,
SDVariable Wk,
SDVariable Wv,
SDVariable Wo,
SDVariable mask,
boolean scaled)
This performs multi-headed dot product attention on the given timeseries input
out = concat(head_1, head_2, ..., head_n) * Wo head_i = dot_product_attention(Wq_i*q, Wk_i*k, Wv_i*v) Optionally with normalization when calculating the attention for each head. See also "Attention is all you need" (https://arxiv.org/abs/1706.03762, pp. |
SDVariable |
SDNN.multiHeadDotProductAttention(String name,
SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable Wq,
SDVariable Wk,
SDVariable Wv,
SDVariable Wo,
SDVariable mask,
boolean scaled)
This performs multi-headed dot product attention on the given timeseries input
out = concat(head_1, head_2, ..., head_n) * Wo head_i = dot_product_attention(Wq_i*q, Wk_i*k, Wv_i*v) Optionally with normalization when calculating the attention for each head. See also "Attention is all you need" (https://arxiv.org/abs/1706.03762, pp. |
SDVariable |
SDMath.neg(SDVariable x)
Elementwise negative operation: out = -x
|
SDVariable |
SDMath.neg(String name,
SDVariable x)
Elementwise negative operation: out = -x
|
SDVariable |
SDBaseOps.neq(SDVariable x,
double y)
Not equals operation: elementwise x != y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.neq(SDVariable x,
SDVariable y)
Not equal to operation: elementwise x != y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.neq(String name,
SDVariable x,
double y)
Not equals operation: elementwise x != y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.neq(String name,
SDVariable x,
SDVariable y)
Not equal to operation: elementwise x != y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDImage.nonMaxSuppression(SDVariable boxes,
SDVariable scores,
int maxOutSize,
double iouThreshold,
double scoreThreshold)
Greedily selects a subset of bounding boxes in descending order of score
|
SDVariable |
SDImage.nonMaxSuppression(String name,
SDVariable boxes,
SDVariable scores,
int maxOutSize,
double iouThreshold,
double scoreThreshold)
Greedily selects a subset of bounding boxes in descending order of score
|
SDVariable |
SDBaseOps.norm1(SDVariable x,
boolean keepDims,
int... dimensions)
Norm1 (L1 norm) reduction operation: The output contains the L1 norm for each tensor/subset along the specified dimensions:
out = sum_i abs(x[i]) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.norm1(SDVariable x,
int... dimensions)
Norm1 (L1 norm) reduction operation: The output contains the L1 norm for each tensor/subset along the specified dimensions:
out = sum_i abs(x[i]) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.norm1(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Norm1 (L1 norm) reduction operation: The output contains the L1 norm for each tensor/subset along the specified dimensions:
out = sum_i abs(x[i]) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.norm1(String name,
SDVariable x,
int... dimensions)
Norm1 (L1 norm) reduction operation: The output contains the L1 norm for each tensor/subset along the specified dimensions:
out = sum_i abs(x[i]) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.norm2(SDVariable x,
boolean keepDims,
int... dimensions)
Norm2 (L2 norm) reduction operation: The output contains the L2 norm for each tensor/subset along the specified dimensions:
out = sqrt(sum_i x[i]^2) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.norm2(SDVariable x,
int... dimensions)
Norm2 (L2 norm) reduction operation: The output contains the L2 norm for each tensor/subset along the specified dimensions:
out = sqrt(sum_i x[i]^2) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.norm2(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Norm2 (L2 norm) reduction operation: The output contains the L2 norm for each tensor/subset along the specified dimensions:
out = sqrt(sum_i x[i]^2) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.norm2(String name,
SDVariable x,
int... dimensions)
Norm2 (L2 norm) reduction operation: The output contains the L2 norm for each tensor/subset along the specified dimensions:
out = sqrt(sum_i x[i]^2) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDRandom.normal(double mean,
double stddev,
DataType datatype,
long... shape)
Generate a new random INDArray, where values are randomly sampled according to a Gaussian (normal) distribution,
N(mean, stdev) |
SDVariable |
SDRandom.normal(String name,
double mean,
double stddev,
DataType datatype,
long... shape)
Generate a new random INDArray, where values are randomly sampled according to a Gaussian (normal) distribution,
N(mean, stdev) |
SDVariable[] |
SDMath.normalizeMoments(SDVariable counts,
SDVariable means,
SDVariable variances,
double shift)
Calculate the mean and variance from the sufficient statistics
|
SDVariable[] |
SDMath.normalizeMoments(String[] names,
SDVariable counts,
SDVariable means,
SDVariable variances,
double shift)
Calculate the mean and variance from the sufficient statistics
|
SDVariable |
SDRandom.normalTruncated(double mean,
double stddev,
DataType datatype,
long... shape)
Generate a new random INDArray, where values are randomly sampled according to a Gaussian (normal) distribution,
N(mean, stdev). |
SDVariable |
SDRandom.normalTruncated(String name,
double mean,
double stddev,
DataType datatype,
long... shape)
Generate a new random INDArray, where values are randomly sampled according to a Gaussian (normal) distribution,
N(mean, stdev). |
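Truncated normal sampling is typically implemented by rejection: redraw any value that falls too far from the mean. A plain-Java sketch of that scheme (illustrative only; the two-standard-deviation cutoff is an assumption borrowed from the common truncated-normal convention, not taken from the ND4J source):

```java
import java.util.Random;

// Rejection sampling for a truncated Gaussian: redraw any standard-normal
// sample whose magnitude exceeds maxDev, then scale and shift it.
// (Sketch; maxDev = 2.0 is an assumed cutoff, not ND4J's documented one.)
public class TruncatedNormal {
    public static double sample(Random rng, double mean, double stddev, double maxDev) {
        double z;
        do {
            z = rng.nextGaussian(); // standard normal draw
        } while (Math.abs(z) > maxDev);
        return mean + stddev * z;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        for (int i = 0; i < 5; i++)
            System.out.println(sample(rng, 0.0, 1.0, 2.0));
    }
}
```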
SDVariable |
SDBaseOps.normmax(SDVariable x,
boolean keepDims,
int... dimensions)
Max norm (infinity norm) reduction operation: The output contains the max norm for each tensor/subset along the
specified dimensions: out = max(abs(x[i])) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.normmax(SDVariable x,
int... dimensions)
Max norm (infinity norm) reduction operation: The output contains the max norm for each tensor/subset along the
specified dimensions: out = max(abs(x[i])) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.normmax(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Max norm (infinity norm) reduction operation: The output contains the max norm for each tensor/subset along the
specified dimensions: out = max(abs(x[i])) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.normmax(String name,
SDVariable x,
int... dimensions)
Max norm (infinity norm) reduction operation: The output contains the max norm for each tensor/subset along the
specified dimensions: out = max(abs(x[i])) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.oneHot(SDVariable indices,
int depth)
Convert the array to a one-hot array with values 0 and 1 for each entry
If input has shape [ a, ..., n] then output has shape [ a, ..., n, depth], with out[i, ..., j, in[i,...,j]] = 1 and other values set to 0. See oneHot(SDVariable, int, int, double, double) |
SDVariable |
SDBaseOps.oneHot(SDVariable indices,
int depth,
int axis,
double on,
double off)
Convert the array to a one-hot array with values on and off for each entry
If input has shape [ a, ..., n] then output has shape [ a, ..., n, depth], with out[i, ..., j, in[i,...,j]] = on and other values set to off |
SDVariable |
SDBaseOps.oneHot(SDVariable indices,
int depth,
int axis,
double on,
double off,
DataType dataType)
Convert the array to a one-hot array with values on and off for each entry
If input has shape [ a, ..., n] then output has shape [ a, ..., n, depth], with out[i, ..., j, in[i,...,j]] = on and other values set to off |
SDVariable |
SDBaseOps.oneHot(String name,
SDVariable indices,
int depth)
Convert the array to a one-hot array with values 0 and 1 for each entry
If input has shape [ a, ..., n] then output has shape [ a, ..., n, depth], with out[i, ..., j, in[i,...,j]] = 1 and other values set to 0. See oneHot(SDVariable, int, int, double, double) |
SDVariable |
SDBaseOps.oneHot(String name,
SDVariable indices,
int depth,
int axis,
double on,
double off)
Convert the array to a one-hot array with values on and off for each entry
If input has shape [ a, ..., n] then output has shape [ a, ..., n, depth], with out[i, ..., j, in[i,...,j]] = on and other values set to off |
SDVariable |
SDBaseOps.oneHot(String name,
SDVariable indices,
int depth,
int axis,
double on,
double off,
DataType dataType)
Convert the array to a one-hot array with values on and off for each entry
If input has shape [ a, ..., n] then output has shape [ a, ..., n, depth], with out[i, ..., j, in[i,...,j]] = on and other values set to off |
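The one-hot semantics above can be sketched in plain Java for the rank-1 case (illustrative only, not the ND4J implementation): each index becomes a row of length depth, with the indexed position set to on and every other position set to off.

```java
// Plain-Java sketch of oneHot for a rank-1 index array:
// out[i][indices[i]] = on; all other entries = off.
public class OneHotDemo {
    static double[][] oneHot(int[] indices, int depth, double on, double off) {
        double[][] out = new double[indices.length][depth];
        for (int i = 0; i < indices.length; i++) {
            java.util.Arrays.fill(out[i], off);
            out[i][indices[i]] = on;
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] oh = oneHot(new int[]{1, 0, 2}, 3, 1.0, 0.0);
        System.out.println(java.util.Arrays.deepToString(oh));
        // [[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
    }
}
```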
SDVariable |
SDBaseOps.onesLike(SDVariable input)
Return a variable of all 1s, with the same shape as the input variable.
|
SDVariable |
SDBaseOps.onesLike(SDVariable input,
DataType dataType)
As per onesLike(String, SDVariable) but the output datatype may be specified
|
SDVariable |
SDBaseOps.onesLike(String name,
SDVariable input)
Return a variable of all 1s, with the same shape as the input variable.
|
SDVariable |
SDBaseOps.onesLike(String name,
SDVariable input,
DataType dataType)
As per onesLike(String, SDVariable) but the output datatype may be specified
|
SDVariable |
SDBitwise.or(SDVariable x,
SDVariable y)
Bitwise OR operation.
|
SDVariable |
SDMath.or(SDVariable x,
SDVariable y)
Boolean OR operation: elementwise (x != 0) || (y != 0)
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBitwise.or(String name,
SDVariable x,
SDVariable y)
Bitwise OR operation.
|
SDVariable |
SDMath.or(String name,
SDVariable x,
SDVariable y)
Boolean OR operation: elementwise (x != 0) || (y != 0)
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDNN.pad(SDVariable input,
SDVariable padding,
double constant)
Padding operation: pads the input array as specified by the padding array, filling the padded region with the given constant value
|
SDVariable |
SDNN.pad(SDVariable input,
SDVariable padding,
PadMode PadMode,
double constant)
Padding operation: pads the input array as specified by the padding array and pad mode, using the given constant value where applicable
|
SDVariable |
SDNN.pad(String name,
SDVariable input,
SDVariable padding,
double constant)
Padding operation: pads the input array as specified by the padding array, filling the padded region with the given constant value
|
SDVariable |
SDNN.pad(String name,
SDVariable input,
SDVariable padding,
PadMode PadMode,
double constant)
Padding operation: pads the input array as specified by the padding array and pad mode, using the given constant value where applicable
|
SDVariable |
SDBaseOps.permute(SDVariable x,
int... dimensions)
Array permutation operation: permute the dimensions according to the specified permutation indices.
Example: if input has shape [a,b,c] and dimensions = [2,0,1] the output has shape [c,a,b] |
SDVariable |
SDBaseOps.permute(SDVariable x,
SDVariable dimensions)
Array permutation operation: permute the dimensions according to the specified permutation indices.
Example: if input has shape [a,b,c] and dimensions = [2,0,1] the output has shape [c,a,b] |
SDVariable |
SDBaseOps.permute(String name,
SDVariable x,
int... dimensions)
Array permutation operation: permute the dimensions according to the specified permutation indices.
Example: if input has shape [a,b,c] and dimensions = [2,0,1] the output has shape [c,a,b] |
SDVariable |
SDBaseOps.permute(String name,
SDVariable x,
SDVariable dimensions)
Array permutation operation: permute the dimensions according to the specified permutation indices.
Example: if input has shape [a,b,c] and dimensions = [2,0,1] the output has shape [c,a,b] |
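The shape mapping performed by permute can be sketched in plain Java (illustrative only, not the ND4J implementation): output dimension i takes its size from input dimension dimensions[i].

```java
// Plain-Java sketch of how permute rearranges a shape:
// newShape[i] = shape[dimensions[i]].
public class PermuteDemo {
    static long[] permutedShape(long[] shape, int[] dimensions) {
        long[] out = new long[shape.length];
        for (int i = 0; i < dimensions.length; i++) {
            out[i] = shape[dimensions[i]];
        }
        return out;
    }

    public static void main(String[] args) {
        // shape [a,b,c] = [2,3,4] with dimensions [2,0,1] -> [c,a,b] = [4,2,3]
        long[] out = permutedShape(new long[]{2, 3, 4}, new int[]{2, 0, 1});
        System.out.println(java.util.Arrays.toString(out)); // [4, 2, 3]
    }
}
```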
SDVariable |
SDMath.pow(SDVariable x,
double value)
Element-wise power function: out = x^value
|
SDVariable |
SDMath.pow(SDVariable x,
SDVariable y)
Element-wise (broadcastable) power function: out = x[i]^y[i]
|
SDVariable |
SDMath.pow(String name,
SDVariable x,
double value)
Element-wise power function: out = x^value
|
SDVariable |
SDMath.pow(String name,
SDVariable x,
SDVariable y)
Element-wise (broadcastable) power function: out = x[i]^y[i]
|
SDVariable |
SDNN.preciseGelu(SDVariable x)
GELU activation function - Gaussian Error Linear Units
For more details, see Gaussian Error Linear Units (GELUs) - https://arxiv.org/abs/1606.08415 This variant uses the precise (non-approximate) GELU calculation |
SDVariable |
SDNN.preciseGelu(String name,
SDVariable x)
GELU activation function - Gaussian Error Linear Units
For more details, see Gaussian Error Linear Units (GELUs) - https://arxiv.org/abs/1606.08415 This variant uses the precise (non-approximate) GELU calculation |
SDVariable |
SDNN.prelu(SDVariable input,
SDVariable alpha,
int... sharedAxes)
PReLU (Parameterized Rectified Linear Unit) operation.
|
SDVariable |
SDNN.prelu(String name,
SDVariable input,
SDVariable alpha,
int... sharedAxes)
PReLU (Parameterized Rectified Linear Unit) operation.
|
SDVariable |
SDBaseOps.prod(SDVariable x,
boolean keepDims,
int... dimensions)
Product array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.prod(SDVariable x,
int... dimensions)
Product array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.prod(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Product array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.prod(String name,
SDVariable x,
int... dimensions)
Product array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable[] |
SDLinalg.qr(SDVariable input)
Computes the QR decomposition of the input matrix.
|
SDVariable[] |
SDLinalg.qr(SDVariable input,
boolean full)
Computes the QR decomposition of the input matrix.
|
SDVariable[] |
SDLinalg.qr(String[] names,
SDVariable input)
Computes the QR decomposition of the input matrix.
|
SDVariable[] |
SDLinalg.qr(String[] names,
SDVariable input,
boolean full)
Computes the QR decomposition of the input matrix.
|
SDVariable |
SDImage.randomCrop(SDVariable input,
SDVariable shape)
Randomly crops an image to the specified shape
|
SDVariable |
SDImage.randomCrop(String name,
SDVariable input,
SDVariable shape)
Randomly crops an image to the specified shape
|
SDVariable |
SDBaseOps.range(double from,
double to,
double step,
DataType dataType)
Create a new variable with a 1d array, where the values start at from and increment by step
up to (but not including) to. For example, range(1.0, 3.0, 0.5) will return [1.0, 1.5, 2.0, 2.5] |
SDVariable |
SDBaseOps.range(SDVariable from,
SDVariable to,
SDVariable step,
DataType dataType)
Create a new variable with a 1d array, where the values start at from and increment by step
up to (but not including) to. For example, range(1.0, 3.0, 0.5) will return [1.0, 1.5, 2.0, 2.5] |
SDVariable |
SDBaseOps.range(String name,
double from,
double to,
double step,
DataType dataType)
Create a new variable with a 1d array, where the values start at from and increment by step
up to (but not including) to. For example, range(1.0, 3.0, 0.5) will return [1.0, 1.5, 2.0, 2.5] |
SDVariable |
SDBaseOps.range(String name,
SDVariable from,
SDVariable to,
SDVariable step,
DataType dataType)
Create a new variable with a 1d array, where the values start at from and increment by step
up to (but not including) to. For example, range(1.0, 3.0, 0.5) will return [1.0, 1.5, 2.0, 2.5] |
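The half-open-interval semantics can be sketched in plain Java (illustrative only, not the ND4J implementation): the element count is the number of steps that fit strictly below the upper bound.

```java
// Plain-Java sketch of range semantics: values start at 'from' and advance by
// 'step', up to but not including 'to'.
public class RangeDemo {
    static double[] range(double from, double to, double step) {
        int n = (int) Math.ceil((to - from) / step);
        double[] out = new double[n];
        for (int i = 0; i < n; i++) out[i] = from + i * step;
        return out;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(range(1.0, 3.0, 0.5)));
        // [1.0, 1.5, 2.0, 2.5]
    }
}
```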
SDVariable |
SDBaseOps.rank(SDVariable in)
Returns the rank (number of dimensions, i.e., length(shape)) of the specified INDArray as a 0D scalar variable
|
SDVariable |
SDBaseOps.rank(String name,
SDVariable in)
Returns the rank (number of dimensions, i.e., length(shape)) of the specified INDArray as a 0D scalar variable
|
SDVariable |
SDMath.rationalTanh(SDVariable x)
Rational Tanh Approximation elementwise function, as described in the paper:
Compact Convolutional Neural Network Cascade for Face Detection This is a faster Tanh approximation |
SDVariable |
SDMath.rationalTanh(String name,
SDVariable x)
Rational Tanh Approximation elementwise function, as described in the paper:
Compact Convolutional Neural Network Cascade for Face Detection This is a faster Tanh approximation |
SDVariable |
SDMath.rdiv(SDVariable x,
double value)
Scalar reverse division operation, out = scalar / in
|
SDVariable |
SDMath.rdiv(SDVariable x,
SDVariable y)
Pairwise reverse division operation, out = y / x
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.rdiv(String name,
SDVariable x,
double value)
Scalar reverse division operation, out = scalar / in
|
SDVariable |
SDMath.rdiv(String name,
SDVariable x,
SDVariable y)
Pairwise reverse division operation, out = y / x
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.reciprocal(SDVariable x)
Element-wise reciprocal (inverse) function: out[i] = 1 / in[i]
|
SDVariable |
SDMath.reciprocal(String name,
SDVariable x)
Element-wise reciprocal (inverse) function: out[i] = 1 / in[i]
|
SDVariable |
SDMath.rectifiedTanh(SDVariable x)
Rectified tanh operation: max(0, tanh(in))
|
SDVariable |
SDMath.rectifiedTanh(String name,
SDVariable x)
Rectified tanh operation: max(0, tanh(in))
|
SDVariable |
SDNN.relu(SDVariable x,
double cutoff)
Element-wise rectified linear function with specified cutoff:
out[i] = in[i] if in[i] >= cutoff out[i] = 0 otherwise |
SDVariable |
SDNN.relu(String name,
SDVariable x,
double cutoff)
Element-wise rectified linear function with specified cutoff:
out[i] = in[i] if in[i] >= cutoff out[i] = 0 otherwise |
SDVariable |
SDNN.relu6(SDVariable x,
double cutoff)
Element-wise "rectified linear 6" function with specified cutoff:
out[i] = min(max(in, cutoff), 6) |
SDVariable |
SDNN.relu6(String name,
SDVariable x,
double cutoff)
Element-wise "rectified linear 6" function with specified cutoff:
out[i] = min(max(in, cutoff), 6) |
SDVariable |
SDNN.reluLayer(SDVariable input,
SDVariable weights,
SDVariable bias)
ReLU (Rectified Linear Unit) layer operation: out = relu(mmul(in,w) + bias)
Note that bias array is optional |
SDVariable |
SDNN.reluLayer(String name,
SDVariable input,
SDVariable weights,
SDVariable bias)
ReLU (Rectified Linear Unit) layer operation: out = relu(mmul(in,w) + bias)
Note that bias array is optional |
SDVariable |
SDBaseOps.replaceWhere(SDVariable update,
double value,
Condition condition)
Element-wise replace where condition:
out[i] = value if condition(update[i]) is satisfied, or out[i] = update[i] if condition(update[i]) is NOT satisfied |
SDVariable |
SDBaseOps.replaceWhere(SDVariable update,
SDVariable from,
Condition condition)
Element-wise replace where condition:
out[i] = from[i] if condition(update[i]) is satisfied, or out[i] = update[i] if condition(update[i]) is NOT satisfied |
SDVariable |
SDBaseOps.replaceWhere(String name,
SDVariable update,
double value,
Condition condition)
Element-wise replace where condition:
out[i] = value if condition(update[i]) is satisfied, or out[i] = update[i] if condition(update[i]) is NOT satisfied |
SDVariable |
SDBaseOps.replaceWhere(String name,
SDVariable update,
SDVariable from,
Condition condition)
Element-wise replace where condition:
out[i] = from[i] if condition(update[i]) is satisfied, or out[i] = update[i] if condition(update[i]) is NOT satisfied |
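The scalar form of replaceWhere can be sketched in plain Java. The condition used here (element is negative) is just an illustrative stand-in for ND4J's Condition argument, and the code is not the ND4J implementation.

```java
// Plain-Java sketch of replaceWhere with a scalar replacement value:
// out[i] = value where the condition holds for update[i], else update[i].
public class ReplaceWhereDemo {
    static double[] replaceWhereNegative(double[] update, double value) {
        double[] out = update.clone();
        for (int i = 0; i < out.length; i++) {
            if (out[i] < 0) out[i] = value; // illustrative condition: x < 0
        }
        return out;
    }

    public static void main(String[] args) {
        double[] out = replaceWhereNegative(new double[]{1, -2, 3, -4}, 0.0);
        System.out.println(java.util.Arrays.toString(out)); // [1.0, 0.0, 3.0, 0.0]
    }
}
```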
SDVariable |
SDBaseOps.reshape(SDVariable x,
long... shape)
Reshape the input variable to the specified (fixed) shape.
|
SDVariable |
SDBaseOps.reshape(SDVariable x,
SDVariable shape)
Reshape the input variable to the specified (fixed) shape.
|
SDVariable |
SDBaseOps.reshape(String name,
SDVariable x,
long... shape)
Reshape the input variable to the specified (fixed) shape.
|
SDVariable |
SDBaseOps.reshape(String name,
SDVariable x,
SDVariable shape)
Reshape the input variable to the specified (fixed) shape.
|
SDVariable |
SDBaseOps.reverse(SDVariable x,
int... dimensions)
Reverse the values of an array for the specified dimensions
If input is: [ 1, 2, 3] [ 4, 5, 6] then reverse(in, 0): [3, 2, 1] [6, 5, 4] reverse(in, 1): [4, 5, 6] [1, 2, 3] |
SDVariable |
SDBaseOps.reverse(String name,
SDVariable x,
int... dimensions)
Reverse the values of an array for the specified dimensions
If input is: [ 1, 2, 3] [ 4, 5, 6] then reverse(in, 0): [3, 2, 1] [6, 5, 4] reverse(in, 1): [4, 5, 6] [1, 2, 3] |
SDVariable |
SDBaseOps.reverseSequence(SDVariable x,
SDVariable seq_lengths)
Reverse sequence op: for each slice along dimension seqDimension, the first seqLength values are reversed
|
SDVariable |
SDBaseOps.reverseSequence(SDVariable x,
SDVariable seq_lengths,
int seqDim,
int batchDim)
Reverse sequence op: for each slice along dimension seqDimension, the first seqLength values are reversed
|
SDVariable |
SDBaseOps.reverseSequence(String name,
SDVariable x,
SDVariable seq_lengths)
Reverse sequence op: for each slice along dimension seqDimension, the first seqLength values are reversed
|
SDVariable |
SDBaseOps.reverseSequence(String name,
SDVariable x,
SDVariable seq_lengths,
int seqDim,
int batchDim)
Reverse sequence op: for each slice along dimension seqDimension, the first seqLength values are reversed
|
SDVariable |
SDImage.rgbToHsv(SDVariable input)
Converting array from RGB to HSV format
|
SDVariable |
SDImage.rgbToHsv(String name,
SDVariable input)
Converting array from RGB to HSV format
|
SDVariable |
SDImage.rgbToYiq(SDVariable input)
Converting array from RGB to YIQ format
|
SDVariable |
SDImage.rgbToYiq(String name,
SDVariable input)
Converting array from RGB to YIQ format
|
SDVariable |
SDImage.rgbToYuv(SDVariable input)
Converting array from RGB to YUV format
|
SDVariable |
SDImage.rgbToYuv(String name,
SDVariable input)
Converting array from RGB to YUV format
|
SDVariable |
SDBitwise.rightShift(SDVariable x,
SDVariable y)
Bitwise right shift operation.
|
SDVariable |
SDBitwise.rightShift(String name,
SDVariable x,
SDVariable y)
Bitwise right shift operation.
|
SDVariable |
SDBitwise.rightShiftCyclic(SDVariable x,
SDVariable y)
Bitwise right cyclical shift operation.
|
SDVariable |
SDBitwise.rightShiftCyclic(String name,
SDVariable x,
SDVariable y)
Bitwise right cyclical shift operation.
|
SDVariable |
SDMath.round(SDVariable x)
Element-wise round function: out = round(x).
Rounds (up or down depending on value) to the nearest integer value. |
SDVariable |
SDMath.round(String name,
SDVariable x)
Element-wise round function: out = round(x).
Rounds (up or down depending on value) to the nearest integer value. |
SDVariable |
SDMath.rsqrt(SDVariable x)
Element-wise reciprocal (inverse) of square root: out = 1.0 / sqrt(x)
|
SDVariable |
SDMath.rsqrt(String name,
SDVariable x)
Element-wise reciprocal (inverse) of square root: out = 1.0 / sqrt(x)
|
SDVariable |
SDMath.rsub(SDVariable x,
double value)
Scalar reverse subtraction operation, out = scalar - in
|
SDVariable |
SDMath.rsub(SDVariable x,
SDVariable y)
Pairwise reverse subtraction operation, out = y - x
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.rsub(String name,
SDVariable x,
double value)
Scalar reverse subtraction operation, out = scalar - in
|
SDVariable |
SDMath.rsub(String name,
SDVariable x,
SDVariable y)
Pairwise reverse subtraction operation, out = y - x
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDBaseOps.scalarFloorMod(SDVariable in,
double value)
Element-wise scalar floor modulus operation: out = floorMod(in, value).
i.e., returns the remainder after division by 'value' |
SDVariable |
SDBaseOps.scalarFloorMod(String name,
SDVariable in,
double value)
Element-wise scalar floor modulus operation: out = floorMod(in, value).
i.e., returns the remainder after division by 'value' |
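Floor-modulus for floating-point values can be sketched in plain Java (illustrative only, not the ND4J implementation): the result is the remainder after flooring division, so it takes the sign of the divisor rather than the dividend.

```java
// Plain-Java sketch of element-wise floor-modulus for doubles:
// out = in - floor(in / value) * value.
public class FloorModDemo {
    static double floorMod(double in, double value) {
        return in - Math.floor(in / value) * value;
    }

    public static void main(String[] args) {
        System.out.println(floorMod(7.5, 2.0));  // 1.5
        System.out.println(floorMod(-7.5, 2.0)); // 0.5 (unlike -7.5 % 2.0, which is -1.5)
    }
}
```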
SDVariable |
SDBaseOps.scalarMax(SDVariable in,
double value)
Element-wise scalar maximum operation: out = max(in, value)
|
SDVariable |
SDBaseOps.scalarMax(String name,
SDVariable in,
double value)
Element-wise scalar maximum operation: out = max(in, value)
|
SDVariable |
SDBaseOps.scalarMin(SDVariable in,
double value)
Element-wise scalar minimum operation: out = min(in, value)
|
SDVariable |
SDBaseOps.scalarMin(String name,
SDVariable in,
double value)
Element-wise scalar minimum operation: out = min(in, value)
|
SDVariable |
SDBaseOps.scalarSet(SDVariable in,
double set)
Return a variable with the same shape as the input, but with all elements set to value 'set'
|
SDVariable |
SDBaseOps.scalarSet(String name,
SDVariable in,
double set)
Return a variable with the same shape as the input, but with all elements set to value 'set'
|
SDVariable |
SDBaseOps.scatterAdd(SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter addition operation.
If indices is rank 0 (a scalar), then out[index, ...] = out[index, ...] + updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = out[indices[i], ...] + updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = out[indices[i], ..., indices[k], ...] + updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterAdd(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter addition operation.
If indices is rank 0 (a scalar), then out[index, ...] = out[index, ...] + updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = out[indices[i], ...] + updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = out[indices[i], ..., indices[k], ...] + updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterDiv(SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter division operation.
If indices is rank 0 (a scalar), then out[index, ...] = out[index, ...] / updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = out[indices[i], ...] / updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = out[indices[i], ..., indices[k], ...] / updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterDiv(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter division operation.
If indices is rank 0 (a scalar), then out[index, ...] = out[index, ...] / updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = out[indices[i], ...] / updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = out[indices[i], ..., indices[k], ...] / updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterMax(SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter max operation.
If indices is rank 0 (a scalar), then out[index, ...] = max(out[index, ...], updates[...]) If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = max(out[indices[i], ...], updates[i, ...]) If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = max(out[indices[i], ..., indices[k], ...], updates[i, ..., k, ...]) Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterMax(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter max operation.
If indices is rank 0 (a scalar), then out[index, ...] = max(out[index, ...], updates[...]) If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = max(out[indices[i], ...], updates[i, ...]) If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = max(out[indices[i], ..., indices[k], ...], updates[i, ..., k, ...]) Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterMin(SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter min operation.
If indices is rank 0 (a scalar), then out[index, ...] = min(out[index, ...], updates[...]) If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = min(out[indices[i], ...], updates[i, ...]) If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = min(out[indices[i], ..., indices[k], ...], updates[i, ..., k, ...]) Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterMin(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter min operation.
If indices is rank 0 (a scalar), then out[index, ...] = min(out[index, ...], updates[...]) If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = min(out[indices[i], ...], updates[i, ...]) If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = min(out[indices[i], ..., indices[k], ...], updates[i, ..., k, ...]) Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterMul(SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter multiplication operation.
If indices is rank 0 (a scalar), then out[index, ...] = out[index, ...] * updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = out[indices[i], ...] * updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = out[indices[i], ..., indices[k], ...] * updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterMul(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter multiplication operation.
If indices is rank 0 (a scalar), then out[index, ...] = out[index, ...] * updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = out[indices[i], ...] * updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = out[indices[i], ..., indices[k], ...] * updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterSub(SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter subtraction operation.
If indices is rank 0 (a scalar), then out[index, ...] = out[index, ...] - updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = out[indices[i], ...] - updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = out[indices[i], ..., indices[k], ...] - updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterSub(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter subtraction operation.
If indices is rank 0 (a scalar), then out[index, ...] = out[index, ...] - updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = out[indices[i], ...] - updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = out[indices[i], ..., indices[k], ...] - updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterUpdate(SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter update operation.
If indices is rank 0 (a scalar), then out[index, ...] = updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the result for that location is taken from one of the updates (which one is not defined). |
SDVariable |
SDBaseOps.scatterUpdate(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter update operation.
If indices is rank 0 (a scalar), then out[index, ...] = updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the result for that location is taken from one of the updates (which one is not defined). |
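The scatter semantics are easiest to see for rank-1 indices, sketched here in plain Java for the addition case (illustrative only, not the ND4J implementation): repeated indices accumulate into the same output location.

```java
// Plain-Java sketch of scatterAdd with rank-1 indices:
// out[indices[i]] += updates[i]; repeated indices accumulate.
public class ScatterAddDemo {
    static double[] scatterAdd(double[] ref, int[] indices, double[] updates) {
        double[] out = ref.clone();
        for (int i = 0; i < indices.length; i++) {
            out[indices[i]] += updates[i];
        }
        return out;
    }

    public static void main(String[] args) {
        // index 0 appears twice, so its contributions (1 + 2) accumulate
        double[] out = scatterAdd(new double[]{0, 0, 0},
                                  new int[]{0, 2, 0},
                                  new double[]{1, 5, 2});
        System.out.println(java.util.Arrays.toString(out)); // [3.0, 0.0, 5.0]
    }
}
```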
SDVariable |
SDBaseOps.segmentMax(SDVariable data,
SDVariable segmentIds)
Segment max operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [6, 9, 8] = [max(3,6), max(1,4,9), max(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. See unsortedSegmentMax(String, SDVariable, SDVariable, int) for the same op without this sorted requirement |
SDVariable |
SDBaseOps.segmentMax(String name,
SDVariable data,
SDVariable segmentIds)
Segment max operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [6, 9, 8] = [max(3,6), max(1,4,9), max(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. See unsortedSegmentMax(String, SDVariable, SDVariable, int) for the same op without this sorted requirement |
SDVariable |
SDBaseOps.segmentMean(SDVariable data,
SDVariable segmentIds)
Segment mean operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [4.5, 4.667, 5] = [mean(3,6), mean(1,4,9), mean(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. See unsortedSegmentMean(String, SDVariable, SDVariable, int) for the same op without this sorted requirement |
SDVariable |
SDBaseOps.segmentMean(String name,
SDVariable data,
SDVariable segmentIds)
Segment mean operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [4.5, 4.667, 5] = [mean(3,6), mean(1,4,9), mean(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. See unsortedSegmentMean(String, SDVariable, SDVariable, int) for the same op without this sorted requirement |
SDVariable |
SDBaseOps.segmentMin(SDVariable data,
SDVariable segmentIds)
Segment min operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [3, 1, 2] = [min(3,6), min(1,4,9), min(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. See unsortedSegmentMin(String, SDVariable, SDVariable, int) for the same op without this sorted requirement |
SDVariable |
SDBaseOps.segmentMin(String name,
SDVariable data,
SDVariable segmentIds)
Segment min operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [3, 1, 2] = [min(3,6), min(1,4,9), min(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. See unsortedSegmentMin(String, SDVariable, SDVariable, int) for the same op without this sorted requirement |
SDVariable |
SDBaseOps.segmentProd(SDVariable data,
SDVariable segmentIds)
Segment product operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [18, 36, 16] = [prod(3,6), prod(1,4,9), prod(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. See unsortedSegmentProd(String, SDVariable, SDVariable, int) for the same op without this sorted requirement |
SDVariable |
SDBaseOps.segmentProd(String name,
SDVariable data,
SDVariable segmentIds)
Segment product operation.
If data = [3, 6, 1, 4, 9, 2, 8] and segmentIds = [0, 0, 1, 1, 1, 2, 2], then output = [18, 36, 16] = [prod(3,6), prod(1,4,9), prod(2,8)]. Note that the segment IDs must be sorted from smallest to largest. See the unsortedSegment(String, SDVariable, SDVariable, int) ops for the same operation without this sorted requirement |
SDVariable |
SDBaseOps.segmentSum(SDVariable data,
SDVariable segmentIds)
Segment sum operation.
If data = [3, 6, 1, 4, 9, 2, 8] and segmentIds = [0, 0, 1, 1, 1, 2, 2], then output = [9, 14, 10] = [sum(3,6), sum(1,4,9), sum(2,8)]. Note that the segment IDs must be sorted from smallest to largest. See the unsortedSegment(String, SDVariable, SDVariable, int) ops for the same operation without this sorted requirement |
SDVariable |
SDBaseOps.segmentSum(String name,
SDVariable data,
SDVariable segmentIds)
Segment sum operation.
If data = [3, 6, 1, 4, 9, 2, 8] and segmentIds = [0, 0, 1, 1, 1, 2, 2], then output = [9, 14, 10] = [sum(3,6), sum(1,4,9), sum(2,8)]. Note that the segment IDs must be sorted from smallest to largest. See the unsortedSegment(String, SDVariable, SDVariable, int) ops for the same operation without this sorted requirement |
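The sorted-segment reductions above all share the same indexing semantics and differ only in the reduction applied per segment. A minimal pure-Python sketch (independent of the SameDiff API, using plain lists rather than SDVariable/INDArray):

```python
def segment_reduce(data, segment_ids, op):
    # segment_ids must be sorted from smallest to largest;
    # one output element is produced per run of equal IDs.
    out = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and segment_ids[j] == segment_ids[i]:
            j += 1
        out.append(op(data[i:j]))  # reduce one contiguous segment
        i = j
    return out

data = [3, 6, 1, 4, 9, 2, 8]
ids = [0, 0, 1, 1, 1, 2, 2]
print(segment_reduce(data, ids, sum))  # [9, 14, 10]
print(segment_reduce(data, ids, max))  # [6, 9, 8]
print(segment_reduce(data, ids, min))  # [3, 1, 2]
```

The unsortedSegment variants drop the sorted-ID requirement by scattering into `numSegments` output slots instead of scanning runs.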
SDVariable |
SDNN.selu(SDVariable x)
Element-wise SELU function - Scaled Exponential Linear Unit: see Self-Normalizing Neural Networks
out[i] = scale * in[i] if in[i] > 0, or scale * alpha * (exp(in[i])-1) if in[i] <= 0. Uses default scale and alpha values. |
SDVariable |
SDNN.selu(String name,
SDVariable x)
Element-wise SELU function - Scaled Exponential Linear Unit: see Self-Normalizing Neural Networks
out[i] = scale * in[i] if in[i] > 0, or scale * alpha * (exp(in[i])-1) if in[i] <= 0. Uses default scale and alpha values. |
SDVariable |
SDCNN.separableConv2d(SDVariable layerInput,
SDVariable depthWeights,
SDVariable pointWeights,
Conv2DConfig Conv2DConfig)
Separable 2D convolution operation with optional bias
|
SDVariable |
SDCNN.separableConv2d(SDVariable layerInput,
SDVariable depthWeights,
SDVariable pointWeights,
SDVariable bias,
Conv2DConfig Conv2DConfig)
Separable 2D convolution operation with optional bias
|
SDVariable |
SDCNN.separableConv2d(String name,
SDVariable layerInput,
SDVariable depthWeights,
SDVariable pointWeights,
Conv2DConfig Conv2DConfig)
Separable 2D convolution operation with optional bias
|
SDVariable |
SDCNN.separableConv2d(String name,
SDVariable layerInput,
SDVariable depthWeights,
SDVariable pointWeights,
SDVariable bias,
Conv2DConfig Conv2DConfig)
Separable 2D convolution operation with optional bias
|
SDVariable |
SDBaseOps.sequenceMask(SDVariable lengths,
DataType dataType)
see sequenceMask(String, SDVariable, SDVariable, DataType)
|
SDVariable |
SDBaseOps.sequenceMask(SDVariable lengths,
int maxLen,
DataType dataType)
Generate a sequence mask (with values 0 or 1) based on the specified lengths
Specifically, out[i, ..., k, j] = (j < lengths[i, ..., k] ? 1.0 : 0.0) |
SDVariable |
SDBaseOps.sequenceMask(SDVariable lengths,
SDVariable maxLen,
DataType dataType)
Generate a sequence mask (with values 0 or 1) based on the specified lengths
Specifically, out[i, ..., k, j] = (j < lengths[i, ..., k] ? 1.0 : 0.0) |
SDVariable |
SDBaseOps.sequenceMask(String name,
SDVariable lengths,
DataType dataType)
see sequenceMask(String, SDVariable, SDVariable, DataType)
|
SDVariable |
SDBaseOps.sequenceMask(String name,
SDVariable lengths,
int maxLen,
DataType dataType)
Generate a sequence mask (with values 0 or 1) based on the specified lengths
Specifically, out[i, ..., k, j] = (j < lengths[i, ..., k] ? 1.0 : 0.0) |
SDVariable |
SDBaseOps.sequenceMask(String name,
SDVariable lengths,
SDVariable maxLen,
DataType dataType)
Generate a sequence mask (with values 0 or 1) based on the specified lengths
Specifically, out[i, ..., k, j] = (j < lengths[i, ..., k] ? 1.0 : 0.0) |
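The sequenceMask formula above reduces, for a 1D lengths vector, to a simple 2D comparison. A pure-Python sketch (independent of the SameDiff API) of that case:

```python
def sequence_mask(lengths, max_len):
    # out[i, j] = 1.0 if j < lengths[i], else 0.0
    return [[1.0 if j < n else 0.0 for j in range(max_len)] for n in lengths]

print(sequence_mask([1, 3, 2], 4))
# [[1.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 0.0], [1.0, 1.0, 0.0, 0.0]]
```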
SDVariable |
SDMath.setDiag(SDVariable in,
SDVariable diag)
Set the diagonal of the input to the specified values
If input is [ a, b, c] [ d, e, f] [ g, h, i] and diag = [ 1, 2, 3] then output is [ 1, b, c] [ d, 2, f] [ g, h, 3] |
SDVariable |
SDMath.setDiag(String name,
SDVariable in,
SDVariable diag)
Set the diagonal of the input to the specified values
If input is [ a, b, c] [ d, e, f] [ g, h, i] and diag = [ 1, 2, 3] then output is [ 1, b, c] [ d, 2, f] [ g, h, 3] |
SDVariable |
SDMath.shannonEntropy(SDVariable in,
int... dimensions)
Shannon Entropy reduction: -sum(x * log2(x))
|
SDVariable |
SDMath.shannonEntropy(String name,
SDVariable in,
int... dimensions)
Shannon Entropy reduction: -sum(x * log2(x))
|
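The Shannon entropy reduction -sum(x * log2(x)) can be checked on a small distribution; a stdlib-only sketch, independent of the SameDiff API:

```python
import math

def shannon_entropy(xs):
    # -sum(x * log2(x)); assumes all x > 0
    return -sum(x * math.log2(x) for x in xs)

print(shannon_entropy([0.5, 0.25, 0.25]))  # 1.5 bits
```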
SDVariable |
SDBaseOps.shape(SDVariable input)
Returns the shape of the specified INDArray as a 1D INDArray
|
SDVariable |
SDBaseOps.shape(String name,
SDVariable input)
Returns the shape of the specified INDArray as a 1D INDArray
|
SDVariable |
SDNN.sigmoid(SDVariable x)
Element-wise sigmoid function: out[i] = 1.0/(1+exp(-in[i]))
|
SDVariable |
SDNN.sigmoid(String name,
SDVariable x)
Element-wise sigmoid function: out[i] = 1.0/(1+exp(-in[i]))
|
SDVariable |
SDLoss.sigmoidCrossEntropy(SDVariable label,
SDVariable predictionLogits,
SDVariable weights)
Sigmoid cross entropy: applies the sigmoid activation function to the input logits (the "pre-sigmoid predictions")
and implements the binary cross entropy loss function. |
SDVariable |
SDLoss.sigmoidCrossEntropy(SDVariable label,
SDVariable predictionLogits,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing)
Sigmoid cross entropy: applies the sigmoid activation function to the input logits (the "pre-sigmoid predictions")
and implements the binary cross entropy loss function. |
SDVariable |
SDLoss.sigmoidCrossEntropy(String name,
SDVariable label,
SDVariable predictionLogits,
SDVariable weights)
Sigmoid cross entropy: applies the sigmoid activation function to the input logits (the "pre-sigmoid predictions")
and implements the binary cross entropy loss function. |
SDVariable |
SDLoss.sigmoidCrossEntropy(String name,
SDVariable label,
SDVariable predictionLogits,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing)
Sigmoid cross entropy: applies the sigmoid activation function to the input logits (the "pre-sigmoid predictions")
and implements the binary cross entropy loss function. |
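For a single label/logit pair (no weighting, no label smoothing), the sigmoid cross entropy loss is -(label*log(p) + (1-label)*log(1-p)) with p = sigmoid(logit). A stdlib-only numeric sketch, independent of the SameDiff API:

```python
import math

def sigmoid_cross_entropy(label, logit):
    # p = sigmoid(logit); binary cross entropy of p against label
    p = 1.0 / (1.0 + math.exp(-logit))
    return -(label * math.log(p) + (1.0 - label) * math.log(1.0 - p))

print(sigmoid_cross_entropy(1.0, 0.0))  # log(2) ~= 0.6931
```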
SDVariable |
SDNN.sigmoidDerivative(SDVariable x,
SDVariable wrt)
Element-wise sigmoid function derivative: dL/dIn given input and dL/dOut
|
SDVariable |
SDNN.sigmoidDerivative(String name,
SDVariable x,
SDVariable wrt)
Element-wise sigmoid function derivative: dL/dIn given input and dL/dOut
|
SDVariable |
SDMath.sign(SDVariable x)
Element-wise sign (signum) function:
out = -1 if in < 0 out = 0 if in = 0 out = 1 if in > 0 |
SDVariable |
SDMath.sign(String name,
SDVariable x)
Element-wise sign (signum) function:
out = -1 if in < 0 out = 0 if in = 0 out = 1 if in > 0 |
SDVariable |
SDMath.sin(SDVariable x)
Elementwise sine operation: out = sin(x)
|
SDVariable |
SDMath.sin(String name,
SDVariable x)
Elementwise sine operation: out = sin(x)
|
SDVariable |
SDMath.sinh(SDVariable x)
Elementwise sinh (hyperbolic sine) operation: out = sinh(x)
|
SDVariable |
SDMath.sinh(String name,
SDVariable x)
Elementwise sinh (hyperbolic sine) operation: out = sinh(x)
|
SDVariable |
SDBaseOps.size(SDVariable in)
Returns the size (number of elements, i.e., prod(shape)) of the specified INDArray as a 0D scalar variable
|
SDVariable |
SDBaseOps.size(String name,
SDVariable in)
Returns the size (number of elements, i.e., prod(shape)) of the specified INDArray as a 0D scalar variable
|
SDVariable |
SDBaseOps.sizeAt(SDVariable in,
int dimension)
Returns a rank 0 (scalar) variable for the size of the specified dimension.
For example, if X has shape [10,20,30] then sizeAt(X,1)=20. |
SDVariable |
SDBaseOps.sizeAt(String name,
SDVariable in,
int dimension)
Returns a rank 0 (scalar) variable for the size of the specified dimension.
For example, if X has shape [10,20,30] then sizeAt(X,1)=20. |
SDVariable |
SDBaseOps.slice(SDVariable input,
int[] begin,
int... size)
Get a subset of the specified input, by specifying the first element and the size of the array.
For example, if input is: [a, b, c] [d, e, f] then slice(input, begin=[0,1], size=[2,1]) will return: [b] [e] Note that for each dimension i, begin[i] + size[i] <= input.size(i) |
SDVariable |
SDBaseOps.slice(SDVariable input,
SDVariable begin,
SDVariable size)
Get a subset of the specified input, by specifying the first element and the size of the array.
For example, if input is: [a, b, c] [d, e, f] then slice(input, begin=[0,1], size=[2,1]) will return: [b] [e] Note that for each dimension i, begin[i] + size[i] <= input.size(i) |
SDVariable |
SDBaseOps.slice(String name,
SDVariable input,
int[] begin,
int... size)
Get a subset of the specified input, by specifying the first element and the size of the array.
For example, if input is: [a, b, c] [d, e, f] then slice(input, begin=[0,1], size=[2,1]) will return: [b] [e] Note that for each dimension i, begin[i] + size[i] <= input.size(i) |
SDVariable |
SDBaseOps.slice(String name,
SDVariable input,
SDVariable begin,
SDVariable size)
Get a subset of the specified input, by specifying the first element and the size of the array.
For example, if input is: [a, b, c] [d, e, f] then slice(input, begin=[0,1], size=[2,1]) will return: [b] [e] Note that for each dimension i, begin[i] + size[i] <= input.size(i) |
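The slice semantics (first element plus size, per dimension) can be sketched for the 2D case in pure Python, independent of the SameDiff API:

```python
def slice_2d(x, begin, size):
    # Requires begin[i] + size[i] <= extent of dimension i
    r0, c0 = begin
    rn, cn = size
    return [row[c0:c0 + cn] for row in x[r0:r0 + rn]]

x = [['a', 'b', 'c'],
     ['d', 'e', 'f']]
print(slice_2d(x, [0, 1], [2, 1]))  # [['b'], ['e']]
```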
SDVariable |
SDNN.softmax(SDVariable x)
Softmax activation, along the specified dimension
|
SDVariable |
SDNN.softmax(SDVariable x,
int dimension)
Softmax activation, along the specified dimension
|
SDVariable |
SDNN.softmax(String name,
SDVariable x)
Softmax activation, along the specified dimension
|
SDVariable |
SDNN.softmax(String name,
SDVariable x,
int dimension)
Softmax activation, along the specified dimension
|
SDVariable |
SDLoss.softmaxCrossEntropy(SDVariable oneHotLabels,
SDVariable logitPredictions,
SDVariable weights)
Applies the softmax activation function to the input, then implements multi-class cross entropy:
-sum_classes label[c] * log(p[c]) where p = softmax(logits). If LossReduce#NONE is used, the output shape is [numExamples] for [numExamples, numClasses] predictions/labels; otherwise, the output is a scalar. |
SDVariable |
SDLoss.softmaxCrossEntropy(SDVariable oneHotLabels,
SDVariable logitPredictions,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing)
Applies the softmax activation function to the input, then implements multi-class cross entropy:
-sum_classes label[c] * log(p[c]) where p = softmax(logits). If LossReduce#NONE is used, the output shape is [numExamples] for [numExamples, numClasses] predictions/labels; otherwise, the output is a scalar. |
SDVariable |
SDLoss.softmaxCrossEntropy(String name,
SDVariable oneHotLabels,
SDVariable logitPredictions,
SDVariable weights)
Applies the softmax activation function to the input, then implements multi-class cross entropy:
-sum_classes label[c] * log(p[c]) where p = softmax(logits). If LossReduce#NONE is used, the output shape is [numExamples] for [numExamples, numClasses] predictions/labels; otherwise, the output is a scalar. |
SDVariable |
SDLoss.softmaxCrossEntropy(String name,
SDVariable oneHotLabels,
SDVariable logitPredictions,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing)
Applies the softmax activation function to the input, then implements multi-class cross entropy:
-sum_classes label[c] * log(p[c]) where p = softmax(logits). If LossReduce#NONE is used, the output shape is [numExamples] for [numExamples, numClasses] predictions/labels; otherwise, the output is a scalar. |
SDVariable |
SDNN.softmaxDerivative(SDVariable x,
SDVariable wrt,
int dimension)
Softmax derivative function
|
SDVariable |
SDNN.softmaxDerivative(String name,
SDVariable x,
SDVariable wrt,
int dimension)
Softmax derivative function
|
SDVariable |
SDNN.softplus(SDVariable x)
Element-wise softplus function: out = log(exp(x) + 1)
|
SDVariable |
SDNN.softplus(String name,
SDVariable x)
Element-wise softplus function: out = log(exp(x) + 1)
|
SDVariable |
SDNN.softsign(SDVariable x)
Element-wise softsign function: out = x / (abs(x) + 1)
|
SDVariable |
SDNN.softsign(String name,
SDVariable x)
Element-wise softsign function: out = x / (abs(x) + 1)
|
SDVariable |
SDNN.softsignDerivative(SDVariable x)
Element-wise derivative (dOut/dIn) of the softsign function softsign(INDArray)
|
SDVariable |
SDNN.softsignDerivative(String name,
SDVariable x)
Element-wise derivative (dOut/dIn) of the softsign function softsign(INDArray)
|
SDVariable |
SDLinalg.solve(SDVariable matrix,
SDVariable rhs)
Solver for systems of linear equations.
|
SDVariable |
SDLinalg.solve(SDVariable matrix,
SDVariable rhs,
boolean adjoint)
Solver for systems of linear equations.
|
SDVariable |
SDLinalg.solve(String name,
SDVariable matrix,
SDVariable rhs)
Solver for systems of linear equations.
|
SDVariable |
SDLinalg.solve(String name,
SDVariable matrix,
SDVariable rhs,
boolean adjoint)
Solver for systems of linear equations.
|
SDVariable |
SDCNN.spaceToBatch(SDVariable x,
int[] blocks,
int[] paddingTop,
int... paddingBottom)
Convolution 2d layer space to batch operation on 4d input.
Increases input batch dimension by rearranging data from spatial dimensions into batch dimension |
SDVariable |
SDCNN.spaceToBatch(String name,
SDVariable x,
int[] blocks,
int[] paddingTop,
int... paddingBottom)
Convolution 2d layer space to batch operation on 4d input.
Increases input batch dimension by rearranging data from spatial dimensions into batch dimension |
SDVariable |
SDCNN.spaceToDepth(SDVariable x,
int blockSize,
DataFormat dataFormat)
Convolution 2d layer space to depth operation on 4d input.
Increases input channels (and reduces spatial dimensions) by rearranging data into a larger channels dimension. Example: if input has shape [mb, 2, 4, 4] and block size is 2, then output shape is [mb, 2*(2*2), 4/2, 4/2] = [mb, 8, 2, 2] |
SDVariable |
SDCNN.spaceToDepth(String name,
SDVariable x,
int blockSize,
DataFormat dataFormat)
Convolution 2d layer space to depth operation on 4d input.
Increases input channels (and reduces spatial dimensions) by rearranging data into a larger channels dimension. Example: if input has shape [mb, 2, 4, 4] and block size is 2, then output shape is [mb, 2*(2*2), 4/2, 4/2] = [mb, 8, 2, 2] |
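The spaceToDepth shape arithmetic (for NCHW layout) can be sketched without the library; this computes the output shape only, not the data rearrangement:

```python
def space_to_depth_shape(shape_nchw, block):
    # [mb, c, h, w] -> [mb, c*block*block, h//block, w//block]
    mb, c, h, w = shape_nchw
    assert h % block == 0 and w % block == 0, "spatial dims must be divisible by block"
    return [mb, c * block * block, h // block, w // block]

print(space_to_depth_shape([8, 2, 4, 4], 2))  # [8, 8, 2, 2]
```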
SDVariable |
SDLoss.sparseSoftmaxCrossEntropy(SDVariable logits,
SDVariable labels)
As per softmaxCrossEntropy(String, SDVariable, SDVariable, LossReduce) but the labels variable
is represented as an integer array instead of the equivalent one-hot array. i.e., if logits are rank N, then labels have rank N-1 |
SDVariable |
SDLoss.sparseSoftmaxCrossEntropy(String name,
SDVariable logits,
SDVariable labels)
As per softmaxCrossEntropy(String, SDVariable, SDVariable, LossReduce) but the labels variable
is represented as an integer array instead of the equivalent one-hot array. i.e., if logits are rank N, then labels have rank N-1 |
SDVariable[] |
SDBaseOps.split(SDVariable input,
int numSplit,
int splitDim)
Split a value into a list of ndarrays.
|
SDVariable[] |
SDBaseOps.split(String[] names,
SDVariable input,
int numSplit,
int splitDim)
Split a value into a list of ndarrays.
|
SDVariable |
SDMath.sqrt(SDVariable x)
Element-wise square root function: out = sqrt(x)
|
SDVariable |
SDMath.sqrt(String name,
SDVariable x)
Element-wise square root function: out = sqrt(x)
|
SDVariable |
SDMath.square(SDVariable x)
Element-wise square function: out = x^2
|
SDVariable |
SDMath.square(String name,
SDVariable x)
Element-wise square function: out = x^2
|
SDVariable |
SDMath.squaredDifference(SDVariable x,
SDVariable y)
Pairwise squared difference operation.
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.squaredDifference(String name,
SDVariable x,
SDVariable y)
Pairwise squared difference operation.
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDBaseOps.squaredNorm(SDVariable x,
boolean keepDims,
int... dimensions)
Squared L2 norm: see norm2(String, SDVariable, boolean, int...)
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.squaredNorm(SDVariable x,
int... dimensions)
Squared L2 norm: see norm2(String, SDVariable, boolean, int...)
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.squaredNorm(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Squared L2 norm: see norm2(String, SDVariable, boolean, int...)
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.squaredNorm(String name,
SDVariable x,
int... dimensions)
Squared L2 norm: see norm2(String, SDVariable, boolean, int...)
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.squeeze(SDVariable x,
int axis)
Remove a single dimension of size 1.
For example, if input has shape [a,b,1,c] then squeeze(input, 2) returns an array of shape [a,b,c] |
SDVariable |
SDBaseOps.squeeze(String name,
SDVariable x,
int axis)
Remove a single dimension of size 1.
For example, if input has shape [a,b,1,c] then squeeze(input, 2) returns an array of shape [a,b,c] |
SDVariable |
SDRNN.sru(SDVariable x,
SDVariable initialC,
SDVariable mask,
SRUWeights SRUWeights)
The SRU layer.
|
SDVariable |
SDRNN.sru(SDVariable x,
SDVariable initialC,
SRUWeights SRUWeights)
The SRU layer.
|
SDVariable |
SDRNN.sru(String name,
SDVariable x,
SDVariable initialC,
SDVariable mask,
SRUWeights SRUWeights)
The SRU layer.
|
SDVariable |
SDRNN.sru(String name,
SDVariable x,
SDVariable initialC,
SRUWeights SRUWeights)
The SRU layer.
|
SDVariable |
SDRNN.sruCell(SDVariable x,
SDVariable cLast,
SRUWeights SRUWeights)
The SRU layer.
|
SDVariable |
SDRNN.sruCell(String name,
SDVariable x,
SDVariable cLast,
SRUWeights SRUWeights)
The SRU layer.
|
SDVariable |
SDBaseOps.stack(int axis,
SDVariable... values)
Stack a set of N INDArray of rank X into one rank X+1 variable.
If inputs have shape [a,b,c] then output has shape: axis = 0: [N,a,b,c] axis = 1: [a,N,b,c] axis = 2: [a,b,N,c] axis = 3: [a,b,c,N] see unstack(String[], SDVariable, int, int) |
SDVariable |
SDBaseOps.stack(String name,
int axis,
SDVariable... values)
Stack a set of N INDArray of rank X into one rank X+1 variable.
If inputs have shape [a,b,c] then output has shape: axis = 0: [N,a,b,c] axis = 1: [a,N,b,c] axis = 2: [a,b,N,c] axis = 3: [a,b,c,N] see unstack(String[], SDVariable, int, int) |
SDVariable |
SDBaseOps.standardDeviation(SDVariable x,
boolean biasCorrected,
boolean keepDims,
int... dimensions)
Standard deviation array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.standardDeviation(SDVariable x,
boolean biasCorrected,
int... dimensions)
Standard deviation array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.standardDeviation(String name,
SDVariable x,
boolean biasCorrected,
boolean keepDims,
int... dimensions)
Standard deviation array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.standardDeviation(String name,
SDVariable x,
boolean biasCorrected,
int... dimensions)
Standard deviation array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.standardize(SDVariable x,
int... dimensions)
Standardize input variable along given axis
|
SDVariable |
SDMath.standardize(String name,
SDVariable x,
int... dimensions)
Standardize input variable along given axis
|
SDVariable |
SDMath.step(SDVariable x,
double value)
Elementwise step function:
out(x) = 1 if x >= cutoff out(x) = 0 otherwise |
SDVariable |
SDMath.step(String name,
SDVariable x,
double value)
Elementwise step function:
out(x) = 1 if x >= cutoff out(x) = 0 otherwise |
SDVariable |
SDBaseOps.stridedSlice(SDVariable in,
long[] begin,
long[] end,
long... strides)
Get a subset of the specified input, by specifying the first element, last element, and the strides.
For example, if input is: [a, b, c] [d, e, f] [g, h, i] then stridedSlice(input, begin=[0,1], end=[3,3], strides=[2,1], all masks = 0) will return: [b, c] [h, i] |
SDVariable |
SDBaseOps.stridedSlice(SDVariable in,
long[] begin,
long[] end,
long[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask)
Get a subset of the specified input, by specifying the first element, last element, and the strides.
For example, if input is: [a, b, c] [d, e, f] [g, h, i] then stridedSlice(input, begin=[0,1], end=[3,3], strides=[2,1], all masks = 0) will return: [b, c] [h, i] |
SDVariable |
SDBaseOps.stridedSlice(String name,
SDVariable in,
long[] begin,
long[] end,
long... strides)
Get a subset of the specified input, by specifying the first element, last element, and the strides.
For example, if input is: [a, b, c] [d, e, f] [g, h, i] then stridedSlice(input, begin=[0,1], end=[3,3], strides=[2,1], all masks = 0) will return: [b, c] [h, i] |
SDVariable |
SDBaseOps.stridedSlice(String name,
SDVariable in,
long[] begin,
long[] end,
long[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask)
Get a subset of the specified input, by specifying the first element, last element, and the strides.
For example, if input is: [a, b, c] [d, e, f] [g, h, i] then stridedSlice(input, begin=[0,1], end=[3,3], strides=[2,1], all masks = 0) will return: [b, c] [h, i] |
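With all masks zero, stridedSlice behaves like NumPy-style half-open begin:end:stride indexing per dimension. A pure-Python 2D sketch, independent of the SameDiff API:

```python
def strided_slice_2d(x, begin, end, strides):
    # Half-open range [begin, end) with the given stride, per dimension
    rows = range(begin[0], end[0], strides[0])
    cols = range(begin[1], end[1], strides[1])
    return [[x[r][c] for c in cols] for r in rows]

x = [['a', 'b', 'c'],
     ['d', 'e', 'f'],
     ['g', 'h', 'i']]
print(strided_slice_2d(x, [0, 1], [3, 3], [2, 1]))  # [['b', 'c'], ['h', 'i']]
```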
SDVariable |
SDMath.sub(SDVariable x,
double value)
Scalar subtraction operation, out = in - scalar
|
SDVariable |
SDMath.sub(SDVariable x,
SDVariable y)
Pairwise subtraction operation, out = x - y
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.sub(String name,
SDVariable x,
double value)
Scalar subtraction operation, out = in - scalar
|
SDVariable |
SDMath.sub(String name,
SDVariable x,
SDVariable y)
Pairwise subtraction operation, out = x - y
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDBaseOps.sum(SDVariable x,
boolean keepDims,
int... dimensions)
Sum array reduction operation, optionally along specified dimensions.
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.sum(SDVariable x,
int... dimensions)
Sum array reduction operation, optionally along specified dimensions.
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.sum(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Sum array reduction operation, optionally along specified dimensions.
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.sum(String name,
SDVariable x,
int... dimensions)
Sum array reduction operation, optionally along specified dimensions.
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDLinalg.svd(SDVariable input,
boolean fullUV,
boolean computeUV)
Calculates singular value decomposition.
|
SDVariable |
SDLinalg.svd(SDVariable input,
boolean fullUV,
boolean computeUV,
int switchNum)
Calculates singular value decomposition.
|
SDVariable |
SDLinalg.svd(String name,
SDVariable input,
boolean fullUV,
boolean computeUV)
Calculates singular value decomposition.
|
SDVariable |
SDLinalg.svd(String name,
SDVariable input,
boolean fullUV,
boolean computeUV,
int switchNum)
Calculates singular value decomposition.
|
SDVariable |
SDNN.swish(SDVariable x)
Element-wise "swish" function: out = x * sigmoid(b*x) with b=1.0
See: https://arxiv.org/abs/1710.05941 |
SDVariable |
SDNN.swish(String name,
SDVariable x)
Element-wise "swish" function: out = x * sigmoid(b*x) with b=1.0
See: https://arxiv.org/abs/1710.05941 |
SDVariable[] |
SDBaseOps.switchOp(SDVariable x,
SDVariable predicate)
Switch operation
Predicate - if false, values are output to the left (first) branch/output; if true, to the right (second) branch/output |
SDVariable[] |
SDBaseOps.switchOp(String[] names,
SDVariable x,
SDVariable predicate)
Switch operation
Predicate - if false, values are output to the left (first) branch/output; if true, to the right (second) branch/output |
SDVariable |
SDMath.tan(SDVariable x)
Elementwise tangent operation: out = tan(x)
|
SDVariable |
SDMath.tan(String name,
SDVariable x)
Elementwise tangent operation: out = tan(x)
|
SDVariable |
SDMath.tanh(SDVariable x)
Elementwise tanh (hyperbolic tangent) operation: out = tanh(x)
|
SDVariable |
SDNN.tanh(SDVariable x)
Elementwise tanh (hyperbolic tangent) operation: out = tanh(x)
|
SDVariable |
SDMath.tanh(String name,
SDVariable x)
Elementwise tanh (hyperbolic tangent) operation: out = tanh(x)
|
SDVariable |
SDNN.tanh(String name,
SDVariable x)
Elementwise tanh (hyperbolic tangent) operation: out = tanh(x)
|
SDVariable |
SDBaseOps.tensorMmul(SDVariable x,
SDVariable y,
int[] dimensionsX,
int... dimensionsY)
Tensor matrix multiply (tensordot-style) operation: contracts x and y over the specified dimensions.
|
SDVariable |
SDBaseOps.tensorMmul(SDVariable x,
SDVariable y,
int[] dimensionsX,
int[] dimensionsY,
boolean transposeX,
boolean transposeY,
boolean transposeZ)
Tensor matrix multiply (tensordot-style) operation: contracts x and y over the specified dimensions.
|
SDVariable |
SDBaseOps.tensorMmul(String name,
SDVariable x,
SDVariable y,
int[] dimensionsX,
int... dimensionsY)
Tensor matrix multiply (tensordot-style) operation: contracts x and y over the specified dimensions.
|
SDVariable |
SDBaseOps.tensorMmul(String name,
SDVariable x,
SDVariable y,
int[] dimensionsX,
int[] dimensionsY,
boolean transposeX,
boolean transposeY,
boolean transposeZ)
Tensor matrix multiply (tensordot-style) operation: contracts x and y over the specified dimensions.
|
SDVariable |
SDBaseOps.tile(SDVariable x,
int... repeat)
see tile(String, SDVariable, int...)
|
SDVariable |
SDBaseOps.tile(SDVariable x,
SDVariable repeat)
Repeat (tile) the input tensor the specified number of times.
For example, if input is [1, 2] [3, 4] and repeat is [2, 3] then output is [1, 2, 1, 2, 1, 2] [3, 4, 3, 4, 3, 4] [1, 2, 1, 2, 1, 2] [3, 4, 3, 4, 3, 4] |
SDVariable |
SDBaseOps.tile(String name,
SDVariable x,
int... repeat)
see tile(String, SDVariable, int...)
|
SDVariable |
SDBaseOps.tile(String name,
SDVariable x,
SDVariable repeat)
Repeat (tile) the input tensor the specified number of times.
For example, if input is [1, 2] [3, 4] and repeat is [2, 3] then output is [1, 2, 1, 2, 1, 2] [3, 4, 3, 4, 3, 4] [1, 2, 1, 2, 1, 2] [3, 4, 3, 4, 3, 4] |
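The tile example above can be reproduced for the 2D case with plain Python lists (independent of the SameDiff API):

```python
def tile_2d(x, reps):
    # reps = [row_repeats, col_repeats]: repeat each row's contents
    # col_repeats times, then repeat the whole block row_repeats times
    rr, rc = reps
    return [row * rc for row in x] * rr

print(tile_2d([[1, 2], [3, 4]], [2, 3]))
# [[1, 2, 1, 2, 1, 2], [3, 4, 3, 4, 3, 4],
#  [1, 2, 1, 2, 1, 2], [3, 4, 3, 4, 3, 4]]
```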
SDVariable |
SDMath.trace(SDVariable in)
Matrix trace operation
For rank 2 matrices, the output is a scalar with the trace - i.e., the sum of the main diagonal. For higher rank inputs, output[a,b,c] = trace(in[a,b,c,:,:]) |
SDVariable |
SDMath.trace(String name,
SDVariable in)
Matrix trace operation
For rank 2 matrices, the output is a scalar with the trace - i.e., the sum of the main diagonal. For higher rank inputs, output[a,b,c] = trace(in[a,b,c,:,:]) |
SDVariable |
SDBaseOps.transpose(SDVariable x)
Matrix transpose operation: If input has shape [a,b] output has shape [b,a]
|
SDVariable |
SDBaseOps.transpose(String name,
SDVariable x)
Matrix transpose operation: If input has shape [a,b] output has shape [b,a]
|
SDVariable |
SDLinalg.tri(DataType dataType,
int row,
int column,
int diagonal)
An array with ones at and below the given diagonal and zeros elsewhere.
|
SDVariable |
SDLinalg.tri(int row,
int column)
An array with ones at and below the given diagonal and zeros elsewhere.
|
SDVariable |
SDLinalg.tri(String name,
DataType dataType,
int row,
int column,
int diagonal)
An array with ones at and below the given diagonal and zeros elsewhere.
|
SDVariable |
SDLinalg.tri(String name,
int row,
int column)
An array with ones at and below the given diagonal and zeros elsewhere.
|
SDVariable |
SDLinalg.triangularSolve(SDVariable matrix,
SDVariable rhs,
boolean lower,
boolean adjoint)
Solver for systems of linear equations.
|
SDVariable |
SDLinalg.triangularSolve(String name,
SDVariable matrix,
SDVariable rhs,
boolean lower,
boolean adjoint)
Solver for systems of linear equations.
|
SDVariable |
SDLinalg.triu(SDVariable input)
Upper triangle of an array.
|
SDVariable |
SDLinalg.triu(SDVariable input,
int diag)
Upper triangle of an array.
|
SDVariable |
SDLinalg.triu(String name,
SDVariable input)
Upper triangle of an array.
|
SDVariable |
SDLinalg.triu(String name,
SDVariable input,
int diag)
Upper triangle of an array.
|
SDVariable |
SDRandom.uniform(double min,
double max,
DataType datatype,
long... shape)
Generate a new random INDArray, where values are randomly sampled according to a uniform distribution,
U(min,max) |
SDVariable |
SDRandom.uniform(String name,
double min,
double max,
DataType datatype,
long... shape)
Generate a new random INDArray, where values are randomly sampled according to a uniform distribution,
U(min,max) |
SDVariable |
SDBaseOps.unsortedSegmentMax(SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment max operation.
|
SDVariable |
SDBaseOps.unsortedSegmentMax(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment max operation.
|
SDVariable |
SDBaseOps.unsortedSegmentMean(SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment mean operation.
|
SDVariable |
SDBaseOps.unsortedSegmentMean(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment mean operation.
|
SDVariable |
SDBaseOps.unsortedSegmentMin(SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment min operation.
|
SDVariable |
SDBaseOps.unsortedSegmentMin(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment min operation.
|
SDVariable |
SDBaseOps.unsortedSegmentProd(SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment product operation.
|
SDVariable |
SDBaseOps.unsortedSegmentProd(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment product operation.
|
SDVariable |
SDBaseOps.unsortedSegmentSqrtN(SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment sqrtN operation.
|
SDVariable |
SDBaseOps.unsortedSegmentSqrtN(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment sqrtN operation.
|
SDVariable |
SDBaseOps.unsortedSegmentSum(SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment sum operation.
|
SDVariable |
SDBaseOps.unsortedSegmentSum(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment sum operation.
|
SDVariable[] |
SDBaseOps.unstack(SDVariable value,
int axis,
int num)
Unstack a variable of rank X into N rank X-1 variables by taking slices along the specified axis.
If input has shape [a,b,c] then output has shape: axis = 0: [b,c] axis = 1: [a,c] axis = 2: [a,b] |
SDVariable[] |
SDBaseOps.unstack(String[] names,
SDVariable value,
int axis,
int num)
Unstack a variable of rank X into N rank X-1 variables by taking slices along the specified axis.
If input has shape [a,b,c] then output has shape: axis = 0: [b,c] axis = 1: [a,c] axis = 2: [a,b] |
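The shape rule above can be sketched in plain Java (a hypothetical helper for illustration, not part of the SameDiff API): removing the unstacked axis from the input shape yields the shape of each output slice.

```java
import java.util.Arrays;

public class UnstackShape {
    // Drop the given axis from the shape; each of the N output slices has this shape.
    static long[] unstackShape(long[] inputShape, int axis) {
        long[] out = new long[inputShape.length - 1];
        for (int i = 0, j = 0; i < inputShape.length; i++) {
            if (i != axis) out[j++] = inputShape[i];
        }
        return out;
    }

    public static void main(String[] args) {
        long[] abc = {2, 3, 4};                                      // [a,b,c]
        System.out.println(Arrays.toString(unstackShape(abc, 0)));   // [3, 4]
        System.out.println(Arrays.toString(unstackShape(abc, 1)));   // [2, 4]
        System.out.println(Arrays.toString(unstackShape(abc, 2)));   // [2, 3]
    }
}
```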
SDVariable |
SDCNN.upsampling2d(SDVariable input,
int scale)
Upsampling layer for 2D inputs.
scale is used for both height and width dimensions. |
SDVariable |
SDCNN.upsampling2d(SDVariable input,
int scaleH,
int scaleW,
boolean nchw)
2D Convolution layer operation - Upsampling 2d
|
SDVariable |
SDCNN.upsampling2d(String name,
SDVariable input,
int scale)
Upsampling layer for 2D inputs.
scale is used for both height and width dimensions. |
SDVariable |
SDCNN.upsampling2d(String name,
SDVariable input,
int scaleH,
int scaleW,
boolean nchw)
2D Convolution layer operation - Upsampling 2d
|
SDVariable |
SDCNN.upsampling3d(SDVariable input,
boolean ncdhw,
int scaleD,
int scaleH,
int scaleW)
3D Convolution layer operation - Upsampling 3d
|
SDVariable |
SDCNN.upsampling3d(String name,
SDVariable input,
boolean ncdhw,
int scaleD,
int scaleH,
int scaleW)
3D Convolution layer operation - Upsampling 3d
|
SDVariable |
SDBaseOps.variance(SDVariable x,
boolean biasCorrected,
boolean keepDims,
int... dimensions)
Variance array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.variance(SDVariable x,
boolean biasCorrected,
int... dimensions)
Variance array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.variance(String name,
SDVariable x,
boolean biasCorrected,
boolean keepDims,
int... dimensions)
Variance array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.variance(String name,
SDVariable x,
boolean biasCorrected,
int... dimensions)
Variance array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
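A minimal plain-Java sketch of the full-array reduction semantics (not the SameDiff call itself): biasCorrected divides the sum of squared deviations by n-1 instead of n.

```java
import java.util.Arrays;

public class VarianceDemo {
    // Full-array variance; biasCorrected=true gives the sample (n-1) estimator.
    static double variance(double[] x, boolean biasCorrected) {
        double mean = Arrays.stream(x).average().orElse(0.0);
        double ss = Arrays.stream(x).map(v -> (v - mean) * (v - mean)).sum();
        return ss / (biasCorrected ? x.length - 1 : x.length);
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3, 4};
        System.out.println(variance(x, false)); // population variance: 1.25
        System.out.println(variance(x, true));  // sample variance: ~1.6667
    }
}
```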
SDVariable |
SDLoss.weightedCrossEntropyWithLogits(SDVariable targets,
SDVariable inputs,
SDVariable weights)
Weighted cross entropy loss with logits
|
SDVariable |
SDLoss.weightedCrossEntropyWithLogits(String name,
SDVariable targets,
SDVariable inputs,
SDVariable weights)
Weighted cross entropy loss with logits
|
SDVariable |
SDBitwise.xor(SDVariable x,
SDVariable y)
Bitwise XOR operation (exclusive OR).
|
SDVariable |
SDMath.xor(SDVariable x,
SDVariable y)
Boolean XOR (exclusive OR) operation: elementwise (x != 0) XOR (y != 0)
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBitwise.xor(String name,
SDVariable x,
SDVariable y)
Bitwise XOR operation (exclusive OR).
|
SDVariable |
SDMath.xor(String name,
SDVariable x,
SDVariable y)
Boolean XOR (exclusive OR) operation: elementwise (x != 0) XOR (y != 0)
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
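The boolean-XOR semantics described above can be illustrated in plain Java (an illustrative sketch of the elementwise rule, not the SameDiff API): each output is 1 where exactly one of (x != 0), (y != 0) holds.

```java
public class BoolXor {
    // Elementwise (x != 0) XOR (y != 0), returning 1.0 where satisfied, 0.0 otherwise.
    static double[] xor(double[] x, double[] y) {
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            out[i] = ((x[i] != 0) ^ (y[i] != 0)) ? 1.0 : 0.0;
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(
            xor(new double[]{0, 0, 2, 5}, new double[]{0, 3, 0, 7}))); // [0.0, 1.0, 1.0, 0.0]
    }
}
```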
SDVariable |
SDImage.yiqToRgb(SDVariable input)
Converting image from YIQ to RGB format
|
SDVariable |
SDImage.yiqToRgb(String name,
SDVariable input)
Converting image from YIQ to RGB format
|
SDVariable |
SDImage.yuvToRgb(SDVariable input)
Converting image from YUV to RGB format
|
SDVariable |
SDImage.yuvToRgb(String name,
SDVariable input)
Converting image from YUV to RGB format
|
SDVariable |
SDMath.zeroFraction(SDVariable input)
Full array zero fraction array reduction operation, optionally along specified dimensions: out = (count(x == 0) / length(x))
|
SDVariable |
SDMath.zeroFraction(String name,
SDVariable input)
Full array zero fraction array reduction operation, optionally along specified dimensions: out = (count(x == 0) / length(x))
|
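The zero-fraction formula out = count(x == 0) / length(x) is simple enough to sketch directly in plain Java (illustration only, not the SameDiff call):

```java
import java.util.Arrays;

public class ZeroFraction {
    // out = count(x == 0) / length(x)
    static double zeroFraction(double[] x) {
        long zeros = Arrays.stream(x).filter(v -> v == 0.0).count();
        return (double) zeros / x.length;
    }

    public static void main(String[] args) {
        System.out.println(zeroFraction(new double[]{0, 1, 0, 2})); // 0.5
    }
}
```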
SDVariable |
SDBaseOps.zerosLike(SDVariable input)
Return a variable of all 0s, with the same shape as the input variable.
|
SDVariable |
SDBaseOps.zerosLike(String name,
SDVariable input)
Return a variable of all 0s, with the same shape as the input variable.
|
| Modifier and Type | Method and Description |
|---|---|
SDVariable |
SDMath.abs(SDVariable x)
Elementwise absolute value operation: out = abs(x)
|
SDVariable |
SDMath.abs(String name,
SDVariable x)
Elementwise absolute value operation: out = abs(x)
|
SDVariable |
SDLoss.absoluteDifference(SDVariable label,
SDVariable predictions,
SDVariable weights)
Absolute difference loss:
sum_i abs( label[i] - predictions[i] ) |
SDVariable |
SDLoss.absoluteDifference(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Absolute difference loss:
sum_i abs( label[i] - predictions[i] ) |
SDVariable |
SDLoss.absoluteDifference(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights)
Absolute difference loss:
sum_i abs( label[i] - predictions[i] ) |
SDVariable |
SDLoss.absoluteDifference(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Absolute difference loss:
sum_i abs( label[i] - predictions[i] ) |
SDVariable |
SDMath.acos(SDVariable x)
Elementwise acos (arccosine, inverse cosine) operation: out = arccos(x)
|
SDVariable |
SDMath.acos(String name,
SDVariable x)
Elementwise acos (arccosine, inverse cosine) operation: out = arccos(x)
|
SDVariable |
SDMath.acosh(SDVariable x)
Elementwise acosh (inverse hyperbolic cosine) function: out = acosh(x)
|
SDVariable |
SDMath.acosh(String name,
SDVariable x)
Elementwise acosh (inverse hyperbolic cosine) function: out = acosh(x)
|
SDVariable |
SDMath.add(SDVariable x,
double value)
Scalar add operation, out = in + scalar
|
SDVariable |
SDMath.add(SDVariable x,
SDVariable y)
Pairwise addition operation, out = x + y
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.add(String name,
SDVariable x,
double value)
Scalar add operation, out = in + scalar
|
SDVariable |
SDMath.add(String name,
SDVariable x,
SDVariable y)
Pairwise addition operation, out = x + y
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
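The broadcasting behaviour in the [1,10] + [5,10] example can be mimicked in plain Java (a hypothetical helper illustrating the NumPy-style rule, not the SameDiff implementation): the size-1 dimension is repeated along the larger one.

```java
import java.util.Arrays;

public class BroadcastAdd {
    // Broadcast a [1,n] row vector across an [m,n] matrix, giving an [m,n] result.
    static double[][] addBroadcast(double[][] m, double[] row) {
        double[][] out = new double[m.length][row.length];
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < row.length; j++)
                out[i][j] = m[i][j] + row[j];   // the size-1 dimension is reused for every row
        return out;
    }

    public static void main(String[] args) {
        double[][] m = {{1, 2}, {3, 4}};
        System.out.println(Arrays.deepToString(addBroadcast(m, new double[]{10, 20})));
        // [[11.0, 22.0], [13.0, 24.0]]
    }
}
```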
SDVariable |
SDImage.adjustContrast(SDVariable in,
double factor)
Adjusts contrast of RGB or grayscale images.
|
SDVariable |
SDImage.adjustContrast(String name,
SDVariable in,
double factor)
Adjusts contrast of RGB or grayscale images.
|
SDVariable |
SDImage.adjustHue(SDVariable in,
double delta)
Adjust hue of RGB image
|
SDVariable |
SDImage.adjustHue(String name,
SDVariable in,
double delta)
Adjust hue of RGB image
|
SDVariable |
SDImage.adjustSaturation(SDVariable in,
double factor)
Adjust saturation of RGB images
|
SDVariable |
SDImage.adjustSaturation(String name,
SDVariable in,
double factor)
Adjust saturation of RGB images
|
SDVariable |
SDBaseOps.all(SDVariable x,
int... dimensions)
Boolean and array reduction operation, optionally along specified dimensions
|
SDVariable |
SDBaseOps.all(String name,
SDVariable x,
int... dimensions)
Boolean and array reduction operation, optionally along specified dimensions
|
SDVariable |
SDMath.amax(SDVariable in,
int... dimensions)
Absolute max array reduction operation, optionally along specified dimensions: out = max(abs(x))
|
SDVariable |
SDMath.amax(String name,
SDVariable in,
int... dimensions)
Absolute max array reduction operation, optionally along specified dimensions: out = max(abs(x))
|
SDVariable |
SDMath.amean(SDVariable in,
int... dimensions)
Absolute mean array reduction operation, optionally along specified dimensions: out = mean(abs(x))
|
SDVariable |
SDMath.amean(String name,
SDVariable in,
int... dimensions)
Absolute mean array reduction operation, optionally along specified dimensions: out = mean(abs(x))
|
SDVariable |
SDMath.amin(SDVariable in,
int... dimensions)
Absolute min array reduction operation, optionally along specified dimensions: out = min(abs(x))
|
SDVariable |
SDMath.amin(String name,
SDVariable in,
int... dimensions)
Absolute min array reduction operation, optionally along specified dimensions: out = min(abs(x))
|
SDVariable |
SDBitwise.and(SDVariable x,
SDVariable y)
Bitwise AND operation.
|
SDVariable |
SDMath.and(SDVariable x,
SDVariable y)
Boolean AND operation: elementwise (x != 0) && (y != 0)
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBitwise.and(String name,
SDVariable x,
SDVariable y)
Bitwise AND operation.
|
SDVariable |
SDMath.and(String name,
SDVariable x,
SDVariable y)
Boolean AND operation: elementwise (x != 0) && (y != 0)
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBaseOps.any(SDVariable x,
int... dimensions)
Boolean or array reduction operation, optionally along specified dimensions
|
SDVariable |
SDBaseOps.any(String name,
SDVariable x,
int... dimensions)
Boolean or array reduction operation, optionally along specified dimensions
|
SDVariable |
SDBaseOps.argmax(SDVariable in,
boolean keepDims,
int... dimensions)
Argmax array reduction operation, optionally along specified dimensions.
Output values are the index of the maximum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.argmax(SDVariable in,
int... dimensions)
Argmax array reduction operation, optionally along specified dimensions.
Output values are the index of the maximum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.argmax(String name,
SDVariable in,
boolean keepDims,
int... dimensions)
Argmax array reduction operation, optionally along specified dimensions.
Output values are the index of the maximum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.argmax(String name,
SDVariable in,
int... dimensions)
Argmax array reduction operation, optionally along specified dimensions.
Output values are the index of the maximum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
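The index-of-maximum semantics can be sketched in plain Java for the common case of reducing over the last dimension of a 2D array (illustration only, not the SameDiff API):

```java
public class ArgmaxDemo {
    // Argmax over the last dimension of a 2D array: one index per row.
    static int[] argmax(double[][] x) {
        int[] out = new int[x.length];
        for (int i = 0; i < x.length; i++) {
            int best = 0;
            for (int j = 1; j < x[i].length; j++)
                if (x[i][j] > x[i][best]) best = j;
            out[i] = best;
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] x = {{1, 5, 3}, {9, 2, 4}};
        System.out.println(java.util.Arrays.toString(argmax(x))); // [1, 0]
    }
}
```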
SDVariable |
SDBaseOps.argmin(SDVariable in,
boolean keepDims,
int... dimensions)
Argmin array reduction operation, optionally along specified dimensions.
Output values are the index of the minimum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.argmin(SDVariable in,
int... dimensions)
Argmin array reduction operation, optionally along specified dimensions.
Output values are the index of the minimum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.argmin(String name,
SDVariable in,
boolean keepDims,
int... dimensions)
Argmin array reduction operation, optionally along specified dimensions.
Output values are the index of the minimum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.argmin(String name,
SDVariable in,
int... dimensions)
Argmin array reduction operation, optionally along specified dimensions.
Output values are the index of the minimum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.asin(SDVariable x)
Elementwise asin (arcsin, inverse sine) operation: out = arcsin(x)
|
SDVariable |
SDMath.asin(String name,
SDVariable x)
Elementwise asin (arcsin, inverse sine) operation: out = arcsin(x)
|
SDVariable |
SDMath.asinh(SDVariable x)
Elementwise asinh (inverse hyperbolic sine) function: out = asinh(x)
|
SDVariable |
SDMath.asinh(String name,
SDVariable x)
Elementwise asinh (inverse hyperbolic sine) function: out = asinh(x)
|
SDVariable |
SDMath.asum(SDVariable in,
int... dimensions)
Absolute sum array reduction operation, optionally along specified dimensions: out = sum(abs(x))
|
SDVariable |
SDMath.asum(String name,
SDVariable in,
int... dimensions)
Absolute sum array reduction operation, optionally along specified dimensions: out = sum(abs(x))
|
SDVariable |
SDMath.atan(SDVariable x)
Elementwise atan (arctangent, inverse tangent) operation: out = arctangent(x)
|
SDVariable |
SDMath.atan(String name,
SDVariable x)
Elementwise atan (arctangent, inverse tangent) operation: out = arctangent(x)
|
SDVariable |
SDMath.atan2(SDVariable y,
SDVariable x)
Elementwise atan (arctangent, inverse tangent) operation: out = atan2(x,y).
Similar to atan(y/x), but the signs of x and y are used to determine the quadrant of the result |
SDVariable |
SDMath.atan2(String name,
SDVariable y,
SDVariable x)
Elementwise atan (arctangent, inverse tangent) operation: out = atan2(x,y).
Similar to atan(y/x), but the signs of x and y are used to determine the quadrant of the result |
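The quadrant behaviour that distinguishes atan2 from plain atan(y/x) can be seen with java.lang.Math.atan2 (shown here as a plain-Java illustration, not the SameDiff call):

```java
public class Atan2Demo {
    public static void main(String[] args) {
        // atan(y/x) alone cannot distinguish these two points: 1/1 == (-1)/(-1)
        System.out.println(Math.atan2(1, 1));    // ~0.7854  (first quadrant)
        System.out.println(Math.atan2(-1, -1));  // ~-2.3562 (third quadrant)
        System.out.println(Math.atan(1.0));      // ~0.7854  -- the quadrant is lost
    }
}
```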
SDVariable |
SDMath.atanh(SDVariable x)
Elementwise atanh (inverse hyperbolic tangent) function: out = atanh(x)
|
SDVariable |
SDMath.atanh(String name,
SDVariable x)
Elementwise atanh (inverse hyperbolic tangent) function: out = atanh(x)
|
SDVariable |
SDCNN.avgPooling2d(SDVariable input,
Pooling2DConfig Pooling2DConfig)
2D Convolution layer operation - average pooling 2d
|
SDVariable |
SDCNN.avgPooling2d(String name,
SDVariable input,
Pooling2DConfig Pooling2DConfig)
2D Convolution layer operation - average pooling 2d
|
SDVariable |
SDCNN.avgPooling3d(SDVariable input,
Pooling3DConfig Pooling3DConfig)
3D convolution layer operation - average pooling 3d
|
SDVariable |
SDCNN.avgPooling3d(String name,
SDVariable input,
Pooling3DConfig Pooling3DConfig)
3D convolution layer operation - average pooling 3d
|
SDVariable[] |
SDBaseOps.batchMmul(SDVariable[] inputsA,
SDVariable... inputsB)
Matrix multiply a batch of matrices.
|
SDVariable[] |
SDBaseOps.batchMmul(SDVariable[] inputsA,
SDVariable[] inputsB,
boolean transposeA,
boolean transposeB)
Matrix multiply a batch of matrices.
|
SDVariable[] |
SDBaseOps.batchMmul(String[] names,
SDVariable[] inputsA,
SDVariable... inputsB)
Matrix multiply a batch of matrices.
|
SDVariable[] |
SDBaseOps.batchMmul(String[] names,
SDVariable[] inputsA,
SDVariable[] inputsB,
boolean transposeA,
boolean transposeB)
Matrix multiply a batch of matrices.
|
SDVariable |
SDNN.batchNorm(SDVariable input,
SDVariable mean,
SDVariable variance,
SDVariable gamma,
SDVariable beta,
double epsilon,
int... axis)
|
SDVariable |
SDNN.batchNorm(String name,
SDVariable input,
SDVariable mean,
SDVariable variance,
SDVariable gamma,
SDVariable beta,
double epsilon,
int... axis)
|
SDVariable |
SDCNN.batchToSpace(SDVariable x,
int[] blocks,
int[] croppingTop,
int... croppingBottom)
Convolution 2d layer batch to space operation on 4d input.
Reduces the input batch dimension by rearranging data into larger spatial dimensions |
SDVariable |
SDCNN.batchToSpace(String name,
SDVariable x,
int[] blocks,
int[] croppingTop,
int... croppingBottom)
Convolution 2d layer batch to space operation on 4d input.
Reduces the input batch dimension by rearranging data into larger spatial dimensions |
SDVariable |
SDNN.biasAdd(SDVariable input,
SDVariable bias,
boolean nchw)
Bias addition operation: a special case of addition, typically used with CNN 4D activations and a 1D bias vector
|
SDVariable |
SDNN.biasAdd(String name,
SDVariable input,
SDVariable bias,
boolean nchw)
Bias addition operation: a special case of addition, typically used with CNN 4D activations and a 1D bias vector
|
SDVariable |
SDBitwise.bitRotl(SDVariable x,
SDVariable shift)
Roll integer bits to the left, i.e.
|
SDVariable |
SDBitwise.bitRotl(String name,
SDVariable x,
SDVariable shift)
Roll integer bits to the left, i.e.
|
SDVariable |
SDBitwise.bitRotr(SDVariable x,
SDVariable shift)
Roll integer bits to the right, i.e.
|
SDVariable |
SDBitwise.bitRotr(String name,
SDVariable x,
SDVariable shift)
Roll integer bits to the right, i.e.
|
SDVariable |
SDBitwise.bitsHammingDistance(SDVariable x,
SDVariable y)
Bitwise Hamming distance reduction over all elements of both input arrays.
For example, if x=01100000 and y=10100000 then the bitwise Hamming distance is 2 (due to differences at positions 0 and 1) Inputs must satisfy the following constraints: Must be same types: isSameType(x, y) |
SDVariable |
SDBitwise.bitsHammingDistance(String name,
SDVariable x,
SDVariable y)
Bitwise Hamming distance reduction over all elements of both input arrays.
For example, if x=01100000 and y=10100000 then the bitwise Hamming distance is 2 (due to differences at positions 0 and 1) Inputs must satisfy the following constraints: Must be same types: isSameType(x, y) |
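The per-pair computation underlying this reduction is popcount of the XOR, which plain Java can show directly (illustration only, not the SameDiff API):

```java
public class BitsHamming {
    // Number of differing bit positions = number of set bits in x XOR y.
    static int hamming(int x, int y) {
        return Integer.bitCount(x ^ y);
    }

    public static void main(String[] args) {
        // The two leading bits differ, as in the example above.
        System.out.println(hamming(0b01100000, 0b10100000)); // 2
    }
}
```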
SDVariable |
SDBitwise.bitShift(SDVariable x,
SDVariable shift)
Shift integer bits to the left, i.e.
|
SDVariable |
SDMath.bitShift(SDVariable x,
SDVariable shift)
Bit shift operation
|
SDVariable |
SDBitwise.bitShift(String name,
SDVariable x,
SDVariable shift)
Shift integer bits to the left, i.e.
|
SDVariable |
SDMath.bitShift(String name,
SDVariable x,
SDVariable shift)
Bit shift operation
|
SDVariable |
SDBitwise.bitShiftRight(SDVariable x,
SDVariable shift)
Shift integer bits to the right, i.e.
|
SDVariable |
SDMath.bitShiftRight(SDVariable x,
SDVariable shift)
Right bit shift operation
|
SDVariable |
SDBitwise.bitShiftRight(String name,
SDVariable x,
SDVariable shift)
Shift integer bits to the right, i.e.
|
SDVariable |
SDMath.bitShiftRight(String name,
SDVariable x,
SDVariable shift)
Right bit shift operation
|
SDVariable |
SDMath.bitShiftRotl(SDVariable x,
SDVariable shift)
Cyclic bit shift operation
|
SDVariable |
SDMath.bitShiftRotl(String name,
SDVariable x,
SDVariable shift)
Cyclic bit shift operation
|
SDVariable |
SDMath.bitShiftRotr(SDVariable x,
SDVariable shift)
Cyclic right shift operation
|
SDVariable |
SDMath.bitShiftRotr(String name,
SDVariable x,
SDVariable shift)
Cyclic right shift operation
|
SDVariable |
SDBaseOps.castTo(SDVariable arg,
DataType datatype)
Cast the array to a new datatype - for example, Integer -> Float
|
SDVariable |
SDBaseOps.castTo(String name,
SDVariable arg,
DataType datatype)
Cast the array to a new datatype - for example, Integer -> Float
|
SDVariable |
SDMath.ceil(SDVariable x)
Element-wise ceiling function: out = ceil(x).
Rounds each value up to the nearest integer value (if not already an integer) |
SDVariable |
SDMath.ceil(String name,
SDVariable x)
Element-wise ceiling function: out = ceil(x).
Rounds each value up to the nearest integer value (if not already an integer) |
SDVariable |
SDLinalg.cholesky(SDVariable input)
Computes the Cholesky decomposition of one or more square matrices.
|
SDVariable |
SDLinalg.cholesky(String name,
SDVariable input)
Computes the Cholesky decomposition of one or more square matrices.
|
SDVariable |
SDMath.clipByAvgNorm(SDVariable x,
double clipValue,
int... dimensions)
Clips tensor values to a maximum average L2-norm.
|
SDVariable |
SDMath.clipByAvgNorm(String name,
SDVariable x,
double clipValue,
int... dimensions)
Clips tensor values to a maximum average L2-norm.
|
SDVariable |
SDMath.clipByNorm(SDVariable x,
double clipValue,
int... dimensions)
Clipping by L2 norm, optionally along dimension(s)
if l2Norm(x,dimension) < clipValue, then the input is returned unmodified. Otherwise, out[i] = in[i] * clipValue / l2Norm(in, dimensions), where each value is clipped according to the corresponding l2Norm along the specified dimensions |
SDVariable |
SDMath.clipByNorm(String name,
SDVariable x,
double clipValue,
int... dimensions)
Clipping by L2 norm, optionally along dimension(s)
if l2Norm(x,dimension) < clipValue, then the input is returned unmodified. Otherwise, out[i] = in[i] * clipValue / l2Norm(in, dimensions), where each value is clipped according to the corresponding l2Norm along the specified dimensions |
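A plain-Java sketch of the full-array case of this rule (not the SameDiff implementation): rescale only when the L2 norm exceeds clipValue, otherwise return the input untouched.

```java
import java.util.Arrays;

public class ClipByNorm {
    // If l2Norm(x) <= clipValue, return x unchanged; otherwise scale by clipValue / l2Norm(x).
    static double[] clipByNorm(double[] x, double clipValue) {
        double norm = Math.sqrt(Arrays.stream(x).map(v -> v * v).sum());
        if (norm <= clipValue) return x.clone();
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) out[i] = x[i] * clipValue / norm;
        return out;
    }

    public static void main(String[] args) {
        // Norm of [3,4] is 5, so the vector is scaled down to unit norm.
        System.out.println(Arrays.toString(clipByNorm(new double[]{3, 4}, 1.0))); // [0.6, 0.8]
    }
}
```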
SDVariable |
SDMath.clipByValue(SDVariable x,
double clipValueMin,
double clipValueMax)
Element-wise clipping function:
out[i] = in[i] if in[i] >= clipValueMin and in[i] <= clipValueMax out[i] = clipValueMin if in[i] < clipValueMin out[i] = clipValueMax if in[i] > clipValueMax |
SDVariable |
SDMath.clipByValue(String name,
SDVariable x,
double clipValueMin,
double clipValueMax)
Element-wise clipping function:
out[i] = in[i] if in[i] >= clipValueMin and in[i] <= clipValueMax out[i] = clipValueMin if in[i] < clipValueMin out[i] = clipValueMax if in[i] > clipValueMax |
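The three cases above collapse to a single min/max expression, sketched here in plain Java (illustration only, not the SameDiff call):

```java
public class ClipByValue {
    // out = clipValueMin if v < clipValueMin; clipValueMax if v > clipValueMax; v otherwise.
    static double clip(double v, double min, double max) {
        return Math.max(min, Math.min(max, v));
    }

    public static void main(String[] args) {
        System.out.println(clip(-2.0, -1.0, 1.0)); // -1.0
        System.out.println(clip(0.5, -1.0, 1.0));  // 0.5
        System.out.println(clip(3.0, -1.0, 1.0));  // 1.0
    }
}
```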
SDVariable |
SDCNN.col2Im(SDVariable in,
Conv2DConfig Conv2DConfig)
col2im operation for use in 2D convolution operations.
|
SDVariable |
SDCNN.col2Im(String name,
SDVariable in,
Conv2DConfig Conv2DConfig)
col2im operation for use in 2D convolution operations.
|
SDVariable |
SDBaseOps.concat(int dimension,
SDVariable... inputs)
Concatenate a set of inputs along the specified dimension.
Note that inputs must have identical rank and identical dimensions, other than the dimension to stack on. For example, if 2 inputs have shape [a, x, c] and [a, y, c] and dimension = 1, then the output has shape [a, x+y, c] Inputs must satisfy the following constraints: Input arrays must all be the same datatype: isSameType(inputs) |
SDVariable |
SDBaseOps.concat(String name,
int dimension,
SDVariable... inputs)
Concatenate a set of inputs along the specified dimension.
Note that inputs must have identical rank and identical dimensions, other than the dimension to stack on. For example, if 2 inputs have shape [a, x, c] and [a, y, c] and dimension = 1, then the output has shape [a, x+y, c] Inputs must satisfy the following constraints: Input arrays must all be the same datatype: isSameType(inputs) |
SDVariable |
SDMath.confusionMatrix(SDVariable labels,
SDVariable pred,
DataType dataType)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values. |
SDVariable |
SDMath.confusionMatrix(SDVariable labels,
SDVariable pred,
int numClasses)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values. For example, if labels = [0, 1, 1], predicted = [0, 2, 1], and numClasses=4 then output is: [1, 0, 0, 0] [0, 1, 1, 0] [0, 0, 0, 0] [0, 0, 0, 0] |
SDVariable |
SDMath.confusionMatrix(SDVariable labels,
SDVariable pred,
SDVariable weights)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values. |
SDVariable |
SDMath.confusionMatrix(SDVariable labels,
SDVariable pred,
SDVariable weights,
int numClasses)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values. For example, if labels = [0, 1, 1], predicted = [0, 2, 1], numClasses = 4, and weights = [1, 2, 3] then output is: [1, 0, 0, 0] [0, 3, 2, 0] [0, 0, 0, 0] [0, 0, 0, 0] |
SDVariable |
SDMath.confusionMatrix(String name,
SDVariable labels,
SDVariable pred,
DataType dataType)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values. |
SDVariable |
SDMath.confusionMatrix(String name,
SDVariable labels,
SDVariable pred,
int numClasses)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values. For example, if labels = [0, 1, 1], predicted = [0, 2, 1], and numClasses=4 then output is: [1, 0, 0, 0] [0, 1, 1, 0] [0, 0, 0, 0] [0, 0, 0, 0] |
SDVariable |
SDMath.confusionMatrix(String name,
SDVariable labels,
SDVariable pred,
SDVariable weights)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values. |
SDVariable |
SDMath.confusionMatrix(String name,
SDVariable labels,
SDVariable pred,
SDVariable weights,
int numClasses)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values. For example, if labels = [0, 1, 1], predicted = [0, 2, 1], numClasses = 4, and weights = [1, 2, 3] then output is: [1, 0, 0, 0] [0, 3, 2, 0] [0, 0, 0, 0] [0, 0, 0, 0] |
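The worked examples above follow a simple accumulation rule, which plain Java can reproduce (a hypothetical helper for illustration, not the SameDiff implementation): cell [label][prediction] accumulates the sample's weight, or 1 when unweighted.

```java
import java.util.Arrays;

public class ConfusionMatrixDemo {
    // m[label][prediction] += weight (1.0 if weights is null), as in the examples above.
    static double[][] confusionMatrix(int[] labels, int[] pred, double[] weights, int numClasses) {
        double[][] m = new double[numClasses][numClasses];
        for (int i = 0; i < labels.length; i++)
            m[labels[i]][pred[i]] += (weights == null ? 1.0 : weights[i]);
        return m;
    }

    public static void main(String[] args) {
        int[] labels = {0, 1, 1};
        int[] pred = {0, 2, 1};
        System.out.println(Arrays.deepToString(confusionMatrix(labels, pred, null, 4)));
        // row 0: [1,0,0,0], row 1: [0,1,1,0], rows 2-3: all zeros
        System.out.println(Arrays.deepToString(
            confusionMatrix(labels, pred, new double[]{1, 2, 3}, 4)));
        // with weights [1,2,3], row 1 becomes [0,3,2,0]
    }
}
```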
SDVariable |
SDCNN.conv1d(SDVariable input,
SDVariable weights,
Conv1DConfig Conv1DConfig)
Conv1d operation.
|
SDVariable |
SDCNN.conv1d(SDVariable input,
SDVariable weights,
SDVariable bias,
Conv1DConfig Conv1DConfig)
Conv1d operation.
|
SDVariable |
SDCNN.conv1d(String name,
SDVariable input,
SDVariable weights,
Conv1DConfig Conv1DConfig)
Conv1d operation.
|
SDVariable |
SDCNN.conv1d(String name,
SDVariable input,
SDVariable weights,
SDVariable bias,
Conv1DConfig Conv1DConfig)
Conv1d operation.
|
SDVariable |
SDCNN.conv2d(SDVariable layerInput,
SDVariable weights,
Conv2DConfig Conv2DConfig)
2D Convolution operation with optional bias
|
SDVariable |
SDCNN.conv2d(SDVariable layerInput,
SDVariable weights,
SDVariable bias,
Conv2DConfig Conv2DConfig)
2D Convolution operation with optional bias
|
SDVariable |
SDCNN.conv2d(String name,
SDVariable layerInput,
SDVariable weights,
Conv2DConfig Conv2DConfig)
2D Convolution operation with optional bias
|
SDVariable |
SDCNN.conv2d(String name,
SDVariable layerInput,
SDVariable weights,
SDVariable bias,
Conv2DConfig Conv2DConfig)
2D Convolution operation with optional bias
|
SDVariable |
SDCNN.conv3d(SDVariable input,
SDVariable weights,
Conv3DConfig Conv3DConfig)
Convolution 3D operation with optional bias
|
SDVariable |
SDCNN.conv3d(SDVariable input,
SDVariable weights,
SDVariable bias,
Conv3DConfig Conv3DConfig)
Convolution 3D operation with optional bias
|
SDVariable |
SDCNN.conv3d(String name,
SDVariable input,
SDVariable weights,
Conv3DConfig Conv3DConfig)
Convolution 3D operation with optional bias
|
SDVariable |
SDCNN.conv3d(String name,
SDVariable input,
SDVariable weights,
SDVariable bias,
Conv3DConfig Conv3DConfig)
Convolution 3D operation with optional bias
|
SDVariable |
SDMath.cos(SDVariable x)
Elementwise cosine operation: out = cos(x)
|
SDVariable |
SDMath.cos(String name,
SDVariable x)
Elementwise cosine operation: out = cos(x)
|
SDVariable |
SDMath.cosh(SDVariable x)
Elementwise cosh (hyperbolic cosine) operation: out = cosh(x)
|
SDVariable |
SDMath.cosh(String name,
SDVariable x)
Elementwise cosh (hyperbolic cosine) operation: out = cosh(x)
|
SDVariable |
SDMath.cosineDistance(SDVariable x,
SDVariable y,
int... dimensions)
Cosine distance reduction operation.
|
SDVariable |
SDLoss.cosineDistance(SDVariable label,
SDVariable predictions,
SDVariable weights,
int dimension)
Cosine distance loss:
1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i], which is equivalent to cosine distance when both the predictions and labels are normalized. Note: This loss function assumes that both the predictions and labels are normalized to have unit l2 norm. If this is not the case, you should normalize them first by dividing by norm2(String, SDVariable, boolean, int...) along the cosine distance dimension (with keepDims=true). |
SDVariable |
SDLoss.cosineDistance(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
int dimension)
Cosine distance loss:
1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i], which is equivalent to cosine distance when both the predictions and labels are normalized. Note: This loss function assumes that both the predictions and labels are normalized to have unit l2 norm. If this is not the case, you should normalize them first by dividing by norm2(String, SDVariable, boolean, int...) along the cosine distance dimension (with keepDims=true). |
SDVariable |
SDMath.cosineDistance(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Cosine distance reduction operation.
|
SDVariable |
SDLoss.cosineDistance(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
int dimension)
Cosine distance loss:
1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i], which is equivalent to cosine distance when both the predictions and labels are normalized. Note: This loss function assumes that both the predictions and labels are normalized to have unit l2 norm. If this is not the case, you should normalize them first by dividing by norm2(String, SDVariable, boolean, int...) along the cosine distance dimension (with keepDims=true). |
SDVariable |
SDLoss.cosineDistance(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
int dimension)
Cosine distance loss:
1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i], which is equivalent to cosine distance when both the predictions and labels are normalized. Note: This loss function assumes that both the predictions and labels are normalized to have unit l2 norm. If this is not the case, you should normalize them first by dividing by norm2(String, SDVariable, boolean, int...) along the cosine distance dimension (with keepDims=true). |
SDVariable |
SDMath.cosineSimilarity(SDVariable x,
SDVariable y,
int... dimensions)
Cosine similarity pairwise reduction operation.
|
SDVariable |
SDMath.cosineSimilarity(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Cosine similarity pairwise reduction operation.
|
SDVariable |
SDMath.countNonZero(SDVariable in,
int... dimensions)
Count non zero array reduction operation, optionally along specified dimensions: out = count(x != 0)
|
SDVariable |
SDMath.countNonZero(String name,
SDVariable in,
int... dimensions)
Count non zero array reduction operation, optionally along specified dimensions: out = count(x != 0)
|
SDVariable |
SDMath.countZero(SDVariable in,
int... dimensions)
Count zero array reduction operation, optionally along specified dimensions: out = count(x == 0)
|
SDVariable |
SDMath.countZero(String name,
SDVariable in,
int... dimensions)
Count zero array reduction operation, optionally along specified dimensions: out = count(x == 0)
|
SDVariable |
SDNN.cReLU(SDVariable x)
Concatenates a ReLU which selects only the positive part of the activation with a ReLU which selects only the negative part of the activation.
|
SDVariable |
SDNN.cReLU(String name,
SDVariable x)
Concatenates a ReLU which selects only the positive part of the activation with a ReLU which selects only the negative part of the activation.
|
SDVariable |
SDImage.cropAndResize(SDVariable image,
SDVariable cropBoxes,
SDVariable boxIndices,
SDVariable cropOutSize)
Given an input image and some crop boxes, extract out the image subsets and resize them to the specified size.
|
SDVariable |
SDImage.cropAndResize(SDVariable image,
SDVariable cropBoxes,
SDVariable boxIndices,
SDVariable cropOutSize,
double extrapolationValue)
Given an input image and some crop boxes, extract out the image subsets and resize them to the specified size.
|
SDVariable |
SDImage.cropAndResize(String name,
SDVariable image,
SDVariable cropBoxes,
SDVariable boxIndices,
SDVariable cropOutSize)
Given an input image and some crop boxes, extract out the image subsets and resize them to the specified size.
|
SDVariable |
SDImage.cropAndResize(String name,
SDVariable image,
SDVariable cropBoxes,
SDVariable boxIndices,
SDVariable cropOutSize,
double extrapolationValue)
Given an input image and some crop boxes, extract out the image subsets and resize them to the specified size.
|
SDVariable |
SDLinalg.cross(SDVariable a,
SDVariable b)
Computes pairwise cross product.
|
SDVariable |
SDMath.cross(SDVariable a,
SDVariable b)
Returns the pair-wise cross product of equal size arrays a and b: a x b = ||a||x||b|| sin(theta).
Can take rank 1 or above inputs (of equal shapes), but note that the last dimension must have dimension 3 |
SDVariable |
SDLinalg.cross(String name,
SDVariable a,
SDVariable b)
Computes pairwise cross product.
|
SDVariable |
SDMath.cross(String name,
SDVariable a,
SDVariable b)
Returns the pair-wise cross product of equal size arrays a and b: a x b = ||a||x||b|| sin(theta).
Can take rank 1 or above inputs (of equal shapes), but note that the last dimension must have dimension 3 |
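The pairwise cross product described above, restricted to a single rank-1 input with last dimension 3, can be sketched in plain Java (illustrative class name, not the ND4J API):

```java
// Hypothetical sketch of the per-vector computation behind SDMath.cross /
// SDLinalg.cross: the standard 3-component cross product a x b.
public class CrossSketch {
    public static double[] cross(double[] a, double[] b) {
        return new double[]{
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]
        };
    }
    public static void main(String[] args) {
        // x-axis cross y-axis yields the z-axis
        double[] c = cross(new double[]{1, 0, 0}, new double[]{0, 1, 0});
        System.out.println(java.util.Arrays.toString(c)); // [0.0, 0.0, 1.0]
    }
}
```

For rank > 1 inputs the op applies this formula independently to each length-3 slice along the last dimension.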
SDVariable |
SDLoss.ctcLoss(SDVariable targetLabels,
SDVariable logitInput,
SDVariable targetLabelLengths,
SDVariable logitInputLengths)
CTC Loss: Connectionist Temporal Classification Loss.
|
SDVariable |
SDLoss.ctcLoss(String name,
SDVariable targetLabels,
SDVariable logitInput,
SDVariable targetLabelLengths,
SDVariable logitInputLengths)
CTC Loss: Connectionist Temporal Classification Loss.
|
SDVariable |
SDMath.cube(SDVariable x)
Element-wise cube function: out = x^3
|
SDVariable |
SDMath.cube(String name,
SDVariable x)
Element-wise cube function: out = x^3
|
SDVariable |
SDBaseOps.cumprod(SDVariable in,
boolean exclusive,
boolean reverse,
int... axis)
Cumulative product operation.
For input: [a, b, c], output is: exclusive=false, reverse=false: [a, a*b, a*b*c]; exclusive=true, reverse=false: [1, a, a*b]; exclusive=false, reverse=true: [a*b*c, b*c, c]; exclusive=true, reverse=true: [b*c, c, 1] |
SDVariable |
SDBaseOps.cumprod(SDVariable in,
int... axis)
Cumulative product operation.
For input: [a, b, c], output is: exclusive=false, reverse=false: [a, a*b, a*b*c]; exclusive=true, reverse=false: [1, a, a*b]; exclusive=false, reverse=true: [a*b*c, b*c, c]; exclusive=true, reverse=true: [b*c, c, 1] |
SDVariable |
SDBaseOps.cumprod(String name,
SDVariable in,
boolean exclusive,
boolean reverse,
int... axis)
Cumulative product operation.
For input: [a, b, c], output is: exclusive=false, reverse=false: [a, a*b, a*b*c]; exclusive=true, reverse=false: [1, a, a*b]; exclusive=false, reverse=true: [a*b*c, b*c, c]; exclusive=true, reverse=true: [b*c, c, 1] |
SDVariable |
SDBaseOps.cumprod(String name,
SDVariable in,
int... axis)
Cumulative product operation.
For input: [a, b, c], output is: exclusive=false, reverse=false: [a, a*b, a*b*c]; exclusive=true, reverse=false: [1, a, a*b]; exclusive=false, reverse=true: [a*b*c, b*c, c]; exclusive=true, reverse=true: [b*c, c, 1] |
SDVariable |
SDBaseOps.cumsum(SDVariable in,
boolean exclusive,
boolean reverse,
int... axis)
Cumulative sum operation.
For input: [a, b, c], output is: exclusive=false, reverse=false: [a, a+b, a+b+c]; exclusive=true, reverse=false: [0, a, a+b]; exclusive=false, reverse=true: [a+b+c, b+c, c]; exclusive=true, reverse=true: [b+c, c, 0] |
SDVariable |
SDBaseOps.cumsum(SDVariable in,
int... axis)
Cumulative sum operation.
For input: [a, b, c], output is: exclusive=false, reverse=false: [a, a+b, a+b+c]; exclusive=true, reverse=false: [0, a, a+b]; exclusive=false, reverse=true: [a+b+c, b+c, c]; exclusive=true, reverse=true: [b+c, c, 0] |
SDVariable |
SDBaseOps.cumsum(String name,
SDVariable in,
boolean exclusive,
boolean reverse,
int... axis)
Cumulative sum operation.
For input: [a, b, c], output is: exclusive=false, reverse=false: [a, a+b, a+b+c]; exclusive=true, reverse=false: [0, a, a+b]; exclusive=false, reverse=true: [a+b+c, b+c, c]; exclusive=true, reverse=true: [b+c, c, 0] |
SDVariable |
SDBaseOps.cumsum(String name,
SDVariable in,
int... axis)
Cumulative sum operation.
For input: [a, b, c], output is: exclusive=false, reverse=false: [a, a+b, a+b+c]; exclusive=true, reverse=false: [0, a, a+b]; exclusive=false, reverse=true: [a+b+c, b+c, c]; exclusive=true, reverse=true: [b+c, c, 0] |
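The exclusive/reverse semantics documented for cumsum can be sketched in plain Java for a 1-D input (illustrative class name, not the ND4J API; cumprod is analogous with multiplication and identity 1):

```java
import java.util.Arrays;

// Hypothetical sketch of SDBaseOps.cumsum's documented semantics on 1-D input.
public class CumulativeSketch {
    public static double[] cumsum(double[] in, boolean exclusive, boolean reverse) {
        int n = in.length;
        double[] out = new double[n];
        double acc = 0.0; // additive identity; for cumprod this would be 1.0
        for (int step = 0; step < n; step++) {
            int i = reverse ? n - 1 - step : step;
            if (exclusive) { out[i] = acc; acc += in[i]; } // shift: element i excluded from its own sum
            else           { acc += in[i]; out[i] = acc; }
        }
        return out;
    }
    public static void main(String[] args) {
        double[] in = {1, 2, 3};                                       // [a, b, c]
        System.out.println(Arrays.toString(cumsum(in, false, false))); // [a, a+b, a+b+c]
        System.out.println(Arrays.toString(cumsum(in, true, false)));  // [0, a, a+b]
        System.out.println(Arrays.toString(cumsum(in, false, true)));  // [a+b+c, b+c, c]
        System.out.println(Arrays.toString(cumsum(in, true, true)));   // [b+c, c, 0]
    }
}
```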
SDVariable |
SDCNN.deconv2d(SDVariable layerInput,
SDVariable weights,
DeConv2DConfig DeConv2DConfig)
2D deconvolution operation with optional bias
|
SDVariable |
SDCNN.deconv2d(SDVariable layerInput,
SDVariable weights,
SDVariable bias,
DeConv2DConfig DeConv2DConfig)
2D deconvolution operation with optional bias
|
SDVariable |
SDCNN.deconv2d(String name,
SDVariable layerInput,
SDVariable weights,
DeConv2DConfig DeConv2DConfig)
2D deconvolution operation with optional bias
|
SDVariable |
SDCNN.deconv2d(String name,
SDVariable layerInput,
SDVariable weights,
SDVariable bias,
DeConv2DConfig DeConv2DConfig)
2D deconvolution operation with optional bias
|
SDVariable |
SDCNN.deconv3d(SDVariable input,
SDVariable weights,
DeConv3DConfig DeConv3DConfig)
3D CNN deconvolution operation with or without optional bias
|
SDVariable |
SDCNN.deconv3d(SDVariable input,
SDVariable weights,
SDVariable bias,
DeConv3DConfig DeConv3DConfig)
3D CNN deconvolution operation with or without optional bias
|
SDVariable |
SDCNN.deconv3d(String name,
SDVariable input,
SDVariable weights,
DeConv3DConfig DeConv3DConfig)
3D CNN deconvolution operation with or without optional bias
|
SDVariable |
SDCNN.deconv3d(String name,
SDVariable input,
SDVariable weights,
SDVariable bias,
DeConv3DConfig DeConv3DConfig)
3D CNN deconvolution operation with or without optional bias
|
SDVariable |
SDCNN.depthToSpace(SDVariable x,
int blockSize,
DataFormat dataFormat)
Convolution 2d layer depth to space operation on 4d input.
Reduces the input channels dimension by rearranging data into larger spatial dimensions. Example: if input has shape [mb, 8, 2, 2] and block size is 2, then output size is [mb, 8/(2*2), 2*2, 2*2] = [mb, 2, 4, 4] |
SDVariable |
SDCNN.depthToSpace(String name,
SDVariable x,
int blockSize,
DataFormat dataFormat)
Convolution 2d layer depth to space operation on 4d input.
Reduces the input channels dimension by rearranging data into larger spatial dimensions. Example: if input has shape [mb, 8, 2, 2] and block size is 2, then output size is [mb, 8/(2*2), 2*2, 2*2] = [mb, 2, 4, 4] |
SDVariable |
SDCNN.depthWiseConv2d(SDVariable layerInput,
SDVariable depthWeights,
Conv2DConfig Conv2DConfig)
Depth-wise 2D convolution operation with optional bias
|
SDVariable |
SDCNN.depthWiseConv2d(SDVariable layerInput,
SDVariable depthWeights,
SDVariable bias,
Conv2DConfig Conv2DConfig)
Depth-wise 2D convolution operation with optional bias
|
SDVariable |
SDCNN.depthWiseConv2d(String name,
SDVariable layerInput,
SDVariable depthWeights,
Conv2DConfig Conv2DConfig)
Depth-wise 2D convolution operation with optional bias
|
SDVariable |
SDCNN.depthWiseConv2d(String name,
SDVariable layerInput,
SDVariable depthWeights,
SDVariable bias,
Conv2DConfig Conv2DConfig)
Depth-wise 2D convolution operation with optional bias
|
SDVariable |
SDLinalg.diag_part(SDVariable input)
Extracts the diagonal part of the input tensor.
|
SDVariable |
SDLinalg.diag_part(String name,
SDVariable input)
Extracts the diagonal part of the input tensor.
|
SDVariable |
SDLinalg.diag(SDVariable input)
Constructs a diagonal tensor from the input values.
|
SDVariable |
SDMath.diag(SDVariable x)
Returns an output variable with diagonal values equal to the specified values; off-diagonal values will be set to 0
For example, if input = [1,2,3], then output is given by: [ 1, 0, 0] [ 0, 2, 0] [ 0, 0, 3] Higher input ranks are also supported: if input has shape [a,...,R-1] then output[i,...,k,i,...,k] = input[i,...,k]. i.e., for input rank R, output has rank 2R |
SDVariable |
SDLinalg.diag(String name,
SDVariable input)
Constructs a diagonal tensor from the input values.
|
SDVariable |
SDMath.diag(String name,
SDVariable x)
Returns an output variable with diagonal values equal to the specified values; off-diagonal values will be set to 0
For example, if input = [1,2,3], then output is given by: [ 1, 0, 0] [ 0, 2, 0] [ 0, 0, 3] Higher input ranks are also supported: if input has shape [a,...,R-1] then output[i,...,k,i,...,k] = input[i,...,k]. i.e., for input rank R, output has rank 2R |
SDVariable |
SDMath.diagPart(SDVariable x)
Extract the diagonal part from the input array.
If input is [ 1, 0, 0] [ 0, 2, 0] [ 0, 0, 3] then output is [1, 2, 3]. Supports higher dimensions: in general, out[i,...,k] = in[i,...,k,i,...,k] |
SDVariable |
SDMath.diagPart(String name,
SDVariable x)
Extract the diagonal part from the input array.
If input is [ 1, 0, 0] [ 0, 2, 0] [ 0, 0, 3] then output is [1, 2, 3]. Supports higher dimensions: in general, out[i,...,k] = in[i,...,k,i,...,k] |
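The rank-1/rank-2 round trip between diag and diagPart described above can be sketched in plain Java (illustrative class name, not the ND4J API):

```java
// Hypothetical sketch of SDMath.diag (rank-1 input) and SDMath.diagPart
// (rank-2 input): diag builds a matrix with the values on the main diagonal
// and zeros elsewhere; diagPart extracts that diagonal back out.
public class DiagSketch {
    public static double[][] diag(double[] v) {
        double[][] out = new double[v.length][v.length]; // off-diagonal entries stay 0.0
        for (int i = 0; i < v.length; i++) out[i][i] = v[i];
        return out;
    }
    public static double[] diagPart(double[][] m) {
        double[] out = new double[m.length];
        for (int i = 0; i < m.length; i++) out[i] = m[i][i];
        return out;
    }
    public static void main(String[] args) {
        double[] v = {1, 2, 3};
        // diagPart inverts diag for rank-1 input
        System.out.println(java.util.Arrays.toString(diagPart(diag(v)))); // [1.0, 2.0, 3.0]
    }
}
```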
SDVariable |
SDCNN.dilation2D(SDVariable df,
SDVariable weights,
int[] strides,
int[] rates,
boolean isSameMode)
2D image dilation operation.
|
SDVariable |
SDCNN.dilation2D(String name,
SDVariable df,
SDVariable weights,
int[] strides,
int[] rates,
boolean isSameMode)
2D image dilation operation.
|
SDVariable |
SDMath.div(SDVariable x,
double value)
Scalar division operation, out = in / scalar
|
SDVariable |
SDMath.div(SDVariable x,
SDVariable y)
Pairwise division operation, out = x / y
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.div(String name,
SDVariable x,
double value)
Scalar division operation, out = in / scalar
|
SDVariable |
SDMath.div(String name,
SDVariable x,
SDVariable y)
Pairwise division operation, out = x / y
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDBaseOps.dot(SDVariable x,
SDVariable y,
int... dimensions)
Pairwise dot product reduction along dimension
output = sum(i=0 ... |
SDVariable |
SDBaseOps.dot(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Pairwise dot product reduction along dimension
output = sum(i=0 ... |
SDVariable |
SDNN.dotProductAttention(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable mask,
boolean scaled)
This operation performs dot product attention on the given timeseries input with the given queries
out = sum(similarity(k_i, q) * v_i) similarity(k, q) = softmax(k * q) where k * q is the dot product of k and q Optionally with normalization step: similarity(k, q) = softmax(k * q / sqrt(size(q))) See also "Attention is all you need" (https://arxiv.org/abs/1706.03762, p. |
SDVariable |
SDNN.dotProductAttention(String name,
SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable mask,
boolean scaled)
This operation performs dot product attention on the given timeseries input with the given queries
out = sum(similarity(k_i, q) * v_i) similarity(k, q) = softmax(k * q) where k * q is the dot product of k and q Optionally with normalization step: similarity(k, q) = softmax(k * q / sqrt(size(q))) See also "Attention is all you need" (https://arxiv.org/abs/1706.03762, p. |
SDVariable |
SDNN.dropout(SDVariable input,
double inputRetainProbability)
Dropout operation
|
SDVariable |
SDNN.dropout(String name,
SDVariable input,
double inputRetainProbability)
Dropout operation
|
SDVariable[] |
SDBaseOps.dynamicPartition(SDVariable x,
SDVariable partitions,
int numPartitions)
Dynamically partition the input variable values into the specified number of partitions, using the indices.
Example: |
SDVariable[] |
SDBaseOps.dynamicPartition(String[] names,
SDVariable x,
SDVariable partitions,
int numPartitions)
Dynamically partition the input variable values into the specified number of partitions, using the indices.
Example: |
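The partitioning rule above (element i of the input goes to output partition partitions[i]) can be sketched in plain Java for a 1-D input (illustrative class name and list-based return type, not the ND4J API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of SDBaseOps.dynamicPartition's documented semantics
// on a 1-D input: x[i] is routed to output list number partitions[i].
public class DynamicPartitionSketch {
    public static List<List<Double>> dynamicPartition(double[] x, int[] partitions, int numPartitions) {
        List<List<Double>> out = new ArrayList<>();
        for (int p = 0; p < numPartitions; p++) out.add(new ArrayList<>());
        for (int i = 0; i < x.length; i++) out.get(partitions[i]).add(x[i]);
        return out;
    }
    public static void main(String[] args) {
        double[] x = {10, 20, 30, 40};
        int[] parts = {0, 1, 0, 1};
        // Elements at indices with partition 0 vs partition 1
        System.out.println(dynamicPartition(x, parts, 2)); // [[10.0, 30.0], [20.0, 40.0]]
    }
}
```

dynamicStitch, listed below, is the inverse operation: it merges the partitions back into a single array using the same indices.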
SDVariable |
SDBaseOps.dynamicStitch(SDVariable[] indices,
SDVariable... x)
Dynamically merge the specified input arrays into a single array, using the specified indices
|
SDVariable |
SDBaseOps.dynamicStitch(String name,
SDVariable[] indices,
SDVariable... x)
Dynamically merge the specified input arrays into a single array, using the specified indices
|
SDVariable |
SDNN.elu(SDVariable x)
Element-wise exponential linear unit (ELU) function:
out = x if x > 0 out = a * (exp(x) - 1) if x <= 0 with constant a = 1.0 |
SDVariable |
SDNN.elu(String name,
SDVariable x)
Element-wise exponential linear unit (ELU) function:
out = x if x > 0 out = a * (exp(x) - 1) if x <= 0 with constant a = 1.0 |
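The piecewise ELU formula above can be sketched in plain Java (illustrative class name, not the ND4J API); `Math.expm1(x)` computes `exp(x) - 1` directly:

```java
// Hypothetical sketch of the ELU formula documented for SDNN.elu, with a = 1.0:
// out = x            if x > 0
// out = a*(exp(x)-1) if x <= 0
public class EluSketch {
    public static double elu(double x) {
        return x > 0 ? x : Math.expm1(x); // expm1(x) = exp(x) - 1, a = 1
    }
    public static void main(String[] args) {
        System.out.println(elu(2.0));  // positive inputs pass through unchanged
        System.out.println(elu(-1.0)); // negative inputs saturate toward -1
    }
}
```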
SDVariable |
SDMath.embeddingLookup(SDVariable x,
SDVariable indices,
PartitionMode PartitionMode)
Looks up ids in a list of embedding tensors.
|
SDVariable |
SDMath.embeddingLookup(String name,
SDVariable x,
SDVariable indices,
PartitionMode PartitionMode)
Looks up ids in a list of embedding tensors.
|
SDVariable |
SDMath.entropy(SDVariable in,
int... dimensions)
Entropy reduction: -sum(x * log(x))
|
SDVariable |
SDMath.entropy(String name,
SDVariable in,
int... dimensions)
Entropy reduction: -sum(x * log(x))
|
SDVariable |
SDBaseOps.eq(SDVariable x,
double y)
Equals operation: elementwise x == y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.eq(SDVariable x,
SDVariable y)
Equal to operation: elementwise x == y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.eq(String name,
SDVariable x,
double y)
Equals operation: elementwise x == y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.eq(String name,
SDVariable x,
SDVariable y)
Equal to operation: elementwise x == y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDMath.erf(SDVariable x)
Element-wise Gaussian error function - out = erf(in)
|
SDVariable |
SDMath.erf(String name,
SDVariable x)
Element-wise Gaussian error function - out = erf(in)
|
SDVariable |
SDMath.erfc(SDVariable x)
Element-wise complementary Gaussian error function - out = erfc(in) = 1 - erf(in)
|
SDVariable |
SDMath.erfc(String name,
SDVariable x)
Element-wise complementary Gaussian error function - out = erfc(in) = 1 - erf(in)
|
SDVariable |
SDMath.euclideanDistance(SDVariable x,
SDVariable y,
int... dimensions)
Euclidean distance (l2 norm, l2 distance) reduction operation.
|
SDVariable |
SDMath.euclideanDistance(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Euclidean distance (l2 norm, l2 distance) reduction operation.
|
SDVariable |
SDMath.exp(SDVariable x)
Elementwise exponent function: out = exp(x) = 2.71828...^x
|
SDVariable |
SDMath.exp(String name,
SDVariable x)
Elementwise exponent function: out = exp(x) = 2.71828...^x
|
SDVariable |
SDBaseOps.expandDims(SDVariable x,
int axis)
Reshape the input by adding a 1 at the specified location.
For example, if input has shape [a, b], then output shape is: axis = 0: [1, a, b] axis = 1: [a, 1, b] axis = 2: [a, b, 1] |
SDVariable |
SDBaseOps.expandDims(String name,
SDVariable x,
int axis)
Reshape the input by adding a 1 at the specified location.
For example, if input has shape [a, b], then output shape is: axis = 0: [1, a, b] axis = 1: [a, 1, b] axis = 2: [a, b, 1] |
SDVariable |
SDMath.expm1(SDVariable x)
Elementwise exponent-minus-one function: out = exp(x) - 1.0 = 2.71828...^x - 1.0
|
SDVariable |
SDMath.expm1(String name,
SDVariable x)
Elementwise exponent-minus-one function: out = exp(x) - 1.0 = 2.71828...^x - 1.0
|
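The reason expm1 exists as a separate op (rather than just computing exp(x) - 1) is numerical: for |x| near zero, exp(x) rounds to 1 and the subtraction cancels away all significant digits. Java's standard library has the same pair, which illustrates the point:

```java
// Demonstrates why a fused expm1 is preferable to exp(x) - 1 near zero.
public class Expm1Sketch {
    public static void main(String[] args) {
        double x = 1e-12;
        double naive = Math.exp(x) - 1.0; // catastrophic cancellation: few correct digits
        double fused = Math.expm1(x);     // accurate to full double precision
        System.out.println(naive);
        System.out.println(fused);
    }
}
```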
SDVariable |
SDImage.extractImagePatches(SDVariable image,
int[] kSizes,
int[] strides,
int[] rates,
boolean sameMode)
Given an input image, extract out image patches (of size kSizes - h x w) and place them in the depth dimension.
|
SDVariable |
SDCNN.extractImagePatches(SDVariable input,
int kH,
int kW,
int sH,
int sW,
int rH,
int rW,
boolean sameMode)
Extract image patches
|
SDVariable |
SDImage.extractImagePatches(String name,
SDVariable image,
int[] kSizes,
int[] strides,
int[] rates,
boolean sameMode)
Given an input image, extract out image patches (of size kSizes - h x w) and place them in the depth dimension.
|
SDVariable |
SDCNN.extractImagePatches(String name,
SDVariable input,
int kH,
int kW,
int sH,
int sW,
int rH,
int rW,
boolean sameMode)
Extract image patches
|
SDVariable |
SDMath.eye(SDVariable rows)
As per eye(String, int) but with the number of rows specified as a scalar INDArray
|
SDVariable |
SDMath.eye(SDVariable rows,
SDVariable cols)
As per eye(int, int) but with the number of rows/columns specified as scalar INDArrays
|
SDVariable |
SDMath.eye(String name,
SDVariable rows)
As per eye(String, int) but with the number of rows specified as a scalar INDArray
|
SDVariable |
SDMath.eye(String name,
SDVariable rows,
SDVariable cols)
As per eye(int, int) but with the number of rows/columns specified as scalar INDArrays
|
SDVariable |
SDBaseOps.fill(SDVariable shape,
DataType dataType,
double value)
Generate an output variable with the specified (dynamic) shape with all elements set to the specified value
|
SDVariable |
SDBaseOps.fill(String name,
SDVariable shape,
DataType dataType,
double value)
Generate an output variable with the specified (dynamic) shape with all elements set to the specified value
|
SDVariable |
SDMath.firstIndex(SDVariable in,
Condition condition,
boolean keepDims,
int... dimensions)
First index reduction operation.
Returns a variable that contains the index of the first element that matches the specified condition (for each slice along the specified dimensions) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.firstIndex(SDVariable in,
Condition condition,
int... dimensions)
First index reduction operation.
Returns a variable that contains the index of the first element that matches the specified condition (for each slice along the specified dimensions) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.firstIndex(String name,
SDVariable in,
Condition condition,
boolean keepDims,
int... dimensions)
First index reduction operation.
Returns a variable that contains the index of the first element that matches the specified condition (for each slice along the specified dimensions) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.firstIndex(String name,
SDVariable in,
Condition condition,
int... dimensions)
First index reduction operation.
Returns a variable that contains the index of the first element that matches the specified condition (for each slice along the specified dimensions) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.floor(SDVariable x)
Element-wise floor function: out = floor(x).
Rounds each value down to the nearest integer value (if not already an integer) |
SDVariable |
SDMath.floor(String name,
SDVariable x)
Element-wise floor function: out = floor(x).
Rounds each value down to the nearest integer value (if not already an integer) |
SDVariable |
SDMath.floorDiv(SDVariable x,
SDVariable y)
Pairwise floor division operation, out = floor(x / y)
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.floorDiv(String name,
SDVariable x,
SDVariable y)
Pairwise floor division operation, out = floor(x / y)
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.floorMod(SDVariable x,
double value)
Scalar floor modulus operation
|
SDVariable |
SDMath.floorMod(SDVariable x,
SDVariable y)
Pairwise Modulus division operation
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.floorMod(String name,
SDVariable x,
double value)
Scalar floor modulus operation
|
SDVariable |
SDMath.floorMod(String name,
SDVariable x,
SDVariable y)
Pairwise Modulus division operation
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDBaseOps.gather(SDVariable df,
int[] indices,
int axis)
Gather slices from the input variable where the indices are specified as fixed int[] values.
Output shape is same as input shape, except for axis dimension, which has size equal to indices.length. |
SDVariable |
SDBaseOps.gather(SDVariable df,
SDVariable indices,
int axis)
Gather slices from the input variable where the indices are specified as dynamic array values.
Output shape is same as input shape, except for axis dimension, which has size equal to indices.length. |
SDVariable |
SDBaseOps.gather(String name,
SDVariable df,
int[] indices,
int axis)
Gather slices from the input variable where the indices are specified as fixed int[] values.
Output shape is same as input shape, except for axis dimension, which has size equal to indices.length. |
SDVariable |
SDBaseOps.gather(String name,
SDVariable df,
SDVariable indices,
int axis)
Gather slices from the input variable where the indices are specified as dynamic array values.
Output shape is same as input shape, except for axis dimension, which has size equal to indices.length. |
SDVariable |
SDBaseOps.gatherNd(SDVariable df,
SDVariable indices)
Gather slices from df with shape specified by indices.
|
SDVariable |
SDBaseOps.gatherNd(String name,
SDVariable df,
SDVariable indices)
Gather slices from df with shape specified by indices.
|
SDVariable |
SDNN.gelu(SDVariable x)
GELU activation function - Gaussian Error Linear Units
For more details, see Gaussian Error Linear Units (GELUs) - https://arxiv.org/abs/1606.08415 This method uses the sigmoid approximation |
SDVariable |
SDNN.gelu(String name,
SDVariable x)
GELU activation function - Gaussian Error Linear Units
For more details, see Gaussian Error Linear Units (GELUs) - https://arxiv.org/abs/1606.08415 This method uses the sigmoid approximation |
SDVariable |
SDRNN.gru(SDVariable x,
SDVariable hLast,
SDVariable Wx,
SDVariable Wh,
SDVariable biases)
The GRU operation.
|
SDVariable |
SDRNN.gru(String name,
SDVariable x,
SDVariable hLast,
SDVariable Wx,
SDVariable Wh,
SDVariable biases)
The GRU operation.
|
SDVariable[] |
SDRNN.gruCell(SDVariable x,
SDVariable hLast,
GRUWeights GRUWeights)
The GRU cell.
|
SDVariable[] |
SDRNN.gruCell(String[] names,
SDVariable x,
SDVariable hLast,
GRUWeights GRUWeights)
The GRU cell.
|
SDVariable |
SDBaseOps.gt(SDVariable x,
double y)
Greater than operation: elementwise x > y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.gt(SDVariable x,
SDVariable y)
Greater than operation: elementwise x > y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.gt(String name,
SDVariable x,
double y)
Greater than operation: elementwise x > y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.gt(String name,
SDVariable x,
SDVariable y)
Greater than operation: elementwise x > y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.gte(SDVariable x,
double y)
Greater than or equals operation: elementwise x >= y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.gte(SDVariable x,
SDVariable y)
Greater than or equal to operation: elementwise x >= y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.gte(String name,
SDVariable x,
double y)
Greater than or equals operation: elementwise x >= y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.gte(String name,
SDVariable x,
SDVariable y)
Greater than or equal to operation: elementwise x >= y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
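The broadcasting rule repeated through the comparison ops above (a [1,10] input paired with a [5,10] input yields a [5,10] output) can be sketched in plain Java for the gte case (illustrative class name and 2-D-only shapes, not the ND4J API):

```java
// Hypothetical sketch of NumPy-style broadcasting for SDBaseOps.gte:
// a [1][n] row x is reused against every row of an [m][n] matrix y,
// producing an [m][n] boolean result of x >= y.
public class BroadcastCompareSketch {
    public static boolean[][] gte(double[][] x, double[][] y) {
        int m = y.length, n = y[0].length;
        boolean[][] out = new boolean[m][n];
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++)
                out[i][j] = x[0][j] >= y[i][j]; // the size-1 dimension of x is broadcast over i
        return out;
    }
    public static void main(String[] args) {
        double[][] x = {{1, 2, 3}};            // shape [1,3]
        double[][] y = {{0, 2, 4}, {2, 2, 2}}; // shape [2,3]
        for (boolean[] row : gte(x, y))
            System.out.println(java.util.Arrays.toString(row));
    }
}
```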
SDVariable |
SDMath.hammingDistance(SDVariable x,
SDVariable y,
int... dimensions)
Hamming distance reduction operation.
|
SDVariable |
SDMath.hammingDistance(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Hamming distance reduction operation.
|
SDVariable |
SDNN.hardSigmoid(SDVariable x)
Element-wise hard sigmoid function:
out[i] = 0 if in[i] <= -2.5; out[i] = 0.2*in[i] + 0.5 if -2.5 < in[i] < 2.5; out[i] = 1 if in[i] >= 2.5 |
SDVariable |
SDNN.hardSigmoid(String name,
SDVariable x)
Element-wise hard sigmoid function:
out[i] = 0 if in[i] <= -2.5; out[i] = 0.2*in[i] + 0.5 if -2.5 < in[i] < 2.5; out[i] = 1 if in[i] >= 2.5 |
SDVariable |
SDNN.hardTanh(SDVariable x)
Element-wise hard tanh function:
out[i] = -1 if in[i] <= -1; out[i] = in[i] if -1 < in[i] < 1; out[i] = 1 if in[i] >= 1 |
SDVariable |
SDNN.hardTanh(String name,
SDVariable x)
Element-wise hard tanh function:
out[i] = -1 if in[i] <= -1; out[i] = in[i] if -1 < in[i] < 1; out[i] = 1 if in[i] >= 1 |
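Both piecewise activations above are cheap clamped linear maps; a plain-Java sketch of the documented formulas (illustrative class name, not the ND4J API):

```java
// Hypothetical sketch of the formulas documented for SDNN.hardSigmoid and
// SDNN.hardTanh: linear in the middle, clamped at the ends.
public class HardActivationsSketch {
    public static double hardSigmoid(double x) {
        if (x <= -2.5) return 0.0;
        if (x >= 2.5) return 1.0;
        return 0.2 * x + 0.5;
    }
    public static double hardTanh(double x) {
        return Math.max(-1.0, Math.min(1.0, x)); // clamp to [-1, 1]
    }
    public static void main(String[] args) {
        System.out.println(hardSigmoid(0.0)); // 0.5
        System.out.println(hardTanh(3.0));    // 1.0
    }
}
```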
SDVariable |
SDNN.hardTanhDerivative(SDVariable x)
Derivative (dOut/dIn) of the element-wise hard Tanh function - hardTanh(INDArray)
|
SDVariable |
SDNN.hardTanhDerivative(String name,
SDVariable x)
Derivative (dOut/dIn) of the element-wise hard Tanh function - hardTanh(INDArray)
|
SDVariable |
SDLoss.hingeLoss(SDVariable label,
SDVariable predictions,
SDVariable weights)
Hinge loss: a loss function used for training classifiers.
Implements L = max(0, 1 - t * predictions) where t is the label values after internally converting to {-1,1} from the user-specified {0,1}. |
SDVariable |
SDLoss.hingeLoss(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Hinge loss: a loss function used for training classifiers.
Implements L = max(0, 1 - t * predictions) where t is the label values after internally converting to {-1,1} from the user-specified {0,1}. |
SDVariable |
SDLoss.hingeLoss(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights)
Hinge loss: a loss function used for training classifiers.
Implements L = max(0, 1 - t * predictions) where t is the label values after internally converting to {-1,1} from the user-specified {0,1}. |
SDVariable |
SDLoss.hingeLoss(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Hinge loss: a loss function used for training classifiers.
Implements L = max(0, 1 - t * predictions) where t is the label values after internally converting to {-1,1} from the user-specified {0,1}. |
SDVariable |
SDImage.hsvToRgb(SDVariable input)
Converts an image from HSV to RGB format
|
SDVariable |
SDImage.hsvToRgb(String name,
SDVariable input)
Converting image from HSV to RGB format
|
SDVariable |
SDLoss.huberLoss(SDVariable label,
SDVariable predictions,
SDVariable weights,
double delta)
Huber loss function, used for robust regression.
|
SDVariable |
SDLoss.huberLoss(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double delta)
Huber loss function, used for robust regression.
|
SDVariable |
SDLoss.huberLoss(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
double delta)
Huber loss function, used for robust regression.
|
SDVariable |
SDLoss.huberLoss(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double delta)
Huber loss function, used for robust regression.
|
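The Huber loss named above is quadratic for small errors and linear beyond the delta threshold, which is what makes it robust to outliers. A per-element sketch in plain Java (illustrating the standard formula, not the ND4J API; the names are hypothetical):

```java
public class HuberLossExample {
    // Huber loss: 0.5*e^2 when |e| <= delta, otherwise
    // delta*(|e| - 0.5*delta), where e = prediction - label.
    static double huber(double label, double prediction, double delta) {
        double e = Math.abs(prediction - label);
        if (e <= delta) return 0.5 * e * e;
        return delta * (e - 0.5 * delta);
    }

    public static void main(String[] args) {
        System.out.println(huber(0.0, 0.5, 1.0)); // small error: quadratic region
        System.out.println(huber(0.0, 3.0, 1.0)); // large error: linear region
    }
}
```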
SDVariable |
SDMath.iamax(SDVariable in,
boolean keepDims,
int... dimensions)
Index of the max absolute value: argmax(abs(in))
see argmax(String, INDArray, boolean, int...) |
SDVariable |
SDMath.iamax(SDVariable in,
int... dimensions)
Index of the max absolute value: argmax(abs(in))
see argmax(String, INDArray, boolean, int...) |
SDVariable |
SDMath.iamax(String name,
SDVariable in,
boolean keepDims,
int... dimensions)
Index of the max absolute value: argmax(abs(in))
see argmax(String, INDArray, boolean, int...) |
SDVariable |
SDMath.iamax(String name,
SDVariable in,
int... dimensions)
Index of the max absolute value: argmax(abs(in))
see argmax(String, INDArray, boolean, int...) |
SDVariable |
SDMath.iamin(SDVariable in,
boolean keepDims,
int... dimensions)
Index of the min absolute value: argmin(abs(in))
see argmin(String, INDArray, boolean, int...) |
SDVariable |
SDMath.iamin(SDVariable in,
int... dimensions)
Index of the min absolute value: argmin(abs(in))
see argmin(String, INDArray, boolean, int...) |
SDVariable |
SDMath.iamin(String name,
SDVariable in,
boolean keepDims,
int... dimensions)
Index of the min absolute value: argmin(abs(in))
see argmin(String, INDArray, boolean, int...) |
SDVariable |
SDMath.iamin(String name,
SDVariable in,
int... dimensions)
Index of the min absolute value: argmin(abs(in))
see argmin(String, INDArray, boolean, int...) |
SDVariable |
SDBaseOps.identity(SDVariable input)
Elementwise identity operation: out = x
|
SDVariable |
SDBaseOps.identity(String name,
SDVariable input)
Elementwise identity operation: out = x
|
SDVariable |
SDCNN.im2Col(SDVariable in,
Conv2DConfig Conv2DConfig)
im2col operation for use in 2D convolution operations.
|
SDVariable |
SDCNN.im2Col(String name,
SDVariable in,
Conv2DConfig Conv2DConfig)
im2col operation for use in 2D convolution operations.
|
SDVariable |
SDImage.imageResize(SDVariable input,
SDVariable size,
boolean preserveAspectRatio,
boolean antialis,
ImageResizeMethod ImageResizeMethod)
Resize images to size using the specified method.
|
SDVariable |
SDImage.imageResize(SDVariable input,
SDVariable size,
ImageResizeMethod ImageResizeMethod)
Resize images to size using the specified method.
|
SDVariable |
SDImage.imageResize(String name,
SDVariable input,
SDVariable size,
boolean preserveAspectRatio,
boolean antialis,
ImageResizeMethod ImageResizeMethod)
Resize images to size using the specified method.
|
SDVariable |
SDImage.imageResize(String name,
SDVariable input,
SDVariable size,
ImageResizeMethod ImageResizeMethod)
Resize images to size using the specified method.
|
SDVariable |
SDBaseOps.invertPermutation(SDVariable input)
Compute the inverse permutation indices for a permutation operation
Example: if input is [2, 0, 1] then output is [1, 2, 0] The idea is that x.permute(input).permute(invertPermutation(input)) == x |
SDVariable |
SDBaseOps.invertPermutation(String name,
SDVariable input)
Compute the inverse permutation indices for a permutation operation
Example: if input is [2, 0, 1] then output is [1, 2, 0] The idea is that x.permute(input).permute(invertPermutation(input)) == x |
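The inverse-permutation rule above (inv[p[i]] = i) can be sketched in plain Java, reproducing the [2, 0, 1] -> [1, 2, 0] example from the description (an illustration only, not the ND4J API; the names are hypothetical):

```java
import java.util.Arrays;

public class InvertPermutationExample {
    // inv[p[i]] = i, so permuting by p and then by inv restores the order.
    static int[] invertPermutation(int[] p) {
        int[] inv = new int[p.length];
        for (int i = 0; i < p.length; i++) {
            inv[p[i]] = i;
        }
        return inv;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(invertPermutation(new int[]{2, 0, 1})));
    }
}
```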
SDVariable |
SDMath.isFinite(SDVariable x)
Is finite operation: elementwise isFinite(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isFinite(String name,
SDVariable x)
Is finite operation: elementwise isFinite(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isInfinite(SDVariable x)
Is infinite operation: elementwise isInfinite(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isInfinite(String name,
SDVariable x)
Is infinite operation: elementwise isInfinite(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isMax(SDVariable x)
Is maximum operation: elementwise x == max(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isMax(String name,
SDVariable x)
Is maximum operation: elementwise x == max(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isNaN(SDVariable x)
Is Not a Number operation: elementwise isNaN(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isNaN(String name,
SDVariable x)
Is Not a Number operation: elementwise isNaN(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isNonDecreasing(SDVariable x)
Is the array non-decreasing?
An array is non-decreasing if for every valid i, x[i] <= x[i+1]. |
SDVariable |
SDMath.isNonDecreasing(String name,
SDVariable x)
Is the array non-decreasing?
An array is non-decreasing if for every valid i, x[i] <= x[i+1]. |
SDVariable |
SDBaseOps.isNumericTensor(SDVariable x)
Is the given variable a numeric tensor? In the current version of ND4J/SameDiff, this always returns true/1
|
SDVariable |
SDBaseOps.isNumericTensor(String name,
SDVariable x)
Is the given variable a numeric tensor? In the current version of ND4J/SameDiff, this always returns true/1
|
static boolean |
SDValidation.isSameType(SDVariable[] x) |
static boolean |
SDValidation.isSameType(SDVariable x,
SDVariable y) |
SDVariable |
SDMath.isStrictlyIncreasing(SDVariable x)
Is the array strictly increasing?
An array is strictly increasing if for every valid i, x[i] < x[i+1]. |
SDVariable |
SDMath.isStrictlyIncreasing(String name,
SDVariable x)
Is the array strictly increasing?
An array is strictly increasing if for every valid i, x[i] < x[i+1]. |
SDVariable |
SDMath.jaccardDistance(SDVariable x,
SDVariable y,
int... dimensions)
Jaccard similarity reduction operation.
|
SDVariable |
SDMath.jaccardDistance(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Jaccard similarity reduction operation.
|
SDVariable |
SDLoss.l2Loss(SDVariable var)
L2 loss: 1/2 * sum(x^2)
|
SDVariable |
SDLoss.l2Loss(String name,
SDVariable var)
L2 loss: 1/2 * sum(x^2)
|
SDVariable |
SDMath.lastIndex(SDVariable in,
Condition condition,
boolean keepDims,
int... dimensions)
Last index reduction operation.
Returns a variable that contains the index of the last element that matches the specified condition (for each slice along the specified dimensions) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.lastIndex(SDVariable in,
Condition condition,
int... dimensions)
Last index reduction operation.
Returns a variable that contains the index of the last element that matches the specified condition (for each slice along the specified dimensions) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.lastIndex(String name,
SDVariable in,
Condition condition,
boolean keepDims,
int... dimensions)
Last index reduction operation.
Returns a variable that contains the index of the last element that matches the specified condition (for each slice along the specified dimensions) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.lastIndex(String name,
SDVariable in,
Condition condition,
int... dimensions)
Last index reduction operation.
Returns a variable that contains the index of the last element that matches the specified condition (for each slice along the specified dimensions) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDNN.layerNorm(SDVariable input,
SDVariable gain,
boolean channelsFirst,
int... dimensions)
Apply Layer Normalization
y = gain * standardize(x) + bias |
SDVariable |
SDNN.layerNorm(SDVariable input,
SDVariable gain,
SDVariable bias,
boolean channelsFirst,
int... dimensions)
Apply Layer Normalization
y = gain * standardize(x) + bias |
SDVariable |
SDNN.layerNorm(String name,
SDVariable input,
SDVariable gain,
boolean channelsFirst,
int... dimensions)
Apply Layer Normalization
y = gain * standardize(x) + bias |
SDVariable |
SDNN.layerNorm(String name,
SDVariable input,
SDVariable gain,
SDVariable bias,
boolean channelsFirst,
int... dimensions)
Apply Layer Normalization
y = gain * standardize(x) + bias |
SDVariable |
SDNN.leakyRelu(SDVariable x,
double alpha)
Element-wise leaky ReLU function:
out = x if x >= 0.0 out = alpha * x if x < 0.0 Alpha value is most commonly set to 0.01 |
SDVariable |
SDNN.leakyRelu(String name,
SDVariable x,
double alpha)
Element-wise leaky ReLU function:
out = x if x >= 0.0 out = alpha * x if x < 0.0 Alpha value is most commonly set to 0.01 |
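The leaky ReLU definition above is a one-liner per element; a plain-Java sketch of the math (not the ND4J API; the names are hypothetical):

```java
public class LeakyReluExample {
    // Leaky ReLU: pass positive values through, scale negative
    // values by a small slope alpha (commonly 0.01).
    static double leakyRelu(double x, double alpha) {
        return x >= 0.0 ? x : alpha * x;
    }

    public static void main(String[] args) {
        System.out.println(leakyRelu(3.0, 0.01));
        System.out.println(leakyRelu(-2.0, 0.01));
    }
}
```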
SDVariable |
SDNN.leakyReluDerivative(SDVariable x,
double alpha)
Leaky ReLU derivative: dOut/dIn given input.
|
SDVariable |
SDNN.leakyReluDerivative(String name,
SDVariable x,
double alpha)
Leaky ReLU derivative: dOut/dIn given input.
|
SDVariable |
SDBitwise.leftShift(SDVariable x,
SDVariable y)
Bitwise left shift operation.
|
SDVariable |
SDBitwise.leftShift(String name,
SDVariable x,
SDVariable y)
Bitwise left shift operation.
|
SDVariable |
SDBitwise.leftShiftCyclic(SDVariable x,
SDVariable y)
Bitwise left cyclical shift operation.
|
SDVariable |
SDBitwise.leftShiftCyclic(String name,
SDVariable x,
SDVariable y)
Bitwise left cyclical shift operation.
|
SDVariable |
SDNN.linear(SDVariable input,
SDVariable weights,
SDVariable bias)
Linear layer operation: out = mmul(in,w) + bias
Note that bias array is optional |
SDVariable |
SDNN.linear(String name,
SDVariable input,
SDVariable weights,
SDVariable bias)
Linear layer operation: out = mmul(in,w) + bias
Note that bias array is optional |
SDVariable |
SDBaseOps.linspace(SDVariable start,
SDVariable stop,
SDVariable number,
DataType dataType)
Create a new 1d array with values evenly spaced between values 'start' and 'stop'
For example, linspace(start=3.0, stop=4.0, number=3) will generate [3.0, 3.5, 4.0] |
SDVariable |
SDBaseOps.linspace(String name,
SDVariable start,
SDVariable stop,
SDVariable number,
DataType dataType)
Create a new 1d array with values evenly spaced between values 'start' and 'stop'
For example, linspace(start=3.0, stop=4.0, number=3) will generate [3.0, 3.5, 4.0] |
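The linspace behavior described above (evenly spaced values including both endpoints) can be sketched in plain Java, reproducing the [3.0, 3.5, 4.0] example (an illustration only, not the ND4J API; the names are hypothetical, and the sketch assumes number >= 2):

```java
import java.util.Arrays;

public class LinspaceExample {
    // number evenly spaced values from start to stop, inclusive.
    static double[] linspace(double start, double stop, int number) {
        double[] out = new double[number];
        double step = (stop - start) / (number - 1);
        for (int i = 0; i < number; i++) {
            out[i] = start + i * step;
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(linspace(3.0, 4.0, 3)));
    }
}
```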
SDVariable[] |
SDMath.listDiff(SDVariable x,
SDVariable y)
Calculates difference between inputs X and Y.
|
SDVariable[] |
SDMath.listDiff(String[] names,
SDVariable x,
SDVariable y)
Calculates difference between inputs X and Y.
|
SDVariable |
SDCNN.localResponseNormalization(SDVariable input,
LocalResponseNormalizationConfig LocalResponseNormalizationConfig)
2D convolution layer operation - local response normalization
|
SDVariable |
SDCNN.localResponseNormalization(String name,
SDVariable input,
LocalResponseNormalizationConfig LocalResponseNormalizationConfig)
2D convolution layer operation - local response normalization
|
SDVariable |
SDMath.log(SDVariable x)
Element-wise logarithm function (base e - natural logarithm): out = log(x)
|
SDVariable |
SDMath.log(SDVariable x,
double base)
Element-wise logarithm function (with specified base): out = log_{base}(x)
|
SDVariable |
SDMath.log(String name,
SDVariable x)
Element-wise logarithm function (base e - natural logarithm): out = log(x)
|
SDVariable |
SDMath.log(String name,
SDVariable x,
double base)
Element-wise logarithm function (with specified base): out = log_{base}(x)
|
SDVariable |
SDMath.log1p(SDVariable x)
Elementwise natural logarithm function: out = log_e (1 + x)
|
SDVariable |
SDMath.log1p(String name,
SDVariable x)
Elementwise natural logarithm function: out = log_e (1 + x)
|
SDVariable |
SDLinalg.logdet(SDVariable input)
Calculates log of determinant.
|
SDVariable |
SDLinalg.logdet(String name,
SDVariable input)
Calculates log of determinant.
|
SDVariable |
SDMath.logEntropy(SDVariable in,
int... dimensions)
Log entropy reduction: log(-sum(x * log(x)))
|
SDVariable |
SDMath.logEntropy(String name,
SDVariable in,
int... dimensions)
Log entropy reduction: log(-sum(x * log(x)))
|
SDVariable |
SDLoss.logLoss(SDVariable label,
SDVariable predictions)
Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification.
|
SDVariable |
SDLoss.logLoss(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double epsilon)
Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification.
|
SDVariable |
SDLoss.logLoss(String name,
SDVariable label,
SDVariable predictions)
Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification.
|
SDVariable |
SDLoss.logLoss(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double epsilon)
Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification.
|
SDVariable |
SDLoss.logPoisson(SDVariable label,
SDVariable predictions,
SDVariable weights,
boolean full)
Log poisson loss: a loss function used for training classifiers.
Implements L = exp(c) - z * c where c is log(predictions) and z is labels. |
SDVariable |
SDLoss.logPoisson(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
boolean full)
Log poisson loss: a loss function used for training classifiers.
Implements L = exp(c) - z * c where c is log(predictions) and z is labels. |
SDVariable |
SDLoss.logPoisson(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
boolean full)
Log poisson loss: a loss function used for training classifiers.
Implements L = exp(c) - z * c where c is log(predictions) and z is labels. |
SDVariable |
SDLoss.logPoisson(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
boolean full)
Log poisson loss: a loss function used for training classifiers.
Implements L = exp(c) - z * c where c is log(predictions) and z is labels. |
SDVariable |
SDNN.logSigmoid(SDVariable x)
Element-wise log sigmoid function: out[i] = log(sigmoid(in[i]))
|
SDVariable |
SDNN.logSigmoid(String name,
SDVariable x)
Element-wise log sigmoid function: out[i] = log(sigmoid(in[i]))
|
SDVariable |
SDNN.logSoftmax(SDVariable x)
Log softmax activation
|
SDVariable |
SDNN.logSoftmax(SDVariable x,
int dimension)
Log softmax activation
|
SDVariable |
SDNN.logSoftmax(String name,
SDVariable x)
Log softmax activation
|
SDVariable |
SDNN.logSoftmax(String name,
SDVariable x,
int dimension)
Log softmax activation
|
SDVariable |
SDMath.logSumExp(SDVariable input,
int... dimensions)
Log-sum-exp reduction (optionally along dimension).
Computes log(sum(exp(x))) |
SDVariable |
SDMath.logSumExp(String name,
SDVariable input,
int... dimensions)
Log-sum-exp reduction (optionally along dimension).
Computes log(sum(exp(x))) |
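Computing log(sum(exp(x))) naively overflows for large inputs; the standard trick is to subtract the maximum first. A plain-Java sketch of that stable formulation (not the ND4J implementation; the names are hypothetical):

```java
public class LogSumExpExample {
    // Numerically stable log-sum-exp: m + log(sum(exp(x - m)))
    // where m = max(x), so the largest exponent is exp(0) = 1.
    static double logSumExp(double[] x) {
        double m = Double.NEGATIVE_INFINITY;
        for (double v : x) m = Math.max(m, v);
        double s = 0.0;
        for (double v : x) s += Math.exp(v - m);
        return m + Math.log(s);
    }

    public static void main(String[] args) {
        System.out.println(logSumExp(new double[]{0.0, 0.0}));       // log(2)
        System.out.println(logSumExp(new double[]{1000.0, 1000.0})); // no overflow
    }
}
```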
SDVariable |
SDRNN.lstmblock(SDVariable x,
LSTMWeights LSTMWeights,
LSTMConfiguration LSTMConfiguration)
The LSTM block
|
SDVariable |
SDRNN.lstmblock(SDVariable maxTSLength,
SDVariable x,
SDVariable cLast,
SDVariable yLast,
LSTMWeights LSTMWeights,
LSTMConfiguration LSTMConfiguration)
The LSTM block
|
SDVariable |
SDRNN.lstmblock(String name,
SDVariable x,
LSTMWeights LSTMWeights,
LSTMConfiguration LSTMConfiguration)
The LSTM block
|
SDVariable |
SDRNN.lstmblock(String name,
SDVariable maxTSLength,
SDVariable x,
SDVariable cLast,
SDVariable yLast,
LSTMWeights LSTMWeights,
LSTMConfiguration LSTMConfiguration)
The LSTM block
|
SDVariable[] |
SDRNN.lstmCell(SDVariable x,
SDVariable cLast,
SDVariable yLast,
LSTMWeights LSTMWeights,
LSTMConfiguration LSTMConfiguration)
The LSTM cell.
|
SDVariable[] |
SDRNN.lstmCell(String[] names,
SDVariable x,
SDVariable cLast,
SDVariable yLast,
LSTMWeights LSTMWeights,
LSTMConfiguration LSTMConfiguration)
The LSTM cell.
|
SDVariable[] |
SDRNN.lstmLayer(SDVariable x,
LSTMLayerWeights LSTMLayerWeights,
LSTMLayerConfig LSTMLayerConfig)
Long Short-Term Memory layer - Hochreiter 1997.
Supports the following data formats: for unidirectional: TNS: shapes [timeLength, numExamples, inOutSize] NST: shapes [numExamples, inOutSize, timeLength] NTS: shapes [numExamples, timeLength, inOutSize] for bidirectional: T2NS: shapes [timeLength, 2, numExamples, inOutSize] (for ONNX) Supports the following direction modes: FWD: forward BWD: backward BIDIR_SUM: bidirectional sum BIDIR_CONCAT: bidirectional concat BIDIR_EXTRA_DIM: bidirectional extra output dim (in conjunction with format dataFormat - T2NS) You may use different gate configurations: specify gate/cell/out alpha/beta and the activations for gate/cell/out described in the activations enum ("RELU","SIGMOID","AFFINE","LEAKY_RELU","THRESHHOLD_RELU","SCALED_TAHN","HARD_SIGMOID","ELU","SOFTSIGN","SOFTPLUS") This layer also supports MKLDNN (DNNL) and cuDNN acceleration |
SDVariable[] |
SDRNN.lstmLayer(SDVariable x,
SDVariable cLast,
SDVariable yLast,
SDVariable maxTSLength,
LSTMLayerWeights LSTMLayerWeights,
LSTMLayerConfig LSTMLayerConfig)
Long Short-Term Memory layer - Hochreiter 1997.
Supports the following data formats: for unidirectional: TNS: shapes [timeLength, numExamples, inOutSize] NST: shapes [numExamples, inOutSize, timeLength] NTS: shapes [numExamples, timeLength, inOutSize] for bidirectional: T2NS: shapes [timeLength, 2, numExamples, inOutSize] (for ONNX) Supports the following direction modes: FWD: forward BWD: backward BIDIR_SUM: bidirectional sum BIDIR_CONCAT: bidirectional concat BIDIR_EXTRA_DIM: bidirectional extra output dim (in conjunction with format dataFormat - T2NS) You may use different gate configurations: specify gate/cell/out alpha/beta and the activations for gate/cell/out described in the activations enum ("RELU","SIGMOID","AFFINE","LEAKY_RELU","THRESHHOLD_RELU","SCALED_TAHN","HARD_SIGMOID","ELU","SOFTSIGN","SOFTPLUS") This layer also supports MKLDNN (DNNL) and cuDNN acceleration |
SDVariable[] |
SDRNN.lstmLayer(String[] names,
SDVariable x,
LSTMLayerWeights LSTMLayerWeights,
LSTMLayerConfig LSTMLayerConfig)
Long Short-Term Memory layer - Hochreiter 1997.
Supports the following data formats: for unidirectional: TNS: shapes [timeLength, numExamples, inOutSize] NST: shapes [numExamples, inOutSize, timeLength] NTS: shapes [numExamples, timeLength, inOutSize] for bidirectional: T2NS: shapes [timeLength, 2, numExamples, inOutSize] (for ONNX) Supports the following direction modes: FWD: forward BWD: backward BIDIR_SUM: bidirectional sum BIDIR_CONCAT: bidirectional concat BIDIR_EXTRA_DIM: bidirectional extra output dim (in conjunction with format dataFormat - T2NS) You may use different gate configurations: specify gate/cell/out alpha/beta and the activations for gate/cell/out described in the activations enum ("RELU","SIGMOID","AFFINE","LEAKY_RELU","THRESHHOLD_RELU","SCALED_TAHN","HARD_SIGMOID","ELU","SOFTSIGN","SOFTPLUS") This layer also supports MKLDNN (DNNL) and cuDNN acceleration |
SDVariable[] |
SDRNN.lstmLayer(String[] names,
SDVariable x,
SDVariable cLast,
SDVariable yLast,
SDVariable maxTSLength,
LSTMLayerWeights LSTMLayerWeights,
LSTMLayerConfig LSTMLayerConfig)
Long Short-Term Memory layer - Hochreiter 1997.
Supports the following data formats: for unidirectional: TNS: shapes [timeLength, numExamples, inOutSize] NST: shapes [numExamples, inOutSize, timeLength] NTS: shapes [numExamples, timeLength, inOutSize] for bidirectional: T2NS: shapes [timeLength, 2, numExamples, inOutSize] (for ONNX) Supports the following direction modes: FWD: forward BWD: backward BIDIR_SUM: bidirectional sum BIDIR_CONCAT: bidirectional concat BIDIR_EXTRA_DIM: bidirectional extra output dim (in conjunction with format dataFormat - T2NS) You may use different gate configurations: specify gate/cell/out alpha/beta and the activations for gate/cell/out described in the activations enum ("RELU","SIGMOID","AFFINE","LEAKY_RELU","THRESHHOLD_RELU","SCALED_TAHN","HARD_SIGMOID","ELU","SOFTSIGN","SOFTPLUS") This layer also supports MKLDNN (DNNL) and cuDNN acceleration |
SDVariable |
SDLinalg.lstsq(SDVariable matrix,
SDVariable rhs,
double l2_reguralizer)
Solver for linear least squares problems.
|
SDVariable |
SDLinalg.lstsq(SDVariable matrix,
SDVariable rhs,
double l2_reguralizer,
boolean fast)
Solver for linear least squares problems.
|
SDVariable |
SDLinalg.lstsq(String name,
SDVariable matrix,
SDVariable rhs,
double l2_reguralizer)
Solver for linear least squares problems.
|
SDVariable |
SDLinalg.lstsq(String name,
SDVariable matrix,
SDVariable rhs,
double l2_reguralizer,
boolean fast)
Solver for linear least squares problems.
|
SDVariable |
SDBaseOps.lt(SDVariable x,
double y)
Less than operation: elementwise x < y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.lt(SDVariable x,
SDVariable y)
Less than operation: elementwise x < y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.lt(String name,
SDVariable x,
double y)
Less than operation: elementwise x < y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.lt(String name,
SDVariable x,
SDVariable y)
Less than operation: elementwise x < y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.lte(SDVariable x,
double y)
Less than or equals operation: elementwise x <= y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.lte(SDVariable x,
SDVariable y)
Less than or equal to operation: elementwise x <= y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.lte(String name,
SDVariable x,
double y)
Less than or equals operation: elementwise x <= y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.lte(String name,
SDVariable x,
SDVariable y)
Less than or equal to operation: elementwise x <= y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDLinalg.lu(SDVariable input)
Computes LU decomposition.
|
SDVariable |
SDLinalg.lu(String name,
SDVariable input)
Computes LU decomposition.
|
SDVariable |
SDMath.manhattanDistance(SDVariable x,
SDVariable y,
int... dimensions)
Manhattan distance (l1 norm, l1 distance) reduction operation.
|
SDVariable |
SDMath.manhattanDistance(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Manhattan distance (l1 norm, l1 distance) reduction operation.
|
SDVariable |
SDBaseOps.matchCondition(SDVariable in,
Condition condition)
Returns a boolean mask of equal shape to the input, where the condition is satisfied - value 1 where satisfied, 0 otherwise
|
SDVariable |
SDBaseOps.matchCondition(String name,
SDVariable in,
Condition condition)
Returns a boolean mask of equal shape to the input, where the condition is satisfied - value 1 where satisfied, 0 otherwise
|
SDVariable |
SDBaseOps.matchConditionCount(SDVariable in,
Condition condition)
Returns a count of the number of elements that satisfy the condition
|
SDVariable |
SDBaseOps.matchConditionCount(SDVariable in,
Condition condition,
boolean keepDim,
int... dimensions)
Returns a count of the number of elements that satisfy the condition (for each slice along the specified dimensions)
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.matchConditionCount(SDVariable in,
Condition condition,
int... dimensions)
Returns a count of the number of elements that satisfy the condition (for each slice along the specified dimensions)
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.matchConditionCount(String name,
SDVariable in,
Condition condition)
Returns a count of the number of elements that satisfy the condition
|
SDVariable |
SDBaseOps.matchConditionCount(String name,
SDVariable in,
Condition condition,
boolean keepDim,
int... dimensions)
Returns a count of the number of elements that satisfy the condition (for each slice along the specified dimensions)
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.matchConditionCount(String name,
SDVariable in,
Condition condition,
int... dimensions)
Returns a count of the number of elements that satisfy the condition (for each slice along the specified dimensions)
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDLinalg.matmul(SDVariable a,
SDVariable b)
Performs matrix multiplication on input tensors.
|
SDVariable |
SDLinalg.matmul(String name,
SDVariable a,
SDVariable b)
Performs matrix multiplication on input tensors.
|
SDVariable[] |
SDLinalg.matrixBandPart(SDVariable input,
int minLower,
int maxUpper)
Copy a tensor, setting everything outside a central band in each innermost matrix to zero.
|
SDVariable[] |
SDLinalg.matrixBandPart(String[] names,
SDVariable input,
int minLower,
int maxUpper)
Copy a tensor, setting everything outside a central band in each innermost matrix to zero.
|
SDVariable |
SDMath.matrixDeterminant(SDVariable in)
Matrix determinant op.
|
SDVariable |
SDMath.matrixDeterminant(String name,
SDVariable in)
Matrix determinant op.
|
SDVariable |
SDMath.matrixInverse(SDVariable in)
Matrix inverse op.
|
SDVariable |
SDMath.matrixInverse(String name,
SDVariable in)
Matrix inverse op.
|
SDVariable |
SDBaseOps.max(SDVariable x,
boolean keepDims,
int... dimensions)
Max array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.max(SDVariable x,
int... dimensions)
Max array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.max(SDVariable first,
SDVariable second)
Element-wise maximum operation: out[i] = max(first[i], second[i])
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.max(SDVariable x,
SDVariable y)
Pairwise max operation, out = max(x, y)
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDBaseOps.max(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Max array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.max(String name,
SDVariable x,
int... dimensions)
Max array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.max(String name,
SDVariable first,
SDVariable second)
Element-wise maximum operation: out[i] = max(first[i], second[i])
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.max(String name,
SDVariable x,
SDVariable y)
Pairwise max operation, out = max(x, y)
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDCNN.maxPooling2d(SDVariable input,
Pooling2DConfig Pooling2DConfig)
2D Convolution layer operation - max pooling 2d
|
SDVariable |
SDCNN.maxPooling2d(String name,
SDVariable input,
Pooling2DConfig Pooling2DConfig)
2D Convolution layer operation - max pooling 2d
|
SDVariable |
SDCNN.maxPooling3d(SDVariable input,
Pooling3DConfig Pooling3DConfig)
3D convolution layer operation - max pooling 3d operation.
|
SDVariable |
SDCNN.maxPooling3d(String name,
SDVariable input,
Pooling3DConfig Pooling3DConfig)
3D convolution layer operation - max pooling 3d operation.
|
SDVariable[] |
SDCNN.maxPoolWithArgmax(SDVariable input,
Pooling2DConfig Pooling2DConfig)
2D Convolution layer operation - Max pooling on the input and outputs both max values and indices
|
SDVariable[] |
SDCNN.maxPoolWithArgmax(String[] names,
SDVariable input,
Pooling2DConfig Pooling2DConfig)
2D Convolution layer operation - Max pooling on the input and outputs both max values and indices
|
SDVariable |
SDBaseOps.mean(SDVariable x,
boolean keepDims,
int... dimensions)
Mean (average) array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.mean(SDVariable x,
int... dimensions)
Mean (average) array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.mean(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Mean (average) array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.mean(String name,
SDVariable x,
int... dimensions)
Mean (average) array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDLoss.meanPairwiseSquaredError(SDVariable label,
SDVariable predictions,
SDVariable weights)
Mean pairwise squared error.
MPWSE loss calculates the difference between pairs of consecutive elements in the predictions and labels arrays. For example, if predictions = [p0, p1, p2] and labels are [l0, l1, l2] then MPWSE is: [((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3 |
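The MPWSE formula above can be sketched directly in NumPy (this mirrors the 3-element example in the description; it is not the SameDiff implementation):

```python
import numpy as np

predictions = np.array([1.0, 2.0, 4.0])
labels = np.array([1.0, 1.0, 1.0])

# All unordered pairs (i, j), i < j, per the formula in the description
pairs = [(0, 1), (0, 2), (1, 2)]
terms = [((predictions[i] - predictions[j]) - (labels[i] - labels[j])) ** 2
         for i, j in pairs]
mpwse = sum(terms) / len(pairs)  # [1 + 9 + 4] / 3
```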
SDVariable |
SDLoss.meanPairwiseSquaredError(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Mean pairwise squared error.
MPWSE loss calculates the difference between pairs of consecutive elements in the predictions and labels arrays. For example, if predictions = [p0, p1, p2] and labels are [l0, l1, l2] then MPWSE is: [((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3 |
SDVariable |
SDLoss.meanPairwiseSquaredError(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights)
Mean pairwise squared error.
MPWSE loss calculates the difference between pairs of consecutive elements in the predictions and labels arrays. For example, if predictions = [p0, p1, p2] and labels are [l0, l1, l2] then MPWSE is: [((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3 |
SDVariable |
SDLoss.meanPairwiseSquaredError(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Mean pairwise squared error.
MPWSE loss calculates the difference between pairs of consecutive elements in the predictions and labels arrays. For example, if predictions = [p0, p1, p2] and labels are [l0, l1, l2] then MPWSE is: [((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3 |
SDVariable |
SDLoss.meanSquaredError(SDVariable label,
SDVariable predictions,
SDVariable weights)
Mean squared error loss function.
|
SDVariable |
SDLoss.meanSquaredError(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Mean squared error loss function.
|
SDVariable |
SDLoss.meanSquaredError(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights)
Mean squared error loss function.
|
SDVariable |
SDLoss.meanSquaredError(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Mean squared error loss function.
|
SDVariable |
SDBaseOps.merge(SDVariable x,
SDVariable y)
The merge operation is a control operation that forwards either of its inputs to the output, whichever becomes available first. |
SDVariable |
SDBaseOps.merge(String name,
SDVariable x,
SDVariable y)
The merge operation is a control operation that forwards either of its inputs to the output, whichever becomes available first. |
SDVariable |
SDMath.mergeAdd(SDVariable... inputs)
Merge add function: merges an arbitrary number of equal shaped arrays using element-wise addition:
out = sum_i in[i] |
SDVariable |
SDMath.mergeAdd(String name,
SDVariable... inputs)
Merge add function: merges an arbitrary number of equal shaped arrays using element-wise addition:
out = sum_i in[i] |
SDVariable |
SDMath.mergeAvg(SDVariable... inputs)
Merge average function: merges an arbitrary number of equal shaped arrays using element-wise mean operation:
out = mean_i in[i] |
SDVariable |
SDMath.mergeAvg(String name,
SDVariable... inputs)
Merge average function: merges an arbitrary number of equal shaped arrays using element-wise mean operation:
out = mean_i in[i] |
SDVariable |
SDMath.mergeMax(SDVariable... inputs)
Merge max function: merges an arbitrary number of equal shaped arrays using element-wise maximum operation:
out = max_i in[i] |
SDVariable |
SDMath.mergeMax(String name,
SDVariable... inputs)
Merge max function: merges an arbitrary number of equal shaped arrays using element-wise maximum operation:
out = max_i in[i] |
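The merge reductions (mergeAdd, mergeAvg, mergeMax) combine N equal-shaped arrays element-wise. A NumPy sketch of mergeMax's semantics (not the SameDiff API):

```python
import numpy as np

a = np.array([1.0, 5.0, 3.0])
b = np.array([4.0, 2.0, 6.0])
c = np.array([0.0, 7.0, 1.0])
# Element-wise max across all inputs: out = max_i in[i]
merged = np.maximum.reduce([a, b, c])
```

mergeAdd and mergeAvg correspond to `np.add.reduce` (i.e. `sum`) and `np.mean` over the stacked inputs.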
SDVariable |
SDMath.mergeMaxIndex(SDVariable... x)
Return an array of the indices of the max elements along the tensor dimensions
|
SDVariable |
SDMath.mergeMaxIndex(SDVariable[] x,
DataType dataType)
Return an array of the indices of the max elements along the tensor dimensions
|
SDVariable |
SDMath.mergeMaxIndex(String name,
SDVariable... x)
Return an array of the indices of the max elements along the tensor dimensions
|
SDVariable |
SDMath.mergeMaxIndex(String name,
SDVariable[] x,
DataType dataType)
Return an array of the indices of the max elements along the tensor dimensions
|
SDVariable[] |
SDMath.meshgrid(SDVariable[] inputs,
boolean cartesian)
Broadcasts parameters for evaluation on an N-D grid.
|
SDVariable[] |
SDMath.meshgrid(String[] names,
SDVariable[] inputs,
boolean cartesian)
Broadcasts parameters for evaluation on an N-D grid.
|
SDVariable |
SDBaseOps.min(SDVariable x,
boolean keepDims,
int... dimensions)
Minimum array reduction operation, optionally along specified dimensions.
|
SDVariable |
SDBaseOps.min(SDVariable x,
int... dimensions)
Minimum array reduction operation, optionally along specified dimensions.
|
SDVariable |
SDBaseOps.min(SDVariable first,
SDVariable second)
Element-wise minimum operation: out[i] = min(first[i], second[i])
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.min(SDVariable x,
SDVariable y)
Pairwise min operation, out = min(x, y)
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDBaseOps.min(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Minimum array reduction operation, optionally along specified dimensions.
|
SDVariable |
SDBaseOps.min(String name,
SDVariable x,
int... dimensions)
Minimum array reduction operation, optionally along specified dimensions.
|
SDVariable |
SDBaseOps.min(String name,
SDVariable first,
SDVariable second)
Element-wise minimum operation: out[i] = min(first[i], second[i])
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.min(String name,
SDVariable x,
SDVariable y)
Pairwise min operation, out = min(x, y)
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDBaseOps.mmul(SDVariable x,
SDVariable y)
Matrix multiplication: out = mmul(x,y)
Supports specifying transpose argument to perform operation such as mmul(a^T, b), etc. |
SDVariable |
SDLinalg.mmul(SDVariable x,
SDVariable y)
Matrix multiplication: out = mmul(x,y)
Supports specifying transpose argument to perform operation such as mmul(a^T, b), etc. |
SDVariable |
SDBaseOps.mmul(SDVariable x,
SDVariable y,
boolean transposeX,
boolean transposeY,
boolean transposeZ)
Matrix multiplication: out = mmul(x,y)
Supports specifying transpose argument to perform operation such as mmul(a^T, b), etc. |
SDVariable |
SDLinalg.mmul(SDVariable x,
SDVariable y,
boolean transposeX,
boolean transposeY,
boolean transposeZ)
Matrix multiplication: out = mmul(x,y)
Supports specifying transpose argument to perform operation such as mmul(a^T, b), etc. |
SDVariable |
SDBaseOps.mmul(String name,
SDVariable x,
SDVariable y)
Matrix multiplication: out = mmul(x,y)
Supports specifying transpose argument to perform operation such as mmul(a^T, b), etc. |
SDVariable |
SDLinalg.mmul(String name,
SDVariable x,
SDVariable y)
Matrix multiplication: out = mmul(x,y)
Supports specifying transpose argument to perform operation such as mmul(a^T, b), etc. |
SDVariable |
SDBaseOps.mmul(String name,
SDVariable x,
SDVariable y,
boolean transposeX,
boolean transposeY,
boolean transposeZ)
Matrix multiplication: out = mmul(x,y)
Supports specifying transpose argument to perform operation such as mmul(a^T, b), etc. |
SDVariable |
SDLinalg.mmul(String name,
SDVariable x,
SDVariable y,
boolean transposeX,
boolean transposeY,
boolean transposeZ)
Matrix multiplication: out = mmul(x,y)
Supports specifying transpose argument to perform operation such as mmul(a^T, b), etc. |
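The transposeX/transposeY flags avoid materializing an explicit transpose before the multiply. A NumPy sketch of the transposeX=true case (illustrative only; SameDiff fuses the transpose into the op):

```python
import numpy as np

x = np.arange(6).reshape(2, 3).astype(float)
y = np.arange(6).reshape(2, 3).astype(float)

# transposeX=true: multiply x^T (3x2) by y (2x3) -> (3x3),
# i.e. mmul(x, y, transposeX=true) == mmul(transpose(x), y)
out = x.T @ y
```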
SDVariable |
SDMath.mod(SDVariable x,
SDVariable y)
Pairwise modulus (remainder) operation, out = x % y
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.mod(String name,
SDVariable x,
SDVariable y)
Pairwise modulus (remainder) operation, out = x % y
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable[] |
SDMath.moments(SDVariable input,
int... axes)
Calculate the mean and (population) variance for the input variable, for the specified axis
|
SDVariable[] |
SDMath.moments(String[] names,
SDVariable input,
int... axes)
Calculate the mean and (population) variance for the input variable, for the specified axis
|
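Note that moments returns the population variance (divide by N), not the sample variance. A NumPy sketch of the semantics (not the SameDiff API):

```python
import numpy as np

x = np.array([[1.0, 2.0], [3.0, 4.0]])
mean = x.mean(axis=0)        # per-column mean over axis 0
var = x.var(axis=0, ddof=0)  # population variance (divide by N, not N-1)
```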
SDVariable |
SDMath.mul(SDVariable x,
double value)
Scalar multiplication operation, out = in * scalar
|
SDVariable |
SDMath.mul(SDVariable x,
SDVariable y)
Pairwise multiplication operation, out = x * y
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.mul(String name,
SDVariable x,
double value)
Scalar multiplication operation, out = in * scalar
|
SDVariable |
SDMath.mul(String name,
SDVariable x,
SDVariable y)
Pairwise multiplication operation, out = x * y
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDNN.multiHeadDotProductAttention(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable Wq,
SDVariable Wk,
SDVariable Wv,
SDVariable Wo,
SDVariable mask,
boolean scaled)
This performs multi-headed dot product attention on the given timeseries input
out = concat(head_1, head_2, ..., head_n) * Wo head_i = dot_product_attention(Wq_i*q, Wk_i*k, Wv_i*v) Optionally with normalization when calculating the attention for each head. See also "Attention is all you need" (https://arxiv.org/abs/1706.03762, pp. |
SDVariable |
SDNN.multiHeadDotProductAttention(String name,
SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable Wq,
SDVariable Wk,
SDVariable Wv,
SDVariable Wo,
SDVariable mask,
boolean scaled)
This performs multi-headed dot product attention on the given timeseries input
out = concat(head_1, head_2, ..., head_n) * Wo head_i = dot_product_attention(Wq_i*q, Wk_i*k, Wv_i*v) Optionally with normalization when calculating the attention for each head. See also "Attention is all you need" (https://arxiv.org/abs/1706.03762, pp. |
SDVariable |
SDMath.neg(SDVariable x)
Elementwise negative operation: out = -x
|
SDVariable |
SDMath.neg(String name,
SDVariable x)
Elementwise negative operation: out = -x
|
SDVariable |
SDBaseOps.neq(SDVariable x,
double y)
Not equals operation: elementwise x != y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.neq(SDVariable x,
SDVariable y)
Not equal to operation: elementwise x != y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.neq(String name,
SDVariable x,
double y)
Not equals operation: elementwise x != y
Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDBaseOps.neq(String name,
SDVariable x,
SDVariable y)
Not equal to operation: elementwise x != y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html Return boolean array with values true where satisfied, or false otherwise. |
SDVariable |
SDImage.nonMaxSuppression(SDVariable boxes,
SDVariable scores,
int maxOutSize,
double iouThreshold,
double scoreThreshold)
Greedily selects a subset of bounding boxes in descending order of score
|
SDVariable |
SDImage.nonMaxSuppression(String name,
SDVariable boxes,
SDVariable scores,
int maxOutSize,
double iouThreshold,
double scoreThreshold)
Greedily selects a subset of bounding boxes in descending order of score
|
SDVariable |
SDBaseOps.norm1(SDVariable x,
boolean keepDims,
int... dimensions)
Norm1 (L1 norm) reduction operation: The output contains the L1 norm for each tensor/subset along the specified dimensions:
out = sum_i abs(x[i]) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.norm1(SDVariable x,
int... dimensions)
Norm1 (L1 norm) reduction operation: The output contains the L1 norm for each tensor/subset along the specified dimensions:
out = sum_i abs(x[i]) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.norm1(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Norm1 (L1 norm) reduction operation: The output contains the L1 norm for each tensor/subset along the specified dimensions:
out = sum_i abs(x[i]) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.norm1(String name,
SDVariable x,
int... dimensions)
Norm1 (L1 norm) reduction operation: The output contains the L1 norm for each tensor/subset along the specified dimensions:
out = sum_i abs(x[i]) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.norm2(SDVariable x,
boolean keepDims,
int... dimensions)
Norm2 (L2 norm) reduction operation: The output contains the L2 norm for each tensor/subset along the specified dimensions:
out = sqrt(sum_i x[i]^2) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.norm2(SDVariable x,
int... dimensions)
Norm2 (L2 norm) reduction operation: The output contains the L2 norm for each tensor/subset along the specified dimensions:
out = sqrt(sum_i x[i]^2) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.norm2(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Norm2 (L2 norm) reduction operation: The output contains the L2 norm for each tensor/subset along the specified dimensions:
out = sqrt(sum_i x[i]^2) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.norm2(String name,
SDVariable x,
int... dimensions)
Norm2 (L2 norm) reduction operation: The output contains the L2 norm for each tensor/subset along the specified dimensions:
out = sqrt(sum_i x[i]^2) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
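The norm reductions above map directly onto their textbook definitions; a NumPy sketch of norm1 and norm2 along a dimension (illustrative, not the SameDiff API):

```python
import numpy as np

x = np.array([[3.0, -4.0], [0.0, 5.0]])
norm1 = np.sum(np.abs(x), axis=1)        # L1: out = sum_i abs(x[i])
norm2 = np.sqrt(np.sum(x ** 2, axis=1))  # L2: out = sqrt(sum_i x[i]^2)
```

normmax is the same pattern with `np.max(np.abs(x), axis=...)`.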
SDVariable[] |
SDMath.normalizeMoments(SDVariable counts,
SDVariable means,
SDVariable variances,
double shift)
Calculate the mean and variance from the sufficient statistics
|
SDVariable[] |
SDMath.normalizeMoments(String[] names,
SDVariable counts,
SDVariable means,
SDVariable variances,
double shift)
Calculate the mean and variance from the sufficient statistics
|
SDVariable |
SDBaseOps.normmax(SDVariable x,
boolean keepDims,
int... dimensions)
Max norm (infinity norm) reduction operation: The output contains the max norm for each tensor/subset along the
specified dimensions: out = max(abs(x[i])) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.normmax(SDVariable x,
int... dimensions)
Max norm (infinity norm) reduction operation: The output contains the max norm for each tensor/subset along the
specified dimensions: out = max(abs(x[i])) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.normmax(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Max norm (infinity norm) reduction operation: The output contains the max norm for each tensor/subset along the
specified dimensions: out = max(abs(x[i])) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.normmax(String name,
SDVariable x,
int... dimensions)
Max norm (infinity norm) reduction operation: The output contains the max norm for each tensor/subset along the
specified dimensions: out = max(abs(x[i])) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.oneHot(SDVariable indices,
int depth)
Convert the array to a one-hot array with values 0 and 1 for each entry
If input has shape [ a, ..., n] then output has shape [ a, ..., n, depth], with out[i, ..., j, in[i,...,j]] = 1 with other values being set to 0 see oneHot(SDVariable, int, int, double, double) |
SDVariable |
SDBaseOps.oneHot(SDVariable indices,
int depth,
int axis,
double on,
double off)
Convert the array to a one-hot array with the specified on and off values for each entry
If input has shape [ a, ..., n] then output has shape [ a, ..., n, depth], with out[i, ..., j, in[i,...,j]] = on, with other values being set to off |
SDVariable |
SDBaseOps.oneHot(SDVariable indices,
int depth,
int axis,
double on,
double off,
DataType dataType)
Convert the array to a one-hot array with the specified on and off values for each entry
If input has shape [ a, ..., n] then output has shape [ a, ..., n, depth], with out[i, ..., j, in[i,...,j]] = on, with other values being set to off |
SDVariable |
SDBaseOps.oneHot(String name,
SDVariable indices,
int depth)
Convert the array to a one-hot array with values 0 and 1 for each entry
If input has shape [ a, ..., n] then output has shape [ a, ..., n, depth], with out[i, ..., j, in[i,...,j]] = 1 with other values being set to 0 see oneHot(SDVariable, int, int, double, double) |
SDVariable |
SDBaseOps.oneHot(String name,
SDVariable indices,
int depth,
int axis,
double on,
double off)
Convert the array to a one-hot array with the specified on and off values for each entry
If input has shape [ a, ..., n] then output has shape [ a, ..., n, depth], with out[i, ..., j, in[i,...,j]] = on, with other values being set to off |
SDVariable |
SDBaseOps.oneHot(String name,
SDVariable indices,
int depth,
int axis,
double on,
double off,
DataType dataType)
Convert the array to a one-hot array with the specified on and off values for each entry
If input has shape [ a, ..., n] then output has shape [ a, ..., n, depth], with out[i, ..., j, in[i,...,j]] = on, with other values being set to off |
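The oneHot shape rule above ([a, ..., n] in, [a, ..., n, depth] out) can be sketched in NumPy. The helper name `one_hot` here is illustrative, not part of any API:

```python
import numpy as np

def one_hot(indices, depth, on=1.0, off=0.0):
    # Output gains a trailing axis of size depth, filled with `off`
    out = np.full(indices.shape + (depth,), off)
    # Set out[..., indices[...]] = on along the new trailing axis
    np.put_along_axis(out, indices[..., None], on, axis=-1)
    return out

oh = one_hot(np.array([0, 2, 1]), depth=3)
```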
SDVariable |
SDBaseOps.onesLike(SDVariable input)
Return a variable of all 1s, with the same shape as the input variable.
|
SDVariable |
SDBaseOps.onesLike(SDVariable input,
DataType dataType)
As per onesLike(String, SDVariable) but the output datatype may be specified
|
SDVariable |
SDBaseOps.onesLike(String name,
SDVariable input)
Return a variable of all 1s, with the same shape as the input variable.
|
SDVariable |
SDBaseOps.onesLike(String name,
SDVariable input,
DataType dataType)
As per onesLike(String, SDVariable) but the output datatype may be specified
|
SDVariable |
SDBitwise.or(SDVariable x,
SDVariable y)
Bitwise OR operation.
|
SDVariable |
SDMath.or(SDVariable x,
SDVariable y)
Boolean OR operation: elementwise (x != 0) || (y != 0)
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBitwise.or(String name,
SDVariable x,
SDVariable y)
Bitwise OR operation.
|
SDVariable |
SDMath.or(String name,
SDVariable x,
SDVariable y)
Boolean OR operation: elementwise (x != 0) || (y != 0)
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDNN.pad(SDVariable input,
SDVariable padding,
double constant)
Padding operation
|
SDVariable |
SDNN.pad(SDVariable input,
SDVariable padding,
PadMode PadMode,
double constant)
Padding operation
|
SDVariable |
SDNN.pad(String name,
SDVariable input,
SDVariable padding,
double constant)
Padding operation
|
SDVariable |
SDNN.pad(String name,
SDVariable input,
SDVariable padding,
PadMode PadMode,
double constant)
Padding operation
|
SDVariable |
SDBaseOps.permute(SDVariable x,
int... dimensions)
Array permutation operation: permute the dimensions according to the specified permutation indices.
Example: if input has shape [a,b,c] and dimensions = [2,0,1] the output has shape [c,a,b] |
SDVariable |
SDBaseOps.permute(SDVariable x,
SDVariable dimensions)
Array permutation operation: permute the dimensions according to the specified permutation indices.
Example: if input has shape [a,b,c] and dimensions = [2,0,1] the output has shape [c,a,b] |
SDVariable |
SDBaseOps.permute(String name,
SDVariable x,
int... dimensions)
Array permutation operation: permute the dimensions according to the specified permutation indices.
Example: if input has shape [a,b,c] and dimensions = [2,0,1] the output has shape [c,a,b] |
SDVariable |
SDBaseOps.permute(String name,
SDVariable x,
SDVariable dimensions)
Array permutation operation: permute the dimensions according to the specified permutation indices.
Example: if input has shape [a,b,c] and dimensions = [2,0,1] the output has shape [c,a,b] |
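The permute example above corresponds to NumPy's transpose with an axes argument (a sketch of the semantics, not the SameDiff API):

```python
import numpy as np

x = np.zeros((2, 3, 4))           # shape [a, b, c]
out = np.transpose(x, (2, 0, 1))  # dimensions = [2, 0, 1] -> shape [c, a, b]
```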
SDVariable |
SDMath.pow(SDVariable x,
double value)
Element-wise power function: out = x^value
|
SDVariable |
SDMath.pow(SDVariable x,
SDVariable y)
Element-wise (broadcastable) power function: out = x[i]^y[i]
|
SDVariable |
SDMath.pow(String name,
SDVariable x,
double value)
Element-wise power function: out = x^value
|
SDVariable |
SDMath.pow(String name,
SDVariable x,
SDVariable y)
Element-wise (broadcastable) power function: out = x[i]^y[i]
|
SDVariable |
SDNN.preciseGelu(SDVariable x)
GELU activation function - Gaussian Error Linear Units
For more details, see Gaussian Error Linear Units (GELUs) - https://arxiv.org/abs/1606.08415 This method uses the precise method |
SDVariable |
SDNN.preciseGelu(String name,
SDVariable x)
GELU activation function - Gaussian Error Linear Units
For more details, see Gaussian Error Linear Units (GELUs) - https://arxiv.org/abs/1606.08415 This method uses the precise method |
SDVariable |
SDNN.prelu(SDVariable input,
SDVariable alpha,
int... sharedAxes)
PReLU (Parameterized Rectified Linear Unit) operation.
|
SDVariable |
SDNN.prelu(String name,
SDVariable input,
SDVariable alpha,
int... sharedAxes)
PReLU (Parameterized Rectified Linear Unit) operation.
|
SDVariable |
SDBaseOps.prod(SDVariable x,
boolean keepDims,
int... dimensions)
Product array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.prod(SDVariable x,
int... dimensions)
Product array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.prod(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Product array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.prod(String name,
SDVariable x,
int... dimensions)
Product array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable[] |
SDLinalg.qr(SDVariable input)
Computes the QR decomposition of the input matrix.
|
SDVariable[] |
SDLinalg.qr(SDVariable input,
boolean full)
Computes the QR decomposition of the input matrix.
|
SDVariable[] |
SDLinalg.qr(String[] names,
SDVariable input)
Computes the QR decomposition of the input matrix.
|
SDVariable[] |
SDLinalg.qr(String[] names,
SDVariable input,
boolean full)
Computes the QR decomposition of the input matrix.
|
SDVariable |
SDImage.randomCrop(SDVariable input,
SDVariable shape)
Randomly crops an image
|
SDVariable |
SDImage.randomCrop(String name,
SDVariable input,
SDVariable shape)
Randomly crops an image
|
SDVariable |
SDBaseOps.range(SDVariable from,
SDVariable to,
SDVariable step,
DataType dataType)
Create a new variable with a 1d array, where the values start at from and increment by step
up to (but not including) limit. For example, range(1.0, 3.0, 0.5) will return [1.0, 1.5, 2.0, 2.5] |
SDVariable |
SDBaseOps.range(String name,
SDVariable from,
SDVariable to,
SDVariable step,
DataType dataType)
Create a new variable with a 1d array, where the values start at from and increment by step
up to (but not including) limit. For example, range(1.0, 3.0, 0.5) will return [1.0, 1.5, 2.0, 2.5] |
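The range semantics (start inclusive, limit exclusive) match NumPy's arange; the example from the description checks out directly:

```python
import numpy as np

# range(1.0, 3.0, 0.5): start inclusive, limit exclusive
r = np.arange(1.0, 3.0, 0.5)
```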
SDVariable |
SDBaseOps.rank(SDVariable in)
Returns the rank (number of dimensions, i.e., length(shape)) of the specified INDArray as a 0D scalar variable
|
SDVariable |
SDBaseOps.rank(String name,
SDVariable in)
Returns the rank (number of dimensions, i.e., length(shape)) of the specified INDArray as a 0D scalar variable
|
SDVariable |
SDMath.rationalTanh(SDVariable x)
Rational Tanh Approximation elementwise function, as described in the paper:
Compact Convolutional Neural Network Cascade for Face Detection This is a faster Tanh approximation |
SDVariable |
SDMath.rationalTanh(String name,
SDVariable x)
Rational Tanh Approximation elementwise function, as described in the paper:
Compact Convolutional Neural Network Cascade for Face Detection This is a faster Tanh approximation |
SDVariable |
SDMath.rdiv(SDVariable x,
double value)
Scalar reverse division operation, out = scalar / in
|
SDVariable |
SDMath.rdiv(SDVariable x,
SDVariable y)
Pairwise reverse division operation, out = y / x
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.rdiv(String name,
SDVariable x,
double value)
Scalar reverse division operation, out = scalar / in
|
SDVariable |
SDMath.rdiv(String name,
SDVariable x,
SDVariable y)
Pairwise reverse division operation, out = y / x
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.reciprocal(SDVariable x)
Element-wise reciprocal (inverse) function: out[i] = 1 / in[i]
|
SDVariable |
SDMath.reciprocal(String name,
SDVariable x)
Element-wise reciprocal (inverse) function: out[i] = 1 / in[i]
|
SDVariable |
SDMath.rectifiedTanh(SDVariable x)
Rectified tanh operation: max(0, tanh(in))
|
SDVariable |
SDMath.rectifiedTanh(String name,
SDVariable x)
Rectified tanh operation: max(0, tanh(in))
|
SDVariable |
SDNN.relu(SDVariable x,
double cutoff)
Element-wise rectified linear function with specified cutoff:
out[i] = in[i] if in[i] >= cutoff out[i] = 0 otherwise |
SDVariable |
SDNN.relu(String name,
SDVariable x,
double cutoff)
Element-wise rectified linear function with specified cutoff:
out[i] = in[i] if in[i] >= cutoff out[i] = 0 otherwise |
SDVariable |
SDNN.relu6(SDVariable x,
double cutoff)
Element-wise "rectified linear 6" function with specified cutoff:
out[i] = min(max(in, cutoff), 6) |
SDVariable |
SDNN.relu6(String name,
SDVariable x,
double cutoff)
Element-wise "rectified linear 6" function with specified cutoff:
out[i] = min(max(in, cutoff), 6) |
SDVariable |
SDNN.reluLayer(SDVariable input,
SDVariable weights,
SDVariable bias)
ReLU (Rectified Linear Unit) layer operation: out = relu(mmul(in,w) + bias)
Note that bias array is optional |
SDVariable |
SDNN.reluLayer(String name,
SDVariable input,
SDVariable weights,
SDVariable bias)
ReLU (Rectified Linear Unit) layer operation: out = relu(mmul(in,w) + bias)
Note that bias array is optional |
SDVariable |
SDBaseOps.replaceWhere(SDVariable update,
double value,
Condition condition)
Element-wise replace where condition:
out[i] = value if condition(update[i]) is satisfied, or out[i] = update[i] if condition(update[i]) is NOT satisfied |
SDVariable |
SDBaseOps.replaceWhere(SDVariable update,
SDVariable from,
Condition condition)
Element-wise replace where condition:
out[i] = from[i] if condition(update[i]) is satisfied, or out[i] = update[i] if condition(update[i]) is NOT satisfied |
SDVariable |
SDBaseOps.replaceWhere(String name,
SDVariable update,
double value,
Condition condition)
Element-wise replace where condition:
out[i] = value if condition(update[i]) is satisfied, or out[i] = update[i] if condition(update[i]) is NOT satisfied |
SDVariable |
SDBaseOps.replaceWhere(String name,
SDVariable update,
SDVariable from,
Condition condition)
Element-wise replace where condition:
out[i] = from[i] if condition(update[i]) is satisfied, or out[i] = update[i] if condition(update[i]) is NOT satisfied |
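The replaceWhere semantics above can be sketched in plain Java; here a java.util.function.DoublePredicate stands in for the SameDiff Condition type, purely for illustration:

```java
import java.util.function.DoublePredicate;

// Sketch of replaceWhere: each element of 'update' satisfying the
// condition is replaced by 'value'; other elements pass through.
// Illustrative only; the real op takes a SameDiff Condition.
public class ReplaceWhereSketch {
    public static double[] replaceWhere(double[] update, double value, DoublePredicate condition) {
        double[] out = new double[update.length];
        for (int i = 0; i < update.length; i++) {
            out[i] = condition.test(update[i]) ? value : update[i];
        }
        return out;
    }
}
```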
SDVariable |
SDBaseOps.reshape(SDVariable x,
long... shape)
Reshape the input variable to the specified (fixed) shape.
|
SDVariable |
SDBaseOps.reshape(SDVariable x,
SDVariable shape)
Reshape the input variable to the specified (fixed) shape.
|
SDVariable |
SDBaseOps.reshape(String name,
SDVariable x,
long... shape)
Reshape the input variable to the specified (fixed) shape.
|
SDVariable |
SDBaseOps.reshape(String name,
SDVariable x,
SDVariable shape)
Reshape the input variable to the specified (fixed) shape.
|
SDVariable |
SDBaseOps.reverse(SDVariable x,
int... dimensions)
Reverse the values of an array for the specified dimensions
If input is: [ 1, 2, 3] [ 4, 5, 6] then reverse(in, 0): [4, 5, 6] [1, 2, 3] and reverse(in, 1): [3, 2, 1] [6, 5, 4] |
SDVariable |
SDBaseOps.reverse(String name,
SDVariable x,
int... dimensions)
Reverse the values of an array for the specified dimensions
If input is: [ 1, 2, 3] [ 4, 5, 6] then reverse(in, 0): [4, 5, 6] [1, 2, 3] and reverse(in, 1): [3, 2, 1] [6, 5, 4] |
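A plain-Java sketch of reverse on a 2D array, assuming the usual convention that dimension 0 indexes rows and dimension 1 indexes columns (illustrative only, not the SameDiff implementation):

```java
// Sketch of reverse(in, dimension) on a 2D array: dimension 0 reverses
// the order of rows, dimension 1 reverses within each row.
public class ReverseSketch {
    public static int[][] reverse(int[][] in, int dimension) {
        int rows = in.length, cols = in[0].length;
        int[][] out = new int[rows][cols];
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                int si = dimension == 0 ? rows - 1 - i : i; // flip row index
                int sj = dimension == 1 ? cols - 1 - j : j; // flip col index
                out[i][j] = in[si][sj];
            }
        }
        return out;
    }
}
```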
SDVariable |
SDBaseOps.reverseSequence(SDVariable x,
SDVariable seq_lengths)
Reverse sequence op: for each slice along dimension seqDimension, the first seqLength values are reversed
|
SDVariable |
SDBaseOps.reverseSequence(SDVariable x,
SDVariable seq_lengths,
int seqDim,
int batchDim)
Reverse sequence op: for each slice along dimension seqDimension, the first seqLength values are reversed
|
SDVariable |
SDBaseOps.reverseSequence(String name,
SDVariable x,
SDVariable seq_lengths)
Reverse sequence op: for each slice along dimension seqDimension, the first seqLength values are reversed
|
SDVariable |
SDBaseOps.reverseSequence(String name,
SDVariable x,
SDVariable seq_lengths,
int seqDim,
int batchDim)
Reverse sequence op: for each slice along dimension seqDimension, the first seqLength values are reversed
|
SDVariable |
SDImage.rgbToHsv(SDVariable input)
Converting array from RGB to HSV format
|
SDVariable |
SDImage.rgbToHsv(String name,
SDVariable input)
Converting array from RGB to HSV format
|
SDVariable |
SDImage.rgbToYiq(SDVariable input)
Converting array from RGB to YIQ format
|
SDVariable |
SDImage.rgbToYiq(String name,
SDVariable input)
Converting array from RGB to YIQ format
|
SDVariable |
SDImage.rgbToYuv(SDVariable input)
Converting array from RGB to YUV format
|
SDVariable |
SDImage.rgbToYuv(String name,
SDVariable input)
Converting array from RGB to YUV format
|
SDVariable |
SDBitwise.rightShift(SDVariable x,
SDVariable y)
Bitwise right shift operation.
|
SDVariable |
SDBitwise.rightShift(String name,
SDVariable x,
SDVariable y)
Bitwise right shift operation.
|
SDVariable |
SDBitwise.rightShiftCyclic(SDVariable x,
SDVariable y)
Bitwise right cyclical shift operation.
|
SDVariable |
SDBitwise.rightShiftCyclic(String name,
SDVariable x,
SDVariable y)
Bitwise right cyclical shift operation.
|
SDVariable |
SDMath.round(SDVariable x)
Element-wise round function: out = round(x).
Rounds (up or down depending on value) to the nearest integer value. |
SDVariable |
SDMath.round(String name,
SDVariable x)
Element-wise round function: out = round(x).
Rounds (up or down depending on value) to the nearest integer value. |
SDVariable |
SDMath.rsqrt(SDVariable x)
Element-wise reciprocal (inverse) of square root: out = 1.0 / sqrt(x)
|
SDVariable |
SDMath.rsqrt(String name,
SDVariable x)
Element-wise reciprocal (inverse) of square root: out = 1.0 / sqrt(x)
|
SDVariable |
SDMath.rsub(SDVariable x,
double value)
Scalar reverse subtraction operation, out = scalar - in
|
SDVariable |
SDMath.rsub(SDVariable x,
SDVariable y)
Pairwise reverse subtraction operation, out = y - x
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.rsub(String name,
SDVariable x,
double value)
Scalar reverse subtraction operation, out = scalar - in
|
SDVariable |
SDMath.rsub(String name,
SDVariable x,
SDVariable y)
Pairwise reverse subtraction operation, out = y - x
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDBaseOps.scalarFloorMod(SDVariable in,
double value)
Element-wise scalar floor modulus operation: out = floorMod(in, value).
i.e., returns the remainder after division by 'value' |
SDVariable |
SDBaseOps.scalarFloorMod(String name,
SDVariable in,
double value)
Element-wise scalar floor modulus operation: out = floorMod(in, value).
i.e., returns the remainder after division by 'value' |
SDVariable |
SDBaseOps.scalarMax(SDVariable in,
double value)
Element-wise scalar maximum operation: out = max(in, value)
|
SDVariable |
SDBaseOps.scalarMax(String name,
SDVariable in,
double value)
Element-wise scalar maximum operation: out = max(in, value)
|
SDVariable |
SDBaseOps.scalarMin(SDVariable in,
double value)
Element-wise scalar minimum operation: out = min(in, value)
|
SDVariable |
SDBaseOps.scalarMin(String name,
SDVariable in,
double value)
Element-wise scalar minimum operation: out = min(in, value)
|
SDVariable |
SDBaseOps.scalarSet(SDVariable in,
double set)
Return a variable with equal shape to the input, but all elements set to value 'set'
|
SDVariable |
SDBaseOps.scalarSet(String name,
SDVariable in,
double set)
Return a variable with equal shape to the input, but all elements set to value 'set'
|
SDVariable |
SDBaseOps.scatterAdd(SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter addition operation.
If indices is rank 0 (a scalar), then out[index, ...] = out[index, ...] + updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = out[indices[i], ...] + updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = out[indices[i], ..., indices[k], ...] + updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterAdd(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter addition operation.
If indices is rank 0 (a scalar), then out[index, ...] = out[index, ...] + updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = out[indices[i], ...] + updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = out[indices[i], ..., indices[k], ...] + updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
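The rank-1 indices case of scatterAdd can be sketched in plain Java for a 1D reference array (illustrative only; the real op operates on n-d SDVariables):

```java
// Sketch of scatterAdd with rank-1 indices: out starts as a copy of
// 'ref', and updates[i] is added at position indices[i]. Repeated
// indices accumulate their contributions.
public class ScatterAddSketch {
    public static double[] scatterAdd(double[] ref, int[] indices, double[] updates) {
        double[] out = ref.clone();
        for (int i = 0; i < indices.length; i++) {
            out[indices[i]] += updates[i]; // same-index contributions accumulate
        }
        return out;
    }
}
```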
SDVariable |
SDBaseOps.scatterDiv(SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter division operation.
If indices is rank 0 (a scalar), then out[index, ...] = op(out[index, ...], updates[...]) If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = op(out[indices[i], ...], updates[i, ...]) If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = op(out[indices[i], ..., indices[k], ...], updates[i, ..., k, ...]) Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterDiv(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter division operation.
If indices is rank 0 (a scalar), then out[index, ...] = op(out[index, ...], updates[...]) If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = op(out[indices[i], ...], updates[i, ...]) If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = op(out[indices[i], ..., indices[k], ...], updates[i, ..., k, ...]) Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterMax(SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter max operation.
If indices is rank 0 (a scalar), then out[index, ...] = op(out[index, ...], updates[...]) If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = op(out[indices[i], ...], updates[i, ...]) If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = op(out[indices[i], ..., indices[k], ...], updates[i, ..., k, ...]) Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterMax(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter max operation.
If indices is rank 0 (a scalar), then out[index, ...] = op(out[index, ...], updates[...]) If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = op(out[indices[i], ...], updates[i, ...]) If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = op(out[indices[i], ..., indices[k], ...], updates[i, ..., k, ...]) Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterMin(SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter min operation.
If indices is rank 0 (a scalar), then out[index, ...] = op(out[index, ...], updates[...]) If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = op(out[indices[i], ...], updates[i, ...]) If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = op(out[indices[i], ..., indices[k], ...], updates[i, ..., k, ...]) Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterMin(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter min operation.
If indices is rank 0 (a scalar), then out[index, ...] = op(out[index, ...], updates[...]) If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = op(out[indices[i], ...], updates[i, ...]) If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = op(out[indices[i], ..., indices[k], ...], updates[i, ..., k, ...]) Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterMul(SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter multiplication operation.
If indices is rank 0 (a scalar), then out[index, ...] = op(out[index, ...], updates[...]) If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = op(out[indices[i], ...], updates[i, ...]) If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = op(out[indices[i], ..., indices[k], ...], updates[i, ..., k, ...]) Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterMul(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter multiplication operation.
If indices is rank 0 (a scalar), then out[index, ...] = op(out[index, ...], updates[...]) If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = op(out[indices[i], ...], updates[i, ...]) If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = op(out[indices[i], ..., indices[k], ...], updates[i, ..., k, ...]) Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterSub(SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter subtraction operation.
If indices is rank 0 (a scalar), then out[index, ...] = op(out[index, ...], updates[...]) If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = op(out[indices[i], ...], updates[i, ...]) If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = op(out[indices[i], ..., indices[k], ...], updates[i, ..., k, ...]) Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterSub(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter subtraction operation.
If indices is rank 0 (a scalar), then out[index, ...] = op(out[index, ...], updates[...]) If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = op(out[indices[i], ...], updates[i, ...]) If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = op(out[indices[i], ..., indices[k], ...], updates[i, ..., k, ...]) Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterUpdate(SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter update operation.
If indices is rank 0 (a scalar), then out[index, ...] = op(out[index, ...], updates[...]) If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = op(out[indices[i], ...], updates[i, ...]) If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = op(out[indices[i], ..., indices[k], ...], updates[i, ..., k, ...]) Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterUpdate(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter update operation.
If indices is rank 0 (a scalar), then out[index, ...] = op(out[index, ...], updates[...]) If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = op(out[indices[i], ...], updates[i, ...]) If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = op(out[indices[i], ..., indices[k], ...], updates[i, ..., k, ...]) Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.segmentMax(SDVariable data,
SDVariable segmentIds)
Segment max operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [6, 9, 8] = [max(3,6), max(1,4,9), max(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. See the unsortedSegment(String, SDVariable, SDVariable, int) ops for the same op without this sorted requirement |
SDVariable |
SDBaseOps.segmentMax(String name,
SDVariable data,
SDVariable segmentIds)
Segment max operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [6, 9, 8] = [max(3,6), max(1,4,9), max(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. See the unsortedSegment(String, SDVariable, SDVariable, int) ops for the same op without this sorted requirement |
SDVariable |
SDBaseOps.segmentMean(SDVariable data,
SDVariable segmentIds)
Segment mean operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [4.5, 4.667, 5] = [mean(3,6), mean(1,4,9), mean(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. See the unsortedSegment(String, SDVariable, SDVariable, int) ops for the same op without this sorted requirement |
SDVariable |
SDBaseOps.segmentMean(String name,
SDVariable data,
SDVariable segmentIds)
Segment mean operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [4.5, 4.667, 5] = [mean(3,6), mean(1,4,9), mean(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. See the unsortedSegment(String, SDVariable, SDVariable, int) ops for the same op without this sorted requirement |
SDVariable |
SDBaseOps.segmentMin(SDVariable data,
SDVariable segmentIds)
Segment min operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [3, 1, 2] = [min(3,6), min(1,4,9), min(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. See the unsortedSegment(String, SDVariable, SDVariable, int) ops for the same op without this sorted requirement |
SDVariable |
SDBaseOps.segmentMin(String name,
SDVariable data,
SDVariable segmentIds)
Segment min operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [3, 1, 2] = [min(3,6), min(1,4,9), min(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. See the unsortedSegment(String, SDVariable, SDVariable, int) ops for the same op without this sorted requirement |
SDVariable |
SDBaseOps.segmentProd(SDVariable data,
SDVariable segmentIds)
Segment product operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [18, 36, 16] = [prod(3,6), prod(1,4,9), prod(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. See the unsortedSegment(String, SDVariable, SDVariable, int) ops for the same op without this sorted requirement |
SDVariable |
SDBaseOps.segmentProd(String name,
SDVariable data,
SDVariable segmentIds)
Segment product operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [18, 36, 16] = [prod(3,6), prod(1,4,9), prod(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. See the unsortedSegment(String, SDVariable, SDVariable, int) ops for the same op without this sorted requirement |
SDVariable |
SDBaseOps.segmentSum(SDVariable data,
SDVariable segmentIds)
Segment sum operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [9, 14, 10] = [sum(3,6), sum(1,4,9), sum(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. See the unsortedSegment(String, SDVariable, SDVariable, int) ops for the same op without this sorted requirement |
SDVariable |
SDBaseOps.segmentSum(String name,
SDVariable data,
SDVariable segmentIds)
Segment sum operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [9, 14, 10] = [sum(3,6), sum(1,4,9), sum(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. See the unsortedSegment(String, SDVariable, SDVariable, int) ops for the same op without this sorted requirement |
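The sorted-segment reduction pattern shared by these ops can be sketched in plain Java for the sum case (illustrative only; the real ops work on SDVariables and support any of max/mean/min/prod/sum):

```java
// Sketch of segmentSum: segmentIds must be sorted ascending, and
// out[s] accumulates all data values whose segment id is s.
public class SegmentSumSketch {
    public static double[] segmentSum(double[] data, int[] segmentIds) {
        // because ids are sorted, the last id gives the segment count
        int numSegments = segmentIds[segmentIds.length - 1] + 1;
        double[] out = new double[numSegments];
        for (int i = 0; i < data.length; i++) {
            out[segmentIds[i]] += data[i];
        }
        return out;
    }
}
```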
SDVariable |
SDNN.selu(SDVariable x)
Element-wise SeLU function - Scaled Exponential Linear Unit: see Self-Normalizing Neural Networks
out[i] = scale * in[i] if in[i] > 0, or scale * alpha * (exp(in[i]) - 1) if in[i] <= 0 Uses default scale and alpha values. |
SDVariable |
SDNN.selu(String name,
SDVariable x)
Element-wise SeLU function - Scaled Exponential Linear Unit: see Self-Normalizing Neural Networks
out[i] = scale * in[i] if in[i] > 0, or scale * alpha * (exp(in[i]) - 1) if in[i] <= 0 Uses default scale and alpha values. |
SDVariable |
SDCNN.separableConv2d(SDVariable layerInput,
SDVariable depthWeights,
SDVariable pointWeights,
Conv2DConfig Conv2DConfig)
Separable 2D convolution operation with optional bias
|
SDVariable |
SDCNN.separableConv2d(SDVariable layerInput,
SDVariable depthWeights,
SDVariable pointWeights,
SDVariable bias,
Conv2DConfig Conv2DConfig)
Separable 2D convolution operation with optional bias
|
SDVariable |
SDCNN.separableConv2d(String name,
SDVariable layerInput,
SDVariable depthWeights,
SDVariable pointWeights,
Conv2DConfig Conv2DConfig)
Separable 2D convolution operation with optional bias
|
SDVariable |
SDCNN.separableConv2d(String name,
SDVariable layerInput,
SDVariable depthWeights,
SDVariable pointWeights,
SDVariable bias,
Conv2DConfig Conv2DConfig)
Separable 2D convolution operation with optional bias
|
SDVariable |
SDBaseOps.sequenceMask(SDVariable lengths,
DataType dataType)
see sequenceMask(String, SDVariable, SDVariable, DataType)
|
SDVariable |
SDBaseOps.sequenceMask(SDVariable lengths,
int maxLen,
DataType dataType)
Generate a sequence mask (with values 0 or 1) based on the specified lengths
Specifically, out[i, ..., k, j] = (j < lengths[i, ..., k] ? 1.0 : 0.0) |
SDVariable |
SDBaseOps.sequenceMask(SDVariable lengths,
SDVariable maxLen,
DataType dataType)
Generate a sequence mask (with values 0 or 1) based on the specified lengths
Specifically, out[i, ..., k, j] = (j < lengths[i, ..., k] ? 1.0 : 0.0) |
SDVariable |
SDBaseOps.sequenceMask(String name,
SDVariable lengths,
DataType dataType)
see sequenceMask(String, SDVariable, SDVariable, DataType)
|
SDVariable |
SDBaseOps.sequenceMask(String name,
SDVariable lengths,
int maxLen,
DataType dataType)
Generate a sequence mask (with values 0 or 1) based on the specified lengths
Specifically, out[i, ..., k, j] = (j < lengths[i, ..., k] ? 1.0 : 0.0) |
SDVariable |
SDBaseOps.sequenceMask(String name,
SDVariable lengths,
SDVariable maxLen,
DataType dataType)
Generate a sequence mask (with values 0 or 1) based on the specified lengths
Specifically, out[i, ..., k, j] = (j < lengths[i, ..., k] ? 1.0 : 0.0) |
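For the common rank-1 lengths case, the mask formula above can be sketched in plain Java (illustrative only; the real op returns an SDVariable of the requested DataType):

```java
// Sketch of sequenceMask for rank-1 lengths:
// out[i][j] = 1 if j < lengths[i], else 0.
public class SequenceMaskSketch {
    public static int[][] sequenceMask(int[] lengths, int maxLen) {
        int[][] out = new int[lengths.length][maxLen];
        for (int i = 0; i < lengths.length; i++) {
            for (int j = 0; j < maxLen; j++) {
                out[i][j] = j < lengths[i] ? 1 : 0;
            }
        }
        return out;
    }
}
```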
SDVariable |
SDMath.setDiag(SDVariable in,
SDVariable diag)
Set the diagonal values to the specified values
If input is [ a, b, c] [ d, e, f] [ g, h, i] and diag = [ 1, 2, 3] then output is [ 1, b, c] [ d, 2, f] [ g, h, 3] |
SDVariable |
SDMath.setDiag(String name,
SDVariable in,
SDVariable diag)
Set the diagonal values to the specified values
If input is [ a, b, c] [ d, e, f] [ g, h, i] and diag = [ 1, 2, 3] then output is [ 1, b, c] [ d, 2, f] [ g, h, 3] |
SDVariable |
SDMath.shannonEntropy(SDVariable in,
int... dimensions)
Shannon Entropy reduction: -sum(x * log2(x))
|
SDVariable |
SDMath.shannonEntropy(String name,
SDVariable in,
int... dimensions)
Shannon Entropy reduction: -sum(x * log2(x))
|
SDVariable |
SDBaseOps.shape(SDVariable input)
Returns the shape of the specified INDArray as a 1D INDArray
|
SDVariable |
SDBaseOps.shape(String name,
SDVariable input)
Returns the shape of the specified INDArray as a 1D INDArray
|
SDVariable |
SDNN.sigmoid(SDVariable x)
Element-wise sigmoid function: out[i] = 1.0/(1+exp(-in[i]))
|
SDVariable |
SDNN.sigmoid(String name,
SDVariable x)
Element-wise sigmoid function: out[i] = 1.0/(1+exp(-in[i]))
|
SDVariable |
SDLoss.sigmoidCrossEntropy(SDVariable label,
SDVariable predictionLogits,
SDVariable weights)
Sigmoid cross entropy: applies the sigmoid activation function on the input logits (input "pre-sigmoid predictions")
and implements the binary cross entropy loss function. |
SDVariable |
SDLoss.sigmoidCrossEntropy(SDVariable label,
SDVariable predictionLogits,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing)
Sigmoid cross entropy: applies the sigmoid activation function on the input logits (input "pre-sigmoid predictions")
and implements the binary cross entropy loss function. |
SDVariable |
SDLoss.sigmoidCrossEntropy(String name,
SDVariable label,
SDVariable predictionLogits,
SDVariable weights)
Sigmoid cross entropy: applies the sigmoid activation function on the input logits (input "pre-sigmoid predictions")
and implements the binary cross entropy loss function. |
SDVariable |
SDLoss.sigmoidCrossEntropy(String name,
SDVariable label,
SDVariable predictionLogits,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing)
Sigmoid cross entropy: applies the sigmoid activation function on the input logits (input "pre-sigmoid predictions")
and implements the binary cross entropy loss function. |
SDVariable |
SDNN.sigmoidDerivative(SDVariable x,
SDVariable wrt)
Element-wise sigmoid function derivative: dL/dIn given input and dL/dOut
|
SDVariable |
SDNN.sigmoidDerivative(String name,
SDVariable x,
SDVariable wrt)
Element-wise sigmoid function derivative: dL/dIn given input and dL/dOut
|
SDVariable |
SDMath.sign(SDVariable x)
Element-wise sign (signum) function:
out = -1 if in < 0 out = 0 if in = 0 out = 1 if in > 0 |
SDVariable |
SDMath.sign(String name,
SDVariable x)
Element-wise sign (signum) function:
out = -1 if in < 0 out = 0 if in = 0 out = 1 if in > 0 |
SDVariable |
SDMath.sin(SDVariable x)
Elementwise sine operation: out = sin(x)
|
SDVariable |
SDMath.sin(String name,
SDVariable x)
Elementwise sine operation: out = sin(x)
|
SDVariable |
SDMath.sinh(SDVariable x)
Elementwise sinh (hyperbolic sine) operation: out = sinh(x)
|
SDVariable |
SDMath.sinh(String name,
SDVariable x)
Elementwise sinh (hyperbolic sine) operation: out = sinh(x)
|
SDVariable |
SDBaseOps.size(SDVariable in)
Returns the size (number of elements, i.e., prod(shape)) of the specified INDArray as a 0D scalar variable
|
SDVariable |
SDBaseOps.size(String name,
SDVariable in)
Returns the size (number of elements, i.e., prod(shape)) of the specified INDArray as a 0D scalar variable
|
SDVariable |
SDBaseOps.sizeAt(SDVariable in,
int dimension)
Returns a rank 0 (scalar) variable for the size of the specified dimension.
For example, if X has shape [10,20,30] then sizeAt(X,1)=20. |
SDVariable |
SDBaseOps.sizeAt(String name,
SDVariable in,
int dimension)
Returns a rank 0 (scalar) variable for the size of the specified dimension.
For example, if X has shape [10,20,30] then sizeAt(X,1)=20. |
SDVariable |
SDBaseOps.slice(SDVariable input,
int[] begin,
int... size)
Get a subset of the specified input, by specifying the first element and the size of the array.
For example, if input is: [a, b, c] [d, e, f] then slice(input, begin=[0,1], size=[2,1]) will return: [b] [e] Note that for each dimension i, begin[i] + size[i] <= input.size(i) |
SDVariable |
SDBaseOps.slice(SDVariable input,
SDVariable begin,
SDVariable size)
Get a subset of the specified input, by specifying the first element and the size of the array.
For example, if input is: [a, b, c] [d, e, f] then slice(input, begin=[0,1], size=[2,1]) will return: [b] [e] Note that for each dimension i, begin[i] + size[i] <= input.size(i) |
SDVariable |
SDBaseOps.slice(String name,
SDVariable input,
int[] begin,
int... size)
Get a subset of the specified input, by specifying the first element and the size of the array.
For example, if input is: [a, b, c] [d, e, f] then slice(input, begin=[0,1], size=[2,1]) will return: [b] [e] Note that for each dimension i, begin[i] + size[i] <= input.size(i) |
SDVariable |
SDBaseOps.slice(String name,
SDVariable input,
SDVariable begin,
SDVariable size)
Get a subset of the specified input, by specifying the first element and the size of the array.
For example, if input is: [a, b, c] [d, e, f] then slice(input, begin=[0,1], size=[2,1]) will return: [b] [e] Note that for each dimension i, begin[i] + size[i] <= input.size(i) |
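The begin/size slicing described above can be sketched in plain Java on a 2D array (illustrative only; the real op works on n-d SDVariables):

```java
// Sketch of slice(input, begin, size) on a 2D array: copies the block
// of 'size' elements starting at 'begin' along each dimension.
public class SliceSketch {
    public static char[][] slice(char[][] input, int[] begin, int[] size) {
        char[][] out = new char[size[0]][size[1]];
        for (int i = 0; i < size[0]; i++) {
            for (int j = 0; j < size[1]; j++) {
                out[i][j] = input[begin[0] + i][begin[1] + j];
            }
        }
        return out;
    }
}
```

Using the example from the doc: input [a,b,c],[d,e,f] with begin=[0,1], size=[2,1] yields [b],[e].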
SDVariable |
SDNN.softmax(SDVariable x)
Softmax activation, along the specified dimension
|
SDVariable |
SDNN.softmax(SDVariable x,
int dimension)
Softmax activation, along the specified dimension
|
SDVariable |
SDNN.softmax(String name,
SDVariable x)
Softmax activation, along the specified dimension
|
SDVariable |
SDNN.softmax(String name,
SDVariable x,
int dimension)
Softmax activation, along the specified dimension
|
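The softmax activation above can be sketched in plain Java for a 1D vector, using the standard max-shift for numerical stability (the SameDiff op's dimension argument generalizes this to one axis of an n-d array; illustrative only):

```java
// Sketch of a numerically stable softmax over a 1D vector:
// out[i] = exp(x[i] - max(x)) / sum_j exp(x[j] - max(x)).
public class SoftmaxSketch {
    public static double[] softmax(double[] x) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : x) max = Math.max(max, v); // shift to avoid overflow
        double sum = 0.0;
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            out[i] = Math.exp(x[i] - max);
            sum += out[i];
        }
        for (int i = 0; i < x.length; i++) out[i] /= sum;
        return out;
    }
}
```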
SDVariable |
SDLoss.softmaxCrossEntropy(SDVariable oneHotLabels,
SDVariable logitPredictions,
SDVariable weights)
Applies the softmax activation function to the input, then implements multi-class cross entropy:
-sum_classes label[c] * log(p[c]) where p = softmax(logits). If LossReduce#NONE is used, returned shape is [numExamples] for [numExamples, numClasses] predictions/labels; otherwise, the output is a scalar. |
SDVariable |
SDLoss.softmaxCrossEntropy(SDVariable oneHotLabels,
SDVariable logitPredictions,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing)
Applies the softmax activation function to the input, then implements multi-class cross entropy:
-sum_classes label[c] * log(p[c]) where p = softmax(logits). If LossReduce#NONE is used, returned shape is [numExamples] for [numExamples, numClasses] predictions/labels; otherwise, the output is a scalar. |
SDVariable |
SDLoss.softmaxCrossEntropy(String name,
SDVariable oneHotLabels,
SDVariable logitPredictions,
SDVariable weights)
Applies the softmax activation function to the input, then implements multi-class cross entropy:
-sum_classes label[c] * log(p[c]) where p = softmax(logits). If LossReduce#NONE is used, returned shape is [numExamples] for [numExamples, numClasses] predictions/labels; otherwise, the output is a scalar. |
SDVariable |
SDLoss.softmaxCrossEntropy(String name,
SDVariable oneHotLabels,
SDVariable logitPredictions,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing)
Applies the softmax activation function to the input, then implements multi-class cross entropy:
-sum_classes label[c] * log(p[c]) where p = softmax(logits). If LossReduce#NONE is used, returned shape is [numExamples] for [numExamples, numClasses] predictions/labels; otherwise, the output is a scalar. |
SDVariable |
SDNN.softmaxDerivative(SDVariable x,
SDVariable wrt,
int dimension)
Softmax derivative function
|
SDVariable |
SDNN.softmaxDerivative(String name,
SDVariable x,
SDVariable wrt,
int dimension)
Softmax derivative function
|
SDVariable |
SDNN.softplus(SDVariable x)
Element-wise softplus function: out = log(exp(x) + 1)
|
SDVariable |
SDNN.softplus(String name,
SDVariable x)
Element-wise softplus function: out = log(exp(x) + 1)
|
SDVariable |
SDNN.softsign(SDVariable x)
Element-wise softsign function: out = x / (abs(x) + 1)
|
SDVariable |
SDNN.softsign(String name,
SDVariable x)
Element-wise softsign function: out = x / (abs(x) + 1)
|
SDVariable |
SDNN.softsignDerivative(SDVariable x)
Element-wise derivative (dOut/dIn) of the softsign function softsign(INDArray)
|
SDVariable |
SDNN.softsignDerivative(String name,
SDVariable x)
Element-wise derivative (dOut/dIn) of the softsign function softsign(INDArray)
|
SDVariable |
SDLinalg.solve(SDVariable matrix,
SDVariable rhs)
Solver for systems of linear equations.
|
SDVariable |
SDLinalg.solve(SDVariable matrix,
SDVariable rhs,
boolean adjoint)
Solver for systems of linear equations.
|
SDVariable |
SDLinalg.solve(String name,
SDVariable matrix,
SDVariable rhs)
Solver for systems of linear equations.
|
SDVariable |
SDLinalg.solve(String name,
SDVariable matrix,
SDVariable rhs,
boolean adjoint)
Solver for systems of linear equations.
|
SDVariable |
SDCNN.spaceToBatch(SDVariable x,
int[] blocks,
int[] paddingTop,
int... paddingBottom)
Convolution 2d layer space to batch operation on 4d input.
Increases input batch dimension by rearranging data from spatial dimensions into batch dimension |
SDVariable |
SDCNN.spaceToBatch(String name,
SDVariable x,
int[] blocks,
int[] paddingTop,
int... paddingBottom)
Convolution 2d layer space to batch operation on 4d input.
Increases input batch dimension by rearranging data from spatial dimensions into batch dimension |
SDVariable |
SDCNN.spaceToDepth(SDVariable x,
int blockSize,
DataFormat dataFormat)
Convolution 2d layer space to depth operation on 4d input.
Increases input channels (and reduces spatial dimensions) by rearranging data into a larger channels dimension Example: if input has shape [mb, 2, 4, 4] and block size is 2, then output shape is [mb, 2*(2*2), 4/2, 4/2] = [mb, 8, 2, 2] |
SDVariable |
SDCNN.spaceToDepth(String name,
SDVariable x,
int blockSize,
DataFormat dataFormat)
Convolution 2d layer space to depth operation on 4d input.
Increases input channels (and reduces spatial dimensions) by rearranging data into a larger channels dimension Example: if input has shape [mb, 2, 4, 4] and block size is 2, then output shape is [mb, 2*(2*2), 4/2, 4/2] = [mb, 8, 2, 2] |
SDVariable |
SDLoss.sparseSoftmaxCrossEntropy(SDVariable logits,
SDVariable labels)
As per softmaxCrossEntropy(String, SDVariable, SDVariable, LossReduce) but the labels variable
is represented as an integer array instead of the equivalent one-hot array. i.e., if logits are rank N, then labels have rank N-1 |
SDVariable |
SDLoss.sparseSoftmaxCrossEntropy(String name,
SDVariable logits,
SDVariable labels)
As per softmaxCrossEntropy(String, SDVariable, SDVariable, LossReduce) but the labels variable
is represented as an integer array instead of the equivalent one-hot array. i.e., if logits are rank N, then labels have rank N-1 |
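The relationship described above, an integer class label standing in for a one-hot row, can be illustrated by computing the per-example loss -log(softmax(logits)[label]) in plain Java. `SparseSoftmaxXent` is a hypothetical illustration, not the SDLoss implementation:

```java
// Sparse softmax cross-entropy for a single example: the label is an integer
// class index rather than a one-hot vector. Uses the max-subtraction trick
// for numerical stability.
class SparseSoftmaxXent {
    static double loss(double[] logits, int label) {
        double max = Double.NEGATIVE_INFINITY;
        for (double l : logits) max = Math.max(max, l);
        double sum = 0.0;
        for (double l : logits) sum += Math.exp(l - max);
        // -log softmax at the true class index
        return -((logits[label] - max) - Math.log(sum));
    }
}
```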
SDVariable[] |
SDBaseOps.split(SDVariable input,
int numSplit,
int splitDim)
Split a value into a list of ndarrays.
|
SDVariable[] |
SDBaseOps.split(String[] names,
SDVariable input,
int numSplit,
int splitDim)
Split a value into a list of ndarrays.
|
SDVariable |
SDMath.sqrt(SDVariable x)
Element-wise square root function: out = sqrt(x)
|
SDVariable |
SDMath.sqrt(String name,
SDVariable x)
Element-wise square root function: out = sqrt(x)
|
SDVariable |
SDMath.square(SDVariable x)
Element-wise square function: out = x^2
|
SDVariable |
SDMath.square(String name,
SDVariable x)
Element-wise square function: out = x^2
|
SDVariable |
SDMath.squaredDifference(SDVariable x,
SDVariable y)
Pairwise squared difference operation.
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.squaredDifference(String name,
SDVariable x,
SDVariable y)
Pairwise squared difference operation.
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
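The NumPy-style broadcast rule referenced by these pairwise ops can be sketched as a pure shape computation. `Broadcast` is a hypothetical helper that aligns shapes from the trailing dimension, treating size-1 dimensions as expandable:

```java
// Plain-Java sketch of the NumPy broadcast-shape rule: shapes are aligned at
// the trailing dimension, missing leading dimensions are treated as 1, and
// each pair of dimensions must be equal or contain a 1.
class Broadcast {
    static long[] broadcastShape(long[] x, long[] y) {
        int n = Math.max(x.length, y.length);
        long[] out = new long[n];
        for (int i = 0; i < n; i++) {
            long a = i < n - x.length ? 1 : x[i - (n - x.length)];
            long b = i < n - y.length ? 1 : y[i - (n - y.length)];
            if (a != b && a != 1 && b != 1)
                throw new IllegalArgumentException("shapes are not broadcastable");
            out[i] = Math.max(a, b);
        }
        return out;
    }
}
```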
SDVariable |
SDBaseOps.squaredNorm(SDVariable x,
boolean keepDims,
int... dimensions)
Squared L2 norm: see norm2(String, SDVariable, boolean, int...)
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.squaredNorm(SDVariable x,
int... dimensions)
Squared L2 norm: see norm2(String, SDVariable, boolean, int...)
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.squaredNorm(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Squared L2 norm: see norm2(String, SDVariable, boolean, int...)
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.squaredNorm(String name,
SDVariable x,
int... dimensions)
Squared L2 norm: see norm2(String, SDVariable, boolean, int...)
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.squeeze(SDVariable x,
int axis)
Remove a single dimension of size 1.
For example, if input has shape [a,b,1,c] then squeeze(input, 2) returns an array of shape [a,b,c] |
SDVariable |
SDBaseOps.squeeze(String name,
SDVariable x,
int axis)
Remove a single dimension of size 1.
For example, if input has shape [a,b,1,c] then squeeze(input, 2) returns an array of shape [a,b,c] |
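The squeeze example above amounts to removing one size-1 entry from the shape. A plain-Java sketch (hypothetical helper, not the ND4J implementation):

```java
// Remove a single size-1 dimension from a shape, matching the documented
// behavior: squeeze([a,b,1,c], 2) -> [a,b,c].
class SqueezeShape {
    static long[] squeeze(long[] shape, int axis) {
        if (shape[axis] != 1)
            throw new IllegalArgumentException("dimension at axis must have size 1");
        long[] out = new long[shape.length - 1];
        for (int i = 0, j = 0; i < shape.length; i++)
            if (i != axis) out[j++] = shape[i];
        return out;
    }
}
```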
SDVariable |
SDRNN.sru(SDVariable x,
SDVariable initialC,
SDVariable mask,
SRUWeights SRUWeights)
The SRU layer.
|
SDVariable |
SDRNN.sru(SDVariable x,
SDVariable initialC,
SRUWeights SRUWeights)
The SRU layer.
|
SDVariable |
SDRNN.sru(String name,
SDVariable x,
SDVariable initialC,
SDVariable mask,
SRUWeights SRUWeights)
The SRU layer.
|
SDVariable |
SDRNN.sru(String name,
SDVariable x,
SDVariable initialC,
SRUWeights SRUWeights)
The SRU layer.
|
SDVariable |
SDRNN.sruCell(SDVariable x,
SDVariable cLast,
SRUWeights SRUWeights)
The SRU layer.
|
SDVariable |
SDRNN.sruCell(String name,
SDVariable x,
SDVariable cLast,
SRUWeights SRUWeights)
The SRU layer.
|
SDVariable |
SDBaseOps.stack(int axis,
SDVariable... values)
Stack a set of N INDArrays of rank X into one rank X+1 variable.
If inputs have shape [a,b,c] then output has shape: axis = 0: [N,a,b,c] axis = 1: [a,N,b,c] axis = 2: [a,b,N,c] axis = 3: [a,b,c,N] see unstack(String[], SDVariable, int, int) |
SDVariable |
SDBaseOps.stack(String name,
int axis,
SDVariable... values)
Stack a set of N INDArrays of rank X into one rank X+1 variable.
If inputs have shape [a,b,c] then output has shape: axis = 0: [N,a,b,c] axis = 1: [a,N,b,c] axis = 2: [a,b,N,c] axis = 3: [a,b,c,N] see unstack(String[], SDVariable, int, int) |
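The axis-dependent output shapes listed for stack can be written as a single shape rule: insert N at position `axis`. `StackShape` below is a hypothetical sketch of that rule, not ND4J code:

```java
// Output shape of stacking N arrays of the given shape along `axis`:
// the new dimension of size N is inserted at index `axis`.
class StackShape {
    static long[] stacked(long[] inShape, int axis, int n) {
        long[] out = new long[inShape.length + 1];
        for (int i = 0, j = 0; i < out.length; i++)
            out[i] = (i == axis) ? n : inShape[j++];
        return out;
    }
}
```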
SDVariable |
SDBaseOps.standardDeviation(SDVariable x,
boolean biasCorrected,
boolean keepDims,
int... dimensions)
Standard deviation array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.standardDeviation(SDVariable x,
boolean biasCorrected,
int... dimensions)
Standard deviation array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.standardDeviation(String name,
SDVariable x,
boolean biasCorrected,
boolean keepDims,
int... dimensions)
Standard deviation array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.standardDeviation(String name,
SDVariable x,
boolean biasCorrected,
int... dimensions)
Standard deviation array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
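The keepDims note repeated throughout these reduction entries amounts to a simple shape rule: each reduced dimension either becomes size 1 (keepDims = true) or is dropped (keepDims = false). A plain-Java sketch of that rule (hypothetical helper, not the ND4J implementation):

```java
// Output shape of a reduction along `dims` (assumed distinct and in range).
// keepDims = true keeps reduced dimensions with size 1; false drops them.
class ReduceShape {
    static long[] reducedShape(long[] shape, int[] dims, boolean keepDims) {
        boolean[] reduce = new boolean[shape.length];
        for (int d : dims) reduce[d] = true;
        long[] out = new long[keepDims ? shape.length : shape.length - dims.length];
        int j = 0;
        for (int i = 0; i < shape.length; i++) {
            if (reduce[i]) {
                if (keepDims) out[j++] = 1;
            } else {
                out[j++] = shape[i];
            }
        }
        return out;
    }
}
```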
SDVariable |
SDMath.standardize(SDVariable x,
int... dimensions)
Standardize input variable along given axis
|
SDVariable |
SDMath.standardize(String name,
SDVariable x,
int... dimensions)
Standardize input variable along given axis
|
SDVariable |
SDMath.step(SDVariable x,
double value)
Elementwise step function:
out(x) = 1 if x >= cutoff out(x) = 0 otherwise |
SDVariable |
SDMath.step(String name,
SDVariable x,
double value)
Elementwise step function:
out(x) = 1 if x >= cutoff out(x) = 0 otherwise |
SDVariable |
SDBaseOps.stridedSlice(SDVariable in,
long[] begin,
long[] end,
long... strides)
Get a subset of the specified input, by specifying the first element, last element, and the strides.
For example, if input is: [a, b, c] [d, e, f] [g, h, i] then stridedSlice(input, begin=[0,1], end=[2,2], strides=[2,1], all masks = 0) will return: [b, c] [h, i] |
SDVariable |
SDBaseOps.stridedSlice(SDVariable in,
long[] begin,
long[] end,
long[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask)
Get a subset of the specified input, by specifying the first element, last element, and the strides.
For example, if input is: [a, b, c] [d, e, f] [g, h, i] then stridedSlice(input, begin=[0,1], end=[2,2], strides=[2,1], all masks = 0) will return: [b, c] [h, i] |
SDVariable |
SDBaseOps.stridedSlice(String name,
SDVariable in,
long[] begin,
long[] end,
long... strides)
Get a subset of the specified input, by specifying the first element, last element, and the strides.
For example, if input is: [a, b, c] [d, e, f] [g, h, i] then stridedSlice(input, begin=[0,1], end=[2,2], strides=[2,1], all masks = 0) will return: [b, c] [h, i] |
SDVariable |
SDBaseOps.stridedSlice(String name,
SDVariable in,
long[] begin,
long[] end,
long[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask)
Get a subset of the specified input, by specifying the first element, last element, and the strides.
For example, if input is: [a, b, c] [d, e, f] [g, h, i] then stridedSlice(input, begin=[0,1], end=[2,2], strides=[2,1], all masks = 0) will return: [b, c] [h, i] |
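The begin/end/strides mechanics above can be illustrated for the 2D case in plain Java. This sketch is simplified relative to the real op: it ignores all mask parameters and assumes positive strides with exclusive end indices:

```java
// Simplified 2D strided slice: take elements at begin + k*stride along each
// dimension, stopping before the (exclusive) end index. No mask support.
class StridedSlice2D {
    static int[][] slice(int[][] in, int[] begin, int[] end, int[] strides) {
        int rows = (end[0] - begin[0] + strides[0] - 1) / strides[0];
        int cols = (end[1] - begin[1] + strides[1] - 1) / strides[1];
        int[][] out = new int[rows][cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                out[i][j] = in[begin[0] + i * strides[0]][begin[1] + j * strides[1]];
        return out;
    }
}
```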
SDVariable |
SDMath.sub(SDVariable x,
double value)
Scalar subtraction operation, out = in - scalar
|
SDVariable |
SDMath.sub(SDVariable x,
SDVariable y)
Pairwise subtraction operation, out = x - y
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDMath.sub(String name,
SDVariable x,
double value)
Scalar subtraction operation, out = in - scalar
|
SDVariable |
SDMath.sub(String name,
SDVariable x,
SDVariable y)
Pairwise subtraction operation, out = x - y
Note: supports broadcasting if x and y have different shapes and are broadcastable. For example, if X has shape [1,10] and Y has shape [5,10] then op(X,Y) has output shape [5,10] Broadcast rules are the same as NumPy: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html |
SDVariable |
SDBaseOps.sum(SDVariable x,
boolean keepDims,
int... dimensions)
Sum array reduction operation, optionally along specified dimensions.
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.sum(SDVariable x,
int... dimensions)
Sum array reduction operation, optionally along specified dimensions.
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.sum(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Sum array reduction operation, optionally along specified dimensions.
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.sum(String name,
SDVariable x,
int... dimensions)
Sum array reduction operation, optionally along specified dimensions.
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDLinalg.svd(SDVariable input,
boolean fullUV,
boolean computeUV)
Calculates singular value decomposition.
|
SDVariable |
SDLinalg.svd(SDVariable input,
boolean fullUV,
boolean computeUV,
int switchNum)
Calculates singular value decomposition.
|
SDVariable |
SDLinalg.svd(String name,
SDVariable input,
boolean fullUV,
boolean computeUV)
Calculates singular value decomposition.
|
SDVariable |
SDLinalg.svd(String name,
SDVariable input,
boolean fullUV,
boolean computeUV,
int switchNum)
Calculates singular value decomposition.
|
SDVariable |
SDNN.swish(SDVariable x)
Element-wise "swish" function: out = x * sigmoid(b*x) with b=1.0
See: https://arxiv.org/abs/1710.05941 |
SDVariable |
SDNN.swish(String name,
SDVariable x)
Element-wise "swish" function: out = x * sigmoid(b*x) with b=1.0
See: https://arxiv.org/abs/1710.05941 |
SDVariable[] |
SDBaseOps.switchOp(SDVariable x,
SDVariable predicate)
Switch operation
Predicate - if false, values are output to left (first) branch/output; if true, to right (second) branch/output |
SDVariable[] |
SDBaseOps.switchOp(String[] names,
SDVariable x,
SDVariable predicate)
Switch operation
Predicate - if false, values are output to left (first) branch/output; if true, to right (second) branch/output |
SDVariable |
SDMath.tan(SDVariable x)
Elementwise tangent operation: out = tan(x)
|
SDVariable |
SDMath.tan(String name,
SDVariable x)
Elementwise tangent operation: out = tan(x)
|
SDVariable |
SDMath.tanh(SDVariable x)
Elementwise tanh (hyperbolic tangent) operation: out = tanh(x)
|
SDVariable |
SDNN.tanh(SDVariable x)
Elementwise tanh (hyperbolic tangent) operation: out = tanh(x)
|
SDVariable |
SDMath.tanh(String name,
SDVariable x)
Elementwise tanh (hyperbolic tangent) operation: out = tanh(x)
|
SDVariable |
SDNN.tanh(String name,
SDVariable x)
Elementwise tanh (hyperbolic tangent) operation: out = tanh(x)
|
SDVariable |
SDBaseOps.tensorMmul(SDVariable x,
SDVariable y,
int[] dimensionsX,
int... dimensionsY)
Tensor contraction of x and y over the specified dimensions (a generalization of matrix multiplication, comparable to numpy.tensordot).
|
SDVariable |
SDBaseOps.tensorMmul(SDVariable x,
SDVariable y,
int[] dimensionsX,
int[] dimensionsY,
boolean transposeX,
boolean transposeY,
boolean transposeZ)
Tensor contraction of x and y over the specified dimensions (a generalization of matrix multiplication, comparable to numpy.tensordot).
|
SDVariable |
SDBaseOps.tensorMmul(String name,
SDVariable x,
SDVariable y,
int[] dimensionsX,
int... dimensionsY)
Tensor contraction of x and y over the specified dimensions (a generalization of matrix multiplication, comparable to numpy.tensordot).
|
SDVariable |
SDBaseOps.tensorMmul(String name,
SDVariable x,
SDVariable y,
int[] dimensionsX,
int[] dimensionsY,
boolean transposeX,
boolean transposeY,
boolean transposeZ)
Tensor contraction of x and y over the specified dimensions (a generalization of matrix multiplication, comparable to numpy.tensordot).
|
SDVariable |
SDBaseOps.tile(SDVariable x,
int... repeat)
see tile(String, SDVariable, int...)
|
SDVariable |
SDBaseOps.tile(SDVariable x,
SDVariable repeat)
Repeat (tile) the input tensor the specified number of times.
For example, if input is [1, 2] [3, 4] and repeat is [2, 3] then output is [1, 2, 1, 2, 1, 2] [3, 4, 3, 4, 3, 4] [1, 2, 1, 2, 1, 2] [3, 4, 3, 4, 3, 4] |
SDVariable |
SDBaseOps.tile(String name,
SDVariable x,
int... repeat)
see tile(String, SDVariable, int...)
|
SDVariable |
SDBaseOps.tile(String name,
SDVariable x,
SDVariable repeat)
Repeat (tile) the input tensor the specified number of times.
For example, if input is [1, 2] [3, 4] and repeat is [2, 3] then output is [1, 2, 1, 2, 1, 2] [3, 4, 3, 4, 3, 4] [1, 2, 1, 2, 1, 2] [3, 4, 3, 4, 3, 4] |
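The tiling example above can be reproduced for the 2D case with a short plain-Java sketch (hypothetical helper, not the ND4J implementation):

```java
// Tile a 2D array rh times along rows and rw times along columns by
// wrapping indices back into the original array.
class Tile2D {
    static int[][] tile(int[][] in, int rh, int rw) {
        int h = in.length, w = in[0].length;
        int[][] out = new int[h * rh][w * rw];
        for (int i = 0; i < h * rh; i++)
            for (int j = 0; j < w * rw; j++)
                out[i][j] = in[i % h][j % w];
        return out;
    }
}
```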
SDVariable |
SDMath.trace(SDVariable in)
Matrix trace operation
For rank 2 matrices, the output is a scalar with the trace - i.e., sum of the main diagonal. For higher rank inputs, output[a,b,c] = trace(in[a,b,c,:,:]) |
SDVariable |
SDMath.trace(String name,
SDVariable in)
Matrix trace operation
For rank 2 matrices, the output is a scalar vith the trace - i.e., sum of the main diagonal. For higher rank inputs, output[a,b,c] = trace(in[a,b,c,:,:]) |
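The rank-2 case of the trace operation described above is just the sum of the main diagonal; a minimal plain-Java sketch:

```java
// Trace of a square matrix: sum of the main-diagonal entries.
class Trace {
    static double trace(double[][] m) {
        double s = 0.0;
        for (int i = 0; i < m.length; i++) s += m[i][i];
        return s;
    }
}
```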
SDVariable |
SDBaseOps.transpose(SDVariable x)
Matrix transpose operation: If input has shape [a,b] output has shape [b,a]
|
SDVariable |
SDBaseOps.transpose(String name,
SDVariable x)
Matrix transpose operation: If input has shape [a,b] output has shape [b,a]
|
SDVariable |
SDLinalg.triangularSolve(SDVariable matrix,
SDVariable rhs,
boolean lower,
boolean adjoint)
Solver for systems of linear equations.
|
SDVariable |
SDLinalg.triangularSolve(String name,
SDVariable matrix,
SDVariable rhs,
boolean lower,
boolean adjoint)
Solver for systems of linear equations.
|
SDVariable |
SDLinalg.triu(SDVariable input)
Upper triangle of an array.
|
SDVariable |
SDLinalg.triu(SDVariable input,
int diag)
Upper triangle of an array.
|
SDVariable |
SDLinalg.triu(String name,
SDVariable input)
Upper triangle of an array.
|
SDVariable |
SDLinalg.triu(String name,
SDVariable input,
int diag)
Upper triangle of an array.
|
SDVariable |
SDBaseOps.unsortedSegmentMax(SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment max operation.
|
SDVariable |
SDBaseOps.unsortedSegmentMax(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment max operation.
|
SDVariable |
SDBaseOps.unsortedSegmentMean(SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment mean operation.
|
SDVariable |
SDBaseOps.unsortedSegmentMean(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment mean operation.
|
SDVariable |
SDBaseOps.unsortedSegmentMin(SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment min operation.
|
SDVariable |
SDBaseOps.unsortedSegmentMin(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment min operation.
|
SDVariable |
SDBaseOps.unsortedSegmentProd(SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment product operation.
|
SDVariable |
SDBaseOps.unsortedSegmentProd(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment product operation.
|
SDVariable |
SDBaseOps.unsortedSegmentSqrtN(SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment sqrtN operation.
|
SDVariable |
SDBaseOps.unsortedSegmentSqrtN(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment sqrtN operation.
|
SDVariable |
SDBaseOps.unsortedSegmentSum(SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment sum operation.
|
SDVariable |
SDBaseOps.unsortedSegmentSum(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment sum operation.
|
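The unsorted segment reductions listed above all follow the same pattern: each element of `data` is accumulated into the output slot named by the corresponding entry of `segmentIds`, with no ordering requirement on the ids. A plain-Java sketch for the sum case (hypothetical helper, not the ND4J implementation):

```java
// Unsorted segment sum over a 1D array: out[segmentIds[i]] += data[i].
// Segment ids need not be sorted or contiguous; empty segments stay 0.
class SegmentSum {
    static double[] unsortedSegmentSum(double[] data, int[] segmentIds, int numSegments) {
        double[] out = new double[numSegments];
        for (int i = 0; i < data.length; i++)
            out[segmentIds[i]] += data[i];
        return out;
    }
}
```

The max, min, mean, and prod variants differ only in the accumulation step and initial value.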
SDVariable[] |
SDBaseOps.unstack(SDVariable value,
int axis,
int num)
Unstack a variable of rank X into N rank X-1 variables by taking slices along the specified axis.
If input has shape [a,b,c] then output has shape: axis = 0: [b,c] axis = 1: [a,c] axis = 2: [a,b] |
SDVariable[] |
SDBaseOps.unstack(String[] names,
SDVariable value,
int axis,
int num)
Unstack a variable of rank X into N rank X-1 variables by taking slices along the specified axis.
If input has shape [a,b,c] then output has shape: axis = 0: [b,c] axis = 1: [a,c] axis = 2: [a,b] |
SDVariable |
SDCNN.upsampling2d(SDVariable input,
int scale)
Upsampling layer for 2D inputs.
scale is used for both height and width dimensions. |
SDVariable |
SDCNN.upsampling2d(SDVariable input,
int scaleH,
int scaleW,
boolean nchw)
2D Convolution layer operation - Upsampling 2d
|
SDVariable |
SDCNN.upsampling2d(String name,
SDVariable input,
int scale)
Upsampling layer for 2D inputs.
scale is used for both height and width dimensions. |
SDVariable |
SDCNN.upsampling2d(String name,
SDVariable input,
int scaleH,
int scaleW,
boolean nchw)
2D Convolution layer operation - Upsampling 2d
|
SDVariable |
SDCNN.upsampling3d(SDVariable input,
boolean ncdhw,
int scaleD,
int scaleH,
int scaleW)
3D Convolution layer operation - Upsampling 3d
|
SDVariable |
SDCNN.upsampling3d(String name,
SDVariable input,
boolean ncdhw,
int scaleD,
int scaleH,
int scaleW)
3D Convolution layer operation - Upsampling 3d
|
protected static void |
SDValidation.validateBool(String opName,
SDVariable v)
Validate that the operation is being applied on a boolean type SDVariable
|
protected static void |
SDValidation.validateBool(String opName,
SDVariable v1,
SDVariable v2)
Validate that the operation is being applied on boolean SDVariables
|
protected static void |
SDValidation.validateBool(String opName,
String inputName,
SDVariable v)
Validate that the operation is being applied on a boolean type SDVariable
|
protected static void |
SDValidation.validateFloatingPoint(String opName,
SDVariable v)
Validate that the operation is being applied on a floating point type SDVariable
|
protected static void |
SDValidation.validateFloatingPoint(String opName,
String inputName,
SDVariable v)
Validate that the operation is being applied on a floating point type SDVariable
|
protected static void |
SDValidation.validateInteger(String opName,
SDVariable v)
Validate that the operation is being applied on an integer type SDVariable
|
protected static void |
SDValidation.validateInteger(String opName,
String inputName,
SDVariable v)
Validate that the operation is being applied on an integer type SDVariable
|
protected static void |
SDValidation.validateInteger(String opName,
String inputName,
SDVariable[] vars) |
protected static void |
SDValidation.validateNumerical(String opName,
SDVariable v)
Validate that the operation is being applied on a numerical SDVariable (not boolean or utf8).
|
protected static void |
SDValidation.validateNumerical(String opName,
SDVariable v1,
SDVariable v2)
Validate that the operation is being applied on numerical SDVariables (not boolean or utf8).
|
protected static void |
SDValidation.validateNumerical(String opName,
String inputName,
SDVariable v)
Validate that the operation is being applied on a numerical SDVariable (not boolean or utf8).
|
protected static void |
SDValidation.validateNumerical(String opName,
String inputName,
SDVariable[] vars) |
protected static void |
SDValidation.validateSameType(String opName,
boolean numericalOnly,
SDVariable... vars)
Validate that the operation is being applied on arrays with exactly the same data types (which may
optionally be restricted to numerical SDVariables only, i.e. not boolean or utf8)
|
SDVariable |
SDBaseOps.variance(SDVariable x,
boolean biasCorrected,
boolean keepDims,
int... dimensions)
Variance array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.variance(SDVariable x,
boolean biasCorrected,
int... dimensions)
Variance array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.variance(String name,
SDVariable x,
boolean biasCorrected,
boolean keepDims,
int... dimensions)
Variance array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.variance(String name,
SDVariable x,
boolean biasCorrected,
int... dimensions)
Variance array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDLoss.weightedCrossEntropyWithLogits(SDVariable targets,
SDVariable inputs,
SDVariable weights)
Weighted cross entropy loss with logits
|
SDVariable |
SDLoss.weightedCrossEntropyWithLogits(String name,
SDVariable targets,
SDVariable inputs,
SDVariable weights)
Weighted cross entropy loss with logits
|
SDVariable |
SDBitwise.xor(SDVariable x,
SDVariable y)
Bitwise XOR operation (exclusive OR).
|
SDVariable |
SDMath.xor(SDVariable x,
SDVariable y)
Boolean XOR (exclusive OR) operation: elementwise (x != 0) XOR (y != 0)
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBitwise.xor(String name,
SDVariable x,
SDVariable y)
Bitwise XOR operation (exclusive OR).
|
SDVariable |
SDMath.xor(String name,
SDVariable x,
SDVariable y)
Boolean XOR (exclusive OR) operation: elementwise (x != 0) XOR (y != 0)
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDImage.yiqToRgb(SDVariable input)
Convert an image from YIQ to RGB format
|
SDVariable |
SDImage.yiqToRgb(String name,
SDVariable input)
Convert an image from YIQ to RGB format
|
SDVariable |
SDImage.yuvToRgb(SDVariable input)
Convert an image from YUV to RGB format
|
SDVariable |
SDImage.yuvToRgb(String name,
SDVariable input)
Convert an image from YUV to RGB format
|
SDVariable |
SDMath.zeroFraction(SDVariable input)
Full-array zero-fraction reduction operation: out = (count(x == 0) / length(x))
|
SDVariable |
SDMath.zeroFraction(String name,
SDVariable input)
Full-array zero-fraction reduction operation: out = (count(x == 0) / length(x))
|
SDVariable |
SDBaseOps.zerosLike(SDVariable input)
Return a variable of all 0s, with the same shape as the input variable.
|
SDVariable |
SDBaseOps.zerosLike(String name,
SDVariable input)
Return a variable of all 0s, with the same shape as the input variable.
|
| Modifier and Type | Method and Description |
|---|---|
static int |
FlatBuffersMapper.asFlatNode(@NonNull SameDiff sameDiff,
@NonNull DifferentialFunction node,
@NonNull com.google.flatbuffers.FlatBufferBuilder bufferBuilder,
List<SDVariable> variables,
Map<String,Integer> reverseMap,
Map<String,Integer> forwardMap,
Map<String,Integer> framesMap,
AtomicInteger idCounter,
Integer id) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
SubGraph.inputs() |
List<SDVariable> |
SubGraph.outputs() |
List<SDVariable> |
SubGraphProcessor.processSubgraph(SameDiff sd,
SubGraph subGraph)
Replace the subgraph, and return the new outputs that should replace the old outputs.
Note that the order of the outputs you return matters! If the original outputs are [A,B,C] and you return output variables [X,Y,Z], then anywhere "A" was used as input will now use "X"; similarly Y replaces B, and Z replaces C. |
| Modifier and Type | Method and Description |
|---|---|
static SDVariable |
SameDiffUtils.reductionBroadcastableWithOrigShape(int origRank,
int[] reduceDims,
SDVariable toExpand)
Add 1s to the array's shape as required to make it broadcastable with the original (pre-reduce) array.
|
static SDVariable |
SameDiffUtils.reductionBroadcastableWithOrigShape(SDVariable origInput,
SDVariable axis,
SDVariable toExpand) |
static SDVariable |
SameDiffUtils.reductionShape(SDVariable shape,
SDVariable axis,
boolean keepDim) |
| Modifier and Type | Method and Description |
|---|---|
static ExternalErrorsFunction |
SameDiffUtils.externalErrors(SameDiff sameDiff,
Map<String,INDArray> externalGradients,
SDVariable... inputs) |
static ExternalErrorsFunction |
SameDiffUtils.externalErrors(SameDiff sameDiff,
SDVariable[] inputs) |
static SDVariable |
SameDiffUtils.reductionBroadcastableWithOrigShape(int origRank,
int[] reduceDims,
SDVariable toExpand)
Add 1s to the array's shape as required to make it broadcastable with the original (pre-reduce) array.
|
static SDVariable |
SameDiffUtils.reductionBroadcastableWithOrigShape(SDVariable origInput,
SDVariable axis,
SDVariable toExpand) |
static SDVariable |
SameDiffUtils.reductionShape(SDVariable shape,
SDVariable axis,
boolean keepDim) |
static void |
SameDiffUtils.validateDifferentialFunctionSameDiff(SameDiff sameDiff,
SDVariable function,
DifferentialFunction op) |
| Modifier and Type | Method and Description |
|---|---|
TestCase |
TestCase.expected(SDVariable var,
Function<INDArray,String> validationFn) |
TestCase |
TestCase.expected(@NonNull SDVariable var,
@NonNull INDArray output)
Validate the output (forward pass) for a single variable using INDArray.equals(INDArray)
|
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
OpImportOverride.initFromTensorFlow(List<SDVariable> inputs,
List<SDVariable> controlDepInputs,
NODE_TYPE nodeDef,
SameDiff initWith,
Map<String,ATTR_TYPE> attributesForNode,
GRAPH_TYPE graph)
Initialize the operation and return its output variables
|
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
OpImportOverride.initFromTensorFlow(List<SDVariable> inputs,
List<SDVariable> controlDepInputs,
NODE_TYPE nodeDef,
SameDiff initWith,
Map<String,ATTR_TYPE> attributesForNode,
GRAPH_TYPE graph)
Initialize the operation and return its output variables
|
| Modifier and Type | Method and Description |
|---|---|
SDVariable |
Activation.asSameDiff(SameDiff sd,
SDVariable input)
Get the Activation as a SameDiff variable
|
SDVariable |
Activation.asSameDiff(String variableName,
SameDiff sd,
SDVariable input)
Get the Activation as a SameDiff variable
|
| Modifier and Type | Method and Description |
|---|---|
SDVariable |
Activation.asSameDiff(SameDiff sd,
SDVariable input)
Get the Activation as a SameDiff variable
|
SDVariable |
Activation.asSameDiff(String variableName,
SameDiff sd,
SDVariable input)
Get the Activation as a SameDiff variable
|
| Modifier and Type | Field and Description |
|---|---|
protected SDVariable[] |
DynamicCustomOp.outputVariables |
| Modifier and Type | Method and Description |
|---|---|
SDVariable[] |
DynamicCustomOp.outputVariables() |
SDVariable[] |
BaseOp.outputVariables(String baseName) |
SDVariable[] |
DynamicCustomOp.outputVariables(String baseName) |
protected static SDVariable[] |
DynamicCustomOp.wrapOrNull(SDVariable in) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
DynamicCustomOp.doDiff(List<SDVariable> f1) |
List<SDVariable> |
NoOp.doDiff(List<SDVariable> f1) |
| Modifier and Type | Method and Description |
|---|---|
protected static SDVariable[] |
DynamicCustomOp.wrapOrNull(SDVariable in) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
DynamicCustomOp.doDiff(List<SDVariable> f1) |
List<SDVariable> |
NoOp.doDiff(List<SDVariable> f1) |
| Constructor and Description |
|---|
BaseBroadcastBoolOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BaseBroadcastBoolOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BaseBroadcastBoolOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BaseBroadcastBoolOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BaseBroadcastBoolOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BaseBroadcastBoolOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BaseBroadcastOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BaseBroadcastOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BaseBroadcastOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BaseBroadcastOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BaseBroadcastOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BaseBroadcastOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BaseIndexAccumulation(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
BaseIndexAccumulation(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
boolean keepDims,
int[] dimensions) |
BaseReduceBoolOp(SameDiff sameDiff,
SDVariable input,
int... dimensions) |
BaseReduceBoolOp(SameDiff sameDiff,
SDVariable input,
int[] dimensions,
boolean keepDims) |
BaseReduceBoolOp(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
BaseReduceFloatOp(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
BaseReduceFloatOp(SameDiff sameDiff,
SDVariable input,
int... dimensions) |
BaseReduceFloatOp(SameDiff sameDiff,
SDVariable input,
int[] dimensions,
boolean keepDims) |
BaseReduceFloatOp(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
BaseReduceLongOp(SameDiff sameDiff,
SDVariable input,
int... dimensions) |
BaseReduceLongOp(SameDiff sameDiff,
SDVariable input,
int[] dimensions,
boolean keepDims) |
BaseReduceLongOp(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
BaseReduceOp(SameDiff sameDiff,
SDVariable i_v) |
BaseReduceOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
BaseReduceOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions,
boolean keepDims) |
BaseReduceOp(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
BaseReduceOp(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions,
boolean keepDims) |
BaseReduceSameOp(SameDiff sameDiff,
SDVariable input,
int... dimensions) |
BaseReduceSameOp(SameDiff sameDiff,
SDVariable input,
int[] dimensions,
boolean keepDims) |
BaseReduceSameOp(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
BaseScalarBoolOp(SameDiff sameDiff,
SDVariable i_v,
Number scalar) |
BaseScalarBoolOp(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace) |
BaseScalarBoolOp(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace,
Object[] extraArgs) |
BaseScalarBoolOp(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
Object[] extraArgs) |
BaseScalarOp(SameDiff sameDiff,
SDVariable i_v,
Number scalar) |
BaseScalarOp(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace) |
BaseScalarOp(SameDiff sameDiff,
@NonNull SDVariable i_v,
Number scalar,
boolean inPlace,
Object[] extraArgs) |
BaseScalarOp(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
Object[] extraArgs) |
BaseTransformAnyOp(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
BaseTransformAnyOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
Object[] extraArgs) |
BaseTransformAnyOp(SameDiff sameDiff,
SDVariable i_v,
Object[] extraArgs) |
BaseTransformAnyOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
BaseTransformAnyOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
BaseTransformAnyOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
Object[] extraArgs) |
BaseTransformBoolOp(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
BaseTransformBoolOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
Object[] extraArgs) |
BaseTransformBoolOp(SameDiff sameDiff,
SDVariable i_v,
Object[] extraArgs) |
BaseTransformBoolOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
BaseTransformBoolOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
BaseTransformBoolOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
Object[] extraArgs) |
BaseTransformFloatOp(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
BaseTransformFloatOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
Object[] extraArgs) |
BaseTransformFloatOp(SameDiff sameDiff,
SDVariable i_v,
Object[] extraArgs) |
BaseTransformOp(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
BaseTransformOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
Object[] extraArgs) |
BaseTransformOp(SameDiff sameDiff,
SDVariable i_v,
Object[] extraArgs) |
BaseTransformOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
BaseTransformOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
BaseTransformOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
Object[] extraArgs) |
BaseTransformSameOp(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
BaseTransformSameOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
Object[] extraArgs) |
BaseTransformSameOp(SameDiff sameDiff,
SDVariable i_v,
Object[] extraArgs) |
BaseTransformSameOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
BaseTransformSameOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
BaseTransformSameOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
Object[] extraArgs) |
BaseTransformStrictOp(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
BaseTransformStrictOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
Object[] extraArgs) |
BaseTransformStrictOp(SameDiff sameDiff,
SDVariable i_v,
Object[] extraArgs) |
BaseTransformStrictOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
BaseTransformStrictOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
DynamicCustomOp(SameDiff sameDiff,
SDVariable arg) |
DynamicCustomOp(SameDiff sameDiff,
SDVariable[] args) |
DynamicCustomOp(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
DynamicCustomOp(String opName,
SameDiff sameDiff,
SDVariable[] args) |
DynamicCustomOp(String opName,
SameDiff sameDiff,
SDVariable[] args,
boolean inPlace)
Initialize this op for
SameDiff execution.
Any extra int or float arguments for operations
must be added to the respective TArguments
or IArguments lists upon construction. |
NoOp(SameDiff sd,
SDVariable in) |
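The DynamicCustomOp note above describes a convention rather than an ad-hoc parameter list: extra integer and floating-point op arguments are accumulated into dedicated IArguments and TArguments lists when the op is constructed. A minimal stdlib-only sketch of that pattern (the class and method names here mirror the convention for illustration; they are not the real ND4J classes):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the IArguments/TArguments convention described
// for DynamicCustomOp: integer and floating-point op arguments are
// collected into dedicated lists at construction time, not passed ad hoc.
public class CustomOpSketch {
    private final List<Long> iArguments = new ArrayList<>();   // integer args
    private final List<Double> tArguments = new ArrayList<>(); // float args

    public CustomOpSketch addIArgument(long... args) {
        for (long a : args) iArguments.add(a);
        return this;
    }

    public CustomOpSketch addTArgument(double... args) {
        for (double a : args) tArguments.add(a);
        return this;
    }

    public List<Long> iArgs() { return iArguments; }
    public List<Double> tArgs() { return tArguments; }

    public static void main(String[] args) {
        // A hypothetical op taking one int dimension and one float epsilon.
        CustomOpSketch op = new CustomOpSketch()
                .addIArgument(1)
                .addTArgument(1e-5);
        System.out.println(op.iArgs() + " " + op.tArgs());
    }
}
```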
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
Triu.doDiff(List<SDVariable> f1) |
| Constructor and Description |
|---|
AdjustContrast(@NonNull SameDiff sameDiff,
@NonNull SDVariable in,
double factor) |
AdjustContrast(@NonNull SameDiff sameDiff,
@NonNull SDVariable in,
@NonNull SDVariable factor) |
AdjustHue(@NonNull SameDiff sameDiff,
@NonNull SDVariable in,
double factor) |
AdjustHue(@NonNull SameDiff sameDiff,
@NonNull SDVariable in,
@NonNull SDVariable factor) |
AdjustSaturation(@NonNull SameDiff sameDiff,
@NonNull SDVariable in,
double factor) |
AdjustSaturation(@NonNull SameDiff sameDiff,
@NonNull SDVariable in,
@NonNull SDVariable factor) |
BetaInc(@NonNull SameDiff sameDiff,
@NonNull SDVariable a,
@NonNull SDVariable b,
@NonNull SDVariable x) |
BitCast(SameDiff sameDiff,
SDVariable in,
SDVariable dataType) |
CompareAndBitpack(SameDiff sameDiff,
SDVariable threshold) |
Digamma(@NonNull SameDiff sameDiff,
@NonNull SDVariable x) |
DivideNoNan(SameDiff sameDiff,
SDVariable in1,
SDVariable in2) |
DrawBoundingBoxes(SameDiff sameDiff,
SDVariable boxes,
SDVariable colors) |
FakeQuantWithMinMaxVarsPerChannel(SameDiff sameDiff,
SDVariable x,
SDVariable min,
SDVariable max,
int num_bits,
boolean narrow) |
Flatten(SameDiff sameDiff,
char order,
SDVariable... inputs) |
FusedBatchNorm(@NonNull SameDiff sameDiff,
@NonNull SDVariable x,
@NonNull SDVariable scale,
@NonNull SDVariable offset,
int dataFormat,
int isTraining) |
FusedBatchNorm(@NonNull SameDiff sameDiff,
@NonNull SDVariable x,
@NonNull SDVariable scale,
@NonNull SDVariable offset,
@NonNull SDVariable dataFormat,
@NonNull SDVariable isTraining) |
HsvToRgb(SameDiff sameDiff,
SDVariable input) |
Igamma(@NonNull SameDiff sameDiff,
@NonNull SDVariable n,
@NonNull SDVariable x) |
Igammac(@NonNull SameDiff sameDiff,
@NonNull SDVariable n,
@NonNull SDVariable x) |
Lgamma(@NonNull SameDiff sameDiff,
@NonNull SDVariable x) |
LinearSolve(SameDiff sameDiff,
SDVariable a,
SDVariable b,
boolean adjoint) |
LinearSolve(SameDiff sameDiff,
SDVariable a,
SDVariable b,
SDVariable adjoint) |
Logdet(SameDiff sameDiff,
SDVariable input) |
Lstsq(@NonNull SameDiff sameDiff,
@NonNull SDVariable matrix,
@NonNull SDVariable rhs,
double l2_regularizer,
boolean fast) |
Lu(SameDiff sameDiff,
SDVariable input) |
MatrixBandPart(@NonNull SameDiff sameDiff,
@NonNull SDVariable input,
int minLower,
int maxUpper) |
MatrixBandPart(@NonNull SameDiff sameDiff,
@NonNull SDVariable input,
SDVariable minLower,
SDVariable maxUpper) |
Polygamma(@NonNull SameDiff sameDiff,
@NonNull SDVariable n,
@NonNull SDVariable x) |
RandomCrop(@NonNull SameDiff sameDiff,
@NonNull SDVariable input,
@NonNull SDVariable shape) |
RgbToGrayscale(SameDiff sameDiff,
SDVariable image) |
RgbToHsv(SameDiff sameDiff,
SDVariable input) |
RgbToYiq(SameDiff sameDiff,
SDVariable input) |
RgbToYuv(SameDiff sameDiff,
SDVariable input) |
Roll(@NonNull SameDiff sameDiff,
@NonNull SDVariable input,
int shift) |
Roll(@NonNull SameDiff sameDiff,
@NonNull SDVariable input,
@NonNull SDVariable shift) |
Roll(@NonNull SameDiff sameDiff,
@NonNull SDVariable input,
@NonNull SDVariable shift,
@NonNull SDVariable axes) |
ToggleBits(@NonNull SameDiff sameDiff,
@NonNull SDVariable input) |
TriangularSolve(SameDiff sameDiff,
SDVariable matrix,
SDVariable rhs,
boolean lower,
boolean adjoint) |
TriangularSolve(SameDiff sameDiff,
SDVariable matrix,
SDVariable rhs,
SDVariable lower,
SDVariable adjoint) |
Triu(SameDiff sameDiff,
SDVariable in) |
Triu(SameDiff sameDiff,
SDVariable in,
int diag) |
TriuBp(SameDiff sameDiff,
SDVariable in,
SDVariable grad) |
TriuBp(SameDiff sameDiff,
SDVariable in,
SDVariable grad,
int diag) |
YiqToRgb(SameDiff sameDiff,
SDVariable input) |
YuvToRgb(SameDiff sameDiff,
SDVariable input) |
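Several of the ops listed above are simple masking operations. Triu(in, diag), for example, keeps the upper-triangular part of a matrix at or above the given diagonal offset and zeroes the rest (TriuBp is its backprop counterpart). A plain-Java sketch of that semantics, shown on a double[][] rather than an SDVariable and assumed to follow the usual triu convention:

```java
// Sketch of Triu semantics: zero out entries below the `diag` offset.
// diag = 0 keeps the main diagonal and above; diag = 1 zeroes it too.
public class TriuSketch {
    static double[][] triu(double[][] m, int diag) {
        double[][] out = new double[m.length][];
        for (int i = 0; i < m.length; i++) {
            out[i] = m[i].clone();
            for (int j = 0; j < m[i].length; j++) {
                if (j - i < diag) out[i][j] = 0.0; // below the kept band
            }
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] m = {{1, 2}, {3, 4}};
        double[][] u = triu(m, 0);
        System.out.println(u[1][0]); // element below the main diagonal is zeroed
    }
}
```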
| Constructor and Description |
|---|
BiasAdd(SameDiff sameDiff,
SDVariable input,
SDVariable bias,
boolean nchw) |
BiasAddGrad(SameDiff sameDiff,
SDVariable input,
SDVariable bias,
SDVariable gradient,
boolean nchw) |
BroadcastAddOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastAddOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastAddOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastAMax(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastAMax(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastAMax(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastAMax(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastAMax(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastAMax(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastAMin(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastAMin(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastAMin(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastAMin(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastAMin(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastAMin(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastCopyOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastCopyOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastCopyOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastCopyOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastCopyOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastCopyOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastDivOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastDivOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastDivOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastDivOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastDivOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastDivOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastGradientArgs(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastGradientArgs(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastGradientArgs(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastMax(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastMax(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastMax(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastMax(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastMax(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastMax(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastMin(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastMin(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastMin(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastMin(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastMin(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastMin(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastMulOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastMulOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastMulOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastMulOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastMulOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastMulOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastRDivOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastRDivOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastRDivOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastRDivOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastRDivOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastRDivOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastRSubOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastRSubOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastRSubOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastRSubOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastRSubOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastRSubOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastSubOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastSubOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastSubOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastSubOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastSubOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastSubOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastTo(SameDiff sameDiff,
SDVariable input,
SDVariable shape) |
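The Broadcast*Op family above pairs a full-rank array with a lower-rank operand and a dimension argument describing how the smaller operand is repeated. A stdlib-only sketch of the broadcast-add case for a matrix and a row vector (illustrative only; the real ops handle arbitrary ranks and dimension sets):

```java
public class BroadcastSketch {
    // Add vector v to every row of matrix m, i.e. broadcast v along
    // dimension 0 so it pairs with each row of m.
    static double[][] broadcastAddRows(double[][] m, double[] v) {
        double[][] out = new double[m.length][v.length];
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < v.length; j++)
                out[i][j] = m[i][j] + v[j];
        return out;
    }

    public static void main(String[] args) {
        double[][] m = {{1, 2}, {3, 4}};
        double[] v = {10, 20};
        double[][] r = broadcastAddRows(m, v);
        System.out.println(r[0][0] + " " + r[1][1]); // 11.0 24.0
    }
}
```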
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
BroadcastEqualTo.doDiff(List<SDVariable> f1) |
List<SDVariable> |
BroadcastGreaterThan.doDiff(List<SDVariable> f1) |
List<SDVariable> |
BroadcastGreaterThanOrEqual.doDiff(List<SDVariable> f1) |
List<SDVariable> |
BroadcastLessThan.doDiff(List<SDVariable> f1) |
List<SDVariable> |
BroadcastLessThanOrEqual.doDiff(List<SDVariable> f1) |
List<SDVariable> |
BroadcastNotEqual.doDiff(List<SDVariable> f1) |
| Constructor and Description |
|---|
BroadcastEqualTo(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastEqualTo(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastEqualTo(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastGreaterThan(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastGreaterThan(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastGreaterThan(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastGreaterThan(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastGreaterThan(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastGreaterThan(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastGreaterThanOrEqual(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastGreaterThanOrEqual(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastGreaterThanOrEqual(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastGreaterThanOrEqual(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastGreaterThanOrEqual(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastGreaterThanOrEqual(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastLessThan(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastLessThan(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastLessThan(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastLessThan(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastLessThan(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastLessThan(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastLessThanOrEqual(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastLessThanOrEqual(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastLessThanOrEqual(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastLessThanOrEqual(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastLessThanOrEqual(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastLessThanOrEqual(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastNotEqual(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastNotEqual(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastNotEqual(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastNotEqual(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastNotEqual(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastNotEqual(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
| Constructor and Description |
|---|
Select(SameDiff sameDiff,
SDVariable[] args) |
Select(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
Where(SameDiff sameDiff,
SDVariable[] args) |
Where(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
WhereNumpy(SameDiff sameDiff,
SDVariable[] args) |
WhereNumpy(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
| Modifier and Type | Method and Description |
|---|---|
SDVariable[] |
Enter.outputVariables() |
SDVariable[] |
Exit.outputVariables() |
SDVariable[] |
LoopCond.outputVariables() |
SDVariable[] |
Merge.outputVariables() |
SDVariable[] |
NextIteration.outputVariables() |
SDVariable[] |
Switch.outputVariables() |
SDVariable[] |
While.outputVariables() |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
StopGradient.doDiff(List<SDVariable> gradients) |
| Constructor and Description |
|---|
BaseCompatOp(SameDiff sameDiff,
SDVariable[] inputs) |
Enter(SameDiff sameDiff,
SDVariable[] inputs) |
Enter(SameDiff sameDiff,
String frameName,
SDVariable input) |
Enter(SameDiff sameDiff,
String frameName,
SDVariable input,
boolean isConstant) |
Exit(SameDiff sameDiff,
SDVariable x) |
Merge(SameDiff sd,
SDVariable... inputs) |
NextIteration(SameDiff sameDiff,
SDVariable x) |
StopGradient(SameDiff sd,
SDVariable in) |
Switch(SameDiff sameDiff,
SDVariable input,
SDVariable predicate) |
While(SameDiff sameDiff,
SDVariable[] inputs) |
While(SameDiff sameDiff,
String frameName,
SDVariable input) |
While(SameDiff sameDiff,
String frameName,
SDVariable input,
boolean isConstant) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
FreeGridOp.doDiff(List<SDVariable> f1) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
CropAndResize.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ExtractImagePatches.doDiff(List<SDVariable> f1) |
List<SDVariable> |
NonMaxSuppression.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
NonMaxSuppressionV3.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
NonMaxSuppressionWithOverlaps.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
ResizeArea.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ResizeBilinear.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ResizeNearestNeighbor.doDiff(List<SDVariable> f1) |
| Constructor and Description |
|---|
CropAndResize(@NonNull SameDiff sameDiff,
@NonNull SDVariable image,
@NonNull SDVariable cropBoxes,
@NonNull SDVariable boxIndices,
@NonNull SDVariable cropOutSize,
@NonNull CropAndResize.Method method,
double extrapolationValue) |
CropAndResize(@NonNull SameDiff sameDiff,
SDVariable image,
SDVariable cropBoxes,
SDVariable boxIndices,
SDVariable cropOutSize,
double extrapolationValue) |
ExtractImagePatches(@NonNull SameDiff samediff,
@NonNull SDVariable input,
@NonNull int[] kSizes,
@NonNull int[] strides,
@NonNull int[] rates,
boolean sameMode) |
ExtractImagePatches(@NonNull SameDiff samediff,
@NonNull SDVariable input,
int kH,
int kW,
int sH,
int sW,
int rH,
int rW,
boolean sameMode) |
ImageResize(@NonNull SameDiff sameDiff,
@NonNull SDVariable in,
@NonNull SDVariable size,
boolean preserveAspectRatio,
boolean antialias,
ImageResizeMethod method) |
NonMaxSuppression(SameDiff sameDiff,
SDVariable boxes,
SDVariable scores,
int maxOutSize,
double iouThreshold,
double scoreThreshold) |
NonMaxSuppression(SameDiff sameDiff,
@NonNull SDVariable boxes,
@NonNull SDVariable scores,
@NonNull SDVariable maxOutSize,
@NonNull SDVariable iouThreshold,
@NonNull SDVariable scoreThreshold) |
NonMaxSuppressionV3(SameDiff sameDiff,
@NonNull SDVariable boxes,
@NonNull SDVariable scores,
@NonNull SDVariable maxOutSize,
@NonNull SDVariable iouThreshold,
@NonNull SDVariable scoreThreshold) |
NonMaxSuppressionWithOverlaps(SameDiff sameDiff,
SDVariable boxes,
SDVariable scores,
int maxOutSize,
double iouThreshold,
double scoreThreshold) |
NonMaxSuppressionWithOverlaps(SameDiff sameDiff,
@NonNull SDVariable boxes,
@NonNull SDVariable scores,
@NonNull SDVariable maxOutSize,
@NonNull SDVariable iouThreshold,
@NonNull SDVariable scoreThreshold) |
ResizeArea(@NonNull SameDiff sd,
@NonNull SDVariable image,
int height,
int width,
boolean alignCorners) |
ResizeBicubic(@NonNull SameDiff sameDiff,
@NonNull SDVariable image,
SDVariable size,
boolean alignCorners,
boolean alignPixelCenters) |
ResizeBilinear(@NonNull SameDiff sd,
@NonNull SDVariable input,
int height,
int width,
boolean alignCorners,
boolean halfPixelCenters) |
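The NonMaxSuppression constructors above take boxes, per-box scores, a maximum output count, an IoU threshold, and a score threshold — the inputs of the standard greedy NMS algorithm: discard low-scoring boxes, then keep boxes in score order, suppressing any box whose overlap with an already-kept box exceeds the IoU threshold. A self-contained sketch of that algorithm (a generic implementation, not ND4J's internal one):

```java
import java.util.ArrayList;
import java.util.List;

public class NmsSketch {
    // boxes[i] = {x1, y1, x2, y2}; greedy NMS keeping up to maxOutSize boxes.
    static List<Integer> nms(double[][] boxes, double[] scores,
                             int maxOutSize, double iouThreshold,
                             double scoreThreshold) {
        List<Integer> order = new ArrayList<>();
        for (int i = 0; i < scores.length; i++)
            if (scores[i] >= scoreThreshold) order.add(i); // score filter
        order.sort((a, b) -> Double.compare(scores[b], scores[a]));

        List<Integer> keep = new ArrayList<>();
        for (int idx : order) {
            boolean suppressed = false;
            for (int k : keep)
                if (iou(boxes[idx], boxes[k]) > iouThreshold) {
                    suppressed = true; // overlaps a better box too much
                    break;
                }
            if (!suppressed) keep.add(idx);
            if (keep.size() == maxOutSize) break;
        }
        return keep;
    }

    // Intersection-over-union of two axis-aligned boxes.
    static double iou(double[] a, double[] b) {
        double ix = Math.max(0, Math.min(a[2], b[2]) - Math.max(a[0], b[0]));
        double iy = Math.max(0, Math.min(a[3], b[3]) - Math.max(a[1], b[1]));
        double inter = ix * iy;
        double areaA = (a[2] - a[0]) * (a[3] - a[1]);
        double areaB = (b[2] - b[0]) * (b[3] - b[1]);
        return inter / (areaA + areaB - inter);
    }

    public static void main(String[] args) {
        double[][] boxes = {{0, 0, 2, 2}, {0, 0, 2, 1.9}, {5, 5, 6, 6}};
        double[] scores = {0.9, 0.8, 0.7};
        // Box 1 heavily overlaps box 0 and is suppressed; box 2 is kept.
        System.out.println(nms(boxes, scores, 10, 0.5, 0.0)); // [0, 2]
    }
}
```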
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
FirstIndex.doDiff(List<SDVariable> f1) |
List<SDVariable> |
LastIndex.doDiff(List<SDVariable> f1) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
FirstIndex.doDiff(List<SDVariable> f1) |
List<SDVariable> |
LastIndex.doDiff(List<SDVariable> f1) |
| Constructor and Description |
|---|
FirstIndex(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
Condition condition,
int... dimensions) |
FirstIndex(SameDiff sameDiff,
SDVariable i_v,
Condition condition,
boolean keepDims,
int... dimensions) |
LastIndex(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
Condition condition,
int... dimensions) |
LastIndex(SameDiff sameDiff,
SDVariable i_v,
Condition condition,
boolean keepDims,
int... dimensions) |
LastIndex(SameDiff sameDiff,
SDVariable x,
@NonNull Condition condition,
int... dimensions) |
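FirstIndex and LastIndex scan along the given dimensions and return the first (or last) position whose element satisfies the supplied Condition. A one-dimensional sketch of that semantics, using a plain predicate in place of ND4J's Condition type:

```java
import java.util.function.DoublePredicate;

public class IndexAccumSketch {
    // First index whose element matches the condition, or -1 if none.
    static int firstIndex(double[] a, DoublePredicate cond) {
        for (int i = 0; i < a.length; i++)
            if (cond.test(a[i])) return i;
        return -1;
    }

    // Last index whose element matches the condition, or -1 if none.
    static int lastIndex(double[] a, DoublePredicate cond) {
        for (int i = a.length - 1; i >= 0; i--)
            if (cond.test(a[i])) return i;
        return -1;
    }

    public static void main(String[] args) {
        double[] a = {0, 3, 0, 5};
        System.out.println(firstIndex(a, x -> x > 0)); // 1
        System.out.println(lastIndex(a, x -> x > 0));  // 3
    }
}
```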
| Constructor and Description |
|---|
ArgAmax(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
ArgAmin(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
ArgMax(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
ArgMin(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
| Modifier and Type | Method and Description |
|---|---|
SDVariable[] |
ExternalErrorsFunction.outputVariables(String baseName) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
ExternalErrorsFunction.doDiff(List<SDVariable> f1) |
| Constructor and Description |
|---|
ExternalErrorsFunction(SameDiff sd,
List<SDVariable> inputs,
Map<String,INDArray> gradients) |
| Constructor and Description |
|---|
AvgPooling2D(SameDiff sameDiff,
SDVariable input,
Pooling2DConfig config) |
AvgPooling3D(SameDiff sameDiff,
SDVariable input,
Pooling3DConfig config) |
BatchNorm(SameDiff sameDiff,
SDVariable[] inputFunctions,
INDArray[] inputArrays,
INDArray[] outputArrays,
boolean inPlace,
boolean applyGamma,
boolean applyBeta,
double epsilon,
int[] axis) |
BatchNorm(SameDiff sameDiff,
SDVariable input,
SDVariable mean,
SDVariable variance,
SDVariable gamma,
SDVariable beta,
double epsilon,
int[] axis) |
BatchNormDerivative(SameDiff sameDiff,
SDVariable[] inputFunctions,
INDArray[] inputArrays,
INDArray[] outputArrays,
boolean inPlace,
boolean applyGamma,
boolean applyBeta,
double epsilon,
int[] axis) |
Col2Im(SameDiff sameDiff,
SDVariable[] inputFunctions,
INDArray[] inputArrays,
INDArray[] outputs,
Conv2DConfig conv2DConfig) |
Col2Im(@NonNull SameDiff sd,
@NonNull SDVariable input,
@NonNull Conv2DConfig config) |
Conv1D(SameDiff sameDiff,
SDVariable[] inputFunctions,
Conv1DConfig config) |
Conv1D(@NonNull SameDiff sameDiff,
@NonNull SDVariable input,
@NonNull SDVariable weights,
SDVariable bias,
@NonNull Conv1DConfig conv1DConfig) |
Conv1DDerivative(@NonNull SameDiff sameDiff,
@NonNull SDVariable[] inputs,
@NonNull Conv1DConfig config) |
Conv1DDerivative(@NonNull SameDiff sd,
@NonNull SDVariable input,
@NonNull SDVariable weights,
SDVariable bias,
SDVariable gradOut,
@NonNull Conv1DConfig config) |
Conv2D(SameDiff sameDiff,
SDVariable[] inputFunctions,
Conv2DConfig config) |
Conv2D(@NonNull SameDiff sameDiff,
@NonNull SDVariable input,
@NonNull SDVariable weights,
SDVariable bias,
@NonNull Conv2DConfig conv2DConfig) |
Conv2DDerivative(SameDiff sameDiff,
SDVariable[] inputFunctions,
Conv2DConfig config) |
Conv3D(SameDiff sameDiff,
SDVariable[] inputFunctions,
Conv3DConfig config) |
Conv3D(@NonNull SameDiff sameDiff,
@NonNull SDVariable input,
@NonNull SDVariable weights,
SDVariable bias,
@NonNull Conv3DConfig config) |
Conv3DDerivative(SameDiff sameDiff,
SDVariable[] inputFunctions,
Conv3DConfig conv3DConfig) |
DeConv2D(SameDiff sameDiff,
SDVariable[] inputs,
DeConv2DConfig config) |
DeConv2D(@NonNull SameDiff sameDiff,
@NonNull SDVariable input,
@NonNull SDVariable weights,
SDVariable bias,
DeConv2DConfig config) |
DeConv2DDerivative(SameDiff sameDiff,
SDVariable[] inputs,
DeConv2DConfig config) |
DeConv2DTF(SameDiff sameDiff,
SDVariable[] inputs,
DeConv2DConfig config) |
DeConv3D(SameDiff sameDiff,
@NonNull SDVariable input,
@NonNull SDVariable weights,
@NonNull DeConv3DConfig config) |
DeConv3D(SameDiff sameDiff,
@NonNull SDVariable input,
@NonNull SDVariable weights,
SDVariable bias,
@NonNull DeConv3DConfig config) |
DeConv3DDerivative(SameDiff sameDiff,
@NonNull SDVariable input,
@NonNull SDVariable weights,
SDVariable bias,
SDVariable grad,
DeConv3DConfig config) |
DeConv3DTF(@NonNull SameDiff sameDiff,
@NonNull SDVariable shape,
@NonNull SDVariable weights,
@NonNull SDVariable input,
@NonNull DeConv3DConfig config) |
DepthToSpace(SameDiff sameDiff,
SDVariable[] args,
int blockSize,
DataFormat dataFormat) |
DepthToSpace(SameDiff sameDiff,
SDVariable args,
int blockSize,
DataFormat dataFormat) |
DepthwiseConv2D(SameDiff sameDiff,
SDVariable[] inputFunctions,
Conv2DConfig config) |
DepthwiseConv2D(@NonNull SameDiff sameDiff,
@NonNull SDVariable input,
@NonNull SDVariable weights,
SDVariable bias,
@NonNull Conv2DConfig conv2DConfig) |
DepthwiseConv2DBp(@NonNull SameDiff sameDiff,
@NonNull SDVariable input,
@NonNull SDVariable weights,
@NonNull SDVariable gradO,
@NonNull Conv2DConfig config) |
DepthwiseConv2DBp(@NonNull SameDiff sameDiff,
@NonNull SDVariable input,
@NonNull SDVariable weights,
SDVariable bias,
@NonNull SDVariable gradO,
@NonNull Conv2DConfig config) |
Im2col(SameDiff sameDiff,
SDVariable[] inputFunctions,
INDArray[] inputArrays,
INDArray[] outputs,
Conv2DConfig conv2DConfig) |
Im2col(SameDiff sd,
SDVariable input,
Conv2DConfig config) |
Im2colBp(SameDiff sd,
SDVariable input,
Conv2DConfig config) |
Im2colBp(SameDiff sameDiff,
SDVariable i2cInput,
SDVariable gradAtOutput,
Conv2DConfig conv2DConfig) |
LocalResponseNormalization(SameDiff sameDiff,
SDVariable[] inputFunctions,
boolean inPlace,
LocalResponseNormalizationConfig config) |
LocalResponseNormalization(SameDiff sameDiff,
SDVariable input,
LocalResponseNormalizationConfig config) |
LocalResponseNormalizationDerivative(SameDiff sameDiff,
SDVariable[] inputFunctions,
boolean inPlace,
LocalResponseNormalizationConfig config) |
MaxPooling2D(SameDiff sameDiff,
SDVariable input,
Pooling2DConfig config) |
MaxPooling3D(SameDiff sameDiff,
SDVariable input,
Pooling3DConfig config) |
MaxPoolWithArgmax(SameDiff sameDiff,
SDVariable input,
Pooling2DConfig config) |
Pooling2D(SameDiff sameDiff,
SDVariable[] inputs,
Pooling2DConfig config) |
Pooling2DDerivative(SameDiff sameDiff,
SDVariable[] inputs,
Pooling2DConfig config) |
Pooling3D(SameDiff sameDiff,
SDVariable[] inputs,
INDArray[] inputArrays,
INDArray[] outputs,
boolean inPlace,
Pooling3DConfig pooling3DConfig,
Pooling3D.Pooling3DType type) |
Pooling3DDerivative(SameDiff sameDiff,
SDVariable[] inputs,
INDArray[] inputArrays,
INDArray[] outputs,
boolean inPlace,
Pooling3DConfig pooling3DConfig,
Pooling3D.Pooling3DType type) |
SConv2D(SameDiff sameDiff,
SDVariable[] inputFunctions,
Conv2DConfig conv2DConfig) |
SConv2D(@NonNull SameDiff sameDiff,
@NonNull SDVariable layerInput,
@NonNull SDVariable depthWeights,
SDVariable pointWeights,
SDVariable bias,
@NonNull Conv2DConfig conv2DConfig) |
SConv2DDerivative(SameDiff sameDiff,
SDVariable[] inputFunctions,
Conv2DConfig conv2DConfig) |
SpaceToDepth(SameDiff sameDiff,
SDVariable[] args,
int blockSize,
DataFormat dataFormat) |
SpaceToDepth(SameDiff sameDiff,
SDVariable x,
int blockSize,
DataFormat dataFormat) |
Upsampling2d(SameDiff sameDiff,
SDVariable input,
boolean nchw,
int scaleH,
int scaleW) |
Upsampling2d(SameDiff sameDiff,
SDVariable input,
int scale) |
Upsampling2d(SameDiff sameDiff,
SDVariable input,
int scaleH,
int scaleW,
boolean nchw) |
Upsampling2dDerivative(SameDiff sameDiff,
SDVariable input,
SDVariable gradient,
boolean nchw,
int scaleH,
int scaleW) |
Upsampling3d(SameDiff sameDiff,
SDVariable input,
boolean ncdhw,
int scaleD,
int scaleH,
int scaleW) |
Upsampling3dBp(SameDiff sameDiff,
SDVariable input,
SDVariable grad0,
boolean ncdhw) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
GRU.doDiff(List<SDVariable> grads) |
List<SDVariable> |
GRUCell.doDiff(List<SDVariable> grads) |
List<SDVariable> |
LSTMBlock.doDiff(List<SDVariable> grads) |
List<SDVariable> |
LSTMBlockCell.doDiff(List<SDVariable> grads) |
List<SDVariable> |
LSTMLayer.doDiff(List<SDVariable> grads) |
| Constructor and Description |
|---|
GRU(@NonNull SameDiff sameDiff,
@NonNull SDVariable x,
@NonNull SDVariable hI,
@NonNull SDVariable Wx,
@NonNull SDVariable Wh,
@NonNull SDVariable biases) |
GRUBp(@NonNull SameDiff sameDiff,
@NonNull SDVariable x,
@NonNull SDVariable hI,
@NonNull SDVariable Wx,
@NonNull SDVariable Wh,
@NonNull SDVariable biases,
@NonNull SDVariable dLdh) |
GRUCell(SameDiff sameDiff,
SDVariable x,
SDVariable hLast,
GRUWeights weights) |
LSTMBlock(@NonNull SameDiff sameDiff,
SDVariable maxTSLength,
SDVariable x,
SDVariable cLast,
SDVariable yLast,
LSTMWeights weights,
LSTMConfiguration configuration) |
LSTMBlockCell(SameDiff sameDiff,
SDVariable x,
SDVariable cLast,
SDVariable yLast,
LSTMWeights weights,
LSTMConfiguration configuration) |
LSTMLayer(@NonNull SameDiff sameDiff,
SDVariable x,
SDVariable cLast,
SDVariable yLast,
SDVariable maxTSLength,
LSTMLayerWeights weights,
LSTMLayerConfig configuration) |
LSTMLayerBp(@NonNull SameDiff sameDiff,
@NonNull SDVariable x,
SDVariable cLast,
SDVariable yLast,
SDVariable maxTSLength,
@NonNull LSTMLayerWeights weights,
@NonNull LSTMLayerConfig configuration,
SDVariable dLdh,
SDVariable dLdhL,
SDVariable dLdcL) |
SRU(@NonNull SameDiff sameDiff,
@NonNull SDVariable x,
@NonNull SDVariable initialC,
SDVariable mask,
@NonNull SRUWeights weights) |
SRUCell(SameDiff sameDiff,
SDVariable x,
SDVariable cLast,
SRUWeights weights) |
| Modifier and Type | Method and Description |
|---|---|
SDVariable[] |
GRUCellConfiguration.args() |
SDVariable[] |
LSTMCellConfiguration.args() |
| Modifier and Type | Method and Description |
|---|---|
SDVariable |
LSTMLayerOutputs.getLastOutput() |
SDVariable |
SRULayerOutputs.getLastOutput()
Get y, the output of the cell, for the last time step.
|
SDVariable |
LSTMLayerOutputs.getLastState() |
SDVariable |
SRULayerOutputs.getLastState()
Get c, the state of the cell, for the last time step.
|
SDVariable |
GRUCellOutputs.getOutput()
Get h, the output of the cell.
|
SDVariable |
LSTMCellOutputs.getOutput()
Get y, the output of the cell.
|
SDVariable |
LSTMLayerOutputs.getOutput()
Get h, the output of the cell for all time steps.
|
SDVariable |
SRUCellOutputs.getOutput()
Get h, the output of the cell.
|
SDVariable |
SRULayerOutputs.getOutput()
Get h, the output of the cell.
|
SDVariable |
LSTMCellOutputs.getState()
Get c, the cell's state.
|
SDVariable |
SRUCellOutputs.getState()
Get c, the state of the cell.
|
SDVariable |
SRULayerOutputs.getState()
Get c, the state of the cell.
|
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
GRUCellOutputs.getAllOutputs()
Get all outputs returned by the cell.
|
List<SDVariable> |
LSTMCellOutputs.getAllOutputs()
Get all outputs returned by the cell.
|
List<SDVariable> |
SRUCellOutputs.getAllOutputs()
Get all outputs returned by the cell.
|
List<SDVariable> |
SRULayerOutputs.getAllOutputs()
Get all outputs returned by the cell.
|
| Constructor and Description |
|---|
GRUCellOutputs(SDVariable[] outputs) |
LSTMCellOutputs(SDVariable[] outputs) |
LSTMLayerOutputs(SDVariable[] outputs,
LSTMLayerConfig lstmLayerConfig) |
SRUCellOutputs(SDVariable[] outputs) |
SRULayerOutputs(SDVariable[] outputs) |
| Modifier and Type | Method and Description |
|---|---|
SDVariable[] |
GRUWeights.args() |
SDVariable[] |
LSTMLayerWeights.args() |
SDVariable[] |
LSTMWeights.args() |
abstract SDVariable[] |
RNNWeights.args() |
SDVariable[] |
SRUWeights.args() |
SDVariable[] |
LSTMLayerWeights.argsWithInputs(SDVariable... inputs) |
SDVariable[] |
RNNWeights.argsWithInputs(SDVariable... inputs) |
| Modifier and Type | Method and Description |
|---|---|
protected static SDVariable |
BaseLoss.getWeights(SameDiff sd,
SDVariable weights,
SDVariable predictions) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
AbsoluteDifferenceLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
CosineDistanceLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
CtcLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
HingeLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
HuberLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
L2Loss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
LogLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
LogPoissonLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
MeanPairwiseSquaredErrorLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
MeanSquaredErrorLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
SigmoidCrossEntropyLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
SoftmaxCrossEntropyLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
SoftmaxCrossEntropyWithLogitsLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
SparseSoftmaxCrossEntropyLossWithLogits.doDiff(List<SDVariable> grad) |
| Constructor and Description |
|---|
AbsoluteDifferenceLoss(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable predictions,
SDVariable weights,
SDVariable labels) |
AbsoluteDifferenceLoss(SameDiff sameDiff,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce) |
BaseLoss(@NonNull SameDiff sameDiff,
@NonNull LossReduce lossReduce,
@NonNull SDVariable predictions,
SDVariable weights,
@NonNull SDVariable labels) |
CosineDistanceLoss(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable predictions,
SDVariable weights,
SDVariable labels,
int dimension) |
CosineDistanceLoss(SameDiff sameDiff,
SDVariable labels,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
int dimension) |
CtcLoss(SameDiff sameDiff,
SDVariable targetLabels,
SDVariable logitInputs,
SDVariable targetLabelLengths,
SDVariable logitInputLengths) |
HingeLoss(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable predictions,
SDVariable weights,
SDVariable labels) |
HingeLoss(SameDiff sameDiff,
SDVariable labels,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce) |
HuberLoss(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable predictions,
SDVariable weights,
SDVariable labels,
double delta) |
HuberLoss(SameDiff sameDiff,
SDVariable labels,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double delta) |
L2Loss(SameDiff sameDiff,
SDVariable var) |
LogLoss(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable predictions,
SDVariable weights,
SDVariable labels,
double epsilon) |
LogLoss(SameDiff sameDiff,
SDVariable labels,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double epsilon) |
LogPoissonLoss(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable predictions,
SDVariable weights,
SDVariable labels) |
LogPoissonLoss(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable predictions,
SDVariable weights,
SDVariable labels,
boolean full) |
LogPoissonLoss(SameDiff sameDiff,
SDVariable labels,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
boolean full) |
MeanPairwiseSquaredErrorLoss(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable predictions,
SDVariable weights,
SDVariable labels) |
MeanPairwiseSquaredErrorLoss(SameDiff sameDiff,
SDVariable labels,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce) |
MeanSquaredErrorLoss(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable predictions,
SDVariable weights,
SDVariable labels) |
MeanSquaredErrorLoss(SameDiff sameDiff,
SDVariable labels,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce) |
SigmoidCrossEntropyLoss(SameDiff sameDiff,
LossReduce reductionMode,
SDVariable logits,
SDVariable weights,
SDVariable labels) |
SigmoidCrossEntropyLoss(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable logits,
SDVariable weights,
SDVariable labels,
double labelSmoothing) |
SigmoidCrossEntropyLoss(SameDiff sameDiff,
SDVariable labels,
SDVariable logits,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing) |
SoftmaxCrossEntropyLoss(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable logits,
SDVariable weights,
SDVariable labels) |
SoftmaxCrossEntropyLoss(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable logits,
SDVariable weights,
SDVariable labels,
double labelSmoothing) |
SoftmaxCrossEntropyLoss(SameDiff sameDiff,
SDVariable labels,
SDVariable logits,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing) |
SoftmaxCrossEntropyWithLogitsLoss(SameDiff sameDiff,
SDVariable logits,
SDVariable labels,
int classesDim) |
SparseSoftmaxCrossEntropyLossWithLogits(@NonNull SameDiff sameDiff,
@NonNull SDVariable logits,
@NonNull SDVariable labels) |
WeightedCrossEntropyLoss(SameDiff sameDiff,
SDVariable targets,
SDVariable inputs,
SDVariable weights) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
AbsoluteDifferenceLossBp.doDiff(List<SDVariable> grad) |
List<SDVariable> |
BaseLossBp.doDiff(List<SDVariable> grad) |
List<SDVariable> |
CtcLossBp.doDiff(List<SDVariable> grad) |
List<SDVariable> |
MeanPairwiseSquaredErrorLossBp.doDiff(List<SDVariable> grad) |
List<SDVariable> |
SoftmaxCrossEntropyWithLogitsLossBp.doDiff(List<SDVariable> grad) |
List<SDVariable> |
SparseSoftmaxCrossEntropyLossWithLogitsBp.doDiff(List<SDVariable> grad) |
| Constructor and Description |
|---|
AbsoluteDifferenceLossBp(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable predictions,
SDVariable weights,
SDVariable labels) |
BaseLossBp(@NonNull SameDiff sameDiff,
@NonNull LossReduce lossReduce,
@NonNull SDVariable predictions,
@NonNull SDVariable weights,
@NonNull SDVariable labels) |
CosineDistanceLossBp(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable predictions,
SDVariable weights,
SDVariable labels,
int dimension) |
CtcLossBp(SameDiff sameDiff,
SDVariable targetLabels,
SDVariable logitInputs,
SDVariable targetLabelLengths,
SDVariable logitInputLengths) |
HingeLossBp(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable predictions,
SDVariable weights,
SDVariable labels) |
HuberLossBp(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable predictions,
SDVariable weights,
SDVariable labels,
double delta) |
LogLossBp(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable predictions,
SDVariable weights,
SDVariable labels,
double epsilon) |
LogPoissonLossBp(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable predictions,
SDVariable weights,
SDVariable labels) |
LogPoissonLossBp(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable predictions,
SDVariable weights,
SDVariable labels,
boolean full) |
MeanPairwiseSquaredErrorLossBp(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable predictions,
SDVariable weights,
SDVariable labels) |
MeanSquaredErrorLossBp(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable predictions,
SDVariable weights,
SDVariable labels) |
SigmoidCrossEntropyLossBp(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable logits,
SDVariable weights,
SDVariable labels,
double labelSmoothing) |
SoftmaxCrossEntropyLossBp(SameDiff sameDiff,
LossReduce lossReduce,
SDVariable logits,
SDVariable weights,
SDVariable labels,
double labelSmoothing) |
SoftmaxCrossEntropyWithLogitsLossBp(SameDiff sameDiff,
SDVariable logits,
SDVariable labels,
int classesDim) |
SparseSoftmaxCrossEntropyLossWithLogitsBp(SameDiff sameDiff,
SDVariable logits,
SDVariable labels) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
InvertedPredicateMetaOp.doDiff(List<SDVariable> f1) |
List<SDVariable> |
PostulateMetaOp.doDiff(List<SDVariable> f1) |
List<SDVariable> |
PredicateMetaOp.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ReduceMetaOp.doDiff(List<SDVariable> f1) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
Mmul.doDiff(List<SDVariable> gradients) |
List<SDVariable> |
MmulBp.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
Moments.doDiff(List<SDVariable> grad) |
List<SDVariable> |
SufficientStatistics.doDiff(List<SDVariable> grad) |
List<SDVariable> |
TensorMmul.doDiff(List<SDVariable> gradients) |
List<SDVariable> |
TensorMmulBp.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
ZeroFraction.doDiff(List<SDVariable> grad) |
| Constructor and Description |
|---|
Mmul(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
Mmul(SameDiff sameDiff,
SDVariable x,
SDVariable y,
boolean transposeX,
boolean transposeY,
boolean transposeZ) |
Mmul(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
MMulTranspose mt) |
MmulBp(SameDiff sameDiff,
SDVariable x,
SDVariable y,
SDVariable eps) |
MmulBp(SameDiff sameDiff,
SDVariable x,
SDVariable y,
SDVariable eps,
MMulTranspose mt) |
Moments(SameDiff sameDiff,
SDVariable input) |
Moments(SameDiff sameDiff,
SDVariable input,
int[] axes) |
NormalizeMoments(SameDiff sameDiff,
SDVariable counts,
SDVariable means,
SDVariable variances) |
NormalizeMoments(SameDiff sameDiff,
SDVariable counts,
SDVariable means,
SDVariable variances,
double shift) |
SufficientStatistics(SameDiff sameDiff,
@NonNull SDVariable x,
@NonNull SDVariable axis,
SDVariable shift) |
TensorMmul(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[][] dimensions) |
TensorMmul(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[][] dimensions,
MMulTranspose mMulTranspose) |
TensorMmul(SameDiff sameDiff,
SDVariable x,
SDVariable y,
int[] dimensionsX,
int[] dimensionsY,
boolean transposeX,
boolean transposeY,
boolean transposeZ) |
TensorMmulBp(SameDiff samediff,
SDVariable x,
SDVariable y,
SDVariable gradAtOutput,
int[][] axes) |
TensorMmulBp(SameDiff samediff,
SDVariable x,
SDVariable y,
SDVariable gradAtOutput,
int[] axesX,
int[] axesY) |
ZeroFraction(SameDiff sameDiff,
SDVariable input) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
All.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Any.doDiff(List<SDVariable> f1) |
List<SDVariable> |
IsInf.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
IsNaN.doDiff(List<SDVariable> i_v) |
| Constructor and Description |
|---|
All(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
Any(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
IsInf(SameDiff sameDiff,
SDVariable i_v,
int[] dims) |
IsInf(SameDiff sameDiff,
SDVariable i_v,
int[] dims,
boolean keepDims) |
IsNaN(SameDiff sameDiff,
SDVariable i_v,
int[] dims) |
IsNaN(SameDiff sameDiff,
SDVariable i_v,
int[] dims,
boolean keepDims) |
| Constructor and Description |
|---|
BaseReductionBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
BaseReductionBp(SameDiff sameDiff,
SDVariable origInput1,
SDVariable origInput2,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
CumProdBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean exclusive,
boolean reverse,
int... axis) |
CumSumBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean exclusive,
boolean reverse,
int... axis) |
DotBp(SameDiff sameDiff,
SDVariable origInput1,
SDVariable origInput2,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
MaxBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
MeanBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
MinBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
Norm1Bp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
Norm2Bp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
NormMaxBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
PowBp(SameDiff sameDiff,
SDVariable x,
SDVariable y,
SDVariable dLdz) |
ProdBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
SquaredNormBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
StandardDeviationBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean biasCorrected,
boolean keepDims,
int... dimensions) |
SumBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
VarianceBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean biasCorrected,
boolean keepDims,
int... dimensions) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
BatchMmul.doDiff(List<SDVariable> grads) |
List<SDVariable> |
LogSumExp.doDiff(List<SDVariable> f1) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
BatchMmul.doDiff(List<SDVariable> grads) |
List<SDVariable> |
LogSumExp.doDiff(List<SDVariable> f1) |
| Constructor and Description |
|---|
BatchMmul(SameDiff sameDiff,
SDVariable[] matrices,
boolean transposeA,
boolean transposeB) |
BatchMmul(SameDiff sameDiff,
SDVariable[] matricesA,
SDVariable[] matricesB,
boolean transposeA,
boolean transposeB) |
LogSumExp(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
LogSumExp(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
AMean.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Entropy.doDiff(List<SDVariable> f1) |
List<SDVariable> |
LogEntropy.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Mean.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
Norm1.doDiff(List<SDVariable> grad) |
List<SDVariable> |
Norm2.doDiff(List<SDVariable> grad) |
List<SDVariable> |
NormMax.doDiff(List<SDVariable> grad) |
List<SDVariable> |
ShannonEntropy.doDiff(List<SDVariable> f1) |
List<SDVariable> |
SquaredNorm.doDiff(List<SDVariable> grad) |
static List<SDVariable> |
Entropy.grad(SameDiff sd,
SDVariable arg,
SDVariable grad,
int[] dimensions) |
| Modifier and Type | Method and Description |
|---|---|
static List<SDVariable> |
Entropy.grad(SameDiff sd,
SDVariable arg,
SDVariable grad,
int[] dimensions) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
AMean.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Entropy.doDiff(List<SDVariable> f1) |
List<SDVariable> |
LogEntropy.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Mean.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
Norm1.doDiff(List<SDVariable> grad) |
List<SDVariable> |
Norm2.doDiff(List<SDVariable> grad) |
List<SDVariable> |
NormMax.doDiff(List<SDVariable> grad) |
List<SDVariable> |
ShannonEntropy.doDiff(List<SDVariable> f1) |
List<SDVariable> |
SquaredNorm.doDiff(List<SDVariable> grad) |
| Constructor and Description |
|---|
AMean(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
AMean(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
Entropy(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
LogEntropy(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
Mean(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
Norm1(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
Norm2(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
NormMax(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
NormMax(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
ShannonEntropy(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
ShannonEntropy(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
SquaredNorm(SameDiff sameDiff,
SDVariable input,
boolean keepDims,
int... dimensions) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
CountNonZero.doDiff(List<SDVariable> f1) |
List<SDVariable> |
CountZero.doDiff(List<SDVariable> f1) |
List<SDVariable> |
MatchCondition.doDiff(List<SDVariable> f1) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
CountNonZero.doDiff(List<SDVariable> f1) |
List<SDVariable> |
CountZero.doDiff(List<SDVariable> f1) |
List<SDVariable> |
MatchCondition.doDiff(List<SDVariable> f1) |
| Constructor and Description |
|---|
CountNonZero(SameDiff sameDiff,
SDVariable input,
int... dimensions) |
CountZero(SameDiff sameDiff,
SDVariable input,
int... dimensions) |
MatchCondition(SameDiff sameDiff,
SDVariable in,
Condition condition) |
MatchCondition(SameDiff sameDiff,
SDVariable in,
Condition condition,
boolean keepDims,
int... dimensions) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
AMax.doDiff(List<SDVariable> f1) |
List<SDVariable> |
AMin.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ASum.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Max.doDiff(List<SDVariable> grad) |
List<SDVariable> |
Min.doDiff(List<SDVariable> grad) |
List<SDVariable> |
Prod.doDiff(List<SDVariable> grad) |
List<SDVariable> |
Sum.doDiff(List<SDVariable> i_v1) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
AMax.doDiff(List<SDVariable> f1) |
List<SDVariable> |
AMin.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ASum.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Max.doDiff(List<SDVariable> grad) |
List<SDVariable> |
Min.doDiff(List<SDVariable> grad) |
List<SDVariable> |
Prod.doDiff(List<SDVariable> grad) |
List<SDVariable> |
Sum.doDiff(List<SDVariable> i_v1) |
| Constructor and Description |
|---|
AMax(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
AMax(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
AMin(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
AMin(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
ASum(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
ASum(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
Max(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
Max(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
Min(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
Prod(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
Prod(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
Sum(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
Sum(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
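The reduction constructors above are rarely invoked directly; they back the convenience methods on `SDVariable` and `SameDiff`. The following is a minimal sketch of that usage, assuming a recent ND4J version — the variable names and shapes are illustrative assumptions, not taken from the listing:

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.factory.Nd4j;

public class ReductionSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable x = sd.var("x", Nd4j.rand(3, 4));

        // These convenience calls construct the Sum, Mean and Max ops
        // listed above; keepDims mirrors the boolean constructor flag.
        SDVariable total = x.sum(true, 1);  // keepDims=true, reduce dim 1
        SDVariable mean  = x.mean(0);       // reduce over dimension 0
        SDVariable max   = x.max(1);        // reduce over dimension 1
    }
}
```

During backpropagation, SameDiff pairs each forward reduction with its `*Bp` counterpart (e.g. `SumBp`, `MeanBp`), passing the original input and the gradient at the output, as the `BaseReductionBp` constructor signatures above suggest.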
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
CosineDistance.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
CosineSimilarity.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
Dot.doDiff(List<SDVariable> f1) |
List<SDVariable> |
EqualsWithEps.doDiff(List<SDVariable> f1) |
List<SDVariable> |
EuclideanDistance.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
HammingDistance.doDiff(List<SDVariable> f1) |
List<SDVariable> |
JaccardDistance.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ManhattanDistance.doDiff(List<SDVariable> i_v1) |
static List<SDVariable> |
CosineSimilarity.doDiff(SameDiff sameDiff,
SDVariable x,
SDVariable y,
SDVariable gradOut,
boolean keepDims,
int... dimensions) |
| Modifier and Type | Method and Description |
|---|---|
static List<SDVariable> |
CosineSimilarity.doDiff(SameDiff sameDiff,
SDVariable x,
SDVariable y,
SDVariable gradOut,
boolean keepDims,
int... dimensions) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
CosineDistance.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
CosineSimilarity.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
Dot.doDiff(List<SDVariable> f1) |
List<SDVariable> |
EqualsWithEps.doDiff(List<SDVariable> f1) |
List<SDVariable> |
EuclideanDistance.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
HammingDistance.doDiff(List<SDVariable> f1) |
List<SDVariable> |
JaccardDistance.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ManhattanDistance.doDiff(List<SDVariable> i_v1) |
| Constructor and Description |
|---|
BaseReduce3Op(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
BaseReduce3Op(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int... dimensions) |
CosineDistance(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int... dimensions) |
CosineSimilarity(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
CosineSimilarity(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
Dot(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int... dimensions) |
EqualsWithEps(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions,
double eps) |
EqualsWithEps(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions,
double eps) |
EuclideanDistance(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
EuclideanDistance(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
HammingDistance(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int... dimensions) |
JaccardDistance(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int... dimensions) |
ManhattanDistance(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
ManhattanDistance(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int... dimensions) |
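The Reduce3 ops above (cosine similarity, Euclidean and Manhattan distance, and so on) are likewise usually reached through the `sd.math()` namespace rather than through these constructors. A hedged sketch, with assumed names and shapes:

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.factory.Nd4j;

public class DistanceSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable a = sd.var("a", Nd4j.rand(2, 5));
        SDVariable b = sd.var("b", Nd4j.rand(2, 5));

        // Reducing along dimension 1 yields one value per row; these
        // delegate to the CosineSimilarity and EuclideanDistance ops above.
        SDVariable sim  = sd.math().cosineSimilarity(a, b, 1);
        SDVariable dist = sd.math().euclideanDistance(a, b, 1);
    }
}
```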
| Constructor and Description |
|---|
LeakyReLU(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
double alpha) |
LeakyReLU(SameDiff sameDiff,
SDVariable i_v,
double alpha) |
LeakyReLU(SameDiff sameDiff,
SDVariable i_v,
Object[] extraArgs,
double alpha) |
LogX(SameDiff sameDiff,
SDVariable i_v,
double base) |
Pow(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
double pow) |
Pow(SameDiff sameDiff,
SDVariable i_v,
double pow) |
Pow(SameDiff sameDiff,
SDVariable i_v,
Object[] extraArgs,
double pow) |
PowDerivative(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
double pow) |
PRelu(@NonNull SameDiff sameDiff,
@NonNull SDVariable x,
@NonNull SDVariable alpha,
int... sharedAxes) |
RectifiedLinear(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
double cutoff) |
RectifiedLinear(SameDiff sameDiff,
SDVariable i_v,
double cutoff) |
RectifiedLinearDerivative(SameDiff sd,
SDVariable input,
SDVariable gradient) |
Relu6(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
double cutoff) |
Relu6(SameDiff sameDiff,
SDVariable i_v,
double cutoff) |
ReplaceNans(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
double set) |
ReplaceNans(SameDiff sameDiff,
SDVariable i_v,
Object[] extraArgs,
double set) |
ScalarAdd(@NonNull SameDiff sameDiff,
@NonNull SDVariable i_v,
Number scalar) |
ScalarAdd(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace) |
ScalarAdd(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace,
Object[] extraArgs) |
ScalarAdd(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
Object[] extraArgs) |
ScalarDivision(SameDiff sameDiff,
SDVariable i_v,
Number scalar) |
ScalarDivision(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace) |
ScalarFMod(SameDiff sd,
SDVariable in,
Number number) |
ScalarMax(SameDiff sd,
SDVariable in,
Number number) |
ScalarMin(SameDiff sd,
SDVariable in,
Number number) |
ScalarMultiplication(SameDiff sameDiff,
SDVariable i_v,
Number scalar) |
ScalarMultiplication(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace) |
ScalarRemainder(SameDiff sameDiff,
SDVariable i_v,
Number scalar) |
ScalarRemainder(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace) |
ScalarReverseDivision(SameDiff sameDiff,
SDVariable i_v,
Number scalar) |
ScalarReverseDivision(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace) |
ScalarReverseSubtraction(SameDiff sameDiff,
SDVariable i_v,
Number scalar) |
ScalarReverseSubtraction(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace) |
ScalarSet(SameDiff sameDiff,
SDVariable i_v,
Number scalar) |
ScalarSet(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace) |
ScalarSubtraction(SameDiff sameDiff,
SDVariable i_v,
Number scalar) |
ScalarSubtraction(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace) |
Step(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
double cutoff) |
Step(SameDiff sameDiff,
SDVariable i_v,
double cutoff) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
ScalarAnd.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarEps.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarEquals.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarGreaterThan.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarGreaterThanOrEqual.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarLessThan.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarLessThanOrEqual.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarNot.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarNotEquals.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarOr.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarSetValue.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarXor.doDiff(List<SDVariable> f1) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
ScalarAnd.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarEps.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarEquals.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarGreaterThan.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarGreaterThanOrEqual.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarLessThan.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarLessThanOrEqual.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarNot.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarNotEquals.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarOr.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarSetValue.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarXor.doDiff(List<SDVariable> f1) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
ScatterAdd.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterDiv.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterMax.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterMin.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterMul.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterNd.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterNdAdd.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterNdSub.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterNdUpdate.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterSub.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterUpdate.doDiff(List<SDVariable> gradOut) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
ScatterAdd.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterDiv.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterMax.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterMin.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterMul.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterNd.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterNdAdd.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterNdSub.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterNdUpdate.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterSub.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterUpdate.doDiff(List<SDVariable> gradOut) |
| Constructor and Description |
|---|
BroadcastDynamicShape(SameDiff sameDiff,
SDVariable in,
SDVariable shape) |
Concat(SameDiff sameDiff,
int concatDimension,
SDVariable... inputs) |
Concat(SameDiff sameDiff,
SDVariable[] inputs,
int concatDimension) |
ConfusionMatrix(SameDiff sameDiff,
SDVariable labels,
SDVariable pred,
DataType dataType) |
ConfusionMatrix(SameDiff sameDiff,
SDVariable labels,
SDVariable pred,
Integer numClasses) |
ConfusionMatrix(SameDiff sameDiff,
SDVariable labels,
SDVariable pred,
Integer numClasses,
SDVariable weights) |
ConfusionMatrix(SameDiff sameDiff,
SDVariable labels,
SDVariable pred,
SDVariable weights) |
ConfusionMatrix(SameDiff sameDiff,
SDVariable labels,
SDVariable pred,
SDVariable weights,
DataType dataType) |
ConfusionMatrix(SameDiff sameDiff,
SDVariable labels,
SDVariable pred,
SDVariable weights,
Integer numClasses) |
Create(String name,
SameDiff sameDiff,
SDVariable input,
boolean initialize) |
Create(String name,
SameDiff sameDiff,
SDVariable input,
char order,
boolean initialize,
DataType dataType) |
Cross(SameDiff sameDiff,
SDVariable[] args) |
Cross(SameDiff sameDiff,
SDVariable a,
SDVariable b) |
Diag(SameDiff sameDiff,
SDVariable input) |
Diag(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
DiagPart(SameDiff sameDiff,
SDVariable in) |
DiagPart(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
ExpandDims(SameDiff sameDiff,
SDVariable[] args) |
ExpandDims(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
ExpandDims(SameDiff sameDiff,
SDVariable[] args,
int axis) |
ExpandDims(SameDiff sameDiff,
SDVariable args,
int axis) |
Eye(SameDiff sameDiff,
SDVariable numRows) |
Eye(SameDiff sameDiff,
SDVariable numRows,
SDVariable numCols) |
Eye(SameDiff sameDiff,
SDVariable numRows,
SDVariable numCols,
DataType dataType,
int[] batchDimension) |
Eye(SameDiff sameDiff,
SDVariable numRows,
SDVariable numCols,
SDVariable batch_shape) |
Flatten2D(SameDiff sameDiff,
SDVariable i_v,
long axis) |
Gather(SameDiff sameDiff,
SDVariable df,
int[] indices,
int axis) |
Gather(SameDiff sameDiff,
SDVariable input,
int[] indices,
int axis,
boolean inPlace) |
Gather(SameDiff sameDiff,
SDVariable df,
SDVariable indices,
int axis) |
Gather(SameDiff sameDiff,
SDVariable input,
SDVariable indices,
int axis,
boolean inPlace) |
GatherNd(SameDiff sameDiff,
SDVariable input,
SDVariable indices) |
Linspace(SameDiff sameDiff,
SDVariable from,
SDVariable to,
SDVariable length,
DataType dataType) |
MergeAvg(SameDiff sameDiff,
SDVariable... inputs) |
MergeMax(SameDiff sameDiff,
SDVariable... inputs) |
MergeMaxIndex(@NonNull SameDiff sameDiff,
SDVariable... inputs) |
MergeMaxIndex(@NonNull SameDiff sd,
@NonNull SDVariable[] x,
@NonNull DataType dataType) |
MergeSum(SameDiff sameDiff,
SDVariable... inputs) |
MeshGrid(SameDiff sd,
boolean cartesian,
SDVariable... inputs) |
MeshGrid(SameDiff sd,
SDVariable[] inputs,
boolean cartesian) |
OneHot(SameDiff sameDiff,
SDVariable indices,
int depth) |
OneHot(SameDiff sameDiff,
SDVariable indices,
int depth,
int axis,
double on,
double off,
DataType dataType) |
OnesLike(SameDiff sameDiff,
SDVariable input) |
OnesLike(SameDiff sameDiff,
SDVariable input,
DataType dataType) |
OnesLike(String name,
SameDiff sameDiff,
SDVariable input) |
OnesLike(String name,
SameDiff sameDiff,
SDVariable input,
DataType dataType) |
ParallelStack(SameDiff sameDiff,
SDVariable[] values) |
Permute(SameDiff sameDiff,
SDVariable i_v,
int... permuteDims) |
Permute(SameDiff sd,
SDVariable input,
SDVariable permuteDims) |
Rank(SameDiff sameDiff,
SDVariable input) |
Rank(SameDiff sameDiff,
SDVariable input,
boolean inPlace) |
ReductionShape(@NonNull SameDiff sameDiff,
@NonNull SDVariable shape,
@NonNull SDVariable axis,
boolean keepDims) |
Repeat(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace,
int axis) |
Repeat(SameDiff sameDiff,
SDVariable[] args,
int axis) |
Reshape(SameDiff sameDiff,
SDVariable i_v,
long[] shape) |
Reshape(SameDiff sameDiff,
SDVariable i_v,
SDVariable shape) |
SequenceMask(SameDiff sameDiff,
SDVariable input,
DataType dataType) |
SequenceMask(SameDiff sameDiff,
SDVariable input,
int maxLen,
DataType dataType) |
SequenceMask(SameDiff sameDiff,
SDVariable input,
SDVariable maxLen,
DataType dataType) |
Shape(SameDiff sameDiff,
SDVariable input) |
Shape(SameDiff sameDiff,
SDVariable input,
boolean inPlace) |
ShapeN(SameDiff sameDiff,
SDVariable[] inputs,
boolean inPlace) |
Size(SameDiff sameDiff,
SDVariable input) |
SizeAt(SameDiff sameDiff,
SDVariable input,
int dimension) |
Slice(SameDiff sameDiff,
@NonNull SDVariable input,
@NonNull int[] begin,
@NonNull int[] size) |
Slice(SameDiff sameDiff,
@NonNull SDVariable input,
@NonNull SDVariable begin,
@NonNull SDVariable end) |
Split(SameDiff sameDiff,
SDVariable input,
int numSplit,
int splitDim) |
Squeeze(SameDiff sameDiff,
SDVariable arg,
int squeezeDims) |
Squeeze(SameDiff sameDiff,
SDVariable arg,
int[] squeezeDims) |
Stack(SameDiff sameDiff,
SDVariable[] values,
int axis) |
Stack(SameDiff sameDiff,
SDVariable values,
int axis) |
StridedSlice(SameDiff sameDiff,
SDVariable in,
int[] begin,
int[] end,
int[] strides) |
StridedSlice(SameDiff sameDiff,
SDVariable in,
@NonNull int[] begin,
@NonNull int[] end,
@NonNull int[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
StridedSlice(SameDiff sameDiff,
SDVariable in,
long[] begin,
long[] end,
long[] strides) |
StridedSlice(SameDiff sameDiff,
SDVariable in,
@NonNull long[] begin,
@NonNull long[] end,
@NonNull long[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
Tile(SameDiff sameDiff,
SDVariable i_v,
int[] axis) |
Tile(SameDiff sameDiff,
SDVariable i_v,
SDVariable axis) |
Transpose(SameDiff sameDiff,
SDVariable i_v) |
Transpose(SameDiff sameDiff,
SDVariable in,
int[] permuteDims) |
Transpose(SameDiff sameDiff,
SDVariable in,
SDVariable permuteDims) |
Unstack(SameDiff sameDiff,
SDVariable value,
int axis) |
Unstack(SameDiff sameDiff,
SDVariable value,
int axis,
int num) |
ZerosLike(SameDiff sameDiff,
SDVariable input) |
ZerosLike(String name,
SameDiff sameDiff,
SDVariable input) |
ZerosLike(String name,
SameDiff sameDiff,
SDVariable input,
boolean inPlace) |
ZerosLike(String name,
SameDiff sameDiff,
SDVariable input,
boolean inPlace,
DataType dataType) |
ZerosLike(String name,
SameDiff sameDiff,
SDVariable input,
DataType dataType) |
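The shape-manipulation constructors above (`Reshape`, `Permute`, `Concat`, `Transpose`, …) also have convenience entry points on `SDVariable` and `SameDiff`. A brief sketch under the assumption of a recent ND4J version, with illustrative variable names:

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.factory.Nd4j;

public class ShapeSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable x = sd.var("x", Nd4j.rand(3, 4));
        SDVariable y = sd.var("y", Nd4j.rand(3, 4));

        // Each call constructs one of the ops listed above.
        SDVariable r = x.reshape(2, 6);     // Reshape(sameDiff, i_v, shape)
        SDVariable p = x.permute(1, 0);     // Permute(sameDiff, i_v, permuteDims)
        SDVariable c = sd.concat(0, x, y);  // Concat(sameDiff, concatDimension, inputs)
    }
}
```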
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
SliceBp.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
StridedSliceBp.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
TileBp.doDiff(List<SDVariable> i_v) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
SliceBp.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
StridedSliceBp.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
TileBp.doDiff(List<SDVariable> i_v) |
| Constructor and Description |
|---|
ConcatBp(@NonNull SameDiff sameDiff,
int concatDimension,
SDVariable... inputsAndGrad) |
ConcatBp(@NonNull SameDiff sameDiff,
SDVariable... inputsGradAxis) |
MergeAvgBp(SameDiff sameDiff,
@NonNull SDVariable[] inputs,
@NonNull SDVariable gradO) |
MergeMaxBp(SameDiff sameDiff,
@NonNull SDVariable[] inputs,
@NonNull SDVariable gradO) |
SliceBp(SameDiff sameDiff,
@NonNull SDVariable input,
@NonNull SDVariable gradient,
@NonNull int[] begin,
@NonNull int[] size) |
SliceBp(SameDiff sameDiff,
@NonNull SDVariable input,
@NonNull SDVariable gradient,
@NonNull SDVariable begin,
@NonNull SDVariable size) |
StridedSliceBp(SameDiff sameDiff,
@NonNull SDVariable in,
@NonNull SDVariable grad,
@NonNull long[] begin,
@NonNull long[] end,
@NonNull long[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
StridedSliceBp(SameDiff sameDiff,
@NonNull SDVariable in,
@NonNull SDVariable grad,
@NonNull SDVariable begin,
@NonNull SDVariable end,
@NonNull SDVariable strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
TileBp(SameDiff sameDiff,
SDVariable in,
SDVariable grad,
int[] repeat) |
TileBp(SameDiff sameDiff,
SDVariable in,
SDVariable repeat,
SDVariable grad) |
| Modifier and Type | Method and Description |
|---|---|
SDVariable |
TensorArray.concat(SDVariable flow) |
SDVariable |
TensorArray.gather(SDVariable flow,
int... indices) |
SDVariable |
TensorArray.gather(SDVariable flow,
SDVariable indices) |
SDVariable |
TensorArray.read(int index) |
SDVariable |
TensorArray.read(SDVariable index) |
SDVariable |
TensorArray.scatter(SDVariable flow,
SDVariable value,
int... indices) |
SDVariable |
TensorArray.scatter(SDVariable flow,
SDVariable value,
SDVariable indices) |
SDVariable |
TensorArray.stack(SDVariable flow) |
SDVariable |
TensorArray.unstack(SDVariable flow,
SDVariable value) |
SDVariable |
TensorArray.write(SDVariable flow,
int index,
SDVariable value) |
SDVariable |
TensorArray.write(SDVariable flow,
SDVariable index,
SDVariable value) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
BaseTensorOp.doDiff(List<SDVariable> f1) |
| Modifier and Type | Method and Description |
|---|---|
SDVariable |
TensorArray.concat(SDVariable flow) |
SDVariable |
TensorArray.gather(SDVariable flow,
int... indices) |
SDVariable |
TensorArray.gather(SDVariable flow,
SDVariable indices) |
SDVariable |
TensorArray.read(SDVariable index) |
SDVariable |
TensorArray.scatter(SDVariable flow,
SDVariable value,
int... indices) |
SDVariable |
TensorArray.scatter(SDVariable flow,
SDVariable value,
SDVariable indices) |
SDVariable |
TensorArray.stack(SDVariable flow) |
SDVariable |
TensorArray.unstack(SDVariable flow,
SDVariable value) |
SDVariable |
TensorArray.write(SDVariable flow,
int index,
SDVariable value) |
SDVariable |
TensorArray.write(SDVariable flow,
SDVariable index,
SDVariable value) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
BaseTensorOp.doDiff(List<SDVariable> f1) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
StandardDeviation.doDiff(List<SDVariable> grad) |
List<SDVariable> |
Variance.doDiff(List<SDVariable> grad) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
StandardDeviation.doDiff(List<SDVariable> grad) |
List<SDVariable> |
Variance.doDiff(List<SDVariable> grad) |
| Constructor and Description |
|---|
StandardDeviation(SameDiff sameDiff,
SDVariable i_v,
boolean biasCorrected,
boolean keepDims,
int[] dimensions) |
Variance(SameDiff sameDiff,
SDVariable i_v,
boolean biasCorrected,
boolean keepDims,
int[] dimensions) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
Angle.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
Assert.doDiff(List<SDVariable> f1) |
List<SDVariable> |
BinCount.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
CheckNumerics.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Cholesky.doDiff(List<SDVariable> f1) |
List<SDVariable> |
HistogramFixedWidth.doDiff(List<SDVariable> f1) |
List<SDVariable> |
IdentityN.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
MaxOut.doDiff(List<SDVariable> f1) |
List<SDVariable> |
NthElement.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Pad.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
ReluLayer.doDiff(List<SDVariable> gradient) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
Angle.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
Assert.doDiff(List<SDVariable> f1) |
List<SDVariable> |
BinCount.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
CheckNumerics.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Cholesky.doDiff(List<SDVariable> f1) |
List<SDVariable> |
HistogramFixedWidth.doDiff(List<SDVariable> f1) |
List<SDVariable> |
IdentityN.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
MaxOut.doDiff(List<SDVariable> f1) |
List<SDVariable> |
NthElement.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Pad.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
ReluLayer.doDiff(List<SDVariable> gradient) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
Assign.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
IsMax.doDiff(List<SDVariable> f1) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
Assign.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
IsMax.doDiff(List<SDVariable> f1) |
| Constructor and Description |
|---|
Assign(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
IsMax(SameDiff sameDiff,
SDVariable i_v) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
BooleanNot.doDiff(List<SDVariable> f1) |
List<SDVariable> |
IsFinite.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
IsInf.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
IsNaN.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
MatchConditionTransform.doDiff(List<SDVariable> f1) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
BooleanNot.doDiff(List<SDVariable> f1) |
List<SDVariable> |
IsFinite.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
IsInf.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
IsNaN.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
MatchConditionTransform.doDiff(List<SDVariable> f1) |
| Constructor and Description |
|---|
BooleanNot(SameDiff sameDiff,
SDVariable i_v) |
IsFinite(SameDiff sameDiff,
SDVariable i_v) |
IsFinite(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
IsInf(SameDiff sameDiff,
SDVariable i_v) |
IsInf(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
IsNaN(SameDiff sameDiff,
SDVariable i_v) |
IsNaN(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
MatchConditionTransform(SameDiff sameDiff,
SDVariable in,
Condition condition) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
ClipByAvgNorm.doDiff(List<SDVariable> grad) |
List<SDVariable> |
ClipByNorm.doDiff(List<SDVariable> grad) |
List<SDVariable> |
ClipByValue.doDiff(List<SDVariable> grad) |
| Constructor and Description |
|---|
ClipByAvgNorm(SameDiff sameDiff,
SDVariable x,
double clipValue,
int... dimensions) |
ClipByNorm(SameDiff sameDiff,
SDVariable x,
double clipValue,
int... dimensions) |
ClipByNormBp(SameDiff sameDiff,
SDVariable x,
SDVariable eps,
double clipValue,
int... dimensions) |
ClipByValue(SameDiff sameDiff,
SDVariable x,
double clipValueMin,
double clipValueMax) |
ClipByValue(SameDiff sameDiff,
SDVariable x,
double clipValueMin,
double clipValueMax,
boolean inPlace) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
CompareAndReplace.doDiff(List<SDVariable> grad) |
List<SDVariable> |
CompareAndSet.doDiff(List<SDVariable> gradient) |
List<SDVariable> |
Eps.doDiff(List<SDVariable> f1) |
| Constructor and Description |
|---|
CompareAndReplace(SameDiff sameDiff,
SDVariable to,
SDVariable from,
Condition condition) |
CompareAndSet(SameDiff sameDiff,
SDVariable to,
Number set,
Condition condition) |
Eps(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
Eps(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
Eps(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
Object[] extraArgs) |
| Constructor and Description |
|---|
Assign(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
ATan2(SameDiff sameDiff,
SDVariable y,
SDVariable x) |
BatchToSpace(SameDiff sameDiff,
SDVariable[] args,
int[] blocks,
int[][] crops,
boolean inPlace) |
BatchToSpace(SameDiff sameDiff,
SDVariable x,
int[] blocks,
int[][] crops,
boolean inPlace) |
BatchToSpace(SameDiff sameDiff,
SDVariable x,
int[] blocks,
int[] croppingTop,
int... croppingBottom) |
BatchToSpaceND(SameDiff sameDiff,
SDVariable[] args,
int[] blocks,
int[][] crops,
boolean inPlace) |
BitsHammingDistance(@NonNull SameDiff sd,
@NonNull SDVariable x,
@NonNull SDVariable y) |
BitwiseAnd(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
BitwiseOr(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
BitwiseXor(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
Choose(SameDiff sameDiff,
SDVariable[] args,
Condition condition) |
Choose(String opName,
SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
CReLU(SameDiff sd,
SDVariable input) |
CReluBp(SameDiff sd,
SDVariable input,
SDVariable epsilonNext) |
CumProd(SameDiff sameDiff,
SDVariable x,
boolean exclusive,
boolean reverse,
int... axis) |
CumProd(SameDiff sameDiff,
SDVariable x,
int... axis) |
CumSum(SameDiff sameDiff,
SDVariable x,
boolean exclusive,
boolean reverse,
int... axis) |
CumSum(SameDiff sameDiff,
SDVariable x,
int... axis) |
CyclicRShiftBits(SameDiff sameDiff,
SDVariable x,
SDVariable shift) |
CyclicShiftBits(SameDiff sameDiff,
SDVariable x,
SDVariable shift) |
Dilation2D(SameDiff sameDiff,
SDVariable[] inputAndWeights,
int[] strides,
int[] rates,
boolean isSameMode,
boolean inPlace) |
Dilation2D(SameDiff sameDiff,
SDVariable df,
SDVariable weights,
int[] strides,
int[] rates,
boolean isSameMode) |
DotProductAttention(SameDiff sameDiff,
SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable mask,
boolean scaled,
boolean withWeights) |
DotProductAttentionBp(SameDiff sameDiff,
SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable eps,
SDVariable mask,
boolean scaled) |
DynamicPartition(SameDiff sameDiff,
SDVariable input,
SDVariable[] partitions,
int numPartitions) |
DynamicPartition(SameDiff sameDiff,
SDVariable input,
SDVariable partitions,
int numPartitions) |
DynamicStitch(SameDiff sameDiff,
SDVariable[] indices,
SDVariable[] inputs) |
EqualTo(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
EqualTo(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
FakeQuantWithMinMaxArgs(SameDiff sd,
SDVariable input,
float min,
float max,
boolean narrowRange,
int numBits) |
FakeQuantWithMinMaxVars(SameDiff sd,
SDVariable input,
SDVariable min,
SDVariable max,
boolean narrowRange,
int numBits) |
Fill(SameDiff sameDiff,
SDVariable shape,
DataType dtype,
double value) |
GreaterThan(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
GreaterThan(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
GreaterThanOrEqual(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
GreaterThanOrEqual(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
InTopK(SameDiff sd,
SDVariable predictions,
SDVariable targets,
int k) |
InvertPermutation(SameDiff sameDiff,
SDVariable input) |
InvertPermutation(SameDiff sameDiff,
SDVariable input,
boolean inPlace) |
IsNonDecreasing(SameDiff sameDiff,
SDVariable input) |
IsNonDecreasing(SameDiff sameDiff,
SDVariable[] args) |
IsNonDecreasing(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
IsNumericTensor(SameDiff sameDiff,
SDVariable args) |
IsNumericTensor(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
IsStrictlyIncreasing(SameDiff sameDiff,
SDVariable input) |
IsStrictlyIncreasing(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
LayerNorm(SameDiff sameDiff,
SDVariable input,
SDVariable gain,
boolean channelsFirst,
int... dimensions) |
LayerNorm(@NonNull SameDiff sameDiff,
@NonNull SDVariable input,
@NonNull SDVariable gain,
SDVariable bias,
boolean channelsFirst,
int... dimensions) |
LayerNormBp(SameDiff sameDiff,
SDVariable input,
SDVariable gain,
SDVariable gradient,
boolean channelsFirst,
int... dimensions) |
LayerNormBp(@NonNull SameDiff sameDiff,
@NonNull SDVariable input,
@NonNull SDVariable gain,
SDVariable bias,
@NonNull SDVariable gradient,
boolean channelsFirst,
int... dimensions) |
LessThan(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
LessThan(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
LessThanOrEqual(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
LessThanOrEqual(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
ListDiff(@NonNull SameDiff sd,
@NonNull SDVariable x,
@NonNull SDVariable y) |
LogicalAnd(SameDiff sd,
SDVariable in1,
SDVariable in2) |
LogicalNot(SameDiff sd,
SDVariable in1,
SDVariable in2) |
LogicalOr(SameDiff sd,
SDVariable in1,
SDVariable in2) |
LogicalXor(SameDiff sd,
SDVariable in1,
SDVariable in2) |
LogMatrixDeterminant(SameDiff sameDiff,
SDVariable in,
boolean inPlace) |
LogSoftMax(SameDiff sameDiff,
SDVariable i_v) |
LogSoftMax(SameDiff sameDiff,
SDVariable i_v,
int dimension) |
MatrixDeterminant(SameDiff sameDiff,
SDVariable in) |
MatrixDeterminant(SameDiff sameDiff,
SDVariable in,
boolean inPlace) |
MatrixDiag(SameDiff sameDiff,
SDVariable in,
boolean inPlace) |
MatrixDiagPart(SameDiff sameDiff,
SDVariable in,
boolean inPlace) |
MatrixInverse(SameDiff sameDiff,
SDVariable in) |
MatrixInverse(SameDiff sameDiff,
SDVariable in,
boolean inPlace) |
MatrixSetDiag(SameDiff sameDiff,
SDVariable in,
SDVariable diag) |
MatrixSetDiag(SameDiff sameDiff,
SDVariable in,
SDVariable diag,
boolean inPlace) |
Max(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
Max(SameDiff sameDiff,
@NonNull SDVariable first,
@NonNull SDVariable second) |
MaximumBp(@NonNull SameDiff sameDiff,
@NonNull SDVariable x,
@NonNull SDVariable y,
@NonNull SDVariable gradO) |
Min(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
Min(SameDiff sameDiff,
@NonNull SDVariable first,
@NonNull SDVariable second) |
MultiHeadDotProductAttention(SameDiff sameDiff,
SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable Wq,
SDVariable Wk,
SDVariable Wv,
SDVariable Wo,
SDVariable mask,
boolean scaled,
boolean withWeights) |
MultiHeadDotProductAttentionBp(SameDiff sameDiff,
SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable Wq,
SDVariable Wk,
SDVariable Wv,
SDVariable Wo,
SDVariable eps,
SDVariable mask,
boolean scaled) |
NotEqualTo(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
NotEqualTo(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
Pow(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
Qr(SameDiff sameDiff,
SDVariable input,
boolean fullMatrices) |
Reverse(@NonNull SameDiff sameDiff,
@NonNull SDVariable i_v,
int... dimensions) |
ReverseBp(@NonNull SameDiff sameDiff,
@NonNull SDVariable i_v,
@NonNull SDVariable grad,
int... dimensions) |
ReverseSequence(SameDiff sameDiff,
SDVariable i_v,
SDVariable seqLengths) |
ReverseSequence(SameDiff sameDiff,
SDVariable i_v,
SDVariable seqLengths,
int seqDim,
int batchDim) |
RShiftBits(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
ShiftBits(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
SoftMax(SameDiff sameDiff,
SDVariable[] args) |
SoftMax(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
SoftMax(SameDiff sameDiff,
SDVariable[] args,
int dimension) |
SoftMax(SameDiff sameDiff,
SDVariable[] args,
int dimension,
boolean inPlace) |
SoftMax(SameDiff sameDiff,
SDVariable x,
int dimension) |
SpaceToBatch(SameDiff sameDiff,
SDVariable[] args,
int[] blocks,
int[][] padding,
boolean inPlace) |
SpaceToBatch(SameDiff sameDiff,
SDVariable x,
int[] blocks,
int[] paddingTop,
int... paddingBottom) |
SpaceToBatchND(SameDiff sameDiff,
SDVariable[] args,
int[] blocks,
int[][] padding,
boolean inPlace) |
Standardize(SameDiff sameDiff,
SDVariable i_v,
int... dimensions) |
StandardizeBp(SameDiff sameDiff,
SDVariable i_v,
SDVariable grad,
int... dimensions) |
Svd(SameDiff sd,
SDVariable input,
boolean fullUV,
boolean computeUv) |
Svd(SameDiff sd,
SDVariable input,
boolean fullUV,
boolean computeUv,
int switchNum) |
ThresholdRelu(SameDiff sd,
SDVariable input,
boolean inPlace,
double cutoff) |
ThresholdRelu(SameDiff sd,
SDVariable input,
double cutoff) |
TopK(SameDiff sd,
SDVariable in,
int k,
boolean sorted) |
Trace(SameDiff sd,
SDVariable in) |
Unique(SameDiff sd,
SDVariable in) |
UniqueWithCounts(SameDiff sd,
SDVariable in) |
XwPlusB(SameDiff sameDiff,
SDVariable input,
SDVariable weights,
SDVariable bias) |
Zeta(SameDiff sameDiff,
SDVariable x,
SDVariable q) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
SegmentMax.doDiff(List<SDVariable> gradients) |
List<SDVariable> |
SegmentMean.doDiff(List<SDVariable> gradients) |
List<SDVariable> |
SegmentMin.doDiff(List<SDVariable> gradients) |
List<SDVariable> |
SegmentProd.doDiff(List<SDVariable> gradients) |
List<SDVariable> |
SegmentSum.doDiff(List<SDVariable> gradients) |
| Constructor and Description |
|---|
SegmentMax(SameDiff sameDiff,
SDVariable data,
SDVariable segmentIds) |
SegmentMean(SameDiff sameDiff,
SDVariable data,
SDVariable segmentIds) |
SegmentMin(SameDiff sameDiff,
SDVariable data,
SDVariable segmentIds) |
SegmentProd(SameDiff sameDiff,
SDVariable data,
SDVariable segmentIds) |
SegmentSum(SameDiff sameDiff,
SDVariable data,
SDVariable segmentIds) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
Cast.doDiff(List<SDVariable> i_v) |
| Constructor and Description |
|---|
Cast(SameDiff sameDiff,
SDVariable arg,
@NonNull DataType dst) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
RSqrt.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
Sqrt.doDiff(List<SDVariable> i_v) |
| Constructor and Description |
|---|
RSqrt(SameDiff sameDiff,
SDVariable i_v) |
RSqrt(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Sqrt(SameDiff sameDiff,
SDVariable i_v) |
Sqrt(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
BinaryMinimalRelativeError.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
BinaryRelativeError.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
RelativeError.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
Set.doDiff(List<SDVariable> i_v) |
| Constructor and Description |
|---|
BinaryMinimalRelativeError(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
BinaryMinimalRelativeError(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
BinaryMinimalRelativeError(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
BinaryRelativeError(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
BinaryRelativeError(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
BinaryRelativeError(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
RelativeError(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
RelativeError(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
RelativeError(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
Set(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
| Constructor and Description |
|---|
AddOp(@NonNull SameDiff sameDiff,
@NonNull SDVariable x,
@NonNull SDVariable y) |
Axpy(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
double p) |
Axpy(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
double p) |
Axpy(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
double p) |
CopyOp(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
CopyOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
CopyOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
DivOp(@NonNull SameDiff sameDiff,
@NonNull SDVariable x,
@NonNull SDVariable y) |
FloorDivOp(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
FloorDivOp(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
FloorModOp(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
FModOp(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
FModOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
FModOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
MergeAddOp(SameDiff sameDiff,
SDVariable[] args) |
MergeAddOp(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
ModOp(@NonNull SameDiff sameDiff,
@NonNull SDVariable x,
@NonNull SDVariable y) |
MulOp(@NonNull SameDiff sameDiff,
@NonNull SDVariable x,
@NonNull SDVariable y) |
PowPairwise(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
PowPairwise(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
RDivOp(@NonNull SameDiff sameDiff,
@NonNull SDVariable x,
@NonNull SDVariable y) |
RealDivOp(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
RemainderOp(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
RemainderOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
RemainderOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
RSubOp(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
RSubOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
RSubOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
SquaredDifferenceOp(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
SquaredDifferenceOp(SameDiff sameDiff,
SDVariable x,
SDVariable y,
boolean inPlace) |
SubOp(@NonNull SameDiff sameDiff,
@NonNull SDVariable x,
@NonNull SDVariable y) |
TruncateDivOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
TruncateDivOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
BaseArithmeticBackpropOp.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
SquaredDifferenceBpOp.doDiff(List<SDVariable> i_v1) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
And.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Not.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Or.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
Xor.doDiff(List<SDVariable> f1) |
| Constructor and Description |
|---|
And(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
And(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
double comparable) |
And(SameDiff sameDiff,
SDVariable ix,
SDVariable iy) |
Not(SameDiff sameDiff,
SDVariable i_v) |
Or(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
double comparable) |
Or(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
Or(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
Xor(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
double comparable) |
Xor(SameDiff sameDiff,
SDVariable ix,
SDVariable iy) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
Abs.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
AMax.doDiff(List<SDVariable> f1) |
List<SDVariable> |
AMin.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Ceil.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Cube.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Floor.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
Identity.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
Max.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Min.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Negative.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
OneMinus.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Reciprocal.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
Round.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Sign.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
Square.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
TimesOneMinus.doDiff(List<SDVariable> f1) |
| Constructor and Description |
|---|
Abs(SameDiff sameDiff,
SDVariable i_v) |
Abs(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
AMax(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2) |
AMin(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2) |
Ceil(SameDiff sameDiff,
SDVariable i_v) |
Ceil(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Cube(SameDiff sameDiff,
SDVariable i_v) |
Cube(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Floor(SameDiff sameDiff,
SDVariable i_v) |
Floor(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Identity(SameDiff sd,
SDVariable input) |
Max(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2) |
Min(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2) |
Negative(SameDiff sameDiff,
SDVariable i_v) |
Negative(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
OneMinus(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Reciprocal(SameDiff sameDiff,
SDVariable in) |
Round(SameDiff sameDiff,
SDVariable i_v) |
Round(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Sign(SameDiff sameDiff,
SDVariable i_v) |
Sign(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Square(SameDiff sameDiff,
SDVariable i_v) |
Square(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
TimesOneMinus(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
UnsortedSegmentMax.doDiff(List<SDVariable> gradients) |
List<SDVariable> |
UnsortedSegmentMean.doDiff(List<SDVariable> gradients) |
List<SDVariable> |
UnsortedSegmentMin.doDiff(List<SDVariable> gradients) |
List<SDVariable> |
UnsortedSegmentProd.doDiff(List<SDVariable> gradients) |
List<SDVariable> |
UnsortedSegmentSqrtN.doDiff(List<SDVariable> gradients) |
List<SDVariable> |
UnsortedSegmentSum.doDiff(List<SDVariable> gradients) |
| Constructor and Description |
|---|
UnsortedSegmentMax(SameDiff sameDiff,
SDVariable data,
SDVariable segmentIds,
int numSegments) |
UnsortedSegmentMean(SameDiff sameDiff,
SDVariable data,
SDVariable segmentIds,
int numSegments) |
UnsortedSegmentMin(SameDiff sameDiff,
SDVariable data,
SDVariable segmentIds,
int numSegments) |
UnsortedSegmentProd(SameDiff sameDiff,
SDVariable data,
SDVariable segmentIds,
int numSegments) |
UnsortedSegmentSqrtN(SameDiff sameDiff,
SDVariable data,
SDVariable segmentIds,
int numSegments) |
UnsortedSegmentSum(SameDiff sameDiff,
SDVariable data,
SDVariable segmentIds,
int numSegments) |
| Constructor and Description |
|---|
ACos(SameDiff sameDiff,
SDVariable i_v) |
ACos(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
ACosh(SameDiff sameDiff,
SDVariable i_v) |
ACosh(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
ASin(SameDiff sameDiff,
SDVariable i_v) |
ASin(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
ASinh(SameDiff sameDiff,
SDVariable i_v) |
ASinh(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
ATan(SameDiff sameDiff,
SDVariable i_v) |
ATan(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
ATanh(SameDiff sameDiff,
SDVariable i_v) |
ATanh(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Cos(SameDiff sameDiff,
SDVariable i_v) |
Cos(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Cosh(SameDiff sameDiff,
SDVariable i_v) |
Cosh(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
ELU(SameDiff sameDiff,
SDVariable i_v) |
Erf(SameDiff sameDiff,
SDVariable i_v) |
Erf(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Erfc(SameDiff sameDiff,
SDVariable i_v) |
Erfc(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Exp(SameDiff sameDiff,
SDVariable i_v) |
Exp(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Expm1(SameDiff sameDiff,
SDVariable i_v) |
Expm1(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
GELU(SameDiff sameDiff,
SDVariable i_v) |
GELU(SameDiff sameDiff,
SDVariable i_v,
boolean precise) |
GELU(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
boolean precise) |
GELUDerivative(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
HardSigmoid(SameDiff sameDiff,
SDVariable in) |
HardSigmoid(SameDiff sameDiff,
SDVariable in,
boolean inPlace) |
HardTanh(SameDiff sameDiff,
SDVariable i_v) |
HardTanh(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Log(SameDiff sameDiff,
SDVariable i_v) |
Log(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Log1p(SameDiff sameDiff,
SDVariable i_v) |
Log1p(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
LogSigmoid(SameDiff sameDiff,
SDVariable i_v) |
LogSigmoid(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Mish(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
MishDerivative(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
MishDerivative(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
MishDerivative(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
PreciseGELU(SameDiff sameDiff,
SDVariable i_v) |
PreciseGELU(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
boolean precise) |
PreciseGELUDerivative(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
boolean precise) |
RationalTanh(SameDiff sameDiff,
SDVariable i_v) |
RationalTanh(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
RectifiedTanh(SameDiff sameDiff,
SDVariable i_v) |
RectifiedTanh(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Rint(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
SELU(SameDiff sameDiff,
SDVariable i_v) |
SELU(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
SetRange(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
double min,
double max) |
Sigmoid(SameDiff sameDiff,
SDVariable i_v) |
Sigmoid(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
SigmoidDerivative(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace)
Deprecated.
|
SigmoidDerivative(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2)
Deprecated.
|
SigmoidDerivative(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace)
Deprecated.
|
Sin(SameDiff sameDiff,
SDVariable i_v) |
Sin(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Sinh(SameDiff sameDiff,
SDVariable i_v) |
Sinh(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
SoftPlus(SameDiff sameDiff,
SDVariable i_v) |
SoftPlus(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
SoftSign(SameDiff sameDiff,
SDVariable i_v) |
SoftSign(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Stabilize(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
double realMin,
double cutOff,
double k) |
Swish(SameDiff sameDiff,
SDVariable i_v) |
Swish(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
SwishDerivative(SameDiff sameDiff,
SDVariable i_v) |
SwishDerivative(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
SwishDerivative(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
Tan(SameDiff sameDiff,
SDVariable i_v) |
Tan(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Tanh(SameDiff sameDiff,
SDVariable i_v) |
Tanh(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
TanhDerivative(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace)
Deprecated.
|
| Constructor and Description |
|---|
BaseRandomOp(SameDiff sameDiff,
SDVariable i_v) |
| Constructor and Description |
|---|
RandomStandardNormal(SameDiff sameDiff,
SDVariable[] args) |
| Modifier and Type | Method and Description |
|---|---|
List<SDVariable> |
DistributionUniform.doDiff(List<SDVariable> gradients) |
List<SDVariable> |
RandomBernoulli.doDiff(List<SDVariable> gradients) |
List<SDVariable> |
RandomExponential.doDiff(List<SDVariable> gradients) |
List<SDVariable> |
RandomNormal.doDiff(List<SDVariable> grad) |
| Constructor and Description |
|---|
DistributionUniform(SameDiff sd,
SDVariable shape,
double min,
double max) |
DistributionUniform(SameDiff sd,
SDVariable shape,
double min,
double max,
DataType dataType) |
RandomBernoulli(SameDiff sd,
SDVariable shape,
double p) |
RandomExponential(SameDiff sd,
SDVariable shape,
double lambda) |
RandomGamma(@NonNull SameDiff sameDiff,
@NonNull SDVariable shape,
@NonNull SDVariable alpha,
SDVariable beta,
int... seeds) |
RandomNormal(SameDiff sameDiff,
SDVariable shape,
double mean,
double stdev) |
RandomPoisson(@NonNull SameDiff sameDiff,
@NonNull SDVariable shape,
@NonNull SDVariable rate,
int... seeds) |
RandomShuffle(@NonNull SameDiff sameDiff,
@NonNull SDVariable value,
int... seeds) |
| Constructor and Description |
|---|
DropOut(SameDiff sameDiff,
SDVariable input,
double p) |
DropOutInverted(SameDiff sameDiff,
SDVariable input,
double p) |
Range(SameDiff sd,
SDVariable from,
SDVariable to,
SDVariable step,
DataType dataType) |
| Modifier and Type | Field and Description |
|---|---|
protected SDVariable |
SameDiffLoss.scorePerExampleVariable |
| Modifier and Type | Method and Description |
|---|---|
abstract SDVariable |
SameDiffLoss.defineLoss(SameDiff sd,
SDVariable layerInput,
SDVariable labels)
Define the loss function.
NOTE: The score is computed on a *per example* basis - this should return an SDVariable with shape [minibatch], where out[i] is the score for the ith example |
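The `defineLoss` contract above (return one score per minibatch example, not a single scalar) can be sketched as follows. This is a minimal illustration, not library code: it assumes the `SameDiffLoss` base class and `SDVariable` arithmetic ops listed on this page, a 2D `[minibatch, nOut]` input, and imports/package locations omitted (they vary by DL4J version); the class name `MseLoss` is illustrative.

```java
// A minimal mean-squared-error loss defined via SameDiffLoss.
// Per the defineLoss contract, the returned SDVariable must have
// shape [minibatch]: one score per example.
public class MseLoss extends SameDiffLoss {

    @Override
    public SDVariable defineLoss(SameDiff sd, SDVariable layerInput, SDVariable labels) {
        SDVariable diff = layerInput.sub(labels); // elementwise error
        SDVariable sq = diff.mul(diff);           // squared error
        // Reduce over dimension 1 (the feature dimension), leaving the
        // minibatch dimension intact: the required [minibatch] score vector.
        return sq.mean(1);
    }
}
```

The gradient of this loss with respect to `layerInput` is then derived automatically via the `doDiff` implementations of the `Sub`, `Mul`, and `Mean` ops indexed on this page.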
| Modifier and Type | Method and Description |
|---|---|
abstract SDVariable |
SameDiffLoss.defineLoss(SameDiff sd,
SDVariable layerInput,
SDVariable labels)
Define the loss function.
NOTE: The score on a *per example* basis - should return a SDVariable with shape [minibatch], where out[i] is the score for the ith minibatch |
Copyright © 2021. All rights reserved.