TD_OneClassSVM

This function performs one-class support vector machine (SVM) training for classification analysis on data sets, typically used to identify outliers or novel observations. A usage sketch of the underlying function call follows the option list below.

Options

Alpha
Specify the elastic net mixing parameter for penalty computation. It takes effect only when RegularizationLambda > 0. The value is the contribution ratio of L1 in the penalty: 1.0 means L1 (LASSO) only, 0 means L2 (ridge) only, and values in between combine L1 and L2 (see the penalty sketch following these options). Must be a float value between 0 and 1. Default: 0.15 (15% L1, 85% L2).
BatchSize
Specify the number of observations (training samples) processed in a single minibatch per AMP. A value of 0, or a value greater than the number of rows on an AMP, processes all rows on that AMP in a single iteration, and the algorithm reduces to full-batch gradient descent. Value must be a non-negative integer.
DecayRate
Specify the decay rate for the learning rate. Applies only to the 'invtime' and 'adaptive' learning rate algorithms.
DecaySteps
Specify the number of iterations without decay for the 'adaptive' learning rate. The learning rate changes by the decay rate after this many iterations.
InitialEta
Specify the initial value of eta for the learning rate. For the 'constant' learning rate algorithm, this value is the learning rate for all iterations.
InputColumns
Specify the names of the input table columns to use for training the model (predictors, features, or independent variables).
Intercept
Specify whether an intercept should be estimated, depending on whether the data is already centered.
IterNumNoChange
Specify the number of consecutive iterations (minibatches) with no improvement in loss (within Tolerance) after which training stops. A value of 0 disables early stopping, and the algorithm continues until MaxIterNum iterations are reached. Value must be a non-negative integer.
LearningRate
Specify the learning rate algorithm for SGD iterations ('constant', 'optimal', 'invtime', or 'adaptive').
LocalSGDIterations
Specify the number of local iterations used by the Local SGD algorithm. Must be a non-negative integer. A value of 0 disables Local SGD; a value greater than 0 enables it, and that many local iterations are performed before the weights of the global model are updated. With the Local SGD algorithm, the recommended argument values are: LocalSGDIterations: 10, MaxIterNum: 100, BatchSize: 50, IterNumNoChange: 5.
MaxIterNum
Specify the maximum number of iterations (minibatches) over the training data. Value must be a positive integer less than 10,000,000.
Momentum
Specify the value to use for the momentum learning rate optimizer. Must be a non-negative float value between 0 and 1. A larger value indicates a higher momentum contribution; a value of 0 disables the momentum optimizer. For a good momentum contribution, a value between 0.6 and 0.95 is recommended (see the update-rule sketch following these options).
Nesterov
Specify whether Nesterov acceleration should be applied to the momentum optimizer. Only applicable when Momentum > 0.
RegularizationLambda
Specify the amount of regularization to add; the higher the value, the stronger the regularization. It is also used to compute the learning rate when LearningRate is set to 'optimal'. Must be a non-negative float value. A value of 0 means no regularization.
Output Schema
Specify the schema for the output table. If Volatile is true, the user login is used as the schema.
Output Table
Specify the name of the output table.
VAL Location
Specify the database where the Vantage Analytics Library (VAL) functions are installed.
Volatile
Specify whether the table should be a VOLATILE table. If true, the table is automatically deleted at the end of the session; otherwise it is the user's responsibility to remove it to free up space.
Tolerance
Specify the stopping criterion in terms of loss function improvement. Applicable when IterNumNoChange is greater than 0. Value must be a positive float.
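
Penalty sketch (Alpha and RegularizationLambda). The conventional elastic net penalty is shown below as an illustration of how the two options interact; the exact expression used internally by TD_OneClassSVM is not spelled out in this description, and constant factors on the L2 term vary between implementations:

    penalty(w) = RegularizationLambda * ( Alpha * ||w||_1 + (1 - Alpha) * ||w||_2^2 )

With Alpha = 1 the penalty is pure L1 (LASSO), with Alpha = 0 it is pure L2 (ridge), and intermediate values mix the two.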
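
Update-rule sketch (Momentum and Nesterov). The conventional momentum update for SGD is sketched below to illustrate the roles of these options; Teradata's internal implementation details are not documented here:

    v_new = Momentum * v_old - eta * gradient(w)
    w_new = w + v_new

With Nesterov enabled, the gradient is instead evaluated at the look-ahead point w + Momentum * v_old rather than at w, which typically corrects the velocity term more responsively.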
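
Usage sketch. The node generates the call to the underlying Analytics Database function from the options above. For orientation only, a hand-written invocation might look roughly like the following; this assumes the ON ... USING syntax shared by other TD_* analytic functions, and the table name svm_train and the column names are hypothetical:

    SELECT * FROM TD_OneClassSVM (
        ON svm_train AS InputTable
        USING
        InputColumns ('sensor_1', 'sensor_2', 'sensor_3')
        MaxIterNum (300)
        BatchSize (10)
        RegularizationLambda (0.02)
        Alpha (0.15)
        LearningRate ('optimal')
        Tolerance (0.001)
        Intercept ('true')
    ) AS dt;

The result of such a call corresponds to the output that the node exposes on its output port.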

Input Ports

Connection to a Teradata Database Instance
Specifies the table containing the input data.

Output Ports

Output of TD_OneClassSVM.
