Implements John Platt's sequential minimal optimization algorithm for training a support vector classifier. This implementation globally replaces all missing values and transforms nominal attributes into binary ones.
It also normalizes all attributes by default. (In that case the coefficients in the output are based on the normalized data, not the original data; this is important for interpreting the classifier.)
Multi-class problems are solved using pairwise classification (1-vs-1; if logistic models are built, pairwise coupling according to Hastie and Tibshirani, 1998, is used).
To obtain proper probability estimates, use the option that fits logistic regression models to the outputs of the support vector machine.
In the multi-class case the predicted probabilities are coupled using Hastie and Tibshirani's pairwise coupling method.
Note: for improved speed, normalization should be turned off when operating on SparseInstances.
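The pairwise coupling step mentioned above can be sketched as follows. This is an illustrative Python sketch, not the node's actual implementation; `pairwise_couple` is a hypothetical name, and it assumes equal pairwise sample counts so the n_ij weights in Hastie and Tibshirani's update cancel:

```python
def pairwise_couple(r, iters=200):
    """Hastie & Tibshirani (1998) pairwise coupling (illustrative sketch).

    r[i][j] is the pairwise estimate of P(class i | class i or j), as
    produced e.g. by the 1-vs-1 binary classifiers. Returns coupled class
    probabilities p that sum to one. Assumes equal pairwise sample counts.
    """
    k = len(r)
    p = [1.0 / k] * k
    for _ in range(iters):
        for i in range(k):
            # Numerator: observed pairwise probabilities involving class i.
            num = sum(r[i][j] for j in range(k) if j != i)
            # Denominator: the model's current pairwise probabilities mu_ij.
            den = sum(p[i] / (p[i] + p[j]) for j in range(k) if j != i)
            p[i] *= num / den
        # Renormalize after each sweep.
        s = sum(p)
        p = [pi / s for pi in p]
    return p
```

At the fixed point the model's pairwise probabilities p_i/(p_i + p_j) match the observed r[i][j], which is exactly the coupling criterion; when the r values are mutually consistent, the iteration recovers the underlying class distribution.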
For more information on the SMO algorithm, see:

J. Platt: Fast Training of Support Vector Machines using Sequential Minimal Optimization. In B. Schoelkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning, 1998.

S.S. Keerthi, S.K. Shevade, C. Bhattacharyya, K.R.K. Murthy (2001). Improvements to Platt's SMO Algorithm for SVM Classifier Design. Neural Computation, 13(3):637-649.

Trevor Hastie, Robert Tibshirani: Classification by Pairwise Coupling. In: Advances in Neural Information Processing Systems, 1998.
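Platt's full algorithm uses heuristics to pick which pair of Lagrange multipliers to optimize, and Keerthi et al.'s improvements refine the threshold handling. A much simplified sketch of the core idea (illustrative only, not this node's implementation; the second multiplier is picked at random instead of heuristically) looks like this:

```python
import random

def linear_kernel(a, b):
    return sum(x * y for x, y in zip(a, b))

def smo_train(X, y, C=1.0, tol=1e-3, max_passes=20, kernel=linear_kernel):
    """Simplified SMO sketch: repeatedly pick a pair of multipliers that
    violates the KKT conditions and optimize it analytically."""
    n = len(X)
    alpha = [0.0] * n
    b = 0.0

    def f(x):
        # Decision function of the current model.
        return sum(alpha[k] * y[k] * kernel(X[k], x) for k in range(n)) + b

    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            E_i = f(X[i]) - y[i]
            # KKT violation check, within tolerance tol.
            if (y[i] * E_i < -tol and alpha[i] < C) or \
               (y[i] * E_i > tol and alpha[i] > 0):
                j = random.choice([k for k in range(n) if k != i])
                E_j = f(X[j]) - y[j]
                a_i, a_j = alpha[i], alpha[j]
                # Box constraints for the pair (alpha_i, alpha_j).
                if y[i] != y[j]:
                    L, H = max(0.0, a_j - a_i), min(C, C + a_j - a_i)
                else:
                    L, H = max(0.0, a_i + a_j - C), min(C, a_i + a_j)
                if L == H:
                    continue
                eta = 2 * kernel(X[i], X[j]) \
                    - kernel(X[i], X[i]) - kernel(X[j], X[j])
                if eta >= 0:
                    continue
                # Analytic update of alpha_j, clipped to [L, H].
                alpha[j] = min(H, max(L, a_j - y[j] * (E_i - E_j) / eta))
                if abs(alpha[j] - a_j) < 1e-5:
                    continue
                alpha[i] = a_i + y[i] * y[j] * (a_j - alpha[j])
                # Recompute the threshold from the two updated multipliers.
                b1 = b - E_i - y[i] * (alpha[i] - a_i) * kernel(X[i], X[i]) \
                    - y[j] * (alpha[j] - a_j) * kernel(X[i], X[j])
                b2 = b - E_j - y[i] * (alpha[i] - a_i) * kernel(X[i], X[j]) \
                    - y[j] * (alpha[j] - a_j) * kernel(X[j], X[j])
                if 0 < alpha[i] < C:
                    b = b1
                elif 0 < alpha[j] < C:
                    b = b2
                else:
                    b = (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    return f
```

The tolerance and round-off parameters of this sketch correspond in spirit to the L and P options listed below; C is the complexity constant.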
(based on WEKA 3.7)
For further options, click the 'More' button in the dialog.
All Weka dialogs have a panel where you can specify classifier-specific parameters.
D: If set, the classifier is run in debug mode and may output additional info to the console.
no-checks: Turns off all checks - use with caution! Turning them off assumes that data is purely numeric, doesn't contain any missing values, and has a nominal class. Turning them off also means that no header information will be stored if the machine is linear. Finally, it also assumes that no instance has a weight equal to 0. (default: checks on)
C: The complexity constant C. (default 1)
N: How to preprocess the attributes: 0=normalize, 1=standardize, 2=neither. (default 0=normalize)
L: The tolerance parameter. (default 1.0e-3)
P: The epsilon for round-off error. (default 1.0e-12)
M: Fit logistic models to SVM outputs.
V: The number of folds for the internal cross-validation. (default -1, use training data)
W: The random number seed. (default 1)
K: The Kernel to use. (default: weka.classifiers.functions.supportVector.PolyKernel)
D: Enables debugging output (if available) to be printed. (default: off)
no-checks: Turns off all checks - use with caution! (default: checks on)
C: The size of the cache (a prime number), 0 for full cache and -1 to turn it off. (default: 250007)
E: The Exponent to use. (default: 1.0)
L: Use lower-order terms. (default: no)
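Taken together, the defaults listed above correspond to a Weka command line option string along the following lines (shown for orientation; check the node's generated options for the exact string of your configuration):

```
weka.classifiers.functions.SMO -C 1.0 -L 0.001 -P 1.0E-12 -N 0 -V -1 -W 1 \
  -K "weka.classifiers.functions.supportVector.PolyKernel -E 1.0 -C 250007"
```

Note that the flag-style options (no-checks, M for logistic models, the kernel's D and L) are simply omitted when off, and that C means the complexity constant for SMO but the cache size for the kernel.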
The Preliminary Attribute Check tests the underlying classifier against the DataTable specification at the inport of the node. Columns that are compatible with the classifier are marked with a green 'ok'. Columns which are potentially not compatible are assigned a red error message.
Important: If a column is marked as 'incompatible', it does not necessarily mean that the classifier cannot be executed! Sometimes, the error message 'Cannot handle String class' simply means that no nominal values are available (yet). This may change during execution of the predecessor nodes.
Capabilities: Nominal attributes, Binary attributes, Unary attributes, Empty nominal attributes, Numeric attributes, Missing values, Nominal class, Binary class, Missing class values.
Dependencies: Nominal attributes, Binary attributes, Unary attributes, Empty nominal attributes, Numeric attributes, Date attributes, String attributes, Relational attributes.
Minimum number of instances: 1.
It shows the command line options according to the current classifier configuration and mainly serves to support the node's configuration via flow variables.
To use this node in KNIME, install the extension KNIME Weka Data Mining Integration (3.7) from the below update site following our NodePit Product and Node Installation Guide:
A zipped version of the software site can be downloaded here.