Implements stochastic gradient descent for learning a linear binary-class SVM or binary-class logistic regression model on text data.
Operates directly (and only) on String attributes. Other types of input attributes are accepted but ignored during training and classification.
(based on WEKA 3.7)
For further options, click the 'More' button in the dialog.
All weka dialogs have a panel where you can specify classifier-specific parameters.
F: Set the loss function to minimize. 0 = hinge loss (SVM), 1 = log loss (logistic regression) (default = 0)
output-probs: Output probabilities for SVMs (fits a logistic model to the output of the SVM).
L: The learning rate (default = 0.01).
R: The lambda regularization constant (default = 0.0001)
E: The number of epochs to perform (batch learning only, default = 500)
W: Use word frequencies instead of binary bag of words.
P: How often to prune the dictionary of low frequency words (default = 0, i.e. don't prune)
M: Minimum word frequency. Words with a lower frequency than this are ignored. If periodic pruning is turned on, this is also used to determine which words to remove from the dictionary (default = 3).
normalize: Normalize document length (use in conjunction with -norm and -lnorm)
norm: Specify the norm that each instance must have (default 1.0)
lnorm: Specify L-norm to use (default 2.0)
lowercase: Convert all tokens to lowercase before adding to the dictionary.
stoplist: Ignore words that are in the stoplist.
stopwords: A file containing stopwords to override the default ones. Using this option automatically sets the flag ('-stoplist') to use the stoplist if the file exists. Format: one stopword per line, lines starting with '#' are interpreted as comments and ignored.
tokenizer: The tokenizing algorithm (classname plus parameters) to use (default: weka.core.tokenizers.WordTokenizer).
stemmer: The stemming algorithm (classname plus parameters) to use.
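The core training loop behind the options above — the F (loss), L (learning rate), R (lambda), W (word frequencies), and norm/lnorm (document-length normalization) settings — can be sketched in plain Python. This is an illustrative simplification with invented function names, not WEKA's implementation:

```python
import math

def tokenize(doc):
    # Stands in for WordTokenizer plus the -lowercase option.
    return doc.lower().split()

def to_vector(doc, word_freqs=False):
    # Binary bag of words by default; -W switches to term frequencies.
    vec = {}
    for word in tokenize(doc):
        vec[word] = (vec.get(word, 0.0) + 1.0) if word_freqs else 1.0
    return vec

def normalize(vec, norm=1.0, lnorm=2.0):
    # -normalize: scale each document so its lnorm-norm equals `norm`.
    length = sum(abs(v) ** lnorm for v in vec.values()) ** (1.0 / lnorm)
    return {w: v * norm / length for w, v in vec.items()} if length else vec

def sgd_epoch(data, weights, bias, loss=0, lr=0.01, lam=1e-4):
    # loss=0: hinge (linear SVM); loss=1: log loss (logistic regression).
    # Labels y are expected in {-1, +1}.
    for vec, y in data:
        z = bias + sum(weights.get(w, 0.0) * v for w, v in vec.items())
        if loss == 0:
            dloss = -y if y * z < 1 else 0.0       # hinge subgradient
        else:
            dloss = -y / (1.0 + math.exp(y * z))   # log-loss gradient
        for w in weights:                          # L2 shrinkage (lambda)
            weights[w] -= lr * lam * weights[w]
        for w, v in vec.items():                   # loss-driven update
            weights[w] = weights.get(w, 0.0) - lr * dloss * v
        bias -= lr * dloss
    return weights, bias
```

Calling sgd_epoch repeatedly (the E option) on a list of (normalized vector, label) pairs yields a weight per dictionary word plus a bias; the sign of the resulting score is the predicted class.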
The Preliminary Attribute Check tests the underlying classifier against the DataTable specification at the inport of the node. Columns that are compatible with the classifier are marked with a green 'ok'. Columns which are potentially not compatible are assigned a red error message.
Important: If a column is marked as 'incompatible', it does not necessarily mean that the classifier cannot be executed! Sometimes, the error message 'Cannot handle String class' simply means that no nominal values are available (yet). This may change during execution of the predecessor nodes.
Capabilities: [Nominal attributes, Binary attributes, Unary attributes, Empty nominal attributes, Numeric attributes, Date attributes, String attributes, Missing values, Binary class, Missing class values] Dependencies:  min # Instance: 0
This view shows the command line options corresponding to the current classifier configuration and mainly serves to support the node's configuration via flow variables.
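Built from the flags and defaults documented above, such an options string might look like the following; this is an illustrative assembly of the documented defaults, not output copied from the node:

```
-F 0 -L 0.01 -R 1.0E-4 -E 500 -P 0 -M 3.0 -norm 1.0 -lnorm 2.0 -lowercase -stoplist -tokenizer weka.core.tokenizers.WordTokenizer
```

Flag options such as -W (word frequencies) or -normalize appear in the string only when enabled in the dialog.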