Stanford tagger

This Node Is Deprecated — This node is kept for backwards compatibility, but its use in new workflows is no longer recommended. The documentation below may contain further information.

This node assigns a part-of-speech (POS) tag to each term of a document. It is applicable to English, German, and French texts. The underlying tagger models are provided by the Stanford NLP group:
http://nlp.stanford.edu/software/tagger.shtml

For English texts the Penn Treebank tag set is used:
http://www.cis.upenn.edu/~treebank
For German texts the STTS tag set is used:
http://www.ims.uni-stuttgart.de/projekte/CQPDemos/Bundestag/help-tagset.html
For French texts the French Treebank tag set is used:
http://www.llf.cnrs.fr/Gens/Abeille/French-Treebank-fr.php
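
As a minimal illustration of what Penn Treebank-style output looks like, each term is paired with a tag such as DT, NN, or VBZ. The sketch below uses a toy dictionary lookup purely to show the output format; the Stanford tagger itself uses trained statistical models, not a fixed lexicon:

```python
# Toy Penn Treebank-style POS tagging by dictionary lookup.
# Illustration of the (term, tag) output format only — NOT the
# Stanford model, which is statistical and context-sensitive.
TOY_LEXICON = {
    "the": "DT",       # determiner
    "tagger": "NN",    # noun, singular
    "assigns": "VBZ",  # verb, 3rd person singular present
    "a": "DT",
    "tag": "NN",
}

def toy_tag(sentence):
    """Return (term, tag) pairs; unknown words fall back to 'NN'."""
    return [(w, TOY_LEXICON.get(w.lower(), "NN")) for w in sentence.split()]

print(toy_tag("The tagger assigns a tag"))
```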

Note: the provided tagger models vary in memory consumption and processing speed. The models English bidirectional, German hgc, and German dewac in particular require a lot of memory. To use these models it is recommended to run KNIME with at least 2GB of heap space. To increase the heap space, change the -Xmx setting in the knime.ini file. If KNIME is running with less than 1.5GB of heap space, it is recommended to use the English left3words, English left3words caseless, or German fast models for tagging English or German texts.
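
For example, to give KNIME 2GB of heap space, the -Xmx line in knime.ini might read (the exact value is a suggestion; adjust it to your machine's available memory):

```ini
-Xmx2048m
```

KNIME must be restarted for a changed -Xmx setting to take effect.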

Descriptions of the models (taken from the website of the Stanford NLP group):

  • English bidirectional: Trained on WSJ sections 0-18 using a bidirectional architecture and including word shape and distributional similarity features.
  • English left3words: Trained on WSJ sections 0-18 and extra parser training data using the left3words architecture and includes word shape and distributional similarity features.
  • English left3words caseless: Trained on WSJ sections 0-18 and extra parser training data using the left3words architecture and includes word shape and distributional similarity features. Ignores case.
  • German hgc: Trained on the first 80% of the Negra corpus, which uses the STTS tagset.
  • German dewac: This model uses features from the distributional similarity clusters built from the deWac web corpus.
  • German Fast: Lacks distributional similarity features, but is several times faster than the other alternatives.
  • French: Trained on the French treebank.

Options

Tagger options

Tagger model
The tagger model to use.

General options

Number of maximal parallel tagging processes
Defines the maximal number of parallel threads used for tagging. Please note that for each thread a tagging model is loaded into memory. If this value is set to a number greater than 1, make sure that enough heap space is available to load the models. If you are not sure how much heap space is available for KNIME, leave this value at 1.
Word tokenizer
Select the tokenizer used for word tokenization. Go to Preferences -> KNIME -> Textprocessing to read the description for each tokenizer.

Input Ports

The input table containing the documents to tag.

Output Ports

An output table containing the tagged documents.

Views

This node has no views
