Stanford Tagger

This node assigns a part-of-speech (POS) tag to each term of a document. It is applicable to English, German, French, Spanish, and Arabic texts. The underlying tagger models are provided by the Stanford NLP group:
http://nlp.stanford.edu/software/tagger.shtml
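
The node itself requires no code, but as a point of reference, here is a minimal, hypothetical sketch of how a sentence is tagged directly with the Stanford library (the model file name and path are assumptions and depend on your download):

  import edu.stanford.nlp.tagger.maxent.MaxentTagger;

  public class TagExample {
      public static void main(String[] args) throws Exception {
          // Assumed path to a downloaded Stanford tagger model file.
          MaxentTagger tagger = new MaxentTagger("models/english-left3words-distsim.tagger");
          // tagString appends a POS tag to each token,
          // e.g. "This_DT is_VBZ a_DT sentence_NN ._."
          System.out.println(tagger.tagString("This is a sentence."));
      }
  }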

For English texts the Penn Treebank tag set is used:
https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html.
For German texts the STTS tag set is used:
http://www.ims.uni-stuttgart.de/forschung/ressourcen/lexika/TagSets/stts-table.html.
For French texts the French Treebank tag set is used:
http://www.llf.cnrs.fr/Gens/Abeille/French-Treebank-fr.php.
For Spanish texts the Ancora Treebank tag set is used:
https://nlp.stanford.edu/software/spanish-faq.shtml#tagset.
For Arabic texts an Arabic Penn Treebank tag set is used:
https://nlp.stanford.edu/software/parser-arabic-faq.html#d.
There are also German, Spanish and French models using the Universal Dependencies POS tag set:
http://universaldependencies.org/u/pos/.
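
For illustration, the same German sentence receives different labels from an STTS model and a Universal Dependencies model (a hand-made example of the tag sets, not output produced by the node):

  Sentence:            Der   Hund   bellt   oft
  STTS tags:           ART   NN     VVFIN   ADV
  Universal POS tags:  DET   NOUN   VERB    ADV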

Note: the provided tagger models vary in memory consumption and processing speed. The models English bidirectional, WSJ bidirectional, German hgc, and German dewac in particular require a lot of memory. To use these models it is recommended to run KNIME with at least 2GB of heap space. To increase the heap space, change the -Xmx setting in the knime.ini file. If KNIME is running with less than 1.5GB of heap space, it is recommended to use the English left3words, English left3words caseless, or German fast models for tagging English or German texts.
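
As a minimal sketch of such a change (the surrounding lines in your knime.ini will differ), the heap limit is the -Xmx argument listed after the -vmargs marker:

  -vmargs
  -Xmx2048m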

Descriptions of the models (taken from the website of the Stanford NLP group):

  • English bidirectional: Trained on WSJ sections 0-18 using a bidirectional architecture and including word shape and distributional similarity features. Penn Treebank tagset.
  • English left3words: Trained on WSJ sections 0-18 and extra parser training data using the left3words architecture and includes word shape and distributional similarity features. Penn Treebank tagset.
  • English left3words caseless: Trained on WSJ sections 0-18 and extra parser training data using the left3words architecture and includes word shape and distributional similarity features. Penn Treebank tagset. Ignores case.
  • English WSJ 0-18 bidirectional distsim: Trained on WSJ sections 0-18 using a bidirectional architecture and including word shape and distributional similarity features. Penn Treebank tagset.
  • English WSJ 0-18 bidirectional no distsim: Trained on WSJ sections 0-18 using a bidirectional architecture and including word shape. No distributional similarity features. Penn Treebank tagset.
  • English WSJ 0-18 caseless left 3 words distsim: Trained on WSJ sections 0-18 using the left3words architecture and includes word shape and distributional similarity features. Penn Treebank tagset. Ignores case.
  • English WSJ 0-18 left 3 words distsim: Trained on WSJ sections 0-18 using the left3words architecture and includes word shape and distributional similarity features. Penn Treebank tagset.
  • English WSJ 0-18 left 3 words no distsim: Trained on WSJ sections 0-18 using the left3words architecture and includes word shape features. No distributional similarity features. Penn Treebank tagset.

To use the following tagger models, the specific language pack has to be installed (File -> Install KNIME Extensions...).

  • German hgc: Trained on the first 80% of the Negra corpus, which uses the STTS tagset.
  • German dewac: This model uses features from the distributional similarity clusters built from the deWac web corpus.
  • German fast: Lacks distributional similarity features, but is several times faster than the other alternatives.
  • German fast caseless: Lacks distributional similarity features, but is several times faster than the other alternatives. Ignores case.
  • German UD: This is a model that produces Universal Dependencies POS tags.
  • French: Trained on the French treebank.
  • French UD: This is a model that produces Universal Dependencies POS tags.
  • Spanish: Trained on the Spanish Ancora tagset.
  • Spanish distsim: Trained on the Spanish Ancora tagset, using distributional similarity features.
  • Spanish UD: This is a model that produces Universal Dependencies POS tags.
  • Arabic: This is a model that produces POS tags for the Arabic language.

Options

General options

Document column
The column containing the documents to tag.
Replace column
If checked, the documents of the selected document column will be replaced by the new tagged documents. Otherwise the tagged documents will be appended as a new column.
Append column
The name of the new appended column, containing the tagged documents.
Word tokenizer
Select the tokenizer used for word tokenization. Go to Preferences -> KNIME -> Textprocessing to read the description for each tokenizer.
Number of maximal parallel tagging processes
Defines the maximal number of parallel threads that are used for tagging. Please note that for each thread a tagging model will be loaded into memory. If this value is set to a number greater than 1, make sure that enough heap space is available to load the models. If you are not sure how much heap space is available for KNIME, leave this value at 1.
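
To illustrate why memory grows with this setting, the following hypothetical Java sketch (not the node's actual implementation) mirrors the behavior: each worker thread loads its own copy of a Stanford tagger model before tagging its share of the documents.

  import edu.stanford.nlp.tagger.maxent.MaxentTagger;
  import java.util.List;

  public class ParallelTaggingSketch {
      public static void main(String[] args) {
          // Assumed model path and one document partition per thread.
          String modelPath = "models/english-left3words-distsim.tagger";
          List<List<String>> partitions = List.of(
                  List.of("First document text."),
                  List.of("Second document text."));

          for (List<String> partition : partitions) {
              new Thread(() -> {
                  try {
                      // Each thread loads its own model instance, so heap usage
                      // grows roughly linearly with the number of threads.
                      MaxentTagger tagger = new MaxentTagger(modelPath);
                      for (String doc : partition) {
                          System.out.println(tagger.tagString(doc));
                      }
                  } catch (Exception e) {
                      e.printStackTrace();
                  }
              }).start();
          }
      }
  }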

Tagger options

Tagger model
The tagger model to use.

Input Ports

The input table containing the documents to tag.

Output Ports

An output table containing the tagged documents.

Views

This node has no views
