The workflow starts with a list of documents that were downloaded from PubMed, parsed beforehand, and saved as a data table. The data is available in the workflow directory.
The documents are assigned to two categories and, based on these category assignments, split into two sets: the first set consists of documents about humans and AIDS, the second of documents about mice and cancer.
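The split described above can be sketched in plain Python. This is an illustrative stand-in for the KNIME splitting nodes, not their actual implementation; the record structure and category labels are assumptions.

```python
# Hypothetical parsed PubMed records with an assumed "category" field.
docs = [
    {"title": "AIDS progression in humans", "category": "human_aids"},
    {"title": "Tumor growth in mice", "category": "mouse_cancer"},
    {"title": "HIV cohort study", "category": "human_aids"},
]

# Split the table into two sets based on the category assignment.
human_aids = [d for d in docs if d["category"] == "human_aids"]
mouse_cancer = [d for d in docs if d["category"] == "mouse_cancer"]
```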
Part-of-speech tags as well as gene names are recognized and assigned by the corresponding taggers (the POS tagger and the ABNER tagger), so that a color can be assigned based on the tag type later on.
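The idea of mapping tag types to colors can be sketched as a simple lookup. The tag names and color values below are illustrative assumptions, not the actual output of the POS or ABNER taggers.

```python
# Assumed mapping from tag type to a display color for the Tag Cloud.
TAG_COLORS = {
    "NN": "#1f77b4",    # noun (POS tag, example)
    "VB": "#2ca02c",    # verb (POS tag, example)
    "GENE": "#d62728",  # gene name (named-entity tag, example)
}

def color_for(tag: str, default: str = "#7f7f7f") -> str:
    """Return the display color for a tag type, grey if unknown."""
    return TAG_COLORS.get(tag, default)
```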
After preprocessing by filtering and stemming, a bag of words is created and term frequencies are computed. Filtering is applied again, this time based on the frequencies. Finally, the remaining terms are visualized in a Tag Cloud.
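The preprocessing chain can be sketched as follows. This is a minimal stand-in for the KNIME nodes: the stop-word list is an assumption, and the crude suffix stripping merely imitates what a real stemmer (e.g. a Porter stemmer) would do.

```python
from collections import Counter

# Assumed stop-word list used for the first filtering step.
STOP_WORDS = {"the", "of", "in", "and", "a", "is"}

def stem(word: str) -> str:
    # Crude suffix stripping as a stand-in for a real stemmer.
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def bag_of_words(text: str, min_freq: int = 2) -> Counter:
    tokens = [t.lower() for t in text.split()]
    # Filtering (stop words) and stemming.
    terms = [stem(t) for t in tokens if t not in STOP_WORDS]
    freqs = Counter(terms)
    # Second filtering step, based on the term frequencies.
    return Counter({t: n for t, n in freqs.items() if n >= min_freq})
```

The surviving terms and their counts are what a Tag Cloud node would then size and color.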
This workflow shows how to import textual data, preprocess documents by filtering and stemming, transform documents into a bag of words, and finally visualize them using a Tag Cloud.
Get this workflow from the following link: Download
05_Named_Entity_Tag_Cloud consists of the following 26 node(s):
05_Named_Entity_Tag_Cloud contains nodes provided by the following 3 plugin(s):
Do you have feedback, questions, or comments about NodePit, want to support this platform, or want your own nodes or workflows listed here as well? Do you think the search results could be improved or that something is missing? Then please get in touch! Alternatively, you can send us an email at firstname.lastname@example.org, follow @NodePit on Twitter, or chat on Gitter!
Please note that this is only about NodePit. We do not provide general support for KNIME — please use the KNIME forums instead.