First, any missing values in your dataset are filled in to ensure all records are complete. Next, a new text column is created by combining or transforming existing text fields, making the data ready for further analysis.
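The two steps above (filling missing values, then building a combined text column) can be sketched in plain Python; the column names `subject` and `message` are hypothetical examples, not names from the workflow.

```python
# Fill missing values, then build a combined text column.
# Column names ("subject", "message") are hypothetical examples.
rows = [
    {"subject": "Greeting", "message": "hello there"},
    {"subject": None, "message": "see you soon"},
]

for row in rows:
    # Missing-value handling: replace None with an empty string
    row["subject"] = row["subject"] or ""
    # New text column combining existing fields
    row["full_text"] = f"{row['subject']} {row['message']}".strip()

print(rows[1]["full_text"])  # → "see you soon"
```

In KNIME the same effect comes from configuring the Missing Value node per column and then a string-manipulation step; the sketch only shows the logical order of the two operations.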
Loads your raw dataset from a CSV file into KNIME, making it available for all further processing steps.
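Reading a CSV into row dictionaries, as this node does for the workflow's table, looks roughly like this in Python's standard library; the file content here is a made-up stand-in for the real input.

```python
import csv
import io

# Hypothetical CSV content standing in for the workflow's input file
raw = "id,message\n1,hello there\n2,see you soon\n"

with io.StringIO(raw) as f:
    rows = list(csv.DictReader(f))

print(rows[0])  # {'id': '1', 'message': 'hello there'}
```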
Generates a new column by applying a custom expression to your data. This lets you combine, transform, or calculate values from existing columns, adding useful information for later analysis.
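A custom expression that derives a new column from an existing one can be illustrated like this; the `word_count` column and the `message` input are assumptions for the sketch, not the workflow's actual expression.

```python
# A custom expression applied to each row: here, a hypothetical
# "word_count" column derived from the "message" column.
rows = [{"message": "hello there"}, {"message": "see you soon"}]

for row in rows:
    row["word_count"] = len(row["message"].split())

print([r["word_count"] for r in rows])  # → [2, 3]
```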
Applies a Python script to each row, allowing for advanced or custom text processing that goes beyond standard KNIME nodes. This step is useful for implementing specific logic or transformations tailored to your data needs.
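The kind of per-row logic such a script might contain can be sketched as a small function; normalizing whitespace and flagging questions are hypothetical examples of "custom processing", not the workflow's actual script.

```python
import re

# Custom per-row text processing beyond simple expressions:
# normalize whitespace and flag rows that contain a question.
# Column name "message" is a hypothetical example.
def process(row):
    text = re.sub(r"\s+", " ", row["message"]).strip()
    row["message"] = text
    row["is_question"] = text.endswith("?")
    return row

rows = [{"message": "  how are   you? "}, {"message": "fine"}]
rows = [process(r) for r in rows]
print(rows[0])  # {'message': 'how are you?', 'is_question': True}
```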
First, specific text replacements are made across your data to standardize or clean up values. Then, a custom Java snippet is applied to each row, allowing you to generate new features or further transform your text data based on your own logic. This sequence ensures your text fields are both consistent and enriched for later analysis.
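For consistency with the other sketches, the replacement step and the Java snippet's row-wise feature generation are both illustrated here in Python; the replacement pairs and the derived `n_chars` feature are hypothetical.

```python
# Step 1: standardize values with fixed text replacements
replacements = {"msg": "message", "u": "you"}  # hypothetical pairs

def clean(text):
    for old, new in replacements.items():
        text = " ".join(new if w == old else w for w in text.split())
    return text

# Step 2: per-row snippet generating a new feature (shown in Python;
# the workflow uses a Java snippet for this step)
rows = [{"message": "thank u for the msg"}]
for row in rows:
    row["message"] = clean(row["message"])
    row["n_chars"] = len(row["message"])

print(rows[0]["message"])  # → "thank you for the message"
```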
Keeps only the columns needed for your analysis, removing any unnecessary data and making the table easier to work with for the next steps.
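Column filtering reduces each row to a whitelist of fields; the column names below are hypothetical.

```python
# Keep only the columns needed downstream; names are hypothetical.
keep = {"message", "tone"}
rows = [{"message": "hi", "tone": "friendly", "debug_id": 42}]
rows = [{k: v for k, v in row.items() if k in keep} for row in rows]
print(rows[0])  # {'message': 'hi', 'tone': 'friendly'}
```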
Saves the processed data into an Excel file, creating a final report that can be easily shared or used for further analysis outside of KNIME.
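Outside of KNIME, the equivalent export is commonly done with pandas; this sketch assumes pandas (and the openpyxl package for `.xlsx` output) is installed, and the table contents are invented for illustration.

```python
import pandas as pd  # assumes pandas (+ openpyxl for .xlsx) is installed

# Hypothetical processed table; in KNIME this is the node's input port
df = pd.DataFrame(
    {"message": ["hi", "bye"], "tone": ["friendly", "neutral"]}
)
df.to_excel("report.xlsx", index=False)  # final shareable report
```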
Creates summary tables by grouping your data based on the message column, then counting how many times each event or tone appears for each message. This helps you quickly see the distribution and frequency of different events or tones within your messages.
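The grouping-and-counting step amounts to tallying (message, event/tone) pairs; here is a stdlib sketch with hypothetical column names and data.

```python
from collections import Counter

# Count how often each tone appears per message; KNIME's grouping
# nodes produce the same tallies. Column names are hypothetical.
rows = [
    {"message": "hi", "tone": "friendly"},
    {"message": "hi", "tone": "friendly"},
    {"message": "bye", "tone": "neutral"},
]
counts = Counter((r["message"], r["tone"]) for r in rows)
print(counts[("hi", "friendly")])  # → 2
```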
To use this workflow, download it from the URL below and open it in KNIME: