
03 Response control

<p>This workflow demonstrates how <strong>a few-shot prompt</strong> can be used to perform <strong>aspect-based sentiment analysis</strong> and how to validate the model's response. It showcases <strong>response control techniques</strong> by guiding the model with labelled examples and checking whether the predicted <strong>output follows the intended format (string or JSON)</strong>.</p><p>The workflow processes product review data for laptops, combining review content with few-shot examples to generate aspect-specific prompts. It then sends these prompts to an LLM, which is expected to identify both the aspect and the <strong>corresponding sentiment</strong>. After the response is received, <strong>validation</strong> steps compare the output against the original prompt, checking for consistency.</p>

Prompt engineering - create few-shot prompt examples
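The few-shot examples can be sketched in plain Python as labelled review/aspect/sentiment triples rendered in a fixed pattern the model is meant to imitate (the reviews, field names, and formatting here are illustrative assumptions, not the workflow's exact data):

```python
# Labelled few-shot examples: (review text, aspect, sentiment).
examples = [
    ("The battery lasts all day.", "battery", "positive"),
    ("The keyboard feels mushy and cheap.", "keyboard", "negative"),
    ("Screen brightness is okay indoors.", "screen", "neutral"),
]

def format_examples(pairs):
    """Render each labelled example as a Review/Aspect/Sentiment block."""
    blocks = [
        f"Review: {text}\nAspect: {aspect}\nSentiment: {sentiment}"
        for text, aspect, sentiment in pairs
    ]
    return "\n\n".join(blocks)

few_shot_block = format_examples(examples)
print(few_shot_block)
```

Keeping every example in the same rigid pattern is what makes few-shot prompting work as a response-control technique: the model tends to copy the structure it is shown.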

Clean the response and check if the LLM correctly returned both the aspect and the sentiment
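A minimal sketch of this cleaning-and-checking step, assuming the model was asked to answer as a single `aspect: sentiment` line (the delimiter and the sentiment domain are assumptions; the workflow's string format may differ):

```python
# Permitted sentiment labels (assumed domain).
ALLOWED_SENTIMENTS = {"positive", "negative", "neutral"}

def check_response(raw: str):
    """Strip surrounding whitespace, then verify the reply contains both
    an aspect and a recognised sentiment. Returns (aspect, sentiment) on
    success, or None for a malformed reply."""
    cleaned = raw.strip()
    if ":" not in cleaned:
        return None
    aspect, _, sentiment = cleaned.partition(":")
    aspect, sentiment = aspect.strip(), sentiment.strip().lower()
    if not aspect or sentiment not in ALLOWED_SENTIMENTS:
        return None
    return aspect, sentiment

print(check_response("  battery: Positive \n"))  # well-formed reply
print(check_response("no sentiment here"))       # malformed reply
```

Returning `None` rather than raising lets malformed replies flow into a separate branch for inspection, mirroring how the workflow routes mismatches.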

Prompt engineering - create the full prompt for each review with instructions and examples
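Assembling the full prompt per review can be sketched as instruction + concatenated examples + the new review (the instruction wording and field labels are assumptions, not the workflow's exact text):

```python
# Assumed task instruction; the workflow's actual wording may differ.
INSTRUCTION = (
    "Identify the aspect discussed in the laptop review and its sentiment "
    "(positive, negative, or neutral). Follow the format of the examples."
)

def build_prompt(review: str, examples_block: str) -> str:
    """Combine instruction, few-shot examples, and the review to label."""
    return f"{INSTRUCTION}\n\n{examples_block}\n\nReview: {review}\nAspect:"

prompt = build_prompt(
    "The touchpad stopped responding after a week.",
    "Review: The battery lasts all day.\nAspect: battery\nSentiment: positive",
)
print(prompt)
```

Ending the prompt with a dangling `Aspect:` field nudges the model to continue the pattern rather than answer in free-form prose.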

Set up the OpenAI API key, authenticate, and select the LLM. Send the prompt to the LLM to extract the aspect and sentiment.
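Outside KNIME, the equivalent of the Authenticator/Selector/Prompter chain is a Chat Completions request; the sketch below only builds the request body (model name, endpoint, and temperature choice are assumptions; the actual call would go through an HTTP client or the official `openai` package, with the API key in an `Authorization: Bearer` header):

```python
import json

# Assumed REST endpoint for OpenAI chat completions.
CHAT_COMPLETIONS_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Serialise a single-turn chat request for the given prompt."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,  # deterministic output helps format validation
    }
    return json.dumps(body)

payload = build_request("Review: The fan is loud.\nAspect:")
print(payload)
```

Setting `temperature` to 0 is a common response-control choice here: format checks downstream are easier when the model's output is as deterministic as possible.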

Import few-shot examples

Rebuild the prompt to instruct the model to return the result in JSON format.
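The rebuilt JSON-mode prompt can be sketched by swapping the instruction for one that pins down the output schema (the instruction wording and the `aspect`/`sentiment` key names are assumptions):

```python
# Assumed JSON-output instruction; the workflow's exact wording may differ.
JSON_INSTRUCTION = (
    'Return only a JSON object with the keys "aspect" and "sentiment", '
    'e.g. {"aspect": "battery", "sentiment": "positive"}.'
)

def build_json_prompt(review: str, examples_block: str) -> str:
    """Combine the JSON instruction, examples, and the review to label."""
    return f"{JSON_INSTRUCTION}\n\n{examples_block}\n\nReview: {review}"

print(build_json_prompt(
    "The fan is loud.",
    '{"aspect": "battery", "sentiment": "positive"}',
))
```

Including a literal example object in the instruction, on top of the few-shot examples, further constrains the key names and casing the model uses.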

Convert the JSON response into a table, extract the aspect and sentiment, and evaluate output consistency
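This final step can be sketched as: parse the reply as JSON, flatten it into a table row, and flag rows that violate the expected schema (the key names and sentiment domain are assumptions carried over from the JSON instruction):

```python
import json

EXPECTED_KEYS = {"aspect", "sentiment"}          # assumed schema
SENTIMENT_DOMAIN = {"positive", "negative", "neutral"}

def parse_and_validate(raw: str) -> dict:
    """Parse one LLM reply into a row dict with a 'valid' flag, so bad
    rows can be routed to mismatch inspection instead of raising."""
    try:
        obj = json.loads(raw.strip())
    except json.JSONDecodeError:
        obj = None
    if not isinstance(obj, dict):
        return {"aspect": None, "sentiment": None, "valid": False}
    valid = EXPECTED_KEYS <= set(obj) and obj.get("sentiment") in SENTIMENT_DOMAIN
    return {"aspect": obj.get("aspect"),
            "sentiment": obj.get("sentiment"),
            "valid": valid}

print(parse_and_validate('{"aspect": "battery", "sentiment": "positive"}'))
print(parse_and_validate("not json"))
```

The `valid` flag plays the role of the workflow's consistency checks: presence of the expected columns, correct names, and values inside the sentiment domain.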

03 Response Control - Aspect Based Sentiment Analysis


Prompt engineering - create few-shot prompt examples and format them as JSON

Option 1: return output as a string
Option 2: return output as JSON
Get API Key
Create prompt with instructions and examples + request JSON output
Expression
Evaluate response using columns
Expression
Table Row to Variable
Create prompt with instructions and examples
Expression
Inspect mismatches
Create JSON examples with aspect and sentiment
Expression
Examples
Table Reader
Evaluate response
Expression
Concatenate examples and separate them by \n\n
GroupBy
Validate the responses: column presence, column names, types, domain & missing values
Table Validator
Parse output string to JSON
String to JSON
Clean leading and trailing whitespace
String Cleaner
Transform JSON to table
JSON to Table
Reviews
Table Reader
Join examples and sentiment
Expression
Table Row to Variable
Concatenate examples and separate them by \n\n
GroupBy
Extract aspects and predict sentiment
LLM Prompter
Extract aspects and predict sentiment + activate JSON output
LLM Prompter
OpenAI Authenticator
Connect to model
OpenAI LLM Selector
Inspect mismatches
