Few-shot learning can be achieved with LLMs via in-context learning.
This example shows how to build a data app that translates English expressions into Roman dialect.
The input to the model:
<<<
Translate: {wow}:
ammazza
Translate: {wow}:
da paura
Translate: {come on}:
daje!
Translate: {feeling tired}:
abbiocco
Translate: {nap}:
'na pennica
Translate: {hell yeah}:
avoja
Translate: {it's hot}:
sto ‘a schiumà
Translate: {let's go}:
damose
Translate: {what's up?}:
>>>
The output:
<<< Che succede? >>>
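The prompt above can be assembled programmatically: concatenate the example pairs, then append the new query with no answer, so the model continues the pattern. A minimal sketch in Python, using the example pairs from the demo above (the function name and structure are illustrative, not part of any specific library; apostrophes in the dialect strings are simplified):

```python
# Few-shot prompt assembly for in-context learning.
# Each example pair teaches the model the input/output pattern;
# the final, unanswered line is what the model is asked to complete.

EXAMPLES = [
    ("wow", "ammazza"),
    ("wow", "da paura"),
    ("come on", "daje!"),
    ("feeling tired", "abbiocco"),
    ("nap", "'na pennica"),
    ("hell yeah", "avoja"),
    ("it's hot", "sto 'a schiuma"),   # apostrophes simplified
    ("let's go", "damose"),
]

def build_few_shot_prompt(query: str) -> str:
    """Concatenate example pairs, then append the new query unanswered."""
    lines = []
    for source, target in EXAMPLES:
        # Triple braces: {{ and }} emit literal braces, {source} interpolates.
        lines.append(f"Translate: {{{source}}}:")
        lines.append(target)
    lines.append(f"Translate: {{{query}}}:")
    return "\n".join(lines)

prompt = build_few_shot_prompt("what's up?")
print(prompt)
```

The resulting string would be sent as-is to the LLM; the model's completion of the last line is the translation.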
Reference: Prompt Engineering - Wikipedia: https://en.wikipedia.org/wiki/Prompt_engineering
To use this workflow, download it and open it in KNIME.