This node fine-tunes an OpenAI Chat Model using structured conversation data. It is useful when you want to adapt a model to a specific tone, domain, or workflow — for example, tailoring it for financial advice, customer support, or internal knowledge assistants.
Each row in the input table represents a message in a conversation. The table must contain at least 10 distinct conversations, and each conversation must include at least one system message to define the assistant's behavior.
The fine-tuning process learns from examples: it does not memorize answers, but generalizes from the patterns in the assistant replies. You define how the assistant should respond to user inputs by providing example dialogues with the desired outputs.
Fine-tuned models are stored on OpenAI's servers and can afterwards be selected in the OpenAI LLM Selector.
To delete a fine-tuned model, use the OpenAI Fine-Tuned Model Deleter node.
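Deleting a model ultimately goes through OpenAI's model-deletion endpoint, so the same cleanup is also possible outside of KNIME. A minimal sketch using the openai Python package; the model ID below is a placeholder:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder ID: fine-tuned model IDs start with "ft:".
result = client.models.delete("ft:gpt-4o-mini-2024-07-18:my-org::abc123")
print(result.deleted)  # True once the model has been removed
```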
For pricing, see the OpenAI documentation.
To fine-tune a model for the finance domain, you might provide example conversations that emphasize clear, compliant financial guidance. Here is an example fine-tuning table:
| ID | Role | Content |
|----|------|---------|
| 1 | system | You are a financial assistant who gives concise, compliant guidance. |
| 1 | user | Should I invest in tech stocks right now? |
| 1 | assistant | I can't give specific advice, but tech stocks are volatile. Consider your risk profile. |
| 2 | system | You are a financial assistant who gives concise, compliant guidance. |
| 2 | user | What's diversification? |
| 2 | assistant | Diversification spreads assets across sectors to reduce risk. |
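OpenAI's fine-tuning API expects such conversations as a JSONL file with one {"messages": [...]} object per line; the node handles this conversion for you. The sketch below merely illustrates how the example table maps onto that format (the output file name is an assumption):

```python
import json
from collections import defaultdict

# Rows mirroring the example table: (conversation ID, role, content).
rows = [
    (1, "system", "You are a financial assistant who gives concise, compliant guidance."),
    (1, "user", "Should I invest in tech stocks right now?"),
    (1, "assistant", "I can't give specific advice, but tech stocks are volatile. Consider your risk profile."),
    (2, "system", "You are a financial assistant who gives concise, compliant guidance."),
    (2, "user", "What's diversification?"),
    (2, "assistant", "Diversification spreads assets across sectors to reduce risk."),
]

# Group rows by conversation ID, preserving message order.
conversations = defaultdict(list)
for conv_id, role, content in rows:
    conversations[conv_id].append({"role": role, "content": content})

# One JSON object per line, each holding a full conversation.
with open("training.jsonl", "w", encoding="utf-8") as f:
    for messages in conversations.values():
        f.write(json.dumps({"messages": messages}) + "\n")
```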
Credential Handling: To pass your API key securely, use the Credentials Configuration node. If "Save password in configuration (weakly encrypted)" is not enabled, the credentials will not persist after closing the workflow.
Column containing the conversation IDs used to group rows into conversations.
Column containing the message role. Must be one of 'system', 'assistant', or 'user'.
Column containing the message contents.
An epoch refers to one full cycle through the training dataset. If set to 'Auto', OpenAI will determine a reasonable value.
A larger batch size means that model parameters are updated less frequently, but with lower variance. If set to 'Auto', OpenAI will determine a reasonable value.
A smaller learning rate may be useful to avoid overfitting. If set to 'Auto', OpenAI will determine a reasonable value.
A string of up to 18 characters that will be added to your fine-tuned model name.
The interval, in seconds, at which the node checks the progress of the fine-tuning job.
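For orientation, these settings correspond roughly to the following calls in OpenAI's fine-tuning API. This is a hedged sketch with the openai Python package, not the node's actual implementation; the base model, training file name, and 60-second interval are assumptions:

```python
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the prepared JSONL training file (file name is an assumption).
training_file = client.files.create(
    file=open("training.jsonl", "rb"), purpose="fine-tune"
)

# "auto" lets OpenAI pick reasonable values, mirroring the node's 'Auto' setting.
job = client.fine_tuning.jobs.create(
    model="gpt-4o-mini-2024-07-18",  # base model; an assumption
    training_file=training_file.id,
    hyperparameters={
        "n_epochs": "auto",
        "batch_size": "auto",
        "learning_rate_multiplier": "auto",
    },
    suffix="finance-assistant",  # up to 18 characters, appended to the model name
)

# Poll the job at a fixed interval until it reaches a terminal state.
while True:
    job = client.fine_tuning.jobs.retrieve(job.id)
    if job.status in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(60)  # polling interval in seconds

print(job.status, job.fine_tuned_model)
```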
A configured OpenAI Chat Model that supports fine-tuning.
The data must be provided in three columns: one specifying a conversation ID, one giving the role of each message ('system', 'assistant', or 'user'), and a third containing the message content.
The table must include at least 10 conversations, each of which must contain at least one system message.
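To verify these requirements before running the node, for example in a Python Script node, a small pandas sketch could look as follows (the column names 'ID' and 'Role' are assumptions matching the example table above):

```python
import pandas as pd

def validate_conversations(df: pd.DataFrame) -> None:
    """Check the two structural requirements described above."""
    groups = df.groupby("ID")
    if groups.ngroups < 10:
        raise ValueError("the table must contain at least 10 distinct conversations")
    has_system = groups["Role"].apply(lambda roles: (roles == "system").any())
    if not has_system.all():
        missing = has_system[~has_system].index.tolist()
        raise ValueError(f"conversations without a system message: {missing}")
```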
To use this node in KNIME, install the extension KNIME Python Extension Development (Labs) from the update site below, following our NodePit Product and Node Installation Guide:
A zipped version of the software site can be downloaded here.
Deploy, schedule, execute, and monitor your KNIME workflows locally, in the cloud or on-premises – with our brand new NodePit Runner.
Try NodePit Runner!

Do you have feedback, questions, or comments about NodePit, want to support this platform, or want your own nodes or workflows listed here as well? Do you think the search results could be improved or that something is missing? Then please get in touch! Alternatively, you can send us an email to mail@nodepit.com.
Please note that this is only about NodePit. We do not provide general support for KNIME — please use the KNIME forums instead.