OpenAI Chat Model Fine-Tuner

This node fine-tunes an OpenAI Chat Model using structured conversation data. It is useful when you want to adapt a model to a specific tone, domain, or workflow — for example, tailoring it for financial advice, customer support, or internal knowledge assistants.

Each row in the input table represents a message in a conversation. The table must contain at least 10 distinct conversations, and each must include at least one system message to define the assistant’s behavior. The fine-tuning process learns from examples: it does not memorize answers, but generalizes from the patterns in the assistant replies. You define how the assistant should respond to user inputs by providing example dialogues with the desired outputs.

Fine-tuned models are stored on OpenAI's servers and can afterwards be selected in the OpenAI LLM Selector. To delete a fine-tuned model, use the OpenAI Fine-Tuned Model Deleter node.

For pricing, see the OpenAI documentation.

To fine-tune a model for the finance domain, you might provide example conversations that emphasize clear, compliant financial guidance. Here is an example fine-tuning table:

ID  Role       Content
1   system     You are a financial assistant who gives concise, compliant guidance.
1   user       Should I invest in tech stocks right now?
1   assistant  I can't give specific advice, but tech stocks are volatile. Consider your risk profile.
2   system     You are a financial assistant who gives concise, compliant guidance.
2   user       What's diversification?
2   assistant  Diversification spreads assets across sectors to reduce risk.
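On OpenAI's side, chat fine-tuning data is submitted as JSONL, with one conversation per line. The node performs this conversion internally; the sketch below is illustrative only, grouping rows like those in the table above (as tuples of conversation ID, role, and content) into the JSONL structure:

```python
import json
from itertools import groupby

# Rows as in the example table: (conversation ID, role, content).
# Note: each conversation needs at least one system message.
rows = [
    (1, "system", "You are a financial assistant who gives concise, compliant guidance."),
    (1, "user", "Should I invest in tech stocks right now?"),
    (1, "assistant", "I can't give specific advice, but tech stocks are volatile. Consider your risk profile."),
    (2, "system", "You are a financial assistant who gives concise, compliant guidance."),
    (2, "user", "What's diversification?"),
    (2, "assistant", "Diversification spreads assets across sectors to reduce risk."),
]

def to_jsonl(rows):
    """Group rows by conversation ID and emit one JSONL line per conversation."""
    lines = []
    for _, msgs in groupby(rows, key=lambda r: r[0]):
        conversation = {
            "messages": [{"role": role, "content": content} for _, role, content in msgs]
        }
        lines.append(json.dumps(conversation))
    return "\n".join(lines)

print(to_jsonl(rows))
```

Each output line is a complete training example; rows must be grouped (here: sorted) by conversation ID before serialization.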

Credential Handling: To pass your API key securely, use the Credentials Configuration node. If "Save password in configuration (weakly encrypted)" is not enabled, the credentials will not persist after closing the workflow.

Options

Data

Conversation ID column

Column containing references to group rows into conversations.

Role column

Column containing the message role: one of 'system', 'assistant', or 'user'.

Content column

Column containing the message contents.

Fine-tuning

Training epochs

An epoch refers to one full cycle through the training dataset. If set to 'Auto', OpenAI will determine a reasonable value.

Available options:

  • Auto: OpenAI determines a reasonable value.
  • Custom: Lets you specify a custom value.

Number of training epochs

An epoch refers to one full cycle through the training dataset.

Batch size

A larger batch size means that model parameters are updated less frequently, but with lower variance. If set to 'Auto', OpenAI will determine a reasonable value.

Available options:

  • Auto: OpenAI determines a reasonable value.
  • Custom: Lets you specify a custom value.

Custom batch size

A larger batch size means that model parameters are updated less frequently, but with lower variance.

Learning rate factor

A smaller learning rate may be useful to avoid overfitting. If set to 'Auto', OpenAI will determine a reasonable value.

Available options:

  • Auto: OpenAI determines a reasonable value.
  • Custom: Lets you specify a custom value.

Custom scaling factor

A smaller learning rate may be useful to avoid overfitting.
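These three settings correspond to the `n_epochs`, `batch_size`, and `learning_rate_multiplier` hyperparameters of OpenAI's fine-tuning API, each of which accepts either the string "auto" or a number. As a hedged sketch of how the Auto/Custom choice could translate into a request payload (the field names are OpenAI's; the helper function itself is illustrative, not part of the node):

```python
def build_hyperparameters(n_epochs="auto", batch_size="auto",
                          learning_rate_multiplier="auto"):
    """Build a hyperparameters payload for an OpenAI fine-tuning job.

    Each value is either the string "auto" (OpenAI picks a reasonable value)
    or a custom number, mirroring this node's Auto/Custom options.
    """
    params = {
        "n_epochs": n_epochs,
        "batch_size": batch_size,
        "learning_rate_multiplier": learning_rate_multiplier,
    }
    for name, value in params.items():
        if value != "auto" and not isinstance(value, (int, float)):
            raise ValueError(f"{name} must be 'auto' or a number, got {value!r}")
    return params

# Example: custom epoch count, everything else left to OpenAI.
print(build_hyperparameters(n_epochs=3))
```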

Output

Model name suffix

A string of up to 18 characters that will be added to your fine-tuned model name.

Polling interval (s)

The interval, in seconds, at which the node checks the progress of the fine-tuning job.
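The polling behavior can be sketched as a simple loop that queries the job status until it reaches a terminal state. In this sketch, the `get_status` callable stands in for a call to OpenAI's fine-tuning job endpoint; it is illustrative, not the node's actual code:

```python
import time

def wait_for_job(get_status, polling_interval_s=30, sleep=time.sleep):
    """Poll a fine-tuning job until it reaches a terminal state.

    get_status: callable returning one of OpenAI's job statuses,
    e.g. "validating_files", "running", "succeeded", "failed", "cancelled".
    """
    terminal = {"succeeded", "failed", "cancelled"}
    while True:
        status = get_status()
        if status in terminal:
            return status
        sleep(polling_interval_s)

# Example with a stubbed status sequence instead of a real API call:
statuses = iter(["validating_files", "running", "succeeded"])
print(wait_for_job(lambda: next(statuses), polling_interval_s=0))  # succeeded
```

A longer polling interval reduces API calls; a shorter one reports completion sooner.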

Input Ports

Configured OpenAI Chat Model which supports fine-tuning.

The data must be presented in three columns: one specifying a conversation ID, one giving the role of each message ('system', 'assistant', or 'user'), and a third holding the message content.

The table must include at least 10 conversations, each of which must contain at least one system message.
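The node enforces these two structural requirements itself; as a hedged sketch, an equivalent check over rows represented as (conversation ID, role, content) tuples could look like:

```python
def validate_conversations(rows, min_conversations=10):
    """Check fine-tuning rows: at least min_conversations distinct IDs,
    each conversation containing at least one system message."""
    conversations = {}
    for conv_id, role, _content in rows:
        conversations.setdefault(conv_id, set()).add(role)
    if len(conversations) < min_conversations:
        raise ValueError(
            f"Need at least {min_conversations} conversations, "
            f"got {len(conversations)}"
        )
    missing = [cid for cid, roles in conversations.items() if "system" not in roles]
    if missing:
        raise ValueError(f"Conversations without a system message: {missing}")
    return True
```

Running such a check before submitting a job avoids a round trip to the API for data that would be rejected anyway.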

Output Ports

Configured fine-tuned OpenAI Chat Model connection.

Metrics for evaluating fine-tuning performance: 'train loss', 'train accuracy', 'valid loss', and 'valid mean token accuracy', reported for each training step.

