Create Fine Tuning Job

Creates a fine-tuning job, which begins the process of creating a new model from a given dataset. The response includes details of the enqueued job, including the job status and the name of the fine-tuned model once complete. [Learn more about fine-tuning](/docs/guides/fine-tuning)

Options

Body

Request body, which must comply with the following JSON Schema:

{
  "required" : [ "model", "training_file" ],
  "type" : "object",
  "properties" : {
    "model" : {
      "description" : "The name of the model to fine-tune. You can select one of the\n[supported models](/docs/guides/fine-tuning/which-models-can-be-fine-tuned).\n",
      "example" : "gpt-4o-mini",
      "anyOf" : [ {
        "type" : "string"
      }, {
        "type" : "string",
        "enum" : [ "babbage-002", "davinci-002", "gpt-3.5-turbo", "gpt-4o-mini" ]
      } ],
      "x-oaiTypeLabel" : "string"
    },
    "training_file" : {
      "type" : "string",
      "description" : "The ID of an uploaded file that contains training data.\n\nSee [upload file](/docs/api-reference/files/create) for how to upload a file.\n\nYour dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose `fine-tune`.\n\nThe contents of the file should differ depending on if the model uses the [chat](/docs/api-reference/fine-tuning/chat-input) or [completions](/docs/api-reference/fine-tuning/completions-input) format.\n\nSee the [fine-tuning guide](/docs/guides/fine-tuning) for more details.\n",
      "example" : "file-abc123"
    },
    "hyperparameters" : {
      "type" : "object",
      "properties" : {
        "batch_size" : {
          "description" : "Number of examples in each batch. A larger batch size means that model parameters\nare updated less frequently, but with lower variance.\n",
          "oneOf" : [ {
            "type" : "string",
            "enum" : [ "auto" ]
          }, {
            "maximum" : 256,
            "minimum" : 1,
            "type" : "integer"
          } ],
          "default" : "auto"
        },
        "learning_rate_multiplier" : {
          "description" : "Scaling factor for the learning rate. A smaller learning rate may be useful to avoid\noverfitting.\n",
          "oneOf" : [ {
            "type" : "string",
            "enum" : [ "auto" ]
          }, {
            "minimum" : 0,
            "exclusiveMinimum" : true,
            "type" : "number"
          } ],
          "default" : "auto"
        },
        "n_epochs" : {
          "description" : "The number of epochs to train the model for. An epoch refers to one full cycle\nthrough the training dataset.\n",
          "oneOf" : [ {
            "type" : "string",
            "enum" : [ "auto" ]
          }, {
            "maximum" : 50,
            "minimum" : 1,
            "type" : "integer"
          } ],
          "default" : "auto"
        }
      },
      "description" : "The hyperparameters used for the fine-tuning job."
    },
    "suffix" : {
      "maxLength" : 64,
      "minLength" : 1,
      "type" : "string",
      "description" : "A string of up to 64 characters that will be added to your fine-tuned model name.\n\nFor example, a `suffix` of \"custom-model-name\" would produce a model name like `ft:gpt-4o-mini:openai:custom-model-name:7p4lURel`.\n",
      "nullable" : true
    },
    "validation_file" : {
      "type" : "string",
      "description" : "The ID of an uploaded file that contains validation data.\n\nIf you provide this file, the data is used to generate validation\nmetrics periodically during fine-tuning. These metrics can be viewed in\nthe fine-tuning results file.\nThe same data should not be present in both train and validation files.\n\nYour dataset must be formatted as a JSONL file. You must upload your file with the purpose `fine-tune`.\n\nSee the [fine-tuning guide](/docs/guides/fine-tuning) for more details.\n",
      "nullable" : true,
      "example" : "file-abc123"
    },
    "integrations" : {
      "type" : "array",
      "description" : "A list of integrations to enable for your fine-tuning job.",
      "nullable" : true,
      "items" : {
        "required" : [ "type", "wandb" ],
        "type" : "object",
        "properties" : {
          "type" : {
            "description" : "The type of integration to enable. Currently, only \"wandb\" (Weights and Biases) is supported.\n",
            "oneOf" : [ {
              "type" : "string",
              "enum" : [ "wandb" ]
            } ]
          },
          "wandb" : {
            "required" : [ "project" ],
            "type" : "object",
            "properties" : {
              "project" : {
                "type" : "string",
                "description" : "The name of the project that the new run will be created under.\n",
                "example" : "my-wandb-project"
              },
              "name" : {
                "type" : "string",
                "description" : "A display name to set for the run. If not set, we will use the Job ID as the name.\n",
                "nullable" : true
              },
              "entity" : {
                "type" : "string",
                "description" : "The entity to use for the run. This allows you to set the team or username of the WandB user that you would\nlike associated with the run. If not set, the default entity for the registered WandB API key is used.\n",
                "nullable" : true
              },
              "tags" : {
                "type" : "array",
                "description" : "A list of tags to be attached to the newly created run. These tags are passed through directly to WandB. Some\ndefault tags are generated by OpenAI: \"openai/finetune\", \"openai/{base-model}\", \"openai/{ftjob-abcdef}\".\n",
                "items" : {
                  "type" : "string",
                  "example" : "custom-tag"
                }
              }
            },
            "description" : "The settings for your integration with Weights and Biases. This payload specifies the project that\nmetrics will be sent to. Optionally, you can set an explicit display name for your run, add tags\nto your run, and set a default entity (team, username, etc) to be associated with your run.\n"
          }
        }
      }
    },
    "seed" : {
      "maximum" : 2147483647,
      "minimum" : 0,
      "type" : "integer",
      "description" : "The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases.\nIf a seed is not specified, one will be generated for you.\n",
      "nullable" : true,
      "example" : 42
    }
  }
}
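
For reference, a minimal request body that satisfies the schema above might look like the following sketch. The training file ID matches the example given in the schema, while the validation file ID and suffix are placeholders; all optional fields may be omitted:

{
  "model" : "gpt-4o-mini",
  "training_file" : "file-abc123",
  "validation_file" : "file-def456",
  "suffix" : "custom-model-name",
  "hyperparameters" : {
    "n_epochs" : "auto",
    "batch_size" : 8,
    "learning_rate_multiplier" : 0.1
  },
  "seed" : 42
}

Each hyperparameter defaults to "auto" when omitted, as declared in the schema.
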
Result Format

Specify how the response should be mapped to the table output. The following formats are available:

Raw Response: Returns the raw response in a single row with the following columns (an example row is sketched after this list):

  • body: Response body
  • status: HTTP status code
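
As a hedged illustration, a successful call produces a single row whose status column is 200 and whose body column contains the enqueued fine-tuning job as JSON. The field names below are assumed from the OpenAI fine-tuning job object and are not defined by this node's schema:

{
  "id" : "ftjob-abc123",
  "object" : "fine_tuning.job",
  "model" : "gpt-4o-mini",
  "created_at" : 1721764800,
  "status" : "queued",
  "training_file" : "file-abc123",
  "fine_tuned_model" : null
}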

Input Ports

Configuration data.

Output Ports

Result of the request depending on the selected Result Format.
Configuration data (this is the same as the input port; it is provided as a passthrough for sequentially chaining nodes to declutter your workflow connections).

Views

This node has no views
