Creates a job that fine-tunes a specified model from a given dataset. The response includes details of the enqueued job, including the job status and the name of the fine-tuned model once training is complete. [Learn more about fine-tuning](/docs/guides/fine-tuning)
Request body, which must comply with the following JSON Schema:
{
  "required" : [ "model", "training_file" ],
  "type" : "object",
  "properties" : {
    "model" : {
      "description" : "The name of the model to fine-tune. You can select one of the\n[supported models](/docs/guides/fine-tuning/what-models-can-be-fine-tuned).\n",
      "example" : "gpt-3.5-turbo",
      "anyOf" : [
        { "type" : "string" },
        { "type" : "string", "enum" : [ "babbage-002", "davinci-002", "gpt-3.5-turbo" ] }
      ],
      "x-oaiTypeLabel" : "string"
    },
    "training_file" : {
      "type" : "string",
      "description" : "The ID of an uploaded file that contains training data.\n\nSee [upload file](/docs/api-reference/files/upload) for how to upload a file.\n\nYour dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose `fine-tune`.\n\nSee the [fine-tuning guide](/docs/guides/fine-tuning) for more details.\n",
      "example" : "file-abc123"
    },
    "hyperparameters" : {
      "type" : "object",
      "properties" : {
        "batch_size" : {
          "description" : "Number of examples in each batch. A larger batch size means that model parameters\nare updated less frequently, but with lower variance.\n",
          "oneOf" : [
            { "type" : "string", "enum" : [ "auto" ] },
            { "maximum" : 256, "minimum" : 1, "type" : "integer" }
          ],
          "default" : "auto"
        },
        "learning_rate_multiplier" : {
          "description" : "Scaling factor for the learning rate. A smaller learning rate may be useful to avoid\noverfitting.\n",
          "oneOf" : [
            { "type" : "string", "enum" : [ "auto" ] },
            { "minimum" : 0, "exclusiveMinimum" : true, "type" : "number" }
          ],
          "default" : "auto"
        },
        "n_epochs" : {
          "description" : "The number of epochs to train the model for. An epoch refers to one full cycle\nthrough the training dataset.\n",
          "oneOf" : [
            { "type" : "string", "enum" : [ "auto" ] },
            { "maximum" : 50, "minimum" : 1, "type" : "integer" }
          ],
          "default" : "auto"
        }
      },
      "description" : "The hyperparameters used for the fine-tuning job."
    },
    "suffix" : {
      "maxLength" : 40,
      "minLength" : 1,
      "type" : "string",
      "description" : "A string of up to 18 characters that will be added to your fine-tuned model name.\n\nFor example, a `suffix` of \"custom-model-name\" would produce a model name like `ft:gpt-3.5-turbo:openai:custom-model-name:7p4lURel`.\n",
      "nullable" : true
    },
    "validation_file" : {
      "type" : "string",
      "description" : "The ID of an uploaded file that contains validation data.\n\nIf you provide this file, the data is used to generate validation\nmetrics periodically during fine-tuning. These metrics can be viewed in\nthe fine-tuning results file.\nThe same data should not be present in both train and validation files.\n\nYour dataset must be formatted as a JSONL file. You must upload your file with the purpose `fine-tune`.\n\nSee the [fine-tuning guide](/docs/guides/fine-tuning) for more details.\n",
      "nullable" : true,
      "example" : "file-abc123"
    }
  }
}
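For illustration, a minimal request body satisfying this schema can be sketched in Python as follows. The file ID and suffix values are placeholders, not real identifiers; in practice the `training_file` ID comes from the file upload endpoint:

```python
# A minimal request body that complies with the schema above.
# "file-abc123" is the schema's example value, used here as a placeholder.
payload = {
    "model": "gpt-3.5-turbo",          # one of the supported models
    "training_file": "file-abc123",    # ID of a JSONL file uploaded with purpose `fine-tune`
    "hyperparameters": {
        "batch_size": "auto",                # or an integer from 1 to 256
        "learning_rate_multiplier": "auto",  # or a number greater than 0
        "n_epochs": "auto",                  # or an integer from 1 to 50
    },
    "suffix": "custom-model-name",     # optional; appended to the fine-tuned model name
}

# Sanity checks mirroring the schema's constraints.
assert all(key in payload for key in ("model", "training_file"))
assert 1 <= len(payload["suffix"]) <= 40
```

Fields not listed in `required` (`hyperparameters`, `suffix`, `validation_file`) may be omitted entirely, in which case their documented defaults apply.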
Specify how the response should be mapped to the table output. The following formats are available:
Raw Response: Returns the raw response in a single row with the following columns:
To use this node in KNIME, install the OpenAI Nodes extension from the update site below, following our NodePit Product and Node Installation Guide:
A zipped version of the software site can be downloaded here.
Deploy, schedule, execute, and monitor your KNIME workflows locally, in the cloud or on-premises – with our brand new NodePit Runner.
Try NodePit Runner! Do you have feedback, questions, or comments about NodePit, want to support this platform, or want your own nodes or workflows listed here as well? Do you think the search results could be improved or something is missing? Then please get in touch! Alternatively, you can send us an email to mail@nodepit.com, follow @NodePit on Twitter or botsin.space/@nodepit on Mastodon.
Please note that this is only about NodePit. We do not provide general support for KNIME — please use the KNIME forums instead.