Create an assistant with a model and instructions.
The request body must comply with the following JSON Schema:
{ "required" : [ "model" ], "type" : "object", "properties" : { "model" : { "description" : "model_description", "example" : "gpt-4o", "anyOf" : [ { "type" : "string" }, { "type" : "string", "enum" : [ "gpt-4o", "gpt-4o-2024-08-06", "gpt-4o-2024-05-13", "gpt-4o-2024-08-06", "gpt-4o-mini", "gpt-4o-mini-2024-07-18", "gpt-4-turbo", "gpt-4-turbo-2024-04-09", "gpt-4-0125-preview", "gpt-4-turbo-preview", "gpt-4-1106-preview", "gpt-4-vision-preview", "gpt-4", "gpt-4-0314", "gpt-4-0613", "gpt-4-32k", "gpt-4-32k-0314", "gpt-4-32k-0613", "gpt-3.5-turbo", "gpt-3.5-turbo-16k", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-1106", "gpt-3.5-turbo-0125", "gpt-3.5-turbo-16k-0613" ] } ], "x-oaiTypeLabel" : "string" }, "name" : { "maxLength" : 256, "type" : "string", "description" : "assistant_name_param_description", "nullable" : true }, "description" : { "maxLength" : 512, "type" : "string", "description" : "assistant_description_param_description", "nullable" : true }, "instructions" : { "maxLength" : 256000, "type" : "string", "description" : "assistant_instructions_param_description", "nullable" : true }, "tools" : { "maxItems" : 128, "type" : "array", "description" : "assistant_tools_param_description", "items" : { "oneOf" : [ { "title" : "Code interpreter tool", "required" : [ "type" ], "type" : "object", "properties" : { "type" : { "type" : "string", "description" : "The type of tool being defined: `code_interpreter`", "enum" : [ "code_interpreter" ] } } }, { "title" : "FileSearch tool", "required" : [ "type" ], "type" : "object", "properties" : { "type" : { "type" : "string", "description" : "The type of tool being defined: `file_search`", "enum" : [ "file_search" ] }, "file_search" : { "type" : "object", "properties" : { "max_num_results" : { "maximum" : 50, "minimum" : 1, "type" : "integer", "description" : "The maximum number of results the file search tool should output. The default is 20 for `gpt-4*` models and 5 for `gpt-3.5-turbo`. This number should be between 1 and 50 inclusive.\n\nNote that the file search tool may output fewer than `max_num_results` results. See the [file search tool documentation](/docs/assistants/tools/file-search/customizing-file-search-settings) for more information.\n" }, "ranking_options" : { "title" : "File search tool call ranking options", "required" : [ "score_threshold" ], "type" : "object", "properties" : { "ranker" : { "type" : "string", "description" : "The ranker to use for the file search. If not specified will use the `auto` ranker.", "enum" : [ "auto", "default_2024_08_21" ] }, "score_threshold" : { "maximum" : 1, "minimum" : 0, "type" : "number", "description" : "The score threshold for the file search. All values must be a floating point number between 0 and 1." } }, "description" : "The ranking options for the file search. If not specified, the file search tool will use the `auto` ranker and a score_threshold of 0.\n\nSee the [file search tool documentation](/docs/assistants/tools/file-search/customizing-file-search-settings) for more information.\n" } }, "description" : "Overrides for the file search tool." 
} } }, { "title" : "Function tool", "required" : [ "function", "type" ], "type" : "object", "properties" : { "type" : { "type" : "string", "description" : "The type of tool being defined: `function`", "enum" : [ "function" ] }, "function" : { "required" : [ "name" ], "type" : "object", "properties" : { "description" : { "type" : "string", "description" : "A description of what the function does, used by the model to choose when and how to call the function." }, "name" : { "type" : "string", "description" : "The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64." }, "parameters" : { "type" : "object", "additionalProperties" : true, "description" : "The parameters the functions accepts, described as a JSON Schema object. See the [guide](/docs/guides/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format. \n\nOmitting `parameters` defines a function with an empty parameter list." }, "strict" : { "type" : "boolean", "description" : "Whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the `parameters` field. Only a subset of JSON Schema is supported when `strict` is `true`. Learn more about Structured Outputs in the [function calling guide](docs/guides/function-calling).", "nullable" : true, "default" : false } } } } } ], "x-oaiExpandable" : true }, "default" : [ ] }, "tool_resources" : { "type" : "object", "properties" : { "code_interpreter" : { "type" : "object", "properties" : { "file_ids" : { "maxItems" : 20, "type" : "array", "description" : "A list of [file](/docs/api-reference/files) IDs made available to the `code_interpreter` tool. There can be a maximum of 20 files associated with the tool.\n", "items" : { "type" : "string" }, "default" : [ ] } } }, "file_search" : { "type" : "object", "properties" : { "vector_store_ids" : { "maxItems" : 1, "type" : "array", "description" : "The [vector store](/docs/api-reference/vector-stores/object) attached to this assistant. There can be a maximum of 1 vector store attached to the assistant.\n", "items" : { "type" : "string" } }, "vector_stores" : { "maxItems" : 1, "type" : "array", "description" : "A helper to create a [vector store](/docs/api-reference/vector-stores/object) with file_ids and attach it to this assistant. There can be a maximum of 1 vector store attached to the assistant.\n", "items" : { "type" : "object", "properties" : { "file_ids" : { "maxItems" : 10000, "type" : "array", "description" : "A list of [file](/docs/api-reference/files) IDs to add to the vector store. There can be a maximum of 10000 files in a vector store.\n", "items" : { "type" : "string" } }, "chunking_strategy" : { "type" : "object", "description" : "The chunking strategy used to chunk the file(s). If not set, will use the `auto` strategy.", "oneOf" : [ { "title" : "Auto Chunking Strategy", "required" : [ "type" ], "type" : "object", "properties" : { "type" : { "type" : "string", "description" : "Always `auto`.", "enum" : [ "auto" ] } }, "additionalProperties" : false, "description" : "The default strategy. This strategy currently uses a `max_chunk_size_tokens` of `800` and `chunk_overlap_tokens` of `400`." 
}, { "title" : "Static Chunking Strategy", "required" : [ "static", "type" ], "type" : "object", "properties" : { "type" : { "type" : "string", "description" : "Always `static`.", "enum" : [ "static" ] }, "static" : { "required" : [ "chunk_overlap_tokens", "max_chunk_size_tokens" ], "type" : "object", "properties" : { "max_chunk_size_tokens" : { "maximum" : 4096, "minimum" : 100, "type" : "integer", "description" : "The maximum number of tokens in each chunk. The default value is `800`. The minimum value is `100` and the maximum value is `4096`." }, "chunk_overlap_tokens" : { "type" : "integer", "description" : "The number of tokens that overlap between chunks. The default value is `400`.\n\nNote that the overlap must not exceed half of `max_chunk_size_tokens`.\n" } }, "additionalProperties" : false } }, "additionalProperties" : false } ], "x-oaiExpandable" : true }, "metadata" : { "type" : "object", "description" : "Set of 16 key-value pairs that can be attached to a vector store. This can be useful for storing additional information about the vector store in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.\n", "x-oaiTypeLabel" : "map" } } } } }, "oneOf" : [ { "required" : [ "vector_store_ids" ] }, { "required" : [ "vector_stores" ] } ] } }, "description" : "A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs.\n", "nullable" : true }, "metadata" : { "type" : "object", "description" : "metadata_description", "nullable" : true, "x-oaiTypeLabel" : "map" }, "temperature" : { "maximum" : 2, "minimum" : 0, "type" : "number", "description" : "run_temperature_description", "nullable" : true, "example" : 1, "default" : 1 }, "top_p" : { "maximum" : 1, "minimum" : 0, "type" : "number", "description" : "run_top_p_description", "nullable" : true, "example" : 1, "default" : 1 }, "response_format" : { "description" : "Specifies the format that the model must output. Compatible with [GPT-4o](/docs/models/gpt-4o), [GPT-4 Turbo](/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.\n\nSetting to `{ \"type\": \"json_schema\", \"json_schema\": {...} }` enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).\n\nSetting to `{ \"type\": \"json_object\" }` enables JSON mode, which ensures the message the model generates is valid JSON.\n\n**Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly \"stuck\" request. 
Also note that the message content may be partially cut off if `finish_reason=\"length\"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.\n", "oneOf" : [ { "type" : "string", "description" : "`auto` is the default value\n", "enum" : [ "auto" ] }, { "required" : [ "type" ], "type" : "object", "properties" : { "type" : { "type" : "string", "description" : "The type of response format being defined: `text`", "enum" : [ "text" ] } } }, { "required" : [ "type" ], "type" : "object", "properties" : { "type" : { "type" : "string", "description" : "The type of response format being defined: `json_object`", "enum" : [ "json_object" ] } } }, { "required" : [ "json_schema", "type" ], "type" : "object", "properties" : { "type" : { "type" : "string", "description" : "The type of response format being defined: `json_schema`", "enum" : [ "json_schema" ] }, "json_schema" : { "required" : [ "name", "type" ], "type" : "object", "properties" : { "description" : { "type" : "string", "description" : "A description of what the response format is for, used by the model to determine how to respond in the format." }, "name" : { "type" : "string", "description" : "The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64." }, "schema" : { "type" : "object", "additionalProperties" : true, "description" : "The schema for the response format, described as a JSON Schema object." }, "strict" : { "type" : "boolean", "description" : "Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the `schema` field. Only a subset of JSON Schema is supported when `strict` is `true`. To learn more, read the [Structured Outputs guide](/docs/guides/structured-outputs).", "nullable" : true, "default" : false } } } } } ], "x-oaiExpandable" : true } }, "additionalProperties" : false }
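For illustration, a request body that satisfies this schema could look like the sketch below. Only `model` is required; the assistant name, instructions, metadata key, and the vector store ID `vs_example_123` are made-up placeholder values, not values taken from this page:

{
  "model" : "gpt-4o",
  "name" : "Invoice Helper",
  "description" : "Answers questions about uploaded invoices.",
  "instructions" : "You are a helpful assistant. Use file search to ground answers in the attached documents.",
  "tools" : [
    { "type" : "code_interpreter" },
    {
      "type" : "file_search",
      "file_search" : {
        "max_num_results" : 10,
        "ranking_options" : { "ranker" : "auto", "score_threshold" : 0.5 }
      }
    }
  ],
  "tool_resources" : {
    "file_search" : { "vector_store_ids" : [ "vs_example_123" ] }
  },
  "temperature" : 1,
  "top_p" : 1,
  "response_format" : "auto",
  "metadata" : { "project" : "demo" }
}

Omitted optional fields fall back to the defaults given in the schema (for example, an empty `tools` list); note that `temperature` must stay within 0–2 and `top_p` within 0–1.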
Specify how the response should be mapped to the table output. The following formats are available:
Raw Response: Returns the raw response in a single row with the following columns:
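As a rough sketch of what such a raw response body contains: a successful call returns the created assistant object as JSON. The structure below follows the OpenAI Assistants API reference, and all field values are illustrative placeholders rather than output captured from this node:

{
  "id" : "asst_abc123",
  "object" : "assistant",
  "created_at" : 1699009709,
  "name" : "Invoice Helper",
  "description" : null,
  "model" : "gpt-4o",
  "instructions" : "You are a helpful assistant.",
  "tools" : [ { "type" : "file_search" } ],
  "tool_resources" : { "file_search" : { "vector_store_ids" : [ "vs_example_123" ] } },
  "metadata" : { },
  "temperature" : 1,
  "top_p" : 1,
  "response_format" : "auto"
}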
To use this node in KNIME, install the OpenAI Nodes extension from the update site below, following our NodePit Product and Node Installation Guide:
A zipped version of the software site can be downloaded here.
Deploy, schedule, execute, and monitor your KNIME workflows locally, in the cloud, or on-premises with our brand-new NodePit Runner.
Try NodePit Runner! Do you have feedback, questions, or comments about NodePit, want to support this platform, or want your own nodes or workflows listed here as well? Do you think the search results could be improved, or is something missing? Then please get in touch! Alternatively, you can send an email to mail@nodepit.com.
Please note that this contact is for NodePit only. We do not provide general support for KNIME; please use the KNIME forums instead.