Cancels a model response with the given ID. Only responses created with the background parameter set to true can be cancelled.
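For illustration, a minimal sketch of issuing the cancel call over raw HTTP, assuming the endpoint pattern POST /v1/responses/{response_id}/cancel; the response ID and API key below are placeholders:

```python
import urllib.request

API_BASE = "https://api.openai.com/v1"

def cancel_url(response_id: str) -> str:
    """Build the cancel endpoint URL for a given response ID."""
    return f"{API_BASE}/responses/{response_id}/cancel"

def cancel_response(response_id: str, api_key: str):
    """POST to the cancel endpoint. Only responses created with
    background=True can be cancelled; others yield an error."""
    req = urllib.request.Request(
        cancel_url(response_id),
        method="POST",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    return urllib.request.urlopen(req)
```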
Specify how the response should be mapped to the table output. The following formats are available:
Structured Table: Returns a parsed table with data split into rows and columns.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
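Those limits can be checked client-side before sending a request; a small sketch, with the constants restating the numbers above:

```python
# Client-side check for the metadata limits described above:
# at most 16 pairs, keys up to 64 characters, values up to 512 characters.

MAX_PAIRS, MAX_KEY_LEN, MAX_VALUE_LEN = 16, 64, 512

def validate_metadata(metadata: dict) -> list:
    """Return a list of human-readable problems; an empty list means valid."""
    problems = []
    if len(metadata) > MAX_PAIRS:
        problems.append(f"too many pairs: {len(metadata)} > {MAX_PAIRS}")
    for key, value in metadata.items():
        if not isinstance(key, str) or len(key) > MAX_KEY_LEN:
            problems.append(f"bad key: {key!r}")
        if not isinstance(value, str) or len(value) > MAX_VALUE_LEN:
            problems.append(f"bad value for key {key!r}")
    return problems
```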
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature, but not both.
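To make the nucleus idea concrete, here is a toy illustration of the filtering that top_p controls server-side (not the actual implementation): keep the most probable tokens until their cumulative mass reaches top_p, then renormalize.

```python
def nucleus_filter(probs: dict, top_p: float) -> dict:
    """Return the smallest set of highest-probability tokens whose
    cumulative mass reaches top_p, with probabilities renormalized."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(p for _, p in kept)
    return {token: p / total for token, p in kept}

# With top_p=0.5 only the single most likely token survives, illustrating
# how small top_p values restrict sampling to the head of the distribution.
example = nucleus_filter({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}, 0.5)
```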
This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse.

Specifies the processing type used for serving the request.
When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
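A hypothetical request/response pair illustrating this: the model name and tier values below are placeholders, and the echoed service_tier may differ from the requested one.

```python
request_body = {
    "model": "gpt-5",            # placeholder model name
    "input": "Hello!",
    "service_tier": "flex",      # requested processing tier (illustrative)
}

# A response body might echo a different tier than the one requested:
response_body = {
    "service_tier": "default",   # tier actually used to serve the request
    "status": "completed",
}

def tier_changed(request: dict, response: dict) -> bool:
    """True when the request was served under a different tier than asked."""
    return request.get("service_tier") != response.get("service_tier")
```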
gpt-5 and o-series models only
Configuration options for reasoning models.
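As a hedged sketch, a request enabling reasoning options might look like this; the model name and the particular field values are illustrative, not an exhaustive list:

```python
request_body = {
    "model": "o3",                 # placeholder reasoning model
    "input": "Prove that sqrt(2) is irrational.",
    "reasoning": {
        "effort": "medium",        # how much reasoning effort to spend
        "summary": "auto",         # request a summary of the reasoning
    },
}
```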
Configuration options for a text response from the model. Can be plain text or structured JSON data.
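For structured JSON output, a request can supply a JSON schema under the text format configuration. This is a sketch with a made-up schema name and shape:

```python
request_body = {
    "model": "gpt-5",                       # placeholder model name
    "input": "Extract the city and country from: 'Oslo, Norway'.",
    "text": {
        "format": {
            "type": "json_schema",
            "name": "location",             # hypothetical schema name
            "schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "country": {"type": "string"},
                },
                "required": ["city", "country"],
                "additionalProperties": False,
            },
        }
    },
}
```

With a schema like this, the model's text output is constrained to a JSON object matching the declared properties.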
An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. The two categories of tools you can provide the model are built-in tools and function calls (custom tools). See the tools parameter to see how to specify which tools the model can call.

The truncation strategy to use for the model response.
auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.
disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.
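The two truncation modes can be mimicked with a toy function. Token counts are simplified to one unit per input item here; the real API counts tokens, not items.

```python
def fit_context(items: list, window: int, truncation: str = "disabled") -> list:
    """Toy model of the truncation parameter: 'auto' drops items from the
    middle of the conversation until it fits the window; 'disabled' rejects
    an oversized conversation (where the API would return a 400 error)."""
    if len(items) <= window:
        return items
    if truncation == "disabled":
        raise ValueError("context window exceeded (the API would return 400)")
    # "auto": keep the start and the end, dropping items from the middle.
    head = window // 2
    tail = window - head
    return items[:head] + items[-tail:]
```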
The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete.

An array of content items generated by the model.
The length and order of items in the output array is dependent on the model's response. Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.

A system (or developer) message inserted into the model's context.
When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.
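A sketch of two chained requests: the response ID and model name are made up, and the point is that the second request's instructions fully replace the first's rather than being inherited.

```python
first_request = {
    "model": "gpt-5",                      # placeholder model name
    "instructions": "Answer like a pirate.",
    "input": "Hello!",
}

# Suppose the first call returned this response ID (made-up value):
first_response_id = "resp_abc123"

second_request = {
    "model": "gpt-5",
    "previous_response_id": first_response_id,
    # New instructions fully replace the old ones; nothing is carried over:
    "instructions": "Answer formally.",
    "input": "Now summarize our chat.",
}
```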
SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs.

Raw Response: Returns the raw response in a single row with the following columns:
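The output_text aggregation described above can be approximated like this; the item shapes follow the Responses API convention for output arrays, and the sample text is made up:

```python
def aggregate_output_text(output: list) -> str:
    """Concatenate the text of every output_text content item across
    all assistant messages in an output array."""
    parts = []
    for item in output:
        if item.get("type") != "message":
            continue                      # skip reasoning/tool-call items
        for content in item.get("content", []):
            if content.get("type") == "output_text":
                parts.append(content.get("text", ""))
    return "".join(parts)

sample_output = [
    {"type": "reasoning", "summary": []},
    {
        "type": "message",
        "role": "assistant",
        "content": [{"type": "output_text", "text": "Hello, world!"}],
    },
]
```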
To use this node in KNIME, install the extension OpenAI Nodes from the update site below, following our NodePit Product and Node Installation Guide:
A zipped version of the software site can be downloaded here.
Deploy, schedule, execute, and monitor your KNIME workflows locally, in the cloud or on-premises – with our brand new NodePit Runner. Try NodePit Runner!

Do you have feedback, questions, or comments about NodePit? Do you want to support this platform, or have your own nodes or workflows listed here as well? Do you think the search results could be improved or that something is missing? Then please get in touch! Alternatively, you can send us an email to mail@nodepit.com.
Please note that this is only about NodePit. We do not provide general support for KNIME — please use the KNIME forums instead.