Submit Tool Outputs to Run

When a run has the status "requires_action" and required_action.type is "submit_tool_outputs", this endpoint can be used to submit the outputs from the tool calls once they're all completed. All outputs must be submitted in a single request.
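As a rough sketch, the underlying REST call builds a single request containing one entry per tool call listed in required_action.submit_tool_outputs.tool_calls. The thread ID, run ID, and tool-call IDs below are hypothetical placeholders:

```python
import json

# Hypothetical IDs for illustration only.
thread_id = "thread_abc123"
run_id = "run_abc123"

# All tool outputs must go in ONE request: one entry per tool call
# reported in required_action.submit_tool_outputs.tool_calls.
payload = {
    "tool_outputs": [
        {"tool_call_id": "call_001", "output": "22 C, sunny"},
        {"tool_call_id": "call_002", "output": "{\"price\": 101.5}"},
    ]
}

url = (
    f"https://api.openai.com/v1/threads/{thread_id}"
    f"/runs/{run_id}/submit_tool_outputs"
)
body = json.dumps(payload)
print(url)
```

Submitting only some of the outputs is not supported; the run stays in "requires_action" until every tool call has a matching output in the request.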

Options

OpenAI Beta
Calls to the Assistants API require that you pass a beta HTTP header.
Thread Id
The ID of the thread to which this run belongs.
Run Id
The ID of the run that requires the tool output submission.
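The beta header mentioned above is sent alongside the usual authorization header. A minimal sketch of the header set, assuming the API key is supplied via the OPENAI_API_KEY environment variable:

```python
import os

# Headers for an Assistants API call; the "OpenAI-Beta" header is required.
# The API key is assumed to be provided via the environment.
headers = {
    "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', 'sk-placeholder')}",
    "Content-Type": "application/json",
    "OpenAI-Beta": "assistants=v2",  # required beta HTTP header
}
print(headers["OpenAI-Beta"])
```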
Body
Result Format

Specify how the response should be mapped to the table output. The following formats are available:

Structured Table: Returns a parsed table with data split into rows and columns.

  • Id: The identifier, which can be referenced in API endpoints.
  • Object: The object type, which is always thread.run.
  • Created At: The Unix timestamp (in seconds) for when the run was created.
  • Thread Id: The ID of the thread that was executed on as a part of this run.
  • Assistant Id: The ID of the assistant used for execution of this run.
  • Status: The status of the run, which can be either queued, in_progress, requires_action, cancelling, cancelled, failed, completed, incomplete, or expired.
  • Required Action: Details on the action required to continue the run. Will be null if no action is required.
  • Last Error: The last error associated with this run. Will be null if there are no errors.
  • Expires At: The Unix timestamp (in seconds) for when the run will expire.
  • Started At: The Unix timestamp (in seconds) for when the run was started.
  • Cancelled At: The Unix timestamp (in seconds) for when the run was cancelled.
  • Failed At: The Unix timestamp (in seconds) for when the run failed.
  • Completed At: The Unix timestamp (in seconds) for when the run was completed.
  • Incomplete Details: Details on why the run is incomplete. Will be null if the run is not incomplete.
  • Model: The model that the assistant used for this run.
  • Instructions: The instructions that the assistant used for this run.
  • Tools: The list of tools that the assistant used for this run.
  • Metadata:

    Set of up to 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

    Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

  • Usage: Usage statistics related to the run. This value will be null while the run is not yet in a terminal state (e.g. while it is queued or in_progress).
  • Temperature: The sampling temperature used for this run. If not set, defaults to 1.
  • Top P: The nucleus sampling value used for this run. If not set, defaults to 1.
  • Max Prompt Tokens: The maximum number of prompt tokens specified to be used over the course of the run.
  • Max Completion Tokens: The maximum number of completion tokens specified to be used over the course of the run.
  • Truncation Strategy: Controls for how a thread will be truncated prior to the run. Use this to control the initial context window of the run.
  • Tool Choice: Controls which (if any) tool is called by the model. none means the model will not call any tools and instead generates a message. auto is the default value and means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user. Specifying a particular tool like {"type": "file_search"} or {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
  • Parallel Tool Calls: Whether to enable parallel function calling during tool use.
  • Response Format:

    Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.

    Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

    Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON.

    Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
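To make the Tool Choice and Response Format fields concrete, here is a hedged sketch of how they appear inside a run's request body; all values are illustrative, not defaults for this node:

```python
import json

# Illustrative run-level settings corresponding to the fields above.
run_settings = {
    # "none", "auto", "required", or a specific tool object such as
    # {"type": "function", "function": {"name": "my_function"}}
    "tool_choice": "auto",
    "parallel_tool_calls": True,
    "response_format": {"type": "json_object"},  # JSON mode
}

# Forcing a particular (hypothetical) function tool instead:
forced_tool = {"type": "function", "function": {"name": "my_function"}}

print(json.dumps(run_settings["response_format"]))
```

Remember that with {"type": "json_object"} the prompt itself must also instruct the model to produce JSON, as noted above.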

Raw Response: Returns the raw response in a single row with the following columns:

  • body: Response body
  • status: HTTP status code
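When the Raw Response format is selected, downstream logic typically checks the status column before parsing the body. A minimal sketch, using a hypothetical response row:

```python
import json

# Hypothetical raw-response row as this format returns it:
# a "body" column (JSON string) and a "status" column (HTTP status code).
row = {
    "status": 200,
    "body": '{"id": "run_abc123", "object": "thread.run", "status": "completed"}',
}

# Only parse the body on success; otherwise surface the error.
run = json.loads(row["body"]) if row["status"] == 200 else None
print(run["status"])
```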

Input Ports

Configuration data.

Output Ports

Result of the request, depending on the selected Result Format.
Configuration data (the same as the input port, provided as a pass-through so that sequentially chained nodes can be connected without cluttering the workflow).
