Modify Assistant


Modifies an assistant.

Options

OpenAI Beta
Calls to the Assistants API require that you pass a beta HTTP header.
Assistant Id
The ID of the assistant to modify.
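Both options come together in the HTTP call itself. A minimal sketch of the request (the assistant ID and API key are placeholders, and `assistants=v2` is assumed as the current beta header value):

```python
import urllib.request

# Hypothetical assistant ID; the real value comes from the Assistant Id option.
assistant_id = "asst_abc123"

req = urllib.request.Request(
    url=f"https://api.openai.com/v1/assistants/{assistant_id}",
    method="POST",
    headers={
        "Authorization": "Bearer YOUR_OPENAI_API_KEY",  # placeholder token
        "Content-Type": "application/json",
        "OpenAI-Beta": "assistants=v2",  # the required beta HTTP header (assumed value)
    },
    data=b'{"name": "Renamed Assistant"}',
)
# req is now ready to send with urllib.request.urlopen(req); not executed here.
```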
Body

Request body, which must comply with the following JSON Schema:

{
  "type" : "object",
  "properties" : {
    "model" : {
      "description" : "model_description",
      "anyOf" : [ {
        "type" : "string"
      } ]
    },
    "name" : {
      "maxLength" : 256,
      "type" : "string",
      "description" : "assistant_name_param_description",
      "nullable" : true
    },
    "description" : {
      "maxLength" : 512,
      "type" : "string",
      "description" : "assistant_description_param_description",
      "nullable" : true
    },
    "instructions" : {
      "maxLength" : 256000,
      "type" : "string",
      "description" : "assistant_instructions_param_description",
      "nullable" : true
    },
    "tools" : {
      "maxItems" : 128,
      "type" : "array",
      "description" : "assistant_tools_param_description",
      "items" : {
        "oneOf" : [ {
          "title" : "Code interpreter tool",
          "required" : [ "type" ],
          "type" : "object",
          "properties" : {
            "type" : {
              "type" : "string",
              "description" : "The type of tool being defined: `code_interpreter`",
              "enum" : [ "code_interpreter" ]
            }
          }
        }, {
          "title" : "FileSearch tool",
          "required" : [ "type" ],
          "type" : "object",
          "properties" : {
            "type" : {
              "type" : "string",
              "description" : "The type of tool being defined: `file_search`",
              "enum" : [ "file_search" ]
            },
            "file_search" : {
              "type" : "object",
              "properties" : {
                "max_num_results" : {
                  "maximum" : 50,
                  "minimum" : 1,
                  "type" : "integer",
                  "description" : "The maximum number of results the file search tool should output. The default is 20 for gpt-4* models and 5 for gpt-3.5-turbo. This number should be between 1 and 50 inclusive.\n\nNote that the file search tool may output fewer than `max_num_results` results. See the [file search tool documentation](/docs/assistants/tools/file-search/number-of-chunks-returned) for more information.\n"
                }
              },
              "description" : "Overrides for the file search tool."
            }
          }
        }, {
          "title" : "Function tool",
          "required" : [ "function", "type" ],
          "type" : "object",
          "properties" : {
            "type" : {
              "type" : "string",
              "description" : "The type of tool being defined: `function`",
              "enum" : [ "function" ]
            },
            "function" : {
              "required" : [ "name" ],
              "type" : "object",
              "properties" : {
                "description" : {
                  "type" : "string",
                  "description" : "A description of what the function does, used by the model to choose when and how to call the function."
                },
                "name" : {
                  "type" : "string",
                  "description" : "The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."
                },
                "parameters" : {
                  "type" : "object",
                  "additionalProperties" : true,
                  "description" : "The parameters the functions accepts, described as a JSON Schema object. See the [guide](/docs/guides/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format. \n\nOmitting `parameters` defines a function with an empty parameter list."
                }
              }
            }
          }
        } ],
        "x-oaiExpandable" : true
      },
      "default" : [ ]
    },
    "tool_resources" : {
      "type" : "object",
      "properties" : {
        "code_interpreter" : {
          "type" : "object",
          "properties" : {
            "file_ids" : {
              "maxItems" : 20,
              "type" : "array",
              "description" : "Overrides the list of [file](/docs/api-reference/files) IDs made available to the `code_interpreter` tool. There can be a maximum of 20 files associated with the tool.\n",
              "items" : {
                "type" : "string"
              },
              "default" : [ ]
            }
          }
        },
        "file_search" : {
          "type" : "object",
          "properties" : {
            "vector_store_ids" : {
              "maxItems" : 1,
              "type" : "array",
              "description" : "Overrides the [vector store](/docs/api-reference/vector-stores/object) attached to this assistant. There can be a maximum of 1 vector store attached to the assistant.\n",
              "items" : {
                "type" : "string"
              }
            }
          }
        }
      },
      "description" : "A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs.\n",
      "nullable" : true
    },
    "metadata" : {
      "type" : "object",
      "description" : "metadata_description",
      "nullable" : true,
      "x-oaiTypeLabel" : "map"
    },
    "temperature" : {
      "maximum" : 2,
      "minimum" : 0,
      "type" : "number",
      "description" : "run_temperature_description",
      "nullable" : true,
      "example" : 1,
      "default" : 1
    },
    "top_p" : {
      "maximum" : 1,
      "minimum" : 0,
      "type" : "number",
      "description" : "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both.\n",
      "nullable" : true,
      "example" : 1,
      "default" : 1
    },
    "response_format" : {
      "description" : "Specifies the format that the model must output. Compatible with [GPT-4o](/docs/models/gpt-4o), [GPT-4 Turbo](/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.\n\nSetting to `{ \"type\": \"json_object\" }` enables JSON mode, which guarantees the message the model generates is valid JSON.\n\n**Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly \"stuck\" request. Also note that the message content may be partially cut off if `finish_reason=\"length\"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.\n",
      "oneOf" : [ {
        "type" : "string",
        "description" : "`auto` is the default value\n",
        "enum" : [ "none", "auto" ]
      }, {
        "type" : "object",
        "properties" : {
          "type" : {
            "type" : "string",
            "description" : "Must be one of `text` or `json_object`.",
            "example" : "json_object",
            "default" : "text",
            "enum" : [ "text", "json_object" ]
          }
        },
        "description" : "An object describing the expected output of the model. If `json_object` only `function` type `tools` are allowed to be passed to the Run. If `text` the model can return text or any value needed.\n"
      } ],
      "x-oaiExpandable" : true
    }
  },
  "additionalProperties" : false
}
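A request body that satisfies this schema might look like the following. This is a minimal sketch: the model name, file ID, vector store ID, and function definition are placeholders, not values taken from this document.

```python
# Hypothetical request body for the Modify Assistant endpoint.
# All IDs and the model name are placeholders.
body = {
    "model": "gpt-4o",
    "name": "Data Helper",  # maxLength: 256
    "description": "Answers questions about uploaded files.",  # maxLength: 512
    "instructions": "You are a helpful data analyst.",  # maxLength: 256,000
    "tools": [  # maxItems: 128
        {"type": "code_interpreter"},
        {"type": "file_search", "file_search": {"max_num_results": 10}},
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # required; a-z, A-Z, 0-9, underscores, dashes
                "description": "Look up the weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        },
    ],
    "tool_resources": {
        "code_interpreter": {"file_ids": ["file-abc123"]},   # maxItems: 20
        "file_search": {"vector_store_ids": ["vs_abc123"]},  # maxItems: 1
    },
    "temperature": 1,
    "response_format": "auto",
}

# A few sanity checks mirroring the schema's constraints.
assert len(body["name"]) <= 256
assert len(body["tools"]) <= 128
assert len(body["tool_resources"]["file_search"]["vector_store_ids"]) <= 1
```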
Result Format

Specify how the response should be mapped to the table output. The following formats are available:

Raw Response: Returns the raw response in a single row with the following columns:

  • body: Response body
  • status: HTTP status code
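In a downstream node you would typically check the `status` column and parse the `body` column as JSON. A sketch with a hypothetical response row (the field values are illustrative, not from this document):

```python
import json

# Hypothetical values for the two columns of the Raw Response format.
row = {
    "status": 200,
    "body": '{"id": "asst_abc123", "object": "assistant", "name": "Renamed Assistant"}',
}

# Only parse the body on success; non-2xx bodies carry an error payload instead.
if row["status"] == 200:
    assistant = json.loads(row["body"])
```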

Input Ports

Configuration data.

Output Ports

Result of the request, depending on the selected Result Format.

Configuration data (this is the same as the input port; it is provided as a passthrough for sequentially chaining nodes to declutter your workflow connections).


Views

This node has no views
