This node prompts a chat model using the provided user message, using an existing conversation history as context. An optional table containing tool definitions can be provided to enable tool calling.
The conversation history is a table containing two columns: one specifying the role of each message (e.g. 'human' or 'ai') and one containing the message itself.
If the conversation history table is non-empty, it will be used as context when sending the new message to the chat model. To use only the conversation history table for prompting (without a new message), leave the new message setting empty and ensure that the last entry in the table has the 'human' role.
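As an illustration, such a two-column history can be sketched as a list of rows in Python (the column names here are assumptions; in KNIME they are whatever columns you select in the dialog):

```python
# Hypothetical conversation history table as a list of rows; in KNIME
# this is a table with a role column and a message column.
history = [
    {"role": "human", "message": "What is the capital of France?"},
    {"role": "ai", "message": "The capital of France is Paris."},
    {"role": "human", "message": "And what about Germany?"},
]

# To prompt with the history alone, leave the new message empty and
# make sure the last row has the 'human' role.
assert history[-1]["role"] == "human"
```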
In order to enable tool calling, a table containing tool definitions must be connected to the dynamic input port of the node. If tool definitions are provided, the conversation history table must also include columns for the tool name, the tool call ID, and the tool call arguments (these columns will be populated by the chat model).
If the chat model decides to call a tool, the node appends a new 'ai' message with the above columns populated based on the selected tool. This information can then be used to route the downstream portion of the workflow appropriately. The output of the tool can then be fed back into the node by appending a new 'tool' message to the conversation history table, with the tool's output being the message content.
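The round trip described above can be sketched as follows. The column names, the `TOOLS` dispatch table, and the row layout are illustrative assumptions, not the node's actual API:

```python
import json

# Hypothetical tool implementations keyed by tool name.
TOOLS = {"number_adder": lambda a, b: a + b}

# An 'ai' row of the kind the chat model appends when it calls a tool.
ai_row = {
    "role": "ai",
    "message": "",
    "tool_name": "number_adder",
    "tool_call_id": "call_001",
    "tool_args": json.dumps({"a": 2, "b": 3}),
}

conversation = [
    {"role": "human", "message": "What is 2 + 3?"},
    ai_row,
]

# Downstream routing: if the last row is a tool call, execute the tool
# and feed its output back to the model as a 'tool' message.
last = conversation[-1]
if last["role"] == "ai" and last["tool_name"]:
    result = TOOLS[last["tool_name"]](**json.loads(last["tool_args"]))
    conversation.append(
        {
            "role": "tool",
            "message": str(result),
            # Echoing the ID lets the model match the output to its call.
            "tool_call_id": last["tool_call_id"],
        }
    )
```

In a workflow, the appended 'tool' row corresponds to the conversation table that the Recursive Loop feeds back into the node.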
A common way to ensure that the tool call output is presented back to the Chat Model Prompter is to embed the node together with its tools in a Recursive Loop.
A tool definition is a JSON object describing the corresponding tool and its parameters. The more descriptive the definition, the more likely the LLM will call it appropriately.
Example:
{
  "title": "number_adder",
  "type": "object",
  "description": "Adds two numbers.",
  "properties": {
    "a": {
      "title": "A",
      "type": "integer",
      "description": "First value to add"
    },
    "b": {
      "title": "B",
      "type": "integer",
      "description": "Second value to add"
    }
  },
  "required": ["a", "b"]
}
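Because a tool definition is standard JSON Schema, the arguments a model produces for a tool call can be checked against it. The following is a deliberately simplified sketch (it only checks required integer properties); in practice a full validator such as the `jsonschema` package would be used:

```python
# The "number_adder" definition from the example above.
definition = {
    "title": "number_adder",
    "type": "object",
    "description": "Adds two numbers.",
    "properties": {
        "a": {"title": "A", "type": "integer", "description": "First value to add"},
        "b": {"title": "B", "type": "integer", "description": "Second value to add"},
    },
    "required": ["a", "b"],
}

def check_args(args, schema):
    """Simplified check: every required property is present and an integer."""
    return all(
        key in args and isinstance(args[key], int)
        for key in schema["required"]
    )

print(check_args({"a": 2, "b": 3}, definition))  # True
print(check_args({"a": 2}, definition))          # False: 'b' is missing
```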
Note: If you use the Credentials Configuration node to pass the API key to the chat model connector node and do not select the "Save password in configuration (weakly encrypted)" option, the credentials flow variable is not saved with the workflow. In that case, the Credentials Configuration node must be reconfigured after reopening the workflow before the credentials are available to downstream nodes again.
Optional instructional message provided to the model at the start of the conversation, usually used to define various guidelines and rules for the model to adhere to.
Note: Certain models don't support system messages (e.g. OpenAI's o1-mini). For such models, the system message should be left empty.
Example: You are an expert in geospatial analytics, and you only reply using JSON.
Optional next message to prompt the chat model with. If provided, a corresponding row will be appended to the conversation table.
Choose between different output formats.
Available options:
If enabled, messages produced by this node will be appended to the provided conversation table.
Otherwise, the output table will only contain the new message and the model's reply.
Note: If an existing conversation is provided, it will still be used as context if this setting is disabled.
Select the column of the conversation table that specifies the role assigned to each message. The column can be empty if starting the conversation from scratch.
Example roles: 'human', 'ai'.
Select the column of the conversation table that specifies the messages. The column can be empty if starting the conversation from scratch.
If enabled, the 'New message' specified in the configuration dialog will not be appended to the conversation table if the last row of the table is a tool call message.
In most cases, the new message is what caused the model to invoke the tool call in the first place, and it will therefore already be part of the conversation table.
Select the column of the conversation table specifying tool names as strings.
This column gets populated with the name of the tool the model decides to call.
Select the column of the conversation table specifying tool call IDs as strings.
This column gets populated with IDs of tool calls, so that they can be referenced by the model.
Select the column of the conversation table specifying tool call arguments as JSON objects.
This column gets populated with the expected input arguments for the tool the model decides to call.
Select the column of the tool definitions table containing definitions of the tools that should be available to the chat model.
Tool definitions take the form of JSON Schema-based objects, specifying the tool's name, description, parameters, and required fields.
To use this node in KNIME, install the KNIME Python Extension Development (Labs) extension from the update site below, following our NodePit Product and Node Installation Guide:
A zipped version of the software site can be downloaded here.