KNIME Hub Chat Model Connector

Connects to a Chat Model configured in the GenAI Gateway of the connected KNIME Hub using the authentication provided via the input port.

Use this node to generate text, answer questions, summarize content, or perform other text-based tasks.

Options

Model

Select the model to use.

Model Parameters

Maximum Response Length (tokens)

The maximum number of tokens to generate.

The token count of your prompt plus max_tokens cannot exceed the model's context length.
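The constraint above is simple arithmetic: the response budget is whatever the context window has left after the prompt. A minimal sketch, with a hypothetical context length (the actual limit depends on the selected model):

```python
CONTEXT_LENGTH = 8192  # hypothetical context window of the selected model, in tokens

def fits(prompt_tokens: int, max_tokens: int, context_length: int = CONTEXT_LENGTH) -> bool:
    """True if the prompt plus the requested response length fit in the context window."""
    return prompt_tokens + max_tokens <= context_length

# With a 6000-token prompt, at most 8192 - 6000 = 2192 response tokens remain.
remaining = CONTEXT_LENGTH - 6000
```
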

Temperature

Sampling temperature to use, between 0.0 and 2.0. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 for ones with a well-defined answer. It is generally recommended to alter this or top_p, but not both.
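To see why higher temperatures make the model "take more risks": the logits are divided by the temperature before the softmax, so low temperatures sharpen the distribution toward the most likely token and high temperatures flatten it. A minimal sketch (the Gateway applies this server-side; this is only an illustration of the math):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature before the softmax.

    temperature -> 0 approaches greedy (argmax) decoding;
    higher temperatures flatten the distribution, spreading
    probability onto less likely tokens.
    """
    if temperature == 0:
        # Degenerate case: deterministic, all mass on the argmax.
        probs = [0.0] * len(logits)
        probs[max(range(len(logits)), key=lambda i: logits[i])] = 1.0
        return probs
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

At temperature 0.5 the top token gets a larger share of the probability mass than at temperature 2.0, which is exactly the "well-defined answer" versus "creative" trade-off described above.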

Number of concurrent requests

Maximum number of concurrent requests a single node (typically the LLM Prompter) can make to the GenAI Gateway. The more requests a node can make in parallel, the faster it executes, but too many parallel requests may be rate-limited by some GenAI providers.
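The effect of this setting can be sketched as a bounded worker pool: requests are issued in parallel, but never more than the configured limit are in flight at once. This is only an illustration of the concurrency cap, not the node's actual implementation; `send_request` is a hypothetical stand-in for the HTTP call to the Gateway:

```python
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT_REQUESTS = 4  # hypothetical value of the node setting

def send_request(prompt: str) -> str:
    # Placeholder for an HTTP request to the GenAI Gateway.
    return f"response to: {prompt}"

def prompt_all(prompts):
    # The thread pool caps in-flight requests at MAX_CONCURRENT_REQUESTS,
    # trading execution speed against provider-side rate limits.
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_REQUESTS) as pool:
        return list(pool.map(send_request, prompts))
```
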

Top-p sampling

An alternative to sampling with temperature, where the model considers only the tokens comprising the top_p probability mass. For example, 0.1 means only the tokens comprising the top 10% probability mass are considered.
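Concretely, top-p (nucleus) sampling keeps the smallest set of most-likely tokens whose cumulative probability reaches top_p, discards the rest, and renormalizes. A minimal sketch of that filtering step (again illustrative only; the Gateway applies it server-side):

```python
def top_p_filter(probs, top_p):
    """Keep the smallest set of highest-probability tokens whose
    cumulative mass reaches top_p; zero out the rest and renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    total = sum(probs[i] for i in kept)
    filtered = [0.0] * len(probs)
    for i in kept:
        filtered[i] = probs[i] / total
    return filtered
```

With token probabilities [0.5, 0.3, 0.15, 0.05] and top_p = 0.5, only the first token survives; with top_p = 0.8, the first two do. This is why a small top_p restricts the model to its most confident choices.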

Input Ports

Credential for a KNIME Hub.

Output Ports

A chat model that connects to the KNIME Hub to make requests.


Views

This node has no views


