IBM watsonx.ai LLM Selector

To use IBM watsonx.ai models, you need an IBM watsonx.ai account and an API key. After authenticating with the IBM watsonx.ai Authenticator node, you can select a Large Language Model (LLM) from a predefined list.

Refer to the IBM watsonx.ai documentation for more information on the available chat models. Currently, only the chat models among the foundation models are supported. Refer to the Choosing a model page for more information on chat models that support tool calling.

Note: If you want to use a space, make sure that the space has a valid runtime service instance. You can check this in IBM watsonx.ai Studio under the Manage tab of your space.

Options

Model

The model to use for the chat completion.

Model Parameters

Temperature

Sampling temperature to use, between 0.0 and 2.0. Higher values will make the output more random, while lower values will make it more focused and deterministic.

It is generally recommended to alter this or top_p, but not both.
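To make the effect of this setting concrete, here is a minimal sketch of how temperature rescales a model's raw logits before they are turned into token probabilities. The logit values are illustrative, not from a real model.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to token probabilities, scaled by temperature.

    Lower temperatures sharpen the distribution (more deterministic);
    higher temperatures flatten it (more random).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.2)  # top token dominates
hot = softmax_with_temperature(logits, 2.0)   # closer to uniform
print(cold)
print(hot)
```

At temperature 0.2 the highest-logit token captures nearly all of the probability mass, while at 2.0 the three tokens end up much closer together, which is why higher temperatures read as "more random".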

Maximum response length (tokens)

The maximum number of tokens to generate.

This value, plus the token count of your prompt, cannot exceed the model's context length.
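The constraint above is simple arithmetic, sketched here as a helper check. The 8,192-token context window is an illustrative assumption; check the actual limit of the model you select.

```python
def fits_context(prompt_tokens, max_new_tokens, context_length):
    """Check that the prompt plus the generated tokens stay within
    the model's context window."""
    return prompt_tokens + max_new_tokens <= context_length

# e.g. for a hypothetical model with an 8,192-token context window:
print(fits_context(prompt_tokens=6000, max_new_tokens=2000, context_length=8192))
print(fits_context(prompt_tokens=7000, max_new_tokens=2000, context_length=8192))
```

If the check fails, either shorten the prompt or lower the maximum response length.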

Top-p sampling

An alternative to sampling with temperature, also known as nucleus sampling, where the model considers only the tokens comprising the top_p probability mass. For example, 0.1 means only the tokens comprising the top 10% probability mass are considered.
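A minimal sketch of the filtering step behind this parameter: tokens are ranked by probability, and the smallest set whose cumulative probability reaches top_p is kept and renormalized. The example probabilities are illustrative, not from a real model.

```python
def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, then renormalize over the kept tokens."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = {}, 0.0
    for token, p in ranked:
        kept[token] = p
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(kept.values())
    return {t: p / total for t, p in kept.items()}

probs = {"cat": 0.5, "dog": 0.3, "fish": 0.15, "bird": 0.05}
print(top_p_filter(probs, 0.8))  # only "cat" and "dog" survive
```

With top_p = 0.8, the low-probability tail ("fish", "bird") is cut off entirely, and sampling proceeds over the renormalized survivors.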

Number of concurrent requests

The maximum number of concurrent requests that can be made to the LLM, whether through API calls or to an inference server. Exceeding this limit may result in temporary restrictions on your access.

It is important to plan your usage according to the model provider's rate limits, and keep in mind that both software and hardware constraints can impact performance.

For OpenAI, please refer to the Limits page for the rate limits available to you.
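One common way such a concurrency cap is enforced is with a semaphore in front of the request function. The sketch below uses a hypothetical stand-in for the real API call; the limit of 4 mirrors the node's setting and is an arbitrary example value.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def fake_llm_request(prompt):
    # Hypothetical stand-in for a real call to the model endpoint.
    return f"response to {prompt}"

MAX_CONCURRENT = 4  # analogous to "Number of concurrent requests"
semaphore = threading.Semaphore(MAX_CONCURRENT)

def throttled_request(prompt):
    # The semaphore caps how many requests are in flight at once,
    # helping stay under the provider's rate limits.
    with semaphore:
        return fake_llm_request(prompt)

with ThreadPoolExecutor(max_workers=16) as pool:
    results = list(pool.map(throttled_request,
                            [f"prompt {i}" for i in range(10)]))
print(results[0])
```

Even though the pool offers 16 worker threads, at most 4 requests run concurrently; the rest block on the semaphore until a slot frees up.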

Input Ports


The authentication for the IBM watsonx.ai API.

Output Ports


The IBM watsonx.ai Large Language Model which can be used in the LLM Prompter (Table) and LLM Prompter (Conversation) nodes.
