Gemini LLM Selector

This node allows selecting a Gemini language model using an authenticated connection obtained from either the Vertex AI Connector node or the Google AI Studio Authenticator node.

Options

Model

Select the Gemini language model to use. The list of available models is fetched using the provided Gemini connection.

If a connection to the API cannot be established, the list falls back to known Gemini models appropriate for the connection type.

Maximum response tokens

Specify the maximum number of tokens the model's responses may contain.

A token is equivalent to about 4 characters for Gemini models; 100 tokens correspond to roughly 60-80 English words.
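This rule of thumb can be turned into a quick character-based estimate when sizing the limit. This is only a sketch: Gemini's actual tokenizer produces different counts, and the helper name is illustrative, not part of the node.

```python
def estimate_tokens(text: str, chars_per_token: int = 4) -> int:
    """Rough token estimate using the ~4-characters-per-token rule of thumb."""
    return max(1, len(text) // chars_per_token)

# 100 tokens ~ 400 characters ~ 60-80 English words
prompt = "word " * 70           # 70 short words, 350 characters
print(estimate_tokens(prompt))  # 87
```

Such an estimate is useful for a sanity check before a request; the authoritative count always comes from the API itself.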

Temperature

Specify the temperature for the model's responses.

Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a more deterministic or less open-ended response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 is deterministic, meaning that the highest probability response is always selected.
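The effect of temperature can be illustrated with a toy sampler over a handful of candidate tokens. This is a conceptual sketch of temperature-scaled softmax sampling, not the model's actual decoding code; the token scores are invented for illustration.

```python
import math
import random

def sample_token(logits: dict[str, float], temperature: float,
                 rng: random.Random) -> str:
    """Pick a token from a logit distribution, scaled by temperature."""
    if temperature == 0:
        # Deterministic: always take the highest-scoring token.
        return max(logits, key=logits.get)
    # Softmax over temperature-scaled logits (subtract max for stability).
    scaled = {tok: v / temperature for tok, v in logits.items()}
    m = max(scaled.values())
    exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(exps.values())
    # Draw one token according to the resulting probabilities.
    r = rng.random()
    acc = 0.0
    for tok, e in exps.items():
        acc += e / total
        if r <= acc:
            return tok
    return tok

logits = {"the": 2.0, "a": 1.0, "zebra": -1.0}
print(sample_token(logits, 0, random.Random(0)))    # "the", always the top token
print(sample_token(logits, 1.0, random.Random(0)))  # lower-ranked tokens possible
```

Dividing the logits by a small temperature exaggerates the gap between candidates (less randomness); a large temperature flattens it, so lower-ranked tokens are chosen more often.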

Number of concurrent requests

Specify the maximum number of requests that can be made concurrently.

Increasing this number is particularly useful in conjunction with the LLM Prompter (Table) node, which operates row by row: with 10 concurrent requests, rows are processed in batches of 10.
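The row-based batching described above can be sketched with a thread pool. `prompt_model` is a hypothetical stand-in for a single Gemini request (the real calls are made by the node); the point is that `max_workers` caps how many rows are in flight at once.

```python
from concurrent.futures import ThreadPoolExecutor

def prompt_model(row: str) -> str:
    """Stand-in for one Gemini request issued for one table row."""
    return f"response to: {row}"

rows = [f"row {i}" for i in range(25)]

# With 10 concurrent requests, at most 10 rows are processed at a time.
with ThreadPoolExecutor(max_workers=10) as pool:
    responses = list(pool.map(prompt_model, rows))

print(len(responses))  # 25, one response per input row
```

Raising the worker count shortens wall-clock time for large tables, but the provider's rate limits still bound how many requests per second are actually accepted.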

Input Ports

An authenticated connection to either Vertex AI or Google AI Studio.

Output Ports

A Gemini large language model.

Views

This node has no views
