Mistral AI LLM Selector

This node establishes a connection with a Large Language Model (LLM) from Mistral AI. After authenticating with the Mistral AI Authenticator node, you can select one of the models available via the Mistral AI API.

Options

Model

The model to use. When the Mistral AI API can be reached, the list of available models is fetched from it.

Model Parameters

Temperature

Sampling temperature to use, between 0.0 and 1.0.

Higher values produce more random and creative outputs, while lower values produce more focused and deterministic outputs. Mistral AI recommends values between 0.0 and 0.7.
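Conceptually, temperature rescales the model's next-token probabilities before sampling. A minimal sketch of temperature-scaled softmax in plain Python (an illustration of the concept, not Mistral's actual implementation):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits into sampling probabilities.

    Lower temperatures sharpen the distribution toward the
    highest-scoring token; higher temperatures flatten it.
    (A temperature of exactly 0 is special-cased by real APIs
    as greedy decoding; it would divide by zero here.)
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]         # example scores for three tokens
cool = softmax_with_temperature(logits, 0.2)  # focused, near-deterministic
warm = softmax_with_temperature(logits, 1.0)  # more varied
print(cool[0] > warm[0])  # the top token dominates more at low temperature
```

At temperature 0.2 the highest-scoring token takes almost all of the probability mass, which is why low values give focused, repeatable outputs.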

Max Tokens

The maximum number of tokens to generate in the response.
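Both options map onto fields of the chat-completions request body that the node ultimately sends. A hedged sketch of assembling such a payload (field names follow Mistral's public REST API; the model name and helper are illustrative, not part of the node):

```python
import json

def build_chat_request(model, prompt, temperature=0.7, max_tokens=256):
    """Assemble a chat-completions request body.

    `temperature` and `max_tokens` correspond to the Temperature
    and Max Tokens options described above.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

payload = build_chat_request(
    "mistral-small-latest", "Hello!", temperature=0.3, max_tokens=128
)
print(json.dumps(payload, indent=2))
```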

Number of concurrent requests

Maximum number of requests sent to Mistral AI in parallel.

Increasing this value can improve throughput, but each parallel request also counts toward your Mistral AI API usage limits. If this value is set too high, some requests may be rejected because rate limits are exceeded, such as the allowed number of requests per second or tokens per minute.
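The throughput trade-off can be sketched with a thread pool whose size plays the role of this setting. Here `fake_completion` is a stand-in for a real API call, and latency is simulated with a sleep; real rate limits are enforced server-side and are not modelled:

```python
import time
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT = 4  # plays the role of "Number of concurrent requests"

def fake_completion(prompt):
    """Stand-in for a Mistral API call; sleeps to mimic latency."""
    time.sleep(0.05)
    return f"response to {prompt!r}"

prompts = [f"prompt {i}" for i in range(8)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=MAX_CONCURRENT) as pool:
    results = list(pool.map(fake_completion, prompts))
elapsed = time.perf_counter() - start

# 8 requests at 4 in parallel finish in ~2 batches rather than
# 8 sequential calls, at the cost of more simultaneous API usage.
print(len(results))
```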

Input Ports

The authentication for the Mistral AI API.

Output Ports

The Mistral AI large language model, which can be used in the LLM Prompter and LLM Chat Prompter nodes.

Views

This node has no views
