This node establishes a connection with a Large Language Model (LLM) from Mistral AI. After successfully authenticating using the Mistral AI Authenticator node, you can select a model from those available in the Mistral AI API.
The model to use. The list of available models is fetched from the Mistral AI API when the API can be reached.
Sampling temperature to use, between 0.0 and 1.0.
Higher values produce more random and creative outputs, while lower values produce more focused and deterministic outputs. Mistral AI recommends values between 0.0 and 0.7.
The maximum number of tokens to generate in the response.
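The options above correspond to parameters of Mistral AI's chat completions endpoint. As a rough sketch (the model name and values below are illustrative examples, not defaults of this node), a request body combining them might look like this:

```python
import json

# Illustrative request body for Mistral AI's chat completions endpoint
# (POST https://api.mistral.ai/v1/chat/completions). The model name and
# values are examples only.
payload = {
    "model": "mistral-small-latest",  # the selected model
    "temperature": 0.7,               # 0.0-1.0; Mistral AI recommends <= 0.7
    "max_tokens": 256,                # cap on tokens generated in the response
    "messages": [{"role": "user", "content": "Hello!"}],
}
print(json.dumps(payload, indent=2))
```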
Maximum number of requests sent to Mistral AI in parallel.
Increasing this value can improve throughput, but each parallel request also counts toward your Mistral AI API usage limits. If this value is set too high, some requests may be rejected because they exceed a rate limit, such as the allowed number of requests per second or tokens per minute.
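The effect of capping parallel requests can be sketched with a thread pool whose worker count plays the role of this option. The `send_request` function below is a stub standing in for an actual Mistral AI API call:

```python
import concurrent.futures
import time

MAX_PARALLEL = 4  # plays the role of the "maximum parallel requests" option

def send_request(prompt: str) -> str:
    # Stub for a real Mistral AI API call; sleeps to simulate network latency.
    time.sleep(0.01)
    return f"response to {prompt!r}"

prompts = [f"prompt {i}" for i in range(10)]

# At most MAX_PARALLEL requests are in flight at any time, which keeps the
# client under per-second rate limits while still overlapping network waits.
with concurrent.futures.ThreadPoolExecutor(max_workers=MAX_PARALLEL) as pool:
    results = list(pool.map(send_request, prompts))

print(len(results))  # 10
```

Lowering `MAX_PARALLEL` trades throughput for a smaller chance of hitting rate limits.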
To use this node in KNIME, install the extension KNIME Python Extension Development (Labs) from the update site below, following our NodePit Product and Node Installation Guide:
A zipped version of the software site can be downloaded here.