
Ollama Chat Model Configuration

Options

Temperature:
Sampling temperature to use, between 0.0 and 2.0. Higher values lead to less deterministic answers.

Try 0.9 for more creative applications, and 0 for ones with a well-defined answer. It is generally recommended to alter either this or Top-p, but not both.
Top-p sampling:
An alternative to sampling with temperature, where the model considers the results of the tokens (words) with top_p probability mass. Hence, 0.1 means only the tokens comprising the top 10% probability mass are considered.
Maximum response length (token):
The maximum number of tokens to generate.

This value, plus the token count of your prompt, cannot exceed the model's context length.
Seed:
Set the seed parameter to any integer of your choice to get (mostly) deterministic outputs. The default value of 0 means that no seed is specified.

If the seed and all other model parameters are the same for each request, responses will be mostly identical. There is still a chance that responses will differ, due to the inherent non-determinism of language models.
Chat model:
The Ollama chat model to use.
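
The options above map directly onto the parameters of Ollama's chat API. Below is a minimal sketch in Python, assuming a local Ollama server at its default address (http://localhost:11434) and a hypothetical model name "llama3"; the option keys (temperature, top_p, num_predict, seed) are Ollama's standard parameter names.

import requests

# Request a single, non-streamed chat completion from a local Ollama server.
response = requests.post(
    "http://localhost:11434/api/chat",  # default Ollama endpoint (assumption)
    json={
        "model": "llama3",              # hypothetical model name
        "messages": [
            {"role": "user", "content": "Name three uses of temperature sampling."}
        ],
        "options": {
            "temperature": 0.9,   # higher values -> less deterministic answers
            "top_p": 0.1,         # keep only the top 10% probability mass
            "num_predict": 256,   # maximum number of tokens to generate
            "seed": 42,           # fixed seed for (mostly) reproducible output
        },
        "stream": False,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["message"]["content"])

With the same seed, prompt, and options, repeated requests should return largely identical text, subject to the residual non-determinism noted under Seed.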

Input Ports

This node has no input ports

Output Ports

This node has no output ports
