This node establishes a connection with an OpenAI Large Language Model (LLM).
After successfully authenticating using the OpenAI Authenticator node, you can select an LLM from a predefined list or explore advanced options to get a list of all models available for your API key (including fine-tunes).
Note that only models compatible with OpenAI's Completions API will work with this node; unfortunately, this information is not available programmatically. Documentation for all models is available from OpenAI.
If you are looking for gpt-3.5-turbo or gpt-4, check out the OpenAI Chat Model Connector node.
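For orientation, here is a minimal sketch of how such a model listing can be retrieved outside of KNIME with the OpenAI Python SDK; the openai package (v1.x) and environment-variable authentication are assumptions for illustration, not part of the node:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # List every model available to this API key, including fine-tunes.
    for model in client.models.list():
        print(model.id)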
Whether to list all available models or only a selection of compatible ones.
Available options:
Select an OpenAI completions model to be used.
Select from a list of all OpenAI models available for your API key. The chosen model must be compatible with OpenAI's Completions API. When set, this configuration overrides the default model selection.
The maximum number of tokens to generate.
This value, plus the token count of your prompt, cannot exceed the model's context length.
Sampling temperature to use, between 0.0 and 2.0.
Higher values will lead to less deterministic answers.
Try 0.9 for more creative applications, and 0 for those with a well-defined answer. It is generally recommended to alter this or Top-p, but not both.
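To illustrate how the maximum-token and temperature settings map onto a Completions API request, here is a hedged sketch using the openai Python package; the model name gpt-3.5-turbo-instruct, the prompt, and the concrete values are assumptions for illustration only:

    from openai import OpenAI

    client = OpenAI()

    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # assumed Completions-compatible model
        prompt="Write a tagline for an ice cream shop.",
        max_tokens=64,    # prompt tokens + 64 must fit within the model's context length
        temperature=0.9,  # higher values give less deterministic, more creative output
    )
    print(response.choices[0].text)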
How many completions to generate for each prompt.
Note: This parameter can quickly consume your token quota.
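As a sketch of what multiple choices look like (same assumptions as above), a request with n=3 returns three independent completions, each of which consumes output tokens from your quota:

    from openai import OpenAI

    client = OpenAI()

    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt="Suggest a name for a coffee blend.",
        n=3,           # three independent completions for the same prompt
        max_tokens=16,
    )
    for i, choice in enumerate(response.choices):
        print(i, choice.text.strip())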
Set the seed parameter to any integer of your choice to have (mostly) deterministic outputs. The default value of 0 means that no seed is specified.
If the seed and other model parameters are the same for each request, then responses will be mostly identical. There is a chance that responses will differ, due to the inherent non-determinism of OpenAI models.
Please note that this feature is in beta and currently only supported for gpt-4-1106-preview and gpt-3.5-turbo-1106 [1].
[1] OpenAI Cookbook
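A rough sketch of the (mostly) deterministic behaviour, again using the openai package; the model name and prompt are assumptions, and whether the seed is honoured depends on the model, as noted above:

    from openai import OpenAI

    client = OpenAI()

    def ask(seed: int) -> str:
        response = client.completions.create(
            model="gpt-3.5-turbo-instruct",  # assumed model; seed support varies by model
            prompt="Name one prime number.",
            seed=seed,       # same seed + same parameters -> mostly identical responses
            temperature=1.0,
            max_tokens=8,
        )
        return response.choices[0].text

    print(ask(42))
    print(ask(42))  # usually, but not guaranteed to be, the same answer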
Maximum number of concurrent requests to LLMs that can be made, whether through API calls or to an inference server. Exceeding this limit may result in temporary restrictions on your access.
It is important to plan your usage according to the model provider's rate limits, and keep in mind that both software and hardware constraints can impact performance.
For OpenAI, please refer to the Limits page for the rate limits available to you.
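The node manages this limit for you; purely to illustrate what capping concurrent requests means, here is a sketch with asyncio and the AsyncOpenAI client, where the semaphore size of 4 is an arbitrary assumption to be aligned with your own rate limits:

    import asyncio
    from openai import AsyncOpenAI

    client = AsyncOpenAI()
    limit = asyncio.Semaphore(4)  # assumed cap; align with your OpenAI rate limits

    async def complete(prompt: str) -> str:
        async with limit:  # at most 4 requests in flight at any time
            response = await client.completions.create(
                model="gpt-3.5-turbo-instruct",
                prompt=prompt,
                max_tokens=32,
            )
            return response.choices[0].text

    async def main():
        prompts = [f"Fact about the number {i}:" for i in range(10)]
        results = await asyncio.gather(*(complete(p) for p in prompts))
        for result in results:
            print(result.strip())

    asyncio.run(main())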
An alternative to sampling with temperature, where the model considers the results of the tokens (words) with top_p probability mass. Hence, 0.1 means only the tokens comprising the top 10% probability mass are considered.
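A sketch of the same request as above with nucleus sampling instead of temperature; top_p=0.1 restricts sampling to the top 10% probability mass (the model name and value are assumptions):

    from openai import OpenAI

    client = OpenAI()

    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt="Write a tagline for an ice cream shop.",
        top_p=0.1,  # only tokens in the top 10% probability mass are considered
        # temperature left at its default: tune top_p or temperature, not both
    )
    print(response.choices[0].text)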
To use this node in KNIME, install the KNIME Python Extension Development (Labs) extension from the update site below, following our NodePit Product and Node Installation Guide:
A zipped version of the software site can be downloaded here.