This node establishes a connection with an Azure OpenAI Large Language Model (LLM). After successfully authenticating with the Azure OpenAI Authenticator node, enter the deployment name of the model you want to use. You can find your deployed models in the Azure AI Studio under 'Management - Deployments'. Note that only models compatible with Azure OpenAI's Completions API will work with this node.
If you are looking for gpt-3.5-turbo (the model behind ChatGPT) or gpt-4, check out the Azure OpenAI Chat Model Connector node.
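For orientation, the following sketch shows roughly what such a connection looks like when made directly with the openai Python package's Azure client. The node handles this for you; the endpoint, API key, API version, and deployment name below are placeholders, and this is not necessarily how the node is implemented internally.

```python
from openai import AzureOpenAI

# Credentials that the Azure OpenAI Authenticator node would normally provide (placeholders).
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-api-key>",
    api_version="2024-02-01",
)

# The deployment name from Azure AI Studio ('Management - Deployments') is passed as `model`.
response = client.completions.create(
    model="<your-deployment-name>",
    prompt="Summarize the benefits of workflow automation in one sentence.",
)
print(response.choices[0].text)
```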
The name of the deployed model to use. You can find your deployed models in the Azure AI Studio under 'Management - Deployments'.
The maximum number of tokens to generate.
The token count of your prompt plus max_tokens cannot exceed the model's context length.
Sampling temperature to use, between 0.0 and 2.0. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 for ones with a well-defined answer. It is generally recommended to alter this or top_p, but not both.
How many completion choices to generate for each input prompt. Because this parameter generates multiple completions, it can quickly consume your token quota.
An alternative to sampling with temperature, where the model considers only the tokens comprising the top_p probability mass. Hence, 0.1 means only the tokens comprising the top 10% probability mass are considered.
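The settings above correspond to the arguments of an Azure OpenAI completions request. Below is a hedged sketch of how they map onto such a request using the openai Python package; the values shown are illustrative only and are not the node's defaults.

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
    api_key="<your-api-key>",                                    # placeholder
    api_version="2024-02-01",
)

response = client.completions.create(
    model="<your-deployment-name>",  # deployment name from Azure AI Studio
    prompt="Write a tagline for a data science platform.",
    max_tokens=64,     # prompt tokens + max_tokens must fit within the model's context length
    temperature=0.9,   # higher values give more creative output; use ~0 for well-defined answers
    n=2,               # completion choices per prompt; each extra choice consumes more tokens
    top_p=1.0,         # nucleus sampling; alter either temperature or top_p, not both
)
for choice in response.choices:
    print(choice.text)
```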
To use this node in KNIME, install the extension KNIME AI Extension (Labs) from the update site below, following our NodePit Product and Node Installation Guide:
A zipped version of the software site can be downloaded here.