OpenAI LLM Connector

This node establishes a connection with an OpenAI Large Language Model (LLM). After successfully authenticating using the OpenAI Authenticator node, you can select an LLM from a predefined list or explore advanced options to get a list of all models available for your API key (including fine-tunes). Note that only models compatible with OpenAI's Completions API will work with this node (unfortunately this information is not available programmatically).
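
For reference, the advanced lookup corresponds to a plain model-listing call against the OpenAI API. Below is a minimal sketch using the openai Python package directly (not this node), assuming the API key is available in the OPENAI_API_KEY environment variable:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Print every model the key has access to, including fine-tunes.
# Note: the listing does not reveal which models support the Completions API.
for model in client.models.list():
    print(model.id)
```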

If you are looking for gpt-3.5-turbo (the model behind ChatGPT) or gpt-4, check out the OpenAI Chat Model Connector node.

Options

OpenAI Model Selection

Model ID

Select the OpenAI model ID to be used.

Available options:

  • text-ada-001: Capable of very simple tasks, usually the fastest model in the GPT-3 series, and lowest cost.
  • text-babbage-001: Capable of straightforward tasks, very fast, and lower cost.
  • text-curie-001: Very capable, but faster and lower cost than Davinci.
  • text-davinci-002: Similar capabilities to text-davinci-003, but trained with supervised fine-tuning instead of reinforcement learning.
  • text-davinci-003: Can do any language task with better quality, longer output, and more consistent instruction-following than the curie, babbage, or ada models.
  • gpt-3.5-turbo-instruct: Recommended model for all completion tasks. As capable as text-davinci-003 but faster and lower in cost.

Specific Model ID

Select from a list of all available OpenAI models. The model chosen has to be compatible with OpenAI's Completions API.
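
To make the compatibility requirement concrete, here is a hedged sketch of the kind of Completions API request the configured connection is used for downstream, again via the openai Python package (the prompt is made up):

```python
from openai import OpenAI

client = OpenAI()

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # a Completions-API-compatible model
    prompt="Write a haiku about data pipelines.",  # example prompt
    max_tokens=64,
)
print(response.choices[0].text)
```

A chat-only model such as gpt-4 would be rejected by this endpoint.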

Model Parameters

Maximum Response Length (tokens)

The maximum number of tokens to generate.

The token count of your prompt plus max_tokens cannot exceed the model's context length.
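
One way to verify this constraint ahead of time is to count the prompt's tokens with the tiktoken package. A sketch, where the context length is an assumed example value (check the model's documentation for the exact figure):

```python
import tiktoken

CONTEXT_LENGTH = 4096  # assumed example value; check the model's documentation
MAX_TOKENS = 256       # the "Maximum Response Length" setting

# gpt-3.5-turbo-instruct resolves to the cl100k_base encoding.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo-instruct")
prompt = "Summarize the following text: ..."
prompt_tokens = len(enc.encode(prompt))

if prompt_tokens + MAX_TOKENS > CONTEXT_LENGTH:
    raise ValueError(
        f"{prompt_tokens} prompt tokens + {MAX_TOKENS} max_tokens "
        f"exceeds the {CONTEXT_LENGTH}-token context length"
    )
```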

Temperature

Sampling temperature to use, between 0.0 and 2.0. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 for ones with a well-defined answer. It is generally recommended to alter this or top_p, but not both.
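
As a sketch of the suggested settings, the same prompt can be run at both ends of the range; parameter names follow the OpenAI Completions API, and the prompt is made up:

```python
from openai import OpenAI

client = OpenAI()

# Compare a deterministic setting (0.0) with a creative one (0.9).
for temperature in (0.0, 0.9):
    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt="Suggest a name for a coffee shop.",
        max_tokens=16,
        temperature=temperature,
    )
    print(temperature, response.choices[0].text.strip())
```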

Completions generation

How many completions to generate for each prompt. Because several completions are produced per request, this parameter can quickly consume your token quota.
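
This option maps to the Completions API parameter n. A short sketch of requesting several alternatives at once, which also shows why quota drains faster:

```python
from openai import OpenAI

client = OpenAI()

# n=3 returns three independent completions for the same prompt,
# so roughly three times the output tokens are billed.
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="Give me a tagline for a bakery.",  # example prompt
    max_tokens=16,
    n=3,
)
for i, choice in enumerate(response.choices):
    print(i, choice.text.strip())
```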

top_p

An alternative to sampling with temperature, known as nucleus sampling, where the model considers only the tokens comprising the top_p probability mass. For example, 0.1 means only the tokens making up the top 10% of probability mass are considered.
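
The following is a toy illustration of the idea (not OpenAI's actual implementation): sort tokens by probability, keep the smallest set whose cumulative mass reaches top_p, and renormalize before sampling.

```python
def nucleus(probs: dict[str, float], top_p: float) -> dict[str, float]:
    """Keep the most likely tokens until their cumulative mass reaches top_p."""
    kept, total = {}, 0.0
    for token, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[token] = p
        total += p
        if total >= top_p:
            break
    return {t: p / total for t, p in kept.items()}  # renormalize

probs = {"cat": 0.50, "dog": 0.30, "fish": 0.15, "newt": 0.05}
print(nucleus(probs, top_p=0.1))  # "cat" alone already covers the top 10%
print(nucleus(probs, top_p=0.9))  # "cat", "dog", and "fish" are kept
```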

Input Ports

Validated authentication for OpenAI.

Output Ports

Configured OpenAI LLM connection.
