GPT4All LLM Connector

This connector allows you to connect to a local GPT4All LLM. To get started, download a model from the model explorer on the GPT4All website; installing the GPT4All software itself is not required. Once you have downloaded the model, specify its file path in the configuration dialog to use it.
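
For illustration only, the sketch below shows how the same local model file could be loaded with the GPT4All Python bindings outside of KNIME; the model file name and directory are placeholders for your own download.

from gpt4all import GPT4All

# Load a locally downloaded .gguf model. The file name and directory are
# placeholders for the model you downloaded from the GPT4All website.
model = GPT4All(
    model_name="mistral-7b-instruct-v0.1.Q4_0.gguf",  # placeholder file name
    model_path="/path/to/your/models",                # directory containing the file
    allow_download=False,                              # only use the local file
)

# Send a single prompt to the local model.
response = model.generate("Summarize GPT4All in one sentence.", max_tokens=128)
print(response)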

Important Note: GPT4All discontinued support for the old .bin model format and switched to the new .gguf format. Because of this switch, workflows using models in .bin format will no longer work. You can find models in the new format on the GPT4All website or on Hugging Face Hub.

Some models (e.g. Llama 2) have been fine-tuned for chat applications, so they might behave unexpectedly if their prompts don't follow a chat-like structure:

User: <The prompt you want to send to the model>
Assistant:

If the GPT4All model list provides a prompt template for the specific model, use it.
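
As a rough illustration (not part of the node itself), the sketch below shows how a prompt is wrapped in such a chat-like template before it is sent to the model; the template string is a generic example, not the exact template of any particular model.

prompt_template = "User: {prompt}\nAssistant:"
user_prompt = "List three use cases for local LLMs."

# The formatted string is what the model actually receives:
# User: List three use cases for local LLMs.
# Assistant:
full_prompt = prompt_template.format(prompt=user_prompt)
print(full_prompt)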

The currently supported models are based on GPT-J, LLaMA, MPT, Replit, Falcon and StarCoder.

For more information and detailed instructions on downloading compatible models, please visit the GPT4All GitHub repository.

Options

GPT4All Settings

Model path

Path to the pre-trained GPT4All model file, e.g. my/path/model.gguf.

Input Ports

This node has no input ports

Output Ports

A GPT4All large language model.

Views

This node has no views
